2306.14800
The Entanglement of Elastic and Inelastic Scattering
The entanglement properties of systems in which elastic and inelastic reactions occur in projectile-target interactions are studied. A new measure of entanglement, the scattering entropy, based on the unitarity of the $S$-matrix (probability conservation), is suggested. Using simple models for both low- and high-energy interactions, the amount of entanglement is found to track with the strength of the inelastic interaction. The familiar example of the classical "black disk" (total absorption) model is found to correspond to maximum entanglement. An analysis of high-energy $pp$ scattering data shows that entanglement is near maximum for lab energies greater than about 1 GeV, showing that the total absorption model is a reasonable starting point for understanding the data.
Gerald A. Miller
2023-06-26T16:02:26Z
http://arxiv.org/abs/2306.14800v1
# The Entanglement of Elastic and Inelastic Scattering

###### Abstract

The entanglement properties of systems in which elastic and inelastic reactions occur in projectile-target interactions are studied. A new measure of entanglement, the scattering entropy, based on the unitarity of the \(S\)-matrix (probability conservation), is suggested. Using simple models for both low- and high-energy interactions, the amount of entanglement is found to track with the strength of the inelastic interaction. The familiar example of the classical "black disk" (total absorption) model is found to correspond to maximum entanglement. An analysis of high-energy \(pp\) scattering data shows that entanglement is near maximum for lab energies greater than about 1 GeV, showing that the total absorption model is a reasonable starting point for understanding the data.

+ Footnote †: preprint: NT@UW-23-07

## I Introduction

The implications of entanglement in quantum mechanics and quantum field theory have recently been studied in many papers. For a long list of recent references see Ref. [1]. This new interest has been stimulated by the connection with quantum computing. Work related to hadron, QCD and EIC physics appears in Refs. [2; 3; 4; 5; 6; 7]. The entanglement properties of nucleon-nucleon scattering and nucleon-nucleus elastic scattering are discussed in Refs. [8; 9; 10; 11; 12]. The connections between entanglement and nuclear structure are presented in [13; 14; 15; 16; 17; 18; 19; 20; 21]. There is also a possible deep connection between entanglement and underlying symmetries of the Standard Model [8; 9; 10; 11; 22].

The present paper is concerned with situations in which a projectile can excite a target. One of the challenges in studying entropy and entanglement for scattering is the need to develop proper definitions for the necessary infinite-dimensional Hilbert space. This is done here using the requirements of unitarity. A special and somewhat ubiquitous case is the scattering of a particle from a totally absorbing "black disk" of radius \(R\) [23; 24; 25]. This situation approximately occurs in low-energy \(\alpha\)-nucleus scattering and in high-energy proton-proton scattering. In the total absorption limit, following the requirement of unitarity of the \(S\)-matrix, the elastic \(\sigma_{\rm el}\) and inelastic \(\sigma_{\rm inel}\) cross sections are equal. The inelastic cross section is \(\pi R^{2}\), so that the total cross section is \(2\pi R^{2}\), twice the geometric cross section. I will argue that when \(\sigma_{\rm el}=\sigma_{\rm inel}\) the entanglement entropy is maximized.

## II Low-Energy projectile-target scattering and a new measure of entropy

Consider projectile-target scattering at energies sufficiently low that there is only \(s\)-wave scattering. Furthermore, the model is defined so that there is inelastic scattering to only a single excited state, \(X\). I consider examples in which the inelastic scattering ranges from relatively small, corresponding, for example, to neutron-nucleus scattering, to relatively large, corresponding to alpha-nucleus scattering. Another example, discussed below, is nucleon-nucleon scattering in which interactions cause either the target or projectile to be in an excited state. The initial state is a product of a plane-wave state and the target ground state, \(G\). As a product state, it has no entanglement.
Interactions occur such that after the scattering event the projectile-target wave function is given by \[|\Psi\rangle=|u_{1}\rangle\otimes|G\rangle+|u_{2}\rangle\otimes|X\rangle, \tag{1}\] where \(|u_{1}\rangle\) represents a projectile with energy corresponding to elastic scattering and \(|u_{2}\rangle\) represents a projectile with an energy corresponding to inelastic scattering. Measurement of the energy of the projectile determines whether the nucleus is in its ground or excited state. Thus the state represented by Eq. (1) is an entangled state.

The next step is to work out a way to calculate entanglement properties. The wave function \(|\Psi\rangle\) is almost of the form of the Schmidt decomposition, in which the different coefficients represent probability amplitudes. Here the wave functions are in the continuum, so that discrete normalization conventions are not applicable. It seems necessary to develop a new method to compute entropy. The procedure is to use an exactly soluble model [26] to illustrate and develop the necessary formalism. I argue below that the formalism is more general than the model. In this model the interactions are represented by delta-shell interactions [25] that can be thought of as approximating projectile-target interactions at the surface of the target. Then the radial wave functions \(u_{1,2}(r)\) satisfy the coupled-channels equations: \[d^{2}u_{1}/dr^{2}+[k^{2}-V_{1}\delta(r-a)]u_{1}=V_{12}\delta(r-a)u_{2}, \tag{2}\] \[d^{2}u_{2}/dr^{2}+[k^{2}-\Delta^{2}-V_{2}\delta(r-a)]u_{2}=V_{21}\delta(r-a)u_{1}. \tag{3}\] Hermiticity demands \(V_{12}=V_{21}\), and calculations are limited to the case \(V_{1}\neq 0\), \(V_{12}\neq 0\), \(V_{2}=0\) to gain analytic insight. The parameter \(\Delta\) is proportional to the energy difference between the excited and ground states. The solution of Eq. (2) for \(u_{1}\) is expressed in terms of the free-particle Green's function \(g_{1}(r,r^{\prime})\) as \[u_{1}(r)=\frac{\sin kr}{k}+V_{1}g_{1}(r,a)u_{1}(a)+V_{12}g_{1}(r,a)u_{2}(a), \tag{4}\] with \[g_{1}(r,r^{\prime})=-(1/k)\sin kr_{<}e^{ikr_{>}}, \tag{5}\] where \(r_{<}\) (\(r_{>}\)) is the smaller (larger) of \((r,r^{\prime})\). The solution of Eq. (3) for \(u_{2}\) is given by \[u_{2}(r)=V_{12}g_{2}(r,a)u_{1}(a) \tag{6}\] with \[g_{2}(r,r^{\prime})=-(1/k_{2})\sin k_{2}r_{<}e^{ik_{2}r_{>}}, \tag{7}\] where \(k_{2}\equiv\sqrt{k^{2}-\Delta^{2}}\). The results for \(u_{1,2}(r)\) express the condition that the initial state is a plane wave incident on the ground state of the target nucleus. The use of Eq. (6) in Eq. (4) leads to the result \[u_{1}(r)=(1/k)\sin kr+T_{11}e^{ikr} \tag{8}\] for \(r>a\), with the \(T\)-matrix element given by \[T_{11}=\frac{-(\frac{\sin ka}{k})^{2}[V_{1}+V_{12}^{2}g_{2}(a,a)]}{1-[V_{1}+V_{12}^{2}g_{2}(a,a)]g_{1}(a,a)}. \tag{9}\] The relation between \(T_{11}\) and the complex-valued scattering phase shift, \(\delta_{0}\), is given by \[T_{11}=\frac{e^{2i\delta_{0}}-1}{2ik}. \tag{10}\] Similarly, \[u_{2}(r)=T_{12}e^{ik_{2}r}, \tag{11}\] with \[T_{12}=\frac{-V_{12}(\frac{\sin ka}{k})(\frac{\sin k_{2}a}{k_{2}})}{1-[V_{1}+V_{12}^{2}g_{2}(a,a)]g_{1}(a,a)}. \tag{12}\]

Next, turn to the entanglement properties of the model. The textbook definition of the entanglement entropy is the von Neumann entropy, \(S=-\mathrm{Tr}[\rho\log_{2}\rho]\), where \(\rho\) is the one-body density matrix. This is typically evaluated by diagonalizing \(\rho\) in a discrete basis. Here continuum wave functions, normalized as delta functions, are used.
So there is a need to obtain an appropriate definition of probability. This is done through the optical theorem, an expression of the unitarity of the \(S\)-matrix: \[\sigma_{tot}=\frac{4\pi}{k_{1}}\mathrm{Im}[T_{11}]. \tag{13}\] The left-hand side is the sum of the elastic and inelastic scattering cross sections, integrated over all angles. The result for the present model is expressed as \[1=\frac{k_{1}|T_{11}|^{2}+k_{2}|T_{12}|^{2}}{\mathrm{Im}[T_{11}]}, \tag{14}\] a relation that can be checked using the expressions for \(T_{11}\) and \(T_{12}\). Eq. (14) leads to a natural definition of probabilities based on the number of counts detected at an asymptotically located detector. The ground-state probability \(P_{G}\) is given by \[P_{G}=\frac{k_{1}|T_{11}|^{2}}{\mathrm{Im}[T_{11}]} \tag{15}\] and the excited-state probability \(P_{X}\) is given by \[P_{X}=\frac{k_{2}|T_{12}|^{2}}{\mathrm{Im}[T_{11}]}, \tag{16}\] and via Eq. (14): \(P_{G}+P_{X}=1\). Therefore, one may define the projectile-target (\(pT\)) entanglement entropy \(S_{pT}\) of the final state as \[S_{pT}=-P_{G}\log_{2}P_{G}-P_{X}\log_{2}P_{X}. \tag{17}\] This entanglement entropy, termed the _scattering entropy_, is minimized if either of \(P_{G}\) or \(P_{X}\) vanishes. In that case the final scattering state is a simple tensor product. The scattering entropy is maximized at \(S_{pT}=1\) when \(P_{G}=P_{X}\). Note also that Eq. (9) shows that \(T_{11}\) is periodic in \(k\), vanishing whenever \(ka=n\pi\).

Fig. 1 shows \(S_{pT}\) for parameters \(a=3.5\,\mathrm{fm}\), \(V_{1}=0.25\,\mathrm{fm}^{-1}\) for different ratios \(V_{12}/V_{1}\) as a function of the incident momentum \(k\). The parameter \(\Delta=0.1\) fm\({}^{-1}\). The situation of \(V_{12}/V_{1}=0.2\) is similar to that of neutron-nucleus interactions, in which the inelastic scattering is relatively small. The stronger-absorption situation of \(V_{12}/V_{1}=1\) is similar to that of alpha-nucleus interactions, in which the inelastic scattering is large. For values of \(k<\Delta\) the entanglement entropy vanishes because the target cannot be excited. For higher values the scattering entropy is at its maximum value when \(V_{12}/V_{1}=1\). This result can be understood directly from Eq. (15) and Eq. (16). These quantities are approximately equal if \(V_{1}/(ka)\ll 1\) and \(k\gg\Delta\). This result is similar to that of the total absorption model, in which the elastic and inelastic cross sections are the same. But here there is only one phase shift. The unusual cusp-like near-threshold behavior for the case when \(V_{12}/V_{1}=1\) arises from the non-analytic square-root behavior of \(k_{2}\) combined with the increasing importance of the second term in the numerator of Eq. (9).

Figure 1: \(S_{pT}\) as a function of \(k=k_{1}\) for the four different values of \(V_{12}/V_{1}\) shown in the figure.

The key lesson of Fig. 1 is that entanglement entropy, as measured by the scattering entropy, increases as the tendency for inelastic scattering increases.
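The delta-shell model is simple enough to verify numerically. The sketch below (a minimal implementation assuming NumPy, with the Fig. 1 parameter values) solves Eqs. (4) and (6) at \(r=a\) as a linear system, extracts \(T_{11}\) and \(T_{12}\), checks the unitarity relation (14), and evaluates the scattering entropy of Eq. (17):

```python
import numpy as np

def scattering_entropy(k, a=3.5, V1=0.25, V12=0.25, Delta=0.1):
    """Delta-shell model of Eqs. (2)-(12); units fm and fm^-1, with V2 = 0.
    Returns (P_G, P_X, S_pT) from Eqs. (15)-(17)."""
    if k <= Delta:                 # inelastic channel closed: product state
        return 1.0, 0.0, 0.0
    k2 = np.sqrt(k**2 - Delta**2)
    g1 = -np.sin(k * a) * np.exp(1j * k * a) / k      # Eq. (5) at r = r' = a
    g2 = -np.sin(k2 * a) * np.exp(1j * k2 * a) / k2   # Eq. (7) at r = r' = a
    # Eqs. (4) and (6) evaluated at r = a give a 2x2 linear system:
    #   u1(a) = sin(ka)/k + g1*(V1*u1(a) + V12*u2(a)),  u2(a) = g2*V12*u1(a)
    A = np.array([[1 - g1 * V1, -g1 * V12],
                  [-g2 * V12, 1.0]], dtype=complex)
    b = np.array([np.sin(k * a) / k, 0.0], dtype=complex)
    u1a, u2a = np.linalg.solve(A, b)
    T11 = -(np.sin(k * a) / k) * (V1 * u1a + V12 * u2a)   # coeff. of e^{ikr}
    T12 = -(np.sin(k2 * a) / k2) * V12 * u1a              # coeff. of e^{ik2 r}
    PG = k * abs(T11)**2 / T11.imag                       # Eq. (15)
    PX = k2 * abs(T12)**2 / T11.imag                      # Eq. (16)
    assert abs(PG + PX - 1) < 1e-8                        # unitarity, Eq. (14)
    S = -sum(p * np.log2(p) for p in (PG, PX) if p > 0)   # Eq. (17)
    return PG, PX, S

for k in (0.2, 0.5, 1.0, 2.0):          # incident momenta in fm^-1
    print(k, scattering_entropy(k))     # V12/V1 = 1: S_pT near its maximum
```

For \(k\) well above \(\Delta\) with \(V_{12}/V_{1}=1\), the two probabilities are nearly equal and \(S_{pT}\) approaches 1, the behavior seen in Fig. 1.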
## III High energy scattering in a two-channel model

The scattering wave function \(|\Psi\rangle\) is given again by Eq. (1). In the high-energy limit the wave number \(k\) is large compared to the inverse size of the system and large compared to the energy difference between the ground and excited states, represented by \(\Delta\). Thus \(\Delta\) is neglected in solving the relevant wave equations, but is kept very small and non-zero to maintain the entanglement property that measuring the energy of the projectile in the final state determines whether or not the target is in the ground state. The coupled-channel equations for high-energy scattering are then given by \[\nabla^{2}\psi_{1}+(k^{2}-V)\psi_{1}=U\psi_{2} \tag{18}\] \[\nabla^{2}\psi_{2}+(k^{2}-V)\psi_{2}=U\psi_{1}. \tag{19}\] The implementation of the eikonal or short-wavelength approximation is made by using \(\psi_{1,2}({\bf r})=e^{ikz}\phi_{1,2}({\bf b},z)\), in which the direction of the beam is denoted as \(\hat{z}\) and the direction transverse to that by \({\bf b}\). The procedure [27] is to use these in the coupled-channel equations and, with large \(k\), neglect the terms \(\nabla^{2}\phi_{1,2}\). This approximation is valid under two conditions [27]: (i) the short-wavelength limit, in which \(1/k\) is less than any distance scale in the problem, and (ii) \((V,U)/k^{2}\ll 1\), to prevent back-scattering. Then the coupled-channel equations become \[2ik\frac{\partial\phi_{1}}{\partial z}-V\phi_{1}=U\phi_{2} \tag{20}\] \[2ik\frac{\partial\phi_{2}}{\partial z}-V\phi_{2}=U\phi_{1}. \tag{21}\] Let \(\phi\equiv\phi_{1}+\phi_{2}\) and \(\chi\equiv\phi_{1}-\phi_{2}\). Adding the two equations gives \[2ik\frac{\partial\phi}{\partial z}=(U+V)\phi, \tag{22}\] and subtracting the two gives \[2ik\frac{\partial\chi}{\partial z}=(V-U)\chi, \tag{23}\] with solutions \[\phi({\bf b},z)=\exp[\frac{-i}{2k}\int_{-\infty}^{z}dz^{\prime}(V({\bf b},z^{\prime})+U({\bf b},z^{\prime}))], \tag{24}\] \[\chi({\bf b},z)=\exp[\frac{-i}{2k}\int_{-\infty}^{z}dz^{\prime}(V({\bf b},z^{\prime})-U({\bf b},z^{\prime}))]. \tag{25}\] The two-component scattering amplitude is given by \[\hat{f}({\bf k}^{\prime},{\bf k})=\frac{-1}{4\pi}\int d^{3}re^{-i{\bf k}^{\prime}\cdot{\bf b}}\begin{bmatrix}V&U\\ U&V\end{bmatrix}\begin{bmatrix}\phi_{1}\\ \phi_{2}\end{bmatrix}, \tag{26}\] with the upper row of \(\hat{f}\), \(f_{G}\), corresponding to elastic scattering and the lower row, \(f_{X}\), to inelastic scattering. Then evaluation leads to the results \[f_{G}({\bf k}^{\prime},{\bf k})=\frac{ik}{2\pi}\int d^{2}be^{-i{\bf k}^{\prime}\cdot{\bf b}}(1-e^{-i\delta_{V}({\bf b})}\cos\delta_{U}({\bf b})) \tag{27}\] \[f_{X}({\bf k}^{\prime},{\bf k})=\frac{-k}{2\pi}\int d^{2}be^{-i{\bf k}^{\prime}\cdot{\bf b}}e^{-i\delta_{V}({\bf b})}\sin\delta_{U}({\bf b}), \tag{28}\] where \[\delta_{V}\equiv\frac{1}{2k}\int_{-\infty}^{\infty}dz^{\prime}V({\bf b},z^{\prime}),\quad\delta_{U}\equiv\frac{1}{2k}\int_{-\infty}^{\infty}dz^{\prime}U({\bf b},z^{\prime}). \tag{29}\]

The evaluation of entanglement entropy requires an understanding of unitarity. The statement of unitarity via the optical theorem is \[\sigma_{Tot}=\int d\Omega(|f_{G}|^{2}+|f_{X}|^{2})=\frac{4\pi}{k}\mathrm{Im}[f_{G}({\bf k},{\bf k})], \tag{30}\] a relationship that must be checked within the current model. Taking the imaginary part of Eq. (27) yields \[\mathrm{Im}[f_{G}({\bf k},{\bf k})]=\frac{k}{2\pi}\int d^{2}b(1-\cos\delta_{V}({\bf b})\cos\delta_{U}({\bf b})). \tag{31}\] The evaluation of the angular integrals of \(|f_{G,X}|^{2}\) may be done using an approximation, valid when the eikonal approximation is valid, namely \[\int d\Omega e^{i{\bf k}^{\prime}\cdot({\bf b}-{\bf b}^{\prime})}\approx 2\pi\frac{1}{k^{2}b}\delta(b-b^{\prime}).
\tag{32}\] Using this leads to the results \[\int d\Omega|f_{G}({\bf k}^{\prime},{\bf k})|^{2}=\int d^{2}b(1-2\cos\delta_{V}\cos\delta_{U}+\cos^{2}\delta_{U}),\] \[\int d\Omega|f_{X}({\bf k}^{\prime},{\bf k})|^{2}=\int d^{2}b\sin^{2}\delta_{U}, \tag{33}\] so that the validity of Eq. (30) is maintained. Therefore we may again define the eikonal probabilities, \(P^{e}_{G,X}\), as \[P^{e}_{G}=\frac{\int d^{2}b(1-2\cos\delta_{V}(b)\cos\delta_{U}(b)+\cos^{2}\delta_{U}(b))}{2\int d^{2}b(1-\cos\delta_{V}(b)\cos\delta_{U}(b))}, \tag{34}\] \[P^{e}_{X}=\frac{\int d^{2}b\sin^{2}\delta_{U}(b)}{2\int d^{2}b(1-\cos\delta_{V}(b)\cos\delta_{U}(b))}, \tag{35}\] and \[S^{e}=-P^{e}_{G}\log_{2}P^{e}_{G}-P^{e}_{X}\log_{2}P^{e}_{X}. \tag{36}\] The case with \(U=\pm V\) yields \(P_{G}^{e}=P_{X}^{e}=1/2\), and a maximum of entropy. This corresponds to the total absorption limit, in which elastic and inelastic cross sections are equal. This means that the black disk limit corresponds to maximum scattering entropy.

Presenting a brief discussion of the total absorption limit is worthwhile. The partial-wave decomposition of the scattering amplitude \(f(\theta)\) for a spinless particle is \[f(\theta)=\frac{-i}{2k}\sum_{l}(2l+1)(\eta_{l}-1)P_{l}(\cos\theta). \tag{37}\] The strong absorption model is defined by \(\eta_{l}=0\) for \(l\leq L\) and \(\eta_{l}=1\) for \(l>L\), with \(L\approxeq kR\). The sum is then given by \(f(\theta)\approx\frac{i}{k}L(L+1)\frac{J_{1}(L\theta)}{L\theta}\), a form familiar from Fraunhofer diffraction. In nuclear physics this is known as the Blair model [28; 29]. See [30]. Data were reproduced using a distribution without a sharp edge, for example \(\eta_{l}=1/(1+\exp[(L-l)/b])\) with \(b>1/2\). This is a grey disc model.

To see if the total absorption or grey disc model is a result of the present calculation, I provide a specific example based on parameters typical of proton-proton scattering. Use a Gaussian density function \(\rho(r)=\exp(-r^{2}/R^{2})\), where \(R\) is the radius parameter, taken here as \(\sqrt{2}\) fm (obtained by convoluting the Gaussian densities, of radius parameter 1 fm, of the two protons). Then let \(V(r)=V_{0}\rho(r)\) and \(U(r)=U_{0}\rho(r)\). Treating \(V_{0}\) and \(U_{0}\) as constants corresponds to treating the interactions as coming from vector exchanges, the typical treatment of high-energy hadron-hadron scattering. The value of the scattering entropy is then independent of energy for sufficiently high energies. In line with the high-energy behavior, I define \(V_{0}\equiv 2\lambda_{V}k\) and \(U_{0}\equiv 2\lambda_{U}k\), so that evaluation of Eq. (29) yields the results \(\delta_{V,U}(b)=\lambda_{V,U}\sqrt{\pi}R\exp(-b^{2}/R^{2})\). Then, using Eq. (30), a value of \(\lambda_{V}\) of about 100 MeV gives a total cross section of about 40 mb, the typical value of the high-energy proton-proton cross section. The results, independent of the signs of \(U_{0}\) and \(V_{0}\), are shown in Fig. 2 in terms of \(u\equiv\lambda_{U}\sqrt{\pi}R\) and \(v\equiv\lambda_{V}\sqrt{\pi}R\). Maximum entanglement is reached, as expected, for cases with \(u=v\). Observe that, except for very small values of \(u\) (small inelastic scattering), the entanglement entropy is always substantial.

It is useful to learn if the results of the present calculation correspond to the total absorption or grey disc model. To do this, refer to Eq. (27) and define \(\eta(b)\equiv e^{-i\delta_{V}(b)}\cos\delta_{U}(b)\). This quantity is shown in Fig. 3 for the case \(u=v=1.3\).
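These formulas are easy to evaluate for the Gaussian profiles above; a minimal numerical sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.integrate import quad

def eikonal_entropy(u, v, R=np.sqrt(2.0)):
    """S^e of Eq. (36) with Gaussian profiles delta_U(b) = u*exp(-b^2/R^2),
    delta_V(b) = v*exp(-b^2/R^2), following Eq. (29)."""
    dU = lambda b: u * np.exp(-b**2 / R**2)
    dV = lambda b: v * np.exp(-b**2 / R**2)
    # d^2b = 2*pi*b db; the common factor of 2*pi cancels in the ratios
    num, _ = quad(lambda b: b * np.sin(dU(b))**2, 0.0, 10 * R)
    den, _ = quad(lambda b: b * (1 - np.cos(dV(b)) * np.cos(dU(b))), 0.0, 10 * R)
    PX = num / (2 * den)                          # Eq. (35)
    PG = 1 - PX                                   # Eqs. (30) and (34)
    return -PG * np.log2(PG) - PX * np.log2(PX)   # Eq. (36)

print(eikonal_entropy(1.3, 1.3))     # u = v: total-absorption limit, S^e = 1
print(np.exp(-1.3j) * np.cos(1.3))   # eta(b=0) for u = v = 1.3: |eta| is small
```

The last line evaluates \(\eta(b=0)\) for the case plotted in Fig. 3; its small magnitude shows the interaction region is nearly black at small impact parameter.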
The present calculation is seen to correspond to the grey disc model, not far from the total absorption model.

Figure 2: \(S^{e}\) as a function of the dimensionless variable \(u\) for the three different values of \(v\): 0.9 (solid), 1.1 (dashed) and 1.3 (dotted). These values correspond to total cross sections of 22 mb, 40 mb and 56 mb.

Figure 3: Real and imaginary parts of \(\eta(b)\).

## IV High energy proton-proton scattering

Data for total cross sections and total elastic cross sections are available from the Particle Data Group [31]. Then the high-energy analysis presented above can be used with the identifications \(P_{G}=\sigma_{\rm el}/\sigma_{\rm tot}\) and \(P_{X}=1-P_{G}\), along with Eq. (36). The results are shown in Fig. 4. At low energies there is no inelastic scattering, so the scattering entropy must vanish. This result is similar to the results shown in Fig. 1 for small values of \(k\) and in Fig. 2 for small values of \(u\). As energies rise above inelastic scattering thresholds, the entanglement increases. At still higher energies the ratio of elastic to total cross sections is approximately flat. The entanglement entropy is substantial at laboratory momenta greater than about 2 GeV/c (kinetic energy about 1.3 GeV). At higher energies than are shown, \(S\) is approximately flat with energy because the ratio \(\sigma_{\rm el}/\sigma_{\rm tot}\) is approximately independent of energy. For example, a flat ratio \(\sigma_{\rm el}/\sigma_{\rm tot}\approx 1/4\) gives \(P_{G}=1/4\) and \(S\approx 0.81\), not far below the black-disk maximum of 1. The large value of the entanglement entropy indicates that the total absorption or grey disc models are reasonable first approximations for understanding the data. The net result is that computing the entanglement entropy provides insight regarding the underlying dynamics of proton-proton scattering in particular, and of projectile-target scattering more generally.

## V Extension to more than one excited state and a general result

Can the models of the previous two sections be extended to include more than one excited state? What then can one say about entanglement? If there is more than one excited state, a single measurement of the projectile energy cannot be used to determine the specific excited state of the target. The entanglement properties are then unknown. However, a single measurement of the projectile energy can determine whether or not the target is excited. Therefore it seems sensible to take the previous terms \(P_{X}\) and \(P_{X}^{e}\) to represent the probability that the target has been excited to any excited state. In that case, the expressions for the scattering entropy of Eq. (17) and Eq. (36) can be thought of as general measures of entanglement for any projectile-target system that involves inelastic excitation.

###### Acknowledgements.
This work was supported by the U. S. Department of Energy Office of Science, Office of Nuclear Physics under Award Number DE-FG02-97ER-41014.
2310.07624
Surgery and Matter for 3d Theories
We try to give a geometric construction for 3d $\mathcal{N}=2$ gauge theories using three-manifolds and Dehn surgeries. We follow the story that wrapping M5-branes on plumbing three-manifolds leads to 3d theories with mixed Chern-Simons levels. This construction can be decorated by adding non-compact Lagrangian submanifolds in the cotangent bundles of three-manifolds. M5-branes wrapped on these submanifolds lead, through M-theory/IIB string duality, to flavor D5-branes that engineer chiral multiplets. In this note, we only consider unknotted matter circles, which are intersections between these non-compact M5-branes and plumbing manifolds. Various dualities of 3d theories can then be interpreted as Kirby moves and equivalent surgeries. We also find the dictionary between geometric structures of three-manifolds and physical aspects of gauge theories.
Shi Cheng
2023-10-11T16:07:30Z
http://arxiv.org/abs/2310.07624v2
# Surgery Constructions for 3d Theories, Part I: Matter Circles and Links

###### Abstract

We try to give a geometric construction of 3d \(\mathcal{N}=2\) gauge theories using three-manifolds and Dehn surgeries. It is known that wrapping M5-branes on plumbing three-manifolds leads to 3d theories with mixed Chern-Simons levels. We find that this construction can be extended by adding Lagrangian defects in the cotangent bundles of three-manifolds. After wrapping M5-branes on these defects and applying M-theory/IIB string duality, one can get D5-branes that introduce chiral multiplets. This is analogous to the Ooguri-Vafa construction of Wilson loop operators along knots on the three-sphere. In this note, we only consider unknotted matter circles, which are intersections between defect M5-branes and plumbing manifolds. After introducing matters, mirror dualities of 3d theories match with Kirby moves of plumbing manifolds. We also find the dictionary between various structures of three-manifolds and physical aspects of 3d gauge theories.

## 1 Introduction

There are already many distinguished works on 3d \(\mathcal{N}=2\) gauge theories, e.g. [2; 3; 4; 5; 6; 7; 8; 9; 10]. However, the geometric engineering is not very well established yet. Brane webs for some examples are known in e.g. [11; 12; 13], while at this stage brane webs cannot encode mixed Chern-Simons levels and generic superpotentials. We think three-manifolds are promising because of their fruitful geometric structures and transformations, such as Kirby moves [21]. We try to propose a new construction based on Dehn surgeries, in complement with the DGG/GPV construction given by the 3d-3d correspondence [14; 15; 16; 17]. Basically, surgery construction is also the compactification of 6d \((2,0)\) theories on three-manifolds. Putting M5-branes on some three-manifolds such as Lens spaces could lead to 3d \(\mathcal{N}=2\) gauge theories. M5-branes can also wrap generic three-manifolds, such as plumbing manifolds, to generate colorful 3d theories. In recent years, there has been attention on computing the WRT invariants of plumbing manifolds [15; 18; 19]. On the physical side, linking numbers of these plumbing manifolds are interpreted as the mixed Chern-Simons levels of 3d theories [20].

In our previous work [1], we extended this story by introducing chiral multiplets and gauging basic mirror dualities. To describe these introduced matter fields, we found that plumbing graphs should be extended by adding gray boxes to denote chiral multiplets, as shown in Figure 2. We also found that in the presence of these new objects, some physical dualities match very well with the first type of Kirby moves of three-manifolds. This implies that the introduced gray boxes are necessary and should have geometric interpretations. However, in [1], the existence of gray boxes (matter nodes) on three-manifolds is a conjecture. In this note, we fill this gap by exploring the geometric engineering of matters, and find that these chiral multiplets should correspond to some codimension-two defects given by wrapping M5-branes on the cotangent bundle of three-manifolds.

To solve this problem, many steps need to be done for finding the dictionary between three-manifolds and 3d gauge theories. We first notice that the charges of chiral multiplets under gauge groups should be interpreted as winding numbers. This can be seen by analyzing Kirby moves, and in particular the handle-slides of circles [21].
This step is helpful to interpret geometric transformations as physical dualities, such as \(ST\)-moves (gauged mirror duality) [1]. To get some geometric objects that have winding numbers, we should at least consider loops as candidates. We use the M-theory/IIB string dualities to figure out that some Lagrangian defects satisfy this property, which are analogous to Ooguri-Vafa's constructions [22] of Wilson loops by introducing Lagrangian M5-branes to intersect the three-sphere along knots, \(L_{K}\cap S^{3}=K\). In our context, the defects intersect three-manifolds along some circles, \(L_{\circ}\cap M_{3}=\bigcirc\), which fully characterize chiral multiplets, and hence we call them matter circles. These defects can be checked and confirmed by dualizing M5-brane webs to 3d brane webs in IIB string theory. The defect M5-brane is then dual to the D5-brane. This looks nice, as strings between the D3-brane and D5-brane provide chiral multiplets. Moreover, there are various ways to put these matter circles on \(M_{3}\), which engineer chiral multiplets of charges \(q_{i}\) under the \(i\)-th gauge group \(U(1)_{i}\).

To see how these circles tangle with three-manifolds, Dehn surgeries have to be considered, as all closed orientable three-manifolds can be obtained by Dehn surgeries along links of circles. These circles give gauge groups \(U(1)\times\cdots\times U(1)\), and hence we call them gauge circles. Matter circles should link to gauge circles, otherwise matters are decoupled. We notice that only \(S^{3}\) is special and natural for carrying matter defects, which could even give a geometric derivation of \(ST\)-moves. More explicitly, Ooguri-Vafa defects giving flavor symmetries should be combined with the Dehn surgeries of three-manifolds to complete the geometric engineering. We illustrate the surgery construction in Figure 1. The left graph shows the Dehn surgery construction of three-manifolds \(M_{3}\), which is defined by drilling out the neighborhood of a knot \(K\), after which a solid torus is filled in. The right graph shows that the codimension-two defect \(L\) can be introduced to intersect the solid torus. When this solid torus is glued back to form the closed three-manifold \(M_{3}\), flavor symmetry is introduced and the chiral multiplet is geometrically realized.

It is convenient to use plumbing graphs (which are quiver diagrams) to represent 3d theories given by three-manifolds and defects. We illustrate an example in Figure 2. The main ingredients that we introduce are the non-compact defects in the cotangent bundle \(T^{*}M_{3}\). The three-manifold is given by Dehn surgeries along unknotted links of gauge circles, namely \(M_{3}=\big{(}S^{3}\backslash N(\bigcirc_{i})\big{)}\bigcup_{f_{i}}(D^{2}\times S^{1})_{i}\), where we have filled in solid tori to make it compact, so that \(\partial M_{3}=\emptyset\). We show the dictionary between geometry and gauge theories in Table 1.

In section 2 we show that the winding numbers between matter and gauge circles can be interpreted as charges by considering the handle-slide operation, which is the second type of Kirby move. In section 3 we use M-theory/IIB string duality and brane webs to show that Lagrangian defect M5-branes along circles are dual to D5-branes, and hence engineer chiral multiplets. In section 4 we discuss the locations of matter circles and gauge circles on plumbing three-manifolds given by Dehn surgeries. We will also give a geometric derivation of \(ST\)-moves, in which we use a drilling trick and rational equivalent surgeries.
\begin{table}
\begin{tabular}{c|c|c}
\hline three-manifolds & plumbing graphs & abelian gauge theories \\
\hline matter circle & matter node \(\blacksquare\) & chiral multiplet \(\mathbf{F}\) \\
gauge circle & gauge node \(\bullet_{k_{i}}\) & gauge group \(U(1)_{k_{i}}\) \\
winding num. between matter and gauge circles & & charges \(q_{i}\) of fund. rep. \\
linking num. between gauge circles & & effective CS levels \(k_{ij}\) \\
\(\alpha\)-Kirby moves on gauge circles & blow up/down of \(\bullet_{\pm 1}\) & integrate in/out \(U(1)_{k}\) \\
rational equivalent surgery (4.25) & \(ST\)-moves (4.26) & gauged mirror triality \\
handle-slides of circles & add matter nodes & add chiral multiplets \\
\hline
\end{tabular}
\end{table}
Table 1: The dictionary between three-manifolds, plumbing graphs (which can be viewed as quiver diagrams), and gauge theories.

Figure 1: The left graph is the surgery construction of three-manifolds. The right graph shows the surgery construction of the 3d theories with matters.

Figure 2: Blue circles \(\bigcirc\) are gauge circles for surgeries. Gray cylinders denote Lagrangian defects \(L\subset T^{*}M_{3}\). Red circles \(\bigcirc\) as intersections of defects with \(M_{3}\) are matter circles. This geometry corresponds to a 3d theory with gauge group \(U(1)_{k_{1}}\times U(1)_{k_{2}}\) and three chiral multiplets \(\Phi_{i}\) of charges \((q_{i}^{1},q_{i}^{2})\). One can represent this theory by its plumbing graph.

## 2 Charges as winding numbers

In this section, we briefly review the \(ST\)-moves and discuss another type of Kirby move -- handle-slides -- that has not been addressed in the physical literature. We find that handle-slides can be used as a geometric operation to introduce matter nodes to plumbing graphs, and that the charges of matters can be interpreted as winding numbers. This implies that matter nodes can be viewed as some kind of circles, which motivates us to find their geometric realizations. This section contains the background for other sections.

### Mirror triality and \(ST\)-moves

Before talking about the geometric aspects, let us present what we already know about 3d gauge theories. A well-known duality is the basic mirror duality between a free chiral multiplet and a \(U(1)\) theory with a charge-1 fundamental chiral multiplet: \[1\mathbf{F}\longleftrightarrow U(1)_{\pm\frac{1}{2}}+1\mathbf{F} \tag{1}\] It is convenient to use plumbing graphs to denote this mirror duality: (2) where the numbers \(\pm 1/2\) are bare Chern-Simons levels. This duality is found in [14] by decoupling the antifundamental matter \(1\mathbf{AF}\) in the \(\mathcal{N}=2\) representation of the basic \(\mathcal{N}=4\) dual pair: \[1\mathbf{F}+1\mathbf{AF}\longleftrightarrow U(1)_{0}+1\mathbf{F}+1\mathbf{AF}\,. \tag{3}\] It is noticed and extensively used in [1; 23] that after gauging the \(U(1)\) global symmetry on both sides of (1), a new duality is obtained: (4) where \(q\) is the charge of the matter under the gauge group. The mirror duality (1) corresponds to the operator \(ST\in SL(2,\mathbb{Z})\) [24], and hence the gauged version (4) is called the \(ST\)-move. We can use plumbing graphs to denote this gauged duality; see Table 1 for details of the notation. Note that in (4), we have assigned the bare Chern-Simons levels. In the following sections, we will only use the effective Chern-Simons levels, which receive corrections from all chiral multiplets by the formula [2]: \[k^{\text{eff}}=k^{\text{bare}}+\sum_{I=1}^{N_{f}}\frac{q_{I}^{2}}{2}\operatorname{sign}(q_{I})\operatorname{sign}(m_{I})\,. \tag{5}\]
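This one-loop shift is simple to evaluate; a minimal sketch in Python (the helper name k_eff is ours, not from [2]):

```python
import math

def k_eff(k_bare, charges, masses):
    """Effective CS level of Eq. (5):
    k_eff = k_bare + sum_I (q_I^2 / 2) * sign(q_I) * sign(m_I)."""
    return k_bare + sum(0.5 * q * q * math.copysign(1.0, q) * math.copysign(1.0, m)
                        for q, m in zip(charges, masses))

# One charge-1 chiral multiplet with positive real mass shifts the bare level
# +1/2 of the mirror duality (1) to the effective level +1.
print(k_eff(0.5, [1], [+1.0]))   # -> 1.0
```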
### Handle-slides: \(\beta\)-Kirby move

For a given three-manifold obtained by Dehn surgeries on a link, such as the one in Figure 2, there are Kirby moves that transform this link into other links while the three-manifold is invariant [21], so Kirby moves are geometrically equivalent operations. Kirby moves contain two types, called the first and second types, while we call them \(\alpha\)-type and \(\beta\)-type in this note for convenience1. The \(\alpha\)-type is discussed in [1; 20]. Basically, \(\alpha\)-Kirby moves introduce or reduce black nodes of plumbing graphs, which can be interpreted as integrating in/out gauge symmetries, and the \(ST\)-move reduces to a special case of \(\alpha\)-Kirby moves if matter nodes are decoupled. However, generic \(\alpha\)-Kirby moves do not match the duality (4) if matter nodes are present.

Footnote 1: In literature like [21], the first type is called the \(\kappa\)-move.

The \(\beta\)-type of Kirby moves is often called a handle-slide. In the literature [21], the \(\beta\)-type Kirby move is the operation of recombining some components of the links for Dehn surgeries, which can be illustrated in the graph below: \[\text{[plumbing graphs]}\tag{6}\] Let us use \(L_{1}\) and \(L_{2}\) to denote these two circles. Their framing numbers and linking numbers are as follows: \[L_{1}\cdot L_{1}=r\,,\ \ L_{1}\cdot L_{2}=L_{2}\cdot L_{1}=k\,,\ \ L_{2}\cdot L_{2}=s\,. \tag{7}\] After the \(\beta\)-type Kirby move, the circle \(L_{1}\) is unchanged, while \(L_{2}\) combines with \(L_{1}\) to form a new circle \(\tilde{L}_{2}=L_{2}\pm L_{1}\). Using (7), one can compute the linking numbers between the new cycles: \[L_{1}\cdot L_{1}=r\,,\ \ L_{1}\cdot\widetilde{L}_{2}=\widetilde{L}_{2}\cdot L_{1}=k\pm r\,,\ \ \tilde{L}_{2}\cdot\widetilde{L}_{2}=r+s\pm 2k\,. \tag{8}\] These changes are assigned on the graphs in (6). It is hard to physically understand this type of Kirby move, although it can partly be interpreted as integrating out gauge nodes. One can check that if one integrates out the node \(U(1)_{r}\), then the two graphs in (6) reduce to the same one: \[\text{[plumbing graph]}\tag{9}\] The move generalizes to \(\widetilde{L}_{2}=L_{2}+nL_{1}\) with winding number \(n\in\mathbb{Z}\), together with the corresponding plumbing graphs (10)-(15). From another perspective, this move can be viewed as iteratively applying the original \(\beta\)-Kirby move \(n\) times. Note that the reverse process from right to left is also a \(\beta\)-Kirby move, as \(n\) can be any negative integer.
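The bookkeeping of Eqs. (7)-(8) is a unimodular change of basis on the linking matrix, which is easy to automate; a minimal sketch assuming NumPy (the helper name handle_slide is ours):

```python
import numpy as np

def handle_slide(K, i, j, n=1):
    """beta-Kirby move (handle-slide): L_j -> L_j + n*L_i.
    K is the symmetric linking matrix with framing numbers on the diagonal;
    the move acts as a congruence K -> E K E^T with E unimodular."""
    E = np.eye(len(K), dtype=int)
    E[j, i] = n
    return E @ K @ E.T

# Eq. (7) data: L1.L1 = r, L1.L2 = k, L2.L2 = s, here with r, k, s = 2, 1, 3
K = np.array([[2, 1],
              [1, 3]])
print(handle_slide(K, 0, 1))   # [[2, 3], [3, 7]] = [[r, k+r], [k+r, r+s+2k]], Eq. (8)
```

With general \(n\) the same congruence gives \(L_{1}\cdot\widetilde{L}_{2}=k+nr\) and \(\widetilde{L}_{2}\cdot\widetilde{L}_{2}=s+2nk+n^{2}r\), the iterated move described above.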
There is a special case: when \(k=0\) and \(n=1\), the \(\alpha\)-Kirby move and \(\beta\)-Kirby move can be directly related, \(K_{\alpha}\simeq K_{\beta}\): \[\{L,L_{1}\}\ \xrightarrow{ST_{\beta}}\ \{L\,,L+L_{1}\}\,, \tag{16}\] for which \(\{L^{2}=+1\,,L_{1}^{2}=k\,,L\cdot L_{1}=0\}\). During the \(ST_{\beta}\)-move, the matter node is always on \(L\). The \(ST_{\beta}\)-move is a special case of (12). By comparing (13) and (15), one can see that the \(ST_{\beta}\)-move is not an equivalent operation, since the decorated node \(\bullet_{+1}-\blacksquare\) cannot be freely removed once a matter node is attached. \[\text{[plumbing graphs]}\] Combining (19) and (20), one can see that \((ST_{\beta})^{q}\) introduces a charge-\(q\) matter to \(\bullet_{k}\): \[\text{[plumbing graph]}\tag{21}\] This charge is the multiplicity of the circle \(L\) in \(\widetilde{L}_{1}\), and is also the winding number between \(\widetilde{L}\) and \(\widetilde{L}_{1}\). One can also couple the matter node to many gauge nodes to form a matter with charge \(q_{i}\) under each \(U(1)_{k_{i}}\). For instance, the bifundamental matter can be introduced by the \(ST_{\beta}\)-moves (handle-slides) below: \[\text{[plumbing graphs]}\tag{22}\] where we have assigned effective CS levels. The recombined circles are \[\widetilde{L}=L\,,\ \ \widetilde{L}_{1}=L_{1}+q_{1}L\,,\ \ \widetilde{L}_{2}=L_{2}+q_{2}L\,. \tag{23}\] In addition, one can add many matter nodes using handle-slides (\(ST_{\beta}\)-moves). Recall that introducing or removing \(\pm 1\) gauge nodes \(\bullet_{\pm 1}\) does not change the three-manifold. However, in the presence of matter nodes on \(\bullet_{\pm 1}-\blacksquare\), this property is somehow broken, since bare CS levels \(k_{i}\) are lifted to effective CS levels \(k_{i}^{\text{eff}}\). Namely, bare CS levels are shifted by the matter node. This looks strange, but can be understood through 3d brane webs; see [13].

From the above examples, one can see that both types of Kirby moves have physical interpretations. One can conjecture that the matter node should be some kind of circle on three-manifolds, because the charge \(q_{i}\) is surprisingly interpreted as the winding number between the matter circle and the gauge circles \(\bullet_{k_{i}}\). Since the matter node can be viewed as \(\bullet_{+1}-\blacksquare\) through the mirror duality (2), matter circles can indirectly use this gauge node \(\bullet_{+1}\) to link gauge nodes. In the following sections, and in particular section 4.4, we will show that matter circles do indeed exist and are given by intersections between Lagrangian defects and three-manifolds, as is illustrated in Figure 2.

**Effective and bare \(ST\)-moves.** In this note, we only consider circles with integral framing and linking numbers, and hence they are effective CS levels, while in gauge theory analysis, one can also use bare CS levels and then count the corrections from matters. At this stage, we do not clearly know how to geometrically describe bare CS levels (bare linking numbers), but we can mention some differences below. The above examples in this section have shown that the handle-slides (\(\beta\)-Kirby moves) work and match with effective CS levels of 3d theories.
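Using the same congruence as in the sketch above, iterating the slide of (16) \(q\) times reproduces the winding-number charge of (21); a short illustration (again assuming NumPy):

```python
import numpy as np

def handle_slide(K, i, j, n=1):
    # beta-Kirby move as a congruence K -> E K E^T, with L_j -> L_j + n*L_i
    E = np.eye(len(K), dtype=int)
    E[j, i] = n
    return E @ K @ E.T

q, k = 3, 2
K = np.array([[1, 0],        # data of (16): L.L = +1, L1.L1 = k, L.L1 = 0
              [0, k]])
# Sliding L1 over L q times: winding number q appears off-diagonal,
# and the framing of L1 shifts from k to k + q^2.
print(handle_slide(K, 0, 1, n=q))   # [[1, 3], [3, 11]]
```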
If using bare CS levels, we think it is not easy to see how the corrections from matters emerge. However, both the effective version and the bare version have benefits and drawbacks. For the bare version of \(ST\)-moves shown in (4), one can perform \(ST_{\alpha}\)-moves recursively to form the chain \(\bullet_{k\pm q^{2}/2}-\bullet_{\pm 1}-\cdots-\bullet_{\pm 1}-\blacksquare\), although finally all \(\bullet_{\pm 1}\) can be integrated out, and only the two types of moves shown in (4) are left. However, one cannot consistently do that for the effective version shown in (20). For the effective version, \(ST_{\beta}\)-moves can be interpreted geometrically as handle-slides, and performing handle-slides many times just means increasing charges. Therefore, the meanings of repeating \(ST\)-moves for the bare version and the effective version are different.

### 1-form symmetries

There are 1-form symmetries for 3d \(\mathcal{N}=2\) theories. In this section, we first compute the 1-form symmetries that can be read off from effective Chern-Simons levels and then check if Kirby moves preserve this symmetry. For related works, see e.g. [25; 26]. Using the Smith decomposition, the effective CS levels can be diagonalized: \[K^{\text{eff}}=U\Lambda V\,, \tag{24}\] where \(\Lambda=\text{diag}(\Lambda_{1},\Lambda_{2}\,,\cdots,\Lambda_{n})\) is the diagonal matrix, and then the 1-form symmetry reads \[G^{1}=\oplus_{i}\mathbb{Z}_{\Lambda_{i}}\,. \tag{25}\] Dualities should preserve 1-form symmetries, but the equivalence of 1-form symmetries does not imply duality. For the \(\beta\)-Kirby moves in (10), one can check that the 1-form symmetry is invariant and is independent of the winding number \(n\), although in the presence of matter nodes, \(ST_{\beta}\) is not a duality. When \(r=\pm 1\) and for any \(n\in\mathbb{Z}\), \(G^{1}=\mathbb{Z}\oplus\mathbb{Z}_{s-k^{2}}\), which is consistent with (9). The handle-slide of the plumbing graph in (21) gives \(G^{1}=\mathbb{Z}\oplus\mathbb{Z}_{k}\), where the \(\mathbb{Z}\) is from \(\bullet_{+1}\). Note that when \(r=\pm 1\), \(\alpha\)-Kirby moves and \(\beta\)-Kirby moves are equivalent, which means that a circle with framing number \(\pm 1\) can be freely removed from plumbing graphs and \(G^{1}\) does not change. In addition, for the \(\beta\)-Kirby moves (10), if \(r=0,k=0\), then \(G^{1}=\mathbb{Z}_{s}\), and similarly, if \(s=0,k=0\), then \(G^{1}=\mathbb{Z}_{r}\).

From the above computations, we can draw the following conclusion. If the matter is not present, then the original \(\alpha\)- and generic \(\beta\)-Kirby moves preserve the 1-form symmetry. In the presence of matter nodes, even though \(\beta\)-Kirby moves are not dualities on the level of partition functions, 1-form symmetries still match. Moreover, one can also check more generic combinations of cycles \(\{L_{i}\}\to\{\widetilde{L}_{i}\}\) given by handle-sliding the node \(\bullet_{\pm 1}\), such as (22). For handle-slid graphs, 1-form symmetries usually take the form \(\mathbb{Z}\oplus\mathbb{Z}_{*}\), where the \(\mathbb{Z}\) comes from the decorated gauge node \(\bullet_{\pm 1}-\blacksquare\). Note that the charges \(q_{i}\) of the matter node do not explicitly contribute to the 1-form symmetry during the process of coupling them to gauge nodes, since their corrections are absorbed in effective CS levels.
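Eqs. (24)-(25) can be evaluated mechanically; a minimal sketch assuming SymPy (whose smith_normal_form lives in sympy.matrices.normalforms in recent versions), with the conventions \(\mathbb{Z}_{0}=\mathbb{Z}\) and trivial \(\mathbb{Z}_{1}\) factors dropped:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def one_form_symmetry(K_eff):
    """1-form symmetry G^1 (Eq. (25)) of an integral effective CS matrix,
    read off from the Smith decomposition K_eff = U*Lambda*V (Eq. (24))."""
    L = smith_normal_form(Matrix(K_eff), domain=ZZ)
    factors = [L[i, i] for i in range(min(L.shape))]
    # Z_0 = Z (a zero invariant factor gives a free factor); Z_1 is trivial
    return ["Z" if d == 0 else f"Z_{abs(d)}" for d in factors if abs(d) != 1]

# Example: the linking matrix [[r, k], [k, s]] of Eq. (7) with r=2, k=1, s=2
print(one_form_symmetry([[2, 1], [1, 2]]))   # ['Z_3']: G^1 = Z_3
```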
#### SQED mirror duality.

It is interesting to consider generic mirror dualities, such as the well-known one (theory A and theory B, respectively) found in [6]: \[U(1)_{-N_{f}/2}-[N_{f}]\ \longleftrightarrow\ [1]-U(1)-U(1)-\cdots-U(1)-[1] \tag{26}\] where theory B has bifundamental chiral multiplets between gauge groups, and the bare CS levels are \(k_{ij}=\delta_{ij}-\frac{1}{2}\delta_{i,j-1}-\frac{1}{2}\delta_{i-1,j}\) with \(i,j=1,\ldots,N_{f}-1\). Theory A has a vanishing effective CS level, while those of theory B are \(k^{\text{eff}}_{11}=k^{\text{eff}}_{N_{f}-1,N_{f}-1}=1\,,k^{\text{eff}}_{i,i\pm 1}=1\,,k^{\text{eff}}_{ii}=2\). Note that this mirror duality can also be obtained by decoupling half of the bifundamental matters from its mother 3d \(\mathcal{N}=4\) SQED duality. The 1-form symmetry for theory B is \(G^{1}=\mathbb{Z}^{N_{f}-2}\). One can expect that theory A also has this 1-form symmetry, as this mirror pair can be related through \(\alpha\)-Kirby moves [1].

## 3 Defect M5-branes on Lens spaces

There are various promising string configurations for constructing 3d theories, such as [27; 28; 29]. These configurations should look locally similar or dual to each other. In this section, we consider M5-branes and 3d brane webs, and show that defect M5-branes are dual to D5-branes that could engineer matters.

**Ooguri-Vafa defects.** The M5-brane configuration is the topologically twisted compactification of the 6d \((2,0)\) theory on three-manifolds, which is known as the DGG/GPV construction [14; 15]. The number of M5-branes on the three-manifold is the rank of the non-abelian gauge group. In this note, we only consider abelian theories, and hence only a single M5-brane is wrapped on the three-manifold \(M_{3}\). M5-branes can be reduced to IIA string theory and then dualized to 3d brane webs in type-IIB string theory. From the 3d brane webs, it is easier to identify the theory and read off the matter content. Wrapping M5-branes on Lens spaces \(L(k,1)\) has been considered in [15; 20; 32; 33]. However, in the previous work, the chiral multiplets associated with matter nodes had not been found yet. In this section, we try to solve this problem. To engineer matters, one can first recall their common features. Chiral multiplets and hypermultiplets are usually associated with flavor symmetries that are given by some non-compact manifolds. One can naively guess that three-manifolds should have boundaries, or external non-compact manifolds. The Ooguri-Vafa (OV) construction of Wilson loops in Chern-Simons theories tells us that Lagrangian M5-branes intersecting the three-sphere along knots \(K\), namely \(L_{K}\cap S^{3}=K\), can be introduced. Inspired by this, we can consider the codimension-two Lagrangian defect \(L_{\circ}\) that intersects the Lens space along a circle, \(L_{\circ}\cap L(k,1)=\bigcirc\). To check this defect candidate, one can consider M5-brane configurations and see if the result is consistent with 3d brane webs, since it is well known that chiral multiplets are given by D5-branes, and the simple theory \(\bullet_{k}-\blacksquare\) (namely \(U(1)_{k}+1\mathbf{F}\)) can be engineered by 3d brane webs in IIB string theory; see e.g. [30; 31; 13].

**Lens spaces in M-theory.** Let us first review the Lens spaces in the spacetime of M-theory to give the background for discussing defects.
The 11-dimensional spacetime of M-theory is \[\mathbb{R}^{2}_{\epsilon}\times S^{1}\times T^{*}L(k,1)\times\mathbb{R}^{2}\,, \tag{3.1}\] where the 3d theory lives on the sub-spacetime \(\mathbb{R}^{2}_{\epsilon}\times S^{1}\) and \(\epsilon\) is the Omega deformation parameter, which can be viewed as the mass parameter for the adjoint chiral matter that descends from the vector multiplet of 3d \(\mathcal{N}=4\) theories. \(T^{*}L(k,1)\) is the cotangent bundle \(N\hookrightarrow T^{*}L(k,1)\to L(k,1)\) with \(N=\mathbb{R}^{3}\) as the fiber6. The special direction is the 11-th direction that arises in the strong coupling limit of IIA string theory, namely M-theory/\(S^{1}_{\sharp}=\text{IIA}\). In the above spacetime, we should set \(S^{1}_{\sharp}\subset L(k,1)\), since Lens spaces have the nice property that they can be obtained by gluing the boundary tori of two solid tori \(D^{2}\times S^{1}\). If we rewrite \(D^{2}=S^{1}\times I\), then Lens spaces are realized as torus bundles over the interval \(I\), namely \(T^{2}\hookrightarrow L(k,1)\to I\), where at each endpoint of the interval \(I\) the meridian of a solid torus shrinks and only the longitude \(S^{1}=\{*\}\times S^{1}\subset D^{2}\times S^{1}\) survives. In particular, for the Lens space \(L(0,1)=S^{2}\times S^{1}=T^{2}\times I\), the longitudes at both endpoints are the same.

Footnote 6: Note that \(N=\mathbb{R}^{3}=S^{2}\times\mathbb{R}_{+}\), where \(S^{2}\) shrinks at the endpoint \(\{0\}\in\mathbb{R}_{+}\). In addition, \(N=\mathbb{C}\times\mathbb{R}\). Thus an M2-brane as a disc could be inserted in \(N\) if the Lagrangian M5-brane is present.

The theories with gauge group \(U(N_{c})_{k}\) are given by wrapping \(N_{c}\) M5-branes on \(\mathbb{R}^{2}_{\epsilon}\times S^{1}\times L(k,1)\). The rotation symmetry on \(N\times\mathbb{R}^{2}\) gives \(U(1)_{R}\times U(1)_{N_{\perp}}\), where the first factor is the R-symmetry of 3d \(\mathcal{N}=2\) theories. In this note, we set \(N_{c}=1\) and wrap a single M5-brane once on three-manifolds to engineer abelian theories7. The Lagrangian defect \(L_{\circ}\) should be along the fiber of the cotangent bundle \(T^{*}L(k,1)\), and intersects the Lens space on a circle, namely \(L_{\circ}=\mathbb{R}^{2}\times\bigcirc\subset T^{*}L(k,1)\) with \(\mathbb{R}^{2}\subset N\) and \(\bigcirc\subset L(k,1)\). Since cotangent bundles of three-manifolds are always trivial vector bundles, there is no need to worry about the extension of defects on the fiber8. However, it is not easy to imagine how the matter circles \(\bigcirc\) are inserted in Lens spaces. In Figure 3, we illustrate a possible position of defects, in which we show that the gauge group is realized by the M5-brane wrapping the whole Lens space.

Footnote 7: There are differences between the color number \(N_{c}\) and the winding number.

Footnote 8: We would like to thank Kewei Li for discussion on this point.

**M-theory/IIB duality.** Let us recall the tool, which is the duality between M-theory and IIB string theory. The duality applies on the spacetime \[\mathbb{R}^{2}_{12}\times S^{1}_{0}\times N_{345}\times L(k,1)_{69\sharp}\times\mathbb{R}^{2}_{78}\,, \tag{3.2}\] where we have assigned coordinates to all directions. The elliptic fiber bundle reads \(T^{2}_{9\sharp}\hookrightarrow L(k,1)\to I_{6}\), where the torus fiber \(T^{2}_{9\sharp}=S^{1}_{9}\times S^{1}_{\sharp}\) has longitude \(S^{1}_{9}\) and meridian \(S^{1}_{\sharp}\).
Figure 3: The Lens space \(L(k,1)\) is the torus bundle fibered over an interval. Each half of this bundle is a solid torus \(T^{2}\times I_{L,R}\). The M5-brane on the whole Lens space gives the gauge group \(U(1)_{k}\) and the defect M5-brane leads to matter and the flavor symmetry \(U(1)\). In this figure, we put the defect M5-brane on the left endpoint; it could move to the right endpoint; see Figure 4. In addition, the red circle as the intersection is the matter circle.

We remind the reader that the identification of meridian and longitude is important and cannot be switched randomly. The M/IIB duality is a combination of T- and S-duality in string theories. Basically, the T-duality is between Dp-branes: \[\text{Dp wrap on }x_{i}\;\;\xleftrightarrow{\text{T-dual on }x_{i}}\;\;\text{D(p-1) at a point on }x_{i}\,, \tag{3.3}\] where \(x_{i}\) is a direction along which the Dp-brane extends. M-theory and IIB are related through the torus \(\text{M}/T_{9\sharp}^{2}\simeq\text{IIB}/S_{9}^{1}\). More specifically, \[\text{M-theory}\;\xrightarrow{\text{\tiny shrink }S_{\sharp}^{1}}\;\;\text{IIA}\;\xrightarrow{\text{\tiny T-dual along }x^{9}}\;\;\text{IIB}\,. \tag{3.4}\] In addition, we need to mention that because of the relation below, \[\frac{l_{p}^{3}}{R_{9}R_{\sharp}}=\tilde{R}_{9}\,, \tag{3.5}\] the radius of the dual circle in IIB is infinitely large, \(\tilde{S}_{9}^{1}\to\infty\), as the area of the torus \(\text{Area}\big{(}T_{9\sharp}^{2}\big{)}\simeq R_{9}R_{\sharp}\to 0\) vanishes at both endpoints of the interval \(I_{6}\).

### Defect M5-branes

Using M-theory/IIB string duality and following the discussion in [30], one can find that defect M5-branes lead to \((p,q)\)5-branes in IIB string theory, depending on whether the M5-branes take the M-theory circle \(S_{\sharp}^{1}\) or \(S_{9}^{1}\). One can first study brane configurations at one endpoint of the interval \(I_{6}\subset L(k,1)\). We fix the brane configuration and present it in Table 2. The branes at the other endpoint can be obtained by redefining coordinates through gluing maps. In short, we find the Lagrangian defects should wrap the longitude \(S_{9}^{1}\) of \(T_{9\sharp}^{2}\) to engineer matters, because only these defect M5-branes are dual to flavor D5-branes, and hence each defect M5-brane corresponds to a hypermultiplet in the fundamental representation, which contains two chiral matters \(\mathbf{F}\) and \(\mathbf{AF}\) with opposite charges \(\pm q\). Interestingly, we notice that the winding number of the defect M5-brane on \(S_{9}^{1}\) is the charge \(q\) of the chiral multiplet, which matches the conjecture we drew in section 2.3. In the following, we give a detailed analysis.

**From Lens spaces to 3d brane webs.** To begin with, the gauge group is engineered by a single M5-brane wrapping the whole Lens space \(L(k,1)\). It reduces to a D4-brane after shrinking the 11-th circle \(S_{\sharp}^{1}\), and T-duality along \(S_{9}^{1}\) of the torus \(T_{9\sharp}^{2}\) turns it into a D3-brane in IIB string theory, which gives rise to the gauge group \(U(1)\). In short, \(\text{M5}(1269\sharp)\to\text{D4}(1269)\to\text{D3}(126)\). This process is presented in Table 2. The bare CS level of this gauge group \(U(1)_{k}\) is determined by the relative angle between two NS5-branes.
These NS5-branes come from the reduction of D6-branes at the two endpoints of the interval \(I_{6}\subset L(k,1)\), which are magnetic monopoles of D0-branes that are Kaluza-Klein modes of the graviton along \(S_{\sharp}^{1}\), and hence D6-branes emerge only at points where \(S_{\sharp}^{1}\) shrinks. For Lens spaces, \(S_{\sharp}^{1}\) shrinks only at the endpoints of the interval \(I_{6}\), and hence D6-branes emerge there; see [32] for more discussion. This property of Lens spaces is nice, as it naturally engineers NS5-branes. Moreover, since D6-branes lead to D5-branes in IIB string theory, S-duality in IIB must be applied to turn these D5-branes into NS5-branes; see Table 2. In short, the process of engineering NS5-branes is \(\text{D}0\rightarrow\text{D6}(\text{IIA})\rightarrow\text{D5}(\text{IIB})\xrightarrow{S}\text{NS5}(\text{IIB})\). At the other endpoint of \(I_{6}\), another NS5\({}^{\prime}\)-brane emerges, which differs from the NS5 by a relative angle \(\theta\), and the Chern-Simons level is \(k=\tan\theta\), which is also the framing number of \(L(k,1)\).

The above construction gives the 3d brane web \(\text{NS5}-\text{D3}-\text{NS5}^{\prime}\). We need to check the various known brane configurations in [30] to see if this construction is correct. In [30], there are three ways to break \(\mathcal{N}=4\) to \(\mathcal{N}=2\) by turning on various relative angles. Fortunately, the 3d brane web we just obtained from the Lens space is one of them, namely the case that requires the condition \(\rho=\theta\), where \(\rho\) and \(\theta\) are the relative angles on the planes \(x_{9\sharp}\) and \(x_{59}\) respectively. Because of this particular condition, there is no obvious difference between NS5 and D5, as they only differ by an angle on the plane \(x_{59}\). This special case also gives the 3d brane webs that can be obtained by Higgsing 5d \(\mathcal{N}=1\) brane webs; see e.g. [13; 31; 34]. In IIB string theory, we should have \(\theta=\pi/2\) to distinguish NS5 and D5, which means the NS5 is along the vertical direction and the D5 is along the horizontal direction on the plane \(x_{59}\). Therefore, the NS5\({}^{\prime}\)-brane should be denoted more precisely as a \((k,1)\) 5-brane9.

Footnote 9: For the \((p,q)\)-brane, \(p\) is the electric charge and \(q\) is the magnetic charge.

Let us consider the defect M5-brane. The defect M5\({}^{\prime\prime}\)-brane on \(\mathbb{R}^{2}_{45}\times S^{1}_{9}\subset N_{345}\times L(k,1)_{69\sharp}\) leads to a D5-brane via M/IIB duality and S-duality. This D5-brane is responsible for the flavor symmetry and introduces a hypermultiplet. For \(L(0,1)\), what we get is an \(\mathcal{N}=4\) theory \(U(1)_{0}\) with a hypermultiplet. If \(k\neq 0\), then \(\mathcal{N}=4\) breaks to \(\mathcal{N}=2\), and the hypermultiplet can be viewed as a fundamental (\(\mathbf{F}\)) and an antifundamental chiral multiplet (\(\mathbf{AF}\)). In the plumbing graphs of this note, all \(\mathbf{AF}\) are decoupled, as in (4).
The evolution of the defect brane is
\[\text{M5}^{\prime\prime}(12349)\to\text{NS5}(12349)(\text{IIA})\to\text{NS5}(12349)(\text{IIB})\xrightarrow{S}\text{D5}(12349)(\text{IIB})\,.\]
For this defect, the intersection is the longitude, \(\text{M5}\cap\text{M5}^{\prime\prime}=S^{1}_{9}\). This defect brane is suitable for engineering matter, since the M2-branes also agree with strings in 3d brane webs.

Now we can try to identify the mass and FI parameters. Recall that the length of the F1 string between the D3 and the D5 is the real mass parameter, while that of the D1-brane between the D3 and the NS5 is the FI parameter. F1 and D1 are exchanged by S-duality. Since the defect M5 and the bulk M5 are separated along the direction \(x_{5}\), the M2-brane should stretch between these two M5-branes; one of its boundaries is the intersection \(\partial(\text{M2})=\text{M5}\cap\text{M5}^{\prime\prime}=S^{1}_{9}\), and the other boundary is the extension of \(S^{1}_{9}\) into the fiber. The M2-brane is therefore a cylinder along the directions \(x_{59_{\text{A}}}\), namely \(I_{5}\times S^{1}_{9}\). Under the dualities, the M2-brane descends as \(\text{M2}(59)\to\text{D2}(59)\xrightarrow{T_{9}}\text{D1}(5)\xrightarrow{S}\text{F1}(5)\). The mass parameter, as the coordinate \(m=x_{5}\), takes values in \((-\infty,+\infty)\). If the matter is massless, then \(m=x_{5}=0\). Note that when \(S^{1}_{9}\to 0\), the matter has to be massless. We remind the reader that because of the relation (3.5), at a generic point on the interval \(I_{6}\) the fiber torus has a non-vanishing area; therefore, the D5-brane introduced by the defect is a loop on \(S^{1}_{9_{\text{B}}}\) with finite radius. Only when this defect is moved to the endpoints does the radius of this loop become infinitely large.

One can also consider the defect M5\({}^{\prime}\) along the meridian \(S^{1}_{\sharp}\). However, if we perform S-duality, then \(\text{M5}^{\prime}(1234\sharp)\to\text{D5}^{\prime}(12349)\xrightarrow{S}\text{NS5}(12349)\), which unfortunately is not a D5, and hence this M5\({}^{\prime}\) is not very suitable for engineering matter, although there is an exception; see (3.9). The M2-brane between the bulk M5 and the defect M5\({}^{\prime}\) is finally turned into a D1-brane: \(\text{M2}(5\sharp)\to\text{F1}(5)\xrightarrow{T_{9}}\text{F1}(5)\xrightarrow{S}\text{D1}(5)\). The topology of this M2-brane is the cylinder \(I_{5}\times S^{1}_{\sharp}\), and hence the distance \(I_{5}\) engineers the FI parameter. Note that at the endpoints of \(I_{6}\), \(S^{1}_{\sharp}\) shrinks and hence the FI parameter always vanishes.

**FI parameters.** The FI parameter should in principle be independent of the defect M5-brane, as it is given by the D1-string stretching between the D3 and the NS5. Unfortunately, we have not clearly identified the FI parameter, but only found two candidates. Both candidates suggest that \(S^{1}_{9}\) is responsible for the FI parameter. In M-theory, the FI parameter is usually known as the volume of the M2-brane ending on the bulk M5, see e.g. [22]. For Lens spaces, there is naturally a circle \(S^{1}_{9}\) which does not shrink. For this candidate, the topology of the M2-brane is a disc with \(S^{1}_{9}\) as its boundary, and the bulk of the M2 extends into the Lens space. We have another candidate for the FI parameter. A massless M2-brane wrapping \(T^{2}_{9\sharp}\) can be inserted in the worldvolume of the bulk M5-brane. Note that this M2-brane does not have a boundary. However, because of the existence of the D0-brane, there should be a subtle interaction that traps this M2-brane in the worldvolume of the bulk M5-brane.
This M2-brane, \(\text{M2}(09_{\text{A}}\sharp)\), leads to \(\text{F1}(09_{\text{A}})\) in IIA and is then T-dual to \(\text{F1}(09_{\text{B}})\) in IIB. After S-duality, this F1 becomes a \(\text{D1}(9_{\text{B}})\). Once again, because of the M/IIB duality, the radius of \(S^{1}_{9_{\text{B}}}\) opens up and hence is infinitely large. This \(\text{D1}(9_{\text{B}})\) would be appropriate for engineering the FI parameter, because in 3d brane webs the D3 and the NS5 are located at two separate points along the direction \(x_{9_{\text{B}}}\), and the distance between them is the FI parameter. We can also use IIA to track the D3 and NS5. The M5-brane system reduces to \(\text{D4}-\text{D6}\) with the intersection being \(S^{1}_{9_{\text{A}}}\) wrapped by \(\text{F1}(9_{\text{A}})\), namely \(\text{D4}\cap\text{D6}=\text{F1}\). After the T-duality along \(S^{1}_{9_{\text{A}}}\), \(\text{D4}\rightarrow\text{D3}\) and \(\text{D6}\rightarrow\text{NS5}\), and the D3 can be deformed away from the NS5 along \(x_{9_{\text{B}}}\). Note that this F1 string is given by the M2-brane in the worldvolume of the M5-brane, so it is a self-dual string.

**Charges and (p,q)-branes.** In [30], it is discussed that a generic defect M5-brane can wrap \(q\) times on \(S^{1}_{9}\). This M5-brane leads to a charge-\(q\) NS5-brane, which under S-duality becomes a charge-\(q\) D5-brane. This charge-\(q\) D5-brane suggests a 3d brane web \(\text{D3}(6)-\text{NS5}(5)-\text{D5}^{q}(9)\), which engineers a charge-\(q\) matter. After S-duality the D1(5) becomes F1(5), which provides \(q\) Higgs vacua for the D3 to end on the charge-\(q\) D5-brane. These \(q\) vacua are points evenly distributed on \(S^{1}_{\sharp}\), which separate the charge-\(q\) D5-brane into \(q\) fractions, as shown in Figure 5. The relative real mass parameters between D5-brane fractions are \(m_{i}\sim S^{1}_{\sharp}/q\). Since \(S^{1}_{\sharp}\) shrinks at the endpoints of \(I_{6}\), these fractions coincide into an overlapping charge-\(q\) D5-brane at the endpoints. There is still an overall mass parameter \(m_{0}\) left, which is the length of the F1(5) string along \(x_{5}\). Moreover, if an M2-brane wraps \(q\) times on \(S^{1}_{9}\) and \(p\) times on \(S^{1}_{\sharp}\), then one gets a \((p,q)\)-string along \(x_{5}\) in IIB. Generically, a defect M5-brane wrapping \((q,p)\) times on \(T^{2}_{9_{\text{A}}\sharp}\) leads to a \((p,q)\)5-brane along \(x_{12349_{\text{B}}}\). Once again, the brane configuration shows that the charges should be winding numbers, which agrees with what we have learned from handle-slides in section 2.

**Gluing maps and charge-\(q\) defects.** The bare CS level \(k\) is engineered by the self-twisting/framing number of the Lens space \(L(k,1)\). Its definition is slightly different from that of linking numbers, as it needs the information of the gluing map between the two solid tori. For later convenience, we denote \(S^{1}_{\sharp}\) and \(S^{1}_{9}\) by the meridian \(\alpha\) and the longitude \(\beta\), which are the gauge circle and the matter circle respectively. These two circles are transformed by the gluing map, which is an element of the mapping class group \(SL(2,\mathbb{Z})\). The gluing map \(f:\,T^{2}\to T^{2}\) is represented as a matrix:
\[\begin{bmatrix}\tilde{\alpha}\\ \tilde{\beta}\end{bmatrix}=\begin{bmatrix}k&1\\ -1&0\end{bmatrix}\cdot\begin{bmatrix}\alpha\\ \beta\end{bmatrix}\,. \tag{3.6}\]
The framing number is defined as \(k:=\beta\cdot\tilde{\alpha}\), which is the linking number between the longitude and the deformed meridian.
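To make the definition concrete, here is a small sketch (assuming the unsigned pairing \(\alpha\cdot\beta=1\), \(\alpha\cdot\alpha=\beta\cdot\beta=0\) used in this note) that applies the gluing matrix of (3.6) and reads off the framing number \(k=\beta\cdot\tilde{\alpha}\):

```python
# Sketch: the gluing map (3.6) acting on (alpha, beta), and the framing
# number k = beta . alpha_tilde.  Images are expanded in the (alpha, beta)
# basis; lk is the unsigned pairing with alpha.beta = beta.alpha = 1 and
# vanishing self-pairings, as used in the text.  k = 3 is an example value.

k = 3
glue = [[k, 1],
        [-1, 0]]                      # element of SL(2,Z): det = +1
assert glue[0][0]*glue[1][1] - glue[0][1]*glue[1][0] == 1

def lk(x, y):
    """Pairing of x = a*alpha + b*beta with y = c*alpha + d*beta."""
    (a, b), (c, d) = x, y
    return a*d + b*c

alpha_tilde, beta_tilde = glue        # rows are the images of alpha and beta
print(lk((0, 1), alpha_tilde))        # beta . alpha_tilde = 3 = k
```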
However, in terms of the circle/surgery representation of Lens spaces in section 2.3, in particular the \(\beta\)-type Kirby moves, it is more natural to use the self-intersection notation, namely \(L\cdot L:=\beta\cdot\tilde{\beta}=k\). These two definitions match, as surgery is gluing a solid torus back into a torus hole, so the meridian and longitude should be switched. The charge-\(q\) matter circle is the multiple winding of the charge-\(1\) matter circle, so it is given by \(\beta_{q}=q\,\beta\). One can check that the intersection/linking numbers between the gauge and matter circles are not changed by the gluing map, namely \(\alpha_{q}\cdot\beta=\tilde{\alpha}_{q}\cdot\tilde{\beta}=q\), since \(\alpha\cdot\alpha=0\,,\ \beta\cdot\beta=0\,,\ \alpha\cdot\beta=1\).

**3d theory on \(S^{1}\times\text{annulus}\).** One can consider a trivial three-manifold \(T^{2}\times I\), where the torus never degenerates along the interval \(I\). This geometry can be obtained from \(S^{2}\times S^{1}\) by truncating the two poles of \(S^{2}\). For this geometry, since no \(S^{1}\) degenerates, one cannot get D6-branes. Fortunately, one can add defect M5-branes on the meridian and longitude of \(T^{2}\) to get NS5-branes and D5-branes respectively. This geometry is useful, as it can be used to connect Lens spaces. We will come back to this geometry in section 4.3.

### Brane webs on torus

**Movement of D5-branes.** The movement of D5-branes in [13] can be described by the gluing map in (3.6). At the other endpoint of \(I_{6}\), the circles \(S^{1}_{9}\) and \(S^{1}_{\sharp}\) are mapped to \(\tilde{S}^{1}_{9}=-S^{1}_{9}\) and \(\tilde{S}^{1}_{\sharp}=kS^{1}_{9}+S^{1}_{\sharp}\), so \(S^{1}_{9}\) is reversed and \(S^{1}_{\sharp}\) is twisted. This dictates the transformations of the various 5-branes. The bulk M5-brane on the Lens space \(L(k,1)\) leads to the \(\text{NS5}-\text{D3}-(k,1)5\) brane web, where the NS5 and the \((k,1)5\)-brane differ by a relative angle \(k=\tan\theta\). We can undo the S-duality to return to the \(\text{D5}-\text{D3}-(1,k)5\) brane web. Note that in addition to being generated from a D6-brane, the \((1,k)5\)-brane can also arise from a defect M5-brane wrapping \(\tilde{S}^{1}_{\sharp}=kS^{1}_{9}+S^{1}_{\sharp}\); see Table 2.

The flavor D5-brane given by the defect M5 is almost invariant under the gluing map, as \(\tilde{S}^{1}_{9}=-S^{1}_{9}\) only reverses the charge. This means that if we move a D5-brane from one endpoint of \(I_{6}\) to the other, we still get the same D5-brane, only with its orientation reversed; namely, the signs of the charges of the 3d chiral multiplets are flipped. This is consistent with the known fact from 3d brane webs in IIB string theory [13]. We use Figure 4 to illustrate this parallel movement of the defect M5-branes/D5-branes. Before and after the move, the defect M5 always gives rise to the same pair of chiral multiplets. Note that the red line (D5-brane) gives a hypermultiplet, in other words a pair of chiral multiplets \(1\mathbf{F}+1\mathbf{AF}\), which changes to \(1\mathbf{AF}+1\mathbf{F}\) once it is moved to the other endpoint.

Some Lens spaces should also engineer abelian 3d \(\mathcal{N}=4\) theories, as these differ from 3d \(\mathcal{N}=2\) theories only by turning off the Chern-Simons levels. We can let the bare Chern-Simons level vanish; then \(\text{NS5}^{\prime}=\text{NS5}\) and the Lens space is just \(L(0,1)=S^{1}\times S^{2}\).
To get the theory \(\bullet_{k}-\blacksquare\), one needs to decouple the antifundamental chiral multiplet \(\mathbf{AF}\) by sending its mass \(m_{a}\to-\infty\). More explicitly, since the effective Chern-Simons level is \(k_{\rm eff}=k+\big(N_{f}\,{\rm sign}(m_{f})-N_{a}\,{\rm sign}(m_{a})\big)/2\) for the theory \(U(1)_{0}+N_{f}\mathbf{F}+N_{a}\mathbf{AF}\), one should let the two matters in the theory \(U(1)_{0}+1\mathbf{F}+1\mathbf{AF}\) have opposite signs, namely \({\rm sign}(m_{f})\,{\rm sign}(m_{a})<0\), such that eventually \(k_{\rm eff}=\pm 1\), producing the basic building block \(U(1)_{\pm 1}+1\mathbf{F}\), which is required for constructing plumbing theories through \(ST\)-moves. It is already known how to perform the decoupling and give opposite signs to both matters in 3d brane webs in IIB, while in M-theory it is unclear. The patterns of various decouplings are discussed in [13]. Below, we present the 3d brane web for \(U(1)_{0}+1\mathbf{F}+1\mathbf{AF}\) and its decoupled brane webs for the plumbing graph \(\bullet_{+1}-\blacksquare\) (the corresponding brane-web figures are omitted here).

A defect M5-brane winding the torus becomes a \((q,p)\) 5-brane. One can take a \(\mathbb{Z}_{q}\) quotient of the torus to reproduce this winding number/charge \(q\). We illustrate this in Figure 5. If we put a defect \((1,1)\) brane, then the quotient turns it into \((q,1)\). Recall that the D6-brane comes from the D0-brane on \(S^{1}_{\sharp}\), which can be represented as a vertical line on the torus. The circles corresponding to an NS5 and a charge-\(q\) D5 are then given by
\[\text{(figure omitted: the D6-brane line on the torus together with the NS5 and charge-$q$ D5 circles)}\]
Correspondingly, the slope of the shrinking circle \(S^{1}_{\sharp}\) wrapped by the D0 should also be shifted, \(S^{1}_{\sharp}\to S^{1}_{\sharp}\pm S^{1}_{9}\); this quantum correction then transforms \(L(0,1)\) into \(L(\pm 1,1)\). It is a geometric transformation when a defect M5 (denoted as \(L_{\bigcirc}\)) is present:
\[S^{1}\times S^{2}\sqcup L^{\prime}_{\bigcirc}\ \xrightarrow{\text{decoupling }\mathbf{AF}}\ S^{3}\sqcup L_{\bigcirc}\,, \tag{3.14}\]
which gives the effective CS levels \(k_{\text{eff}}=\pm 1\).
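As a quick consistency check of this decoupling pattern, here is a minimal sketch (the sign conventions follow the formula just quoted; purely illustrative):

```python
# Sketch: the effective CS level k_eff = k + (N_f sign(m_f) - N_a sign(m_a))/2
# for U(1)_0 + 1F + 1AF, scanning the signs of the two real masses.
# Purely illustrative bookkeeping of the decoupling pattern quoted above.

def k_eff(k, sign_mf, sign_ma, Nf=1, Na=1):
    return k + (Nf * sign_mf - Na * sign_ma) / 2

for sf in (+1, -1):
    for sa in (+1, -1):
        print(f"sign(m_f)={sf:+d}, sign(m_a)={sa:+d}  ->  k_eff={k_eff(0, sf, sa):+.0f}")

# Opposite signs yield k_eff = +1 or -1, i.e. the building block U(1)_{+-1}+1F;
# equal signs yield k_eff = 0.
```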
Moreover, instead of decoupling both chiral multiplets, we can also decouple only the \(\mathbf{AF}\) by taking \(m_{a}\to-\infty\) and keep the \(\mathbf{F}\); however, this does not change \(k_{\text{eff}}\), as \(\mathbf{F}\) is assumed to have a positive mass in the renormalization; see Tong's lectures [36] for a nice discussion. The effective CS levels are naturally selected by plumbing graphs, as we have shown in (2.21). This geometric change caused by decoupling is analogous to the handle-slide, although the latter may contain more information. After the Dehn twist or decoupling, the framing number describing the effective CS level changes significantly, \(\bullet_{k}\to\bullet_{k+q^{2}}\).

### 3d theories from Lens spaces

In this subsection, we look at the theories that can be engineered by Lens spaces. Firstly, one is free to add many defect M5-branes on the Lens space \(L(k,1)\). These M5-branes can only be parallel to each other, as the longitude \(S^{1}_{9}\) is unique for Lens spaces. However, we can distinguish them by turning on different mass parameters \(m_{i}\) and winding numbers \(q_{i}\). Thus Lens spaces with Lagrangian defects can engineer the class of theories11
\[U(1)_{k}+\oplus_{i=1}^{N_{f}}\mathbf{F}_{q_{i}}+\oplus_{i=1}^{N_{a}}\mathbf{AF}_{q^{\prime}_{i}}\,, \tag{3.15}\]
where \(N_{f}\) is the number of defect M5-branes, and the charges \(q_{i}\) are the winding numbers of the defect M5-branes on the matter circle \(S^{1}_{9}\).

Footnote 11: Note that \(\mathbf{AF}\) can be viewed as \(\mathbf{F}\) with the opposite charge.

For instance, one can introduce two matters; the configuration of putting two defects on the same solid torus is shown below (figure omitted: two defect M5-branes on the same solid torus).
If we cut the \(I_{6}\) in the middle, then the left part is a solid torus \(D^{2}\times S^{1}_{9}\), so the transport (movement) of matter circles is the deformation of circles from the core to the bulk of the solid torus. The theory is totally determined by the defects and the gluing map, which fixes the framing number \(k\). The matter circle can be viewed as the flavor D5, and the gauge circle can be viewed as the NS5. Since the NS5 can transport and get twisted, the D3-brane can suspend in between and provides the space for deformation and gluing. This is why we call it the gauge circle. The correspondence between the solid torus and the corresponding part of the 3d brane web is
\[\text{(figure omitted: a solid torus on the left, the corresponding NS5--D3 piece of the 3d brane web on the right)} \tag{3.23}\]
Notice that the D3-brane has a finite length \(I_{6}\), as its gauge coupling is finite. The gauging of the D3-brane is given by gluing two solid tori.

## 4 Surgery constructions of matters

In the last section, we analyzed Lens spaces \(L(k,1)\). However, Lens spaces can only engineer theories with one gauge node \(U(1)_{k}\), and hence are not generic. Generic three-manifolds are constructed by Dehn surgeries on links of circles (plumbing graphs). In this section, we will discuss how to introduce defect M5-branes through Dehn surgeries, and see how they interact with gauge groups.

### Dehn surgeries and \(ST\)-moves

Lens spaces are building blocks of any closed orientable three-manifold, namely
\[M_{3}=L(k_{1},1)\sqcup L(k_{2},1)\sqcup\cdots\sqcup L(k_{n},1)\,, \tag{4.1}\]
for which the circles \(L_{i}\) of the Lens spaces are linked with each other, and the link \(L_{1}\cup L_{2}\cup\cdots\cup L_{n}\) represents \(M_{3}\). Recall that plumbing graphs denote this link by the rules \(L_{i}\rightarrow\bullet_{k_{i}}\) and \(L_{i}\cup L_{j}\rightarrow\bullet_{k_{i}}-\bullet_{k_{j}}\). Equivalently, \(M_{3}\) is defined by Dehn surgeries along these links:
\[M_{3}:=S^{3}\backslash N(L_{1}\cup L_{2}\cup\cdots\cup L_{n}):=\left(S^{3}-\cup_{i=1}^{n}N(L_{i})\right)\ \sqcup_{f_{i}}\ \left(\oplus_{i=1}^{n}D_{i}^{2}\times S^{1}\right)\,. \tag{4.2}\]
The definition of Dehn surgery is illustrated in Figure 1: it is the process of drilling out the neighborhood of the circle \(L_{i}\), namely \(N(L_{i})=D_{i}^{2}\times L_{i}\), and then filling in a solid torus \(D_{i}^{2}\times S^{1}\). The knot complement \(S^{3}-N(L_{i})\) is called the knot exterior, whose boundary is a torus even if \(L_{i}\) is knotted. We need to glue back a solid torus \(D^{2}\times S^{1}\) to each torus hole by the gluing map \(f_{i}\).
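For bookkeeping purposes, such a surgery presentation can be encoded very simply. The sketch below (node names and values are illustrative only) stores framing numbers on the nodes and linking numbers on the edges, and assembles the symmetric matrix of bare and mixed CS levels that the plumbing graph encodes:

```python
# Sketch: a minimal encoding of the surgery data (4.1)-(4.2).  Gauge circles
# L_i carry framing numbers k_i (bare CS levels) and pairwise linking numbers
# k_ij (mixed CS levels); together they fill a symmetric CS matrix.
# Node names and values are illustrative only.

framings = {"L1": 3, "L2": -1}            # plumbing graph  .(3) --- .(-1)
linkings = {("L1", "L2"): 1}              # Hopf link: L1 . L2 = 1

def cs_matrix(framings, linkings):
    nodes = sorted(framings)
    K = [[0] * len(nodes) for _ in nodes]
    for i, a in enumerate(nodes):
        K[i][i] = framings[a]
        for j in range(i + 1, len(nodes)):
            k_ij = linkings.get((a, nodes[j]), 0)
            K[i][j] = K[j][i] = k_ij
    return K

print(cs_matrix(framings, linkings))      # [[3, 1], [1, -1]]
```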
Note that these solid tori \(D_{i}^{2}\times S^{1}\) are independent. Dehn surgery dictates that the gluing map \(f_{i}\) is determined by the meridian \(\alpha_{i}\) of the solid torus and the curve \(J_{i}\) on the boundary of the complement \(S^{3}-N(L_{i})\), namely \(f_{i}(\alpha_{i})=J_{i}\). Therefore, the gauge circle as the meridian \(\alpha_{i}\) is analogous to the circle \(L_{i}\). Moreover, the Dehn surgery for Lens spaces differs from the gluing of two solid tori by the torus switch, which is defined as exchanging meridian and longitude, \(\alpha_{i}\leftrightarrow\beta_{i}\). Therefore the gluing map (3.6) is transposed, and the images of the meridian and longitude of the solid torus \(D^{2}\times S^{1}\) are
\[\begin{bmatrix}\tilde{\alpha}\\ \tilde{\beta}\end{bmatrix}=\begin{bmatrix}k&-1\\ 1&0\end{bmatrix}\cdot\begin{bmatrix}\alpha\\ \beta\end{bmatrix}\,. \tag{4.3}\]
The longitude \(\beta\) represents the circle \(L\), and the Jordan curve is \(J=k\alpha-\beta\). The framing number is defined as \(L\cdot L:=\beta\cdot J=k\).

**\(ST\)-moves and matter circles.** In the last section, using the M-theory/IIB duality, we found that matter nodes on plumbing graphs do indeed exist and should be the matter circles given by defect M5-branes, and that the charge \(q\) of the matter is the winding number between the gauge circle and the matter circle. Therefore, one can draw the link graph below for the simple plumbing theory:
\[\text{(figure omitted: link graphs of gauge circles and matter circles for the simple plumbing theory)} \tag{4.4}\]
The matter circle and the gauge circle should form a Hopf link in each solid torus (3.18). We apologize that the colors of the link graphs and the plumbing graphs do not match. One can add many matter nodes on the same gauge node; then, correspondingly, many matter circles (red circles) should link to the gauge circle (blue circle). Changing the orientation of the red circles flips the signs of the charges.

Note that at this stage the matter circle \(\bigcirc\) is just a circle given by the intersection of the Lagrangian defect with the three-manifold. Thus one cannot apply a Dehn surgery on the matter circle. However, in section 4.4, we will show that the matter circle relates to the three-sphere \(S^{3}_{\infty}\). Because of the mirror duality in (2.2), the matter circle can be equivalently viewed as a decorated gauge circle \(\bullet_{+1}-\blacksquare\), on which one can apply Dehn surgeries. To see the topology, one can represent the \(ST\)-moves by matter circles and gauge circles. For instance, the \(ST_{\alpha}\)-move with \(q=1\) (2.13) is represented as
\[\text{(figure omitted: link graphs before and after the $ST_{\alpha}$-move)} \tag{4.5}\]
which is a non-trivial replacement. In section 4.4, we will geometrically derive this \(ST_{\alpha}\)-move. We draw the matter circle (red) big in the left graph and small in the right graph. This is due to the mirror map: the FI parameter \(\xi\) of the gauge circle \(U(1)\) is mirror dual to the mass parameter \(m\) of the matter \(\mathbf{F}\), and we assume the length of \(\bigcirc\) is the FI parameter.
In addition, it is shown via sphere partition functions of this mirror triality in [1] that the matter in the dual theory \(\bullet_{\pm 1}-\blacksquare\) is almost massless, up to a quantum shift \(q^{*}=e^{i*\epsilon}\). Therefore, in (4.5) we can draw the scales of the matter circles according to the values of the parameters.

According to the Lickorish-Wallace theorem, any closed orientable three-manifold can be obtained by Dehn surgeries along a link in \(S^{3}\), and each component of this link can be an unknot \(\bigcirc\). This means that if a three-manifold is given by surgeries along a knot, then this knot can be equivalently replaced by links of circles using Kirby moves. In principle, all 3d plumbing theories in [1] can be engineered by links and defects. One example has been shown in Figure 2. Here we illustrate a knot example in Figure 6.

In the above, we showed that the fundamental matter circles carried by each solid torus \(D_{i}^{2}\times S^{1}\) can bring matters to plumbing manifolds by surgeries. However, these are just a small sector. There can be matters charged under many gauge nodes, such as bifundamental chiral multiplets, trifundamentals, and so on, depending on how the matter circles are linked with the gauge circles. Multi-charged matters cannot be inherited from the fundamental matters on each Lens space component in (4.1). From the perspective of the \(\beta\)-Kirby moves in (2.22), these matters with many charges are not basic, and the most basic one is \(\bullet_{\pm 1}-\blacksquare\). In the following, we analyze these matter circles to show their existence.

If a matter node connects to many gauge nodes, then the plumbing theory contains a chiral multiplet with charges \(q_{i}\) under the gauge groups \(U(1)_{k_{i}}\). In the link graph, one just needs to link this matter circle to these gauge circles, as in the example below.
For example, a bifundamental matter is represented by the following graph:
\[\text{(figure omitted: a matter circle linking two gauge circles)} \tag{4.6}\]
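In the same bookkeeping spirit as above, the charges can be read off mechanically; the sketch below (illustrative names and values) records the winding numbers of a matter circle with each gauge circle, which are the charges \(q_{i}\) of the corresponding chiral multiplet:

```python
# Sketch: reading off matter charges from a link graph.  A matter circle C
# winds gauge circles L_i with winding numbers q_i, which are the charges of
# the corresponding chiral multiplet under each U(1)_{k_i}; two nonzero
# windings give a bifundamental-type matter.  Names/values are illustrative.

gauge_framings = {"L1": 2, "L2": -1}
matter_windings = {"C": {"L1": +1, "L2": -1}}   # windings = charges q_i

for name, windings in matter_windings.items():
    desc = ", ".join(f"charge {q:+d} under U(1)_{{{gauge_framings[Li]}}}"
                     for Li, q in windings.items())
    print(f"matter circle {name}: {desc}")
```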
**Lens spaces \(L(k,1)\).** The Lens space is given by \(S^{3}\backslash N(\bigcirc_{k}):=S^{3}-N(\bigcirc)\cup_{f_{k}}(D^{2}\times S^{1})\), where the circle complement \(S^{3}-N(\bigcirc)\) has a torus (cusp) boundary12. We denote its meridian and longitude by \((\alpha^{\prime},\beta^{\prime})\). The meridian and longitude of the solid torus are \((\alpha,\beta):=((S^{1},*),(*,S^{1}))\subset D^{2}\times S^{1}\), where \(*\) means a point on \(D^{2}\) or \(S^{1}\).

Footnote 12: Here we borrow the name cusp, although the boundary torus does not shrink and is not singular.

The gluing map \(f:(\alpha,\beta)\mapsto(\alpha^{\prime},\beta^{\prime})\) is from the boundary of the solid torus to the cusp boundary. For convenience, we denote \(\mathrm{Im}(\alpha)=f(\alpha)\) and \(\mathrm{Im}(\beta)=f(\beta)\). Then we have
\[\begin{bmatrix}f(\alpha)\\ f(\beta)\end{bmatrix}=\begin{bmatrix}p&-q\\ r&s\end{bmatrix}\cdot\begin{bmatrix}\alpha^{\prime}\\ \beta^{\prime}\end{bmatrix}\,, \tag{4.7}\]
where the determinant of the matrix should be one, namely \(ps+qr=1\) (the unitary condition). The Jordan curve is \(J=\mathrm{Im}(\alpha)=p\alpha^{\prime}-q\beta^{\prime}\). For integral surgeries \(L(k,1)\), we have \(p=k,q=1\), so \(r=-sk+1\) for any \(s\in\mathbb{Z}\) satisfies the unitary condition, which gives infinitely many equivalent gluing maps. The images of the gauge and matter circles on the cusp torus read
\[\begin{split} f(\alpha)&=k\alpha^{\prime}-\beta^{\prime}\,,\\ f(\beta)&=(-sk+1)\alpha^{\prime}+s\beta^{\prime}=\alpha^{\prime}-s(k\alpha^{\prime}-\beta^{\prime})=\alpha^{\prime}-sf(\alpha)\,.\end{split} \tag{4.8}\]
Note that the framing number is defined as the linking number between the image of the meridian and the longitude of the cusp torus, namely \(f(\alpha)\cdot\beta^{\prime}=k\). One can see that the image of the matter circle \(\mathrm{Im}(\beta)\) contains Dehn twists of the image \(f(\alpha)\). We should set \(s=0\) to fix this free integer; then \(f(\beta)=\alpha^{\prime}\) leads to a D5-brane. Otherwise, a D5-brane at one endpoint would become a charge-\((-sk+1,s)\) 5-brane at the other endpoint. The matter D5-brane should be invariant under this transport, as shown in Figure 4. In short, \((f(\alpha),f(\beta))=(k\alpha^{\prime}-\beta^{\prime},\alpha^{\prime})\). The image of the matter circle \(\beta\) is fixed and is always the meridian \(\alpha^{\prime}\) of the cusp torus. This is consistent with the circle graph in (4.4).

However, there is a special case for \(f(\beta)\). When \(s=\pm 1\) and \(k=\pm 1\), the image of the matter circle is \(f(\beta)=\pm\beta^{\prime}\). This means that a D5-brane transforms into an NS5-brane. This is the exceptional 3d brane web in (3.9), which also gives \(\bullet_{\pm 1}-\blacksquare\) after decoupling the \(\mathbf{AF}\).
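A two-line check of these images and of the special case (a minimal sketch following (4.7)-(4.8); the values of \(k\) and \(s\) are arbitrary examples):

```python
# Sketch: the images (4.8) on the cusp torus in the (alpha', beta') basis,
# for L(k,1) with the free integer s (unitary condition r = -sk + 1).
# Checks that s = 0 gives f(beta) = alpha' (a D5-brane), while the special
# case s = k = +-1 gives f(beta) = +-beta' (an NS5-brane).

def images(k, s):
    f_alpha = (k, -1)                 # k*alpha' - beta'
    f_beta = (-s*k + 1, s)            # = alpha' - s*f(alpha)
    return f_alpha, f_beta

print(images(k=3, s=0))               # ((3, -1), (1, 0)):   f(beta) = alpha'
print(images(k=1, s=1))               # ((1, -1), (0, 1)):   f(beta) = beta'
print(images(k=-1, s=-1))             # ((-1, -1), (0, -1)): f(beta) = -beta'
```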
In this sense, \(L(\pm 1,1)\) are quite special, as both \(\alpha^{\prime}\) and \(\beta^{\prime}\) can be matter circles. The integer \(s\) can also be fixed from another perspective. Note that the linking number of the gauge and matter images is \(f(\alpha)\cdot f(\beta)=2sk-1\). It is natural to set \(s=0\), such that \(f(\alpha)\cdot f(\beta)=-1\), which ensures that the matter circle has winding number \(\pm 1\) with the gauge circle. Since the winding number is the charge of the matter, it should be preserved. For \(L(\pm 1,1)\), we can also have \(f(\alpha)\cdot f(\beta)=1\) if \(s=\pm 1\).

**Three-sphere \(S^{3}\).** For \(L(1,0)=S^{3}_{\infty}\), the meridian \(\alpha\) of the solid torus maps to \(J=\alpha^{\prime}\) on the boundary torus of \(S^{3}-N(\bigcirc)\), and the longitude \(\beta\) maps to \(r\alpha^{\prime}+\beta^{\prime}\). In short,
\[(f(\alpha),f(\beta))=(\alpha^{\prime},r\alpha^{\prime}+\beta^{\prime})\,. \tag{4.9}\]
\(L(1,0)\) defines the identical/trivial surgery along the circle \(\bigcirc_{\infty}\), which is obtained by filling in the same solid torus that was drilled out, so \((\alpha,\beta)\mapsto(\alpha^{\prime},\beta^{\prime})\), which fixes \(r=0\). In addition, although it is an identical gluing, taking into account the torus switch we have \(\text{NS5}\to\text{D5}\) and, for the defect, \(\text{D5}\to\text{NS5}\). This gives a \(U(1)_{0}+1\mathbf{F}+1\mathbf{AF}\), but with the roles of NS5 and D5 flipped. We use the brane web below to illustrate:
\[\text{(brane web omitted)} \tag{4.10}\]
Therefore, the identical surgery also leads to the exceptional brane web, which describes the \(T[U(1)]\) theory with a D5-brane, as shown in (3.9).

Note that \(L(1,n)\) for any \(n\in\mathbb{Z}\) gives the same three-sphere \(S^{3}\). One can understand \(L(1,n)\) by thinking about how to obtain it. This \(n\) is the number of Dehn twists that send \(\alpha^{\prime}=\alpha^{\prime\prime}+n\beta^{\prime\prime}\,,\ \beta^{\prime}=\beta^{\prime\prime}\), which lead to equivalent surgeries [37]. Using the unitary condition and inserting this twist into (4.9), we get the images of the gauge circle \(\alpha\) and the matter circle \(\beta\):
\[\begin{split} f(\alpha)&=\alpha^{\prime\prime}+n\beta^{\prime\prime}\,,\\ f(\beta)&=r\alpha^{\prime\prime}+(rn+1)\beta^{\prime\prime}=r(\alpha^{\prime\prime}+n\beta^{\prime\prime})+\beta^{\prime\prime}=\beta^{\prime\prime}+rf(\alpha)\,,\end{split} \tag{4.11}\]
where \((\alpha^{\prime\prime},\beta^{\prime\prime})\) are circles on the cusp boundary of \(L(1,n)\). Once again, the image of the matter circle contains \(r\)-twists of the image of the meridian. Just like for the Lens space \(L(k,1)\), we can use the linking number \(f(\alpha)\cdot f(\beta)=2nr+1\) to fix \(r\) and \(n\). It is better to set \(r=0\), such that we always have \(f(\beta)=\beta^{\prime\prime}\) for any \(n\). Another way to understand this \(r\)-integer is as Dehn twists of \(\beta^{\prime}\) along \(\alpha^{\prime}\), namely \(\beta^{\prime}=\beta^{\prime\prime}+t\alpha^{\prime\prime}\,,\ \alpha^{\prime}=\alpha^{\prime\prime}\), which cancels the free \(r\)-integer in (4.9) by setting \(t=-r\), and then this returns to \(L(1,0)\).

The brane webs for \(L(1,n)\) are similar to the webs in (3.9), where the matter circle D5 at the endpoint of the D3-brane is also mapped to an NS5 at the other endpoint of the D3. If \(n=\pm 1\) and \(r=\mp 1\), then \(f(\beta)=\mp\alpha^{\prime\prime}\). Once again, one can see that \(L(1,\pm 1)\) are special, as the defect D5-brane in this case can also map to a D5, as shown in the first web in (3.8).
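Both quoted linking numbers follow in one line from the unsigned pairing \(\alpha'\cdot\beta'=\beta'\cdot\alpha'=1\), \(\alpha'\cdot\alpha'=\beta'\cdot\beta'=0\) (a short check, using only (4.8) and (4.11)):
\[
(k\alpha^{\prime}-\beta^{\prime})\cdot\big((1-sk)\alpha^{\prime}+s\beta^{\prime}\big)=ks-(1-sk)=2sk-1\,,\qquad(\alpha^{\prime\prime}+n\beta^{\prime\prime})\cdot\big(r\alpha^{\prime\prime}+(rn+1)\beta^{\prime\prime}\big)=(rn+1)+nr=2nr+1\,,
\]
so \(s=0\) gives \(f(\alpha)\cdot f(\beta)=-1\) and \(r=0\) gives \(+1\), as used above.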
Here we use the mapping class group of the torus to illustrate this:
\[\text{(figure omitted: the matter circle on the torus mapping to the longitude or the meridian for $k=\pm 1$)} \tag{4.12}\]
The red line, as the matter circle, maps to either the longitude or the meridian when \(k=\pm 1\). These two cases are S-dual to each other and should be equivalent, since when one chiral multiplet is decoupled properly, they lead to the same brane webs for \(\bullet_{+1}-\blacksquare\). An observation is that (4.8) and (4.11) are equivalent if meridian and longitude are exchanged; however, the properties of \(L(k,1)\) and \(L(1,n)\) are significantly different, so the identification of meridian and longitude is crucial.

**\(S^{2}\times S^{1}\).** For \(L(0,1)=S^{1}\times S^{2}\), the images of the meridian and longitude are
\[(f(\alpha),f(\beta))\simeq(-\beta^{\prime},\alpha^{\prime}+s\beta^{\prime})\,. \tag{4.13}\]
We have to set \(s=0\) to preserve the charge of the D5-brane. If we perform the Dehn twist \(\alpha^{\prime}=\alpha^{\prime\prime}+t\beta^{\prime\prime}\) and \(\beta^{\prime}=\beta^{\prime\prime}\), then the twist \(s\) can be canceled by setting \(t+s=0\). If we instead perform the twist \(\alpha^{\prime}=\alpha^{\prime\prime}\,,\ \beta^{\prime}=\beta^{\prime\prime}+m\alpha^{\prime\prime}\), then we get \(f(\alpha)=m\alpha^{\prime\prime}-\beta^{\prime\prime}\,,\ f(\beta)=(-sm+1)\alpha^{\prime\prime}+s\beta^{\prime\prime}\). This turns \(L(0,1)\) into the Lens space \(L(m,1)\), so the Dehn twist of \(\beta^{\prime\prime}\) is not an equivalent operation; namely, \(L(0,1)\) is very sensitive to Dehn twists along the meridian. This property is consistent with the decoupling of \(\mathbf{AF}\) and the geometric transformation shown in (3.14): the twist \(\beta^{\prime}=\beta^{\prime\prime}+m\alpha^{\prime\prime}\), namely \(S^{1}_{\sharp}\to S^{1}_{\sharp}+mS^{1}_{9}\), changes the geometry. Therefore, one can view the decoupling of the matter \(\mathbf{AF}\) of charge \(m\) as Dehn twists.

**\(L(\pm 1,1)\).** From the above analysis, one can see that \(L(\pm 1,1)\) are special, as the images of matter circles can be on either the meridian or the longitude, depending on equivalent Dehn twists. If one views \(L(\pm 1,1)\) as the three-sphere, the image of the matter circle is on the longitude \(\beta^{\prime}\), and if one views it as a Lens space, then it is on the meridian \(\alpha^{\prime}\). These two locations should be equivalent, which motivates us to interpret this as the mirror duality shown in (4.5). Before proving that, there are still a few steps, and we leave it to the next section. We summarize the images of matter circles in Table 3.

### Matter circles in cobordisms

For connected three-manifolds, there are many linked gauge circles, and hence many possible locations for matter circles. In this section, we discuss three-manifolds beyond Lens spaces by using a cobordism description.

**Hopf link cobordism.** One basic example is the surgery along a Hopf link in \(S^{3}\). To obtain it, one can drill out two solid tori in sequence. After drilling out the first torus \(N(\bigcirc)\), one gets a complement which is also a solid torus, by taking the interior of its torus boundary to the outside, namely \(S^{3}-N(\bigcirc)=D^{2}\times S^{1}\). In this process, the meridian and longitude are exchanged, which is a torus switch. One can denote this by \(S^{3}-N(\bigcirc):=\widetilde{N}(\bigcirc)\).
Before drilling out the second solid torus, recall that in the complement of the Hopf link of circles \(\bigcirc_{1}\) and \(\bigcirc_{2}\), the meridian \(\alpha_{1}\) (longitude \(\beta_{1}\)) of \(\bigcirc_{1}\) can be deformed to the longitude \(\beta_{2}\) (meridian \(\alpha_{2}\)) of \(\bigcirc_{2}\). Therefore, if we take the inside of the torus boundary to the outside, the meridian \(\tilde{\alpha}_{1}\) of \(\widetilde{N}(\bigcirc_{1})\) will deform to \(\alpha_{2}\), and similarly \(\tilde{\beta}_{1}\) to \(\beta_{2}\). Finally, we get a solid torus with a torus hole inside. We illustrate the Hopf link complement \(S^{3}-N(\bigcirc_{1}\#\bigcirc_{2})=\widetilde{N}(\bigcirc_{1})-N(\bigcirc_{2})\) as follows:
\[\text{(figure omitted: the Hopf link complement as a solid torus with a torus hole inside)} \tag{4.14}\]
which has two torus boundaries \(T^{2}\sqcup T^{2}\). The topology of this cobordism is \(T^{2}\hookrightarrow M_{3}\to I\), which can be seen by slicing it into foliations. We use the figure below to denote this cobordism:
\[\text{(figure omitted: the torus cobordism between the two boundary tori)} \tag{4.15}\]
where the bulk is the inside of the solid torus given by \(L_{1}\), and each slice of the bulk is a torus. The matter circle of \(L_{1}\) is the longitude, which can be moved freely in the bulk, while that of \(L_{2}\) is still the meridian. Note that in this cobordism, longitude transports to longitude and meridian transports to meridian, although the roles played by these circles are switched between the left and right torus boundaries.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
 & \(L(k,1)\) & \(L(1,n)\) & \(L(\pm 1,1)\) & \(L(1,0)\) & \(L(0,1)\) \\
\hline
gauge circle \(\mathrm{Im}(\alpha)\) & \(k\alpha^{\prime}-\beta^{\prime}\) & \(\alpha^{\prime}+n\beta^{\prime}\) & \(\alpha^{\prime}\pm\beta^{\prime}\) & \(\alpha^{\prime}\) & \(-\beta^{\prime}\) \\
\hline
matter circle \(\mathrm{Im}(\beta)\) & \(\alpha^{\prime}\) & \(\beta^{\prime}\) & \(\alpha^{\prime}\stackrel{{\mathrm{twist}}}{{\longleftrightarrow}}\beta^{\prime}\) & \(\beta^{\prime}\) & \(\alpha^{\prime}\) \\
\hline
\end{tabular}
\end{table}
Table 3: In this table, we summarize the possible locations of gauge circles and matter circles. Note that \(L(\pm 1,1)\) is special, as its matter circle has two equivalent locations. We will show in (4.28) that this is crucial for geometrically interpreting the mirror triality in (4.5). If one translates the images of circles to the construction of Lens spaces by gluing two solid tori, then the torus switch \(\alpha^{\prime}\leftrightarrow\beta^{\prime}\) should be applied.

To get the closed three-manifold given by surgery along the Hopf link, one needs to fill in one solid torus and glue another solid torus:
\[\text{(figure omitted: filling in and gluing solid tori at the two ends of the cobordism, with the images of the matter circles indicated)} \tag{4.16}\]
In this process, one can introduce matter circles, whose images are shown in the above graph. Notice that filling in a solid torus for \(\bigcirc_{2}\) is a surgery, while gluing back the solid torus for \(\bigcirc_{1}\) leads to a Lens space. Obviously, the two matter circles have diverse ways to tangle or fuse with each other, which may involve non-trivial phenomena; we prefer to leave this for future work.

One can read off the theories determined by this torus cobordism. Let us first look at the three-manifold in (4.16), which is a three-manifold with two defects: \(M_{3}(\bigcirc_{1},\bigcirc_{2})=\big(S^{3}-N(\bigcirc_{1}\cup\bigcirc_{2})\big)\sqcup(D^{2}\times S^{1})\sqcup(D^{2}\times S^{1})\). Note that the two matter circles \(S^{1}\) are not linked. The theory for a solid torus is just a hypermultiplet, namely \(T[D^{2}\times S^{1}]=1\mathbf{F}+1\mathbf{AF}\).
In addition, the Hopf link cobordism represents a mixed CS level \(k_{12}=+1\), which encodes the \(T[U(1)]\) theory [35]. Therefore, we get a gauged \(T[U(1)]\) theory with two D5-branes. If we set the framing numbers to zero, then the theory is described by the plumbing graph below:
\[\text{(figure omitted: the plumbing graphs before and after S-duality)} \tag{4.17}\]
One can perform S-duality on the hypermultiplet to get the right graph [1], which represents the theory \(U(1)_{0}+2\mathbf{F}+2\mathbf{AF}\); this theory is exactly the self-dual \(T[SU(2)]\) theory [35]. It is straightforward to consider non-abelian theories by putting multiple M5-branes on three-manifolds; then the Hopf link cobordism encodes the \(T[U(N)]\) theories. In [38], Dimofte and Gaiotto found similar geometries from a different perspective. However, there are still some gaps to match, since the three-manifolds that we use have no boundaries, while those obtained from gluing tetrahedra often have singular torus boundaries (called torus cusps), since the tips of the tetrahedra are truncated [14].

One can see that gluing back the solid torus to close the torus boundaries of the complement \(M_{3}-N(L_{i})\) is interpreted as gauging global symmetries, and each torus boundary is associated with a global symmetry. However, if defects are not introduced, there is still no flavor symmetry and no matter; hence this global symmetry carried by the torus boundaries should be the topological symmetry. Through gluings or surgeries, framing numbers as Chern-Simons levels can be introduced, and topological symmetries become gauge symmetries. We show two more examples below:
\[\text{(figures omitted: two further examples of link complements and their plumbing graphs)} \tag{4.18}\]
Note that in the above, although we only draw the Hopf link complement, we assume the solid tori have been glued back. For more generic Hopf links with linking number \(\bigcirc_{1}\cdot\bigcirc_{2}=k_{12}>1\), one can still take the boundary of \(N(\bigcirc_{1})\) to the outside to get a solid torus \(\widetilde{N}(\bigcirc_{1})\), but \(N(\bigcirc_{2})\) then winds around the hole of this solid torus \(k_{12}\) times.

**Gluing cobordisms.** If \(\bigcirc_{1}\) and \(\bigcirc_{2}\) are not linked, then we have the complement \(S^{3}-N(\bigcirc_{1}\cup\bigcirc_{2})=T^{2}\times I\), which can be obtained by truncating the two poles of the two-sphere in \(S^{2}\times S^{1}\); this is a trivial bundle with \(S^{1}\) fibered over a tube, and hence can be used to connect two three-manifolds. For instance, the reducible three-manifold \(L(k_{1},1)\#L(k_{2},1)\) can be obtained if two solid tori are filled into this complement. We can still discuss the fusion of matter circles that are mapped into this complement.
Let us represent it in terms of a solid torus:
\[\text{(figure omitted: the complement $T^{2}\times I$ represented as a solid torus)} \tag{4.19}\]
To get a similar description using cobordisms, we can glue two Hopf link complements in (4.15):
\[\text{(figure omitted: two Hopf link cobordisms glued along a common torus boundary)} \tag{4.20}\]
The surgery construction of 3d theories in our context is complementary to the DGG/GPV construction. When the matter fields are introduced by defect M5-branes, the corresponding objects in the 3d CS theory should be Wilson loops13. For other works on defect extensions of the 3d-3d correspondence, see [40].

Footnote 13: We would like to thank Satoshi Nawata for clarifying this relation to us.

We therefore borrow an idea from CS theories. As pointed out by Witten in [41], the 3d CS theory on \(S^{3}\) with a Wilson loop along a knot \(K\) can be equivalently described by the CS theory on \(M_{3}=S^{3}\backslash N(K)\), which is obtained by a Dehn surgery along \(K\). This indicates that some knots can be replaced by their solid tori \(N(K)\). As we have reviewed before, one can freely glue an \(S^{3}_{\infty}\) to any three-manifold, and this does not change the manifold, namely \(M_{3}=S^{3}_{\infty}\sqcup M_{3}\). The \(S^{3}_{\infty}\) is \(L(1,0)\), which is obtained by the identical surgery along a circle \(\bigcirc\). Note that this circle is the longitude rather than the meridian. Therefore, \(S^{3}_{\infty}\) is a basic example in which the circle \(\bigcirc\) can be replaced by its neighborhood solid torus \(N(\bigcirc)=D^{2}\times\bigcirc\).

In the presence of a matter circle \(\bigcirc\) in the three-manifold \(M_{3}\), one can drill out a solid torus along the neighborhood of the matter circle \(\bigcirc\), and then fill in the same solid torus. This drilling and identical surgery only introduce a three-sphere \(S^{3}_{\infty}\), and hence do not change the geometry, but the matter circle \(\bigcirc\), as a loop in \(M_{3}\), is moved to the three-sphere, namely
\[M_{3}\backslash\bigcirc=M_{3}\ \sqcup\ \big(S^{3}_{\infty}\backslash\bigcirc\big)\,. \tag{4.22}\]
Since \(S^{3}_{\infty}\) has an infinite framing number, it is not convenient to interpret it as a CS level. One can use the rational equivalent surgery calculus [37] to transform it into \(L(\pm 1,1)\backslash\bigcirc\), where the position of the matter circle \(\bigcirc\) should be properly chosen. This gives rise to the \(ST_{\alpha}\)-moves. We illustrate this derivation in Figure 7.

Figure 7: Using the drilling stick, identical surgery, and rational equivalent surgery, one can derive the \(ST_{\alpha}\)-move found from the physical analysis. This move is between the first graph and the third graph in this figure.

The loop of maps \((b)\to(c)\to(d)\) is the mirror triality (\(ST\)-moves). The map \((a)\) is the drilling and identical surgery on the matter circle \(\bigcirc\). In map \((b)\), the surgery is replaced by an equivalent rational surgery, which turns \(L(1,0)\backslash\bigcirc\) into \(L(\pm 1,1)\backslash\bigcirc\); meanwhile, the positions of the matter circles \(\bigcirc\) should be changed from the longitude to the meridian, see Table 3. In \((c)\), one can continue drilling, and if bare CS levels are used, the last graph will only differ from the second graph by a \(\bigcirc_{\pm 1}\), which can be integrated out by \(\alpha\)-Kirby moves; hence the second and fourth graphs only differ by orientation. This gives the two types of \(ST\)-moves shown in (2.4). The rational equivalent surgery changes linking numbers; it is basically a Dehn twist of the meridian, namely \(\mathrm{Im}(\alpha)=\alpha^{\prime}+t\beta^{\prime}\), as we have shown in (4.11), and this twist turns \(L(1,0)\) into \(L(1,t)\).
If we perform \(t\) twists on the circle \(L_{i}\), then it also twists its linked circles \(L_{j}\) to \(\tilde{L}_{j}\). Their linking numbers become \[\tilde{k}_{i}=\frac{1}{t+\frac{1}{k_{i}}}\,,\;\;\tilde{k}_{j}=k_{j}+t(L_{i}\cdot L_{j})^{2}\,,\;\;\tilde{k}_{ij}=k_{ij}\,, \tag{119}\] \[\tilde{L}_{i}\cdot\tilde{L}_{j}=\tilde{k}_{ij}\,,\;L_{i}\cdot L_{j}=k_{ij}\,,\;\tilde{L}_{i}\cdot\tilde{L}_{i}:=\tilde{k}_{i}\,,\;\tilde{L}_{j}\cdot\tilde{L}_{j}:=\tilde{k}_{j}\,. \tag{120}\] The rational equivalent surgeries put strong constraints on the framing numbers, since they should be integers. This type of equivalent surgery does not fall in the class of Kirby moves, and is called rational calculus by Rolfsen in [37]. Introducing a charge-\(q\) matter on \(L(k,1)\), one can get the link graphs for the \(ST_{\alpha}\)-move:
\[\text{[link graphs for the }ST_{\alpha}\text{-move]}\]
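As a quick worked example of (119) (our own check, consistent with the derivation above): take \(L_{i}\) to be the identical-surgery circle with framing \(k_{i}=\infty\), i.e. the \(S^{3}_{\infty}=L(1,0)\). A single twist \(t=\pm 1\) gives \[\tilde{k}_{i}=\frac{1}{t+\frac{1}{k_{i}}}=\frac{1}{\pm 1+0}=\pm 1\,,\] so the \(\infty\)-framed circle becomes a \((\pm 1)\)-framed one, reproducing the statement that the rational equivalent surgery turns \(L(1,0)\) into \(L(\pm 1,1)\). A circle \(L_{j}\) linking \(L_{i}\) once, \(L_{i}\cdot L_{j}=1\), has its framing shifted to \(\tilde{k}_{j}=k_{j}\pm 1\), while the mutual linking number \(k_{ij}\) is unchanged.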
It seems that rational equivalent surgeries lead to the same plumbing/circle graphs as the handle-slides discussed in section 2. We know how to freely introduce an \(S^{3}_{\infty}\), but not an \(S^{3}_{\infty}\bigcirc\). Fortunately, handle-slides such as (2.22) may tell us how to introduce it. We still need to use \(S^{3}_{\pm 1}\bigcirc\) and then apply the handle-slide operation by recombining gauge circles. Finally, we apply a rational twist to get \(S^{3}_{\infty}\bigcirc\). We use the graph below to illustrate this process:
\[\text{[graph of the process]}\tag{4.29}\]
where the twist is the reverse of process \((b)\) in Figure 7. For bifundamental matter, the straightforward application of drilling and rational surgery could indeed derive (2.22). For linked matter circles, \(ST\)-moves could separate them. For instance,
\[\text{[link graph]}\tag{4.30}\]
Note that if a gauge circle \(\bullet_{k}\) links to an \(S^{3}_{\infty}\), we cannot directly apply \(\alpha\)-Kirby moves to integrate out \(\bullet_{k}\), as \(\infty\) could eat any number. **Trefoils.** One can consider a complicated matter circle linking to a trefoil.
As a typical knot in textbooks, e.g. [37, 39, 42], the trefoil can be unknotted by \(\alpha\)-Kirby moves into the link \(\bullet_{k}-\bullet_{\pm 1}\) with linking number \(\pm 2\). One can also insert a complicated matter circle \(\bigcirc_{\infty}\) on this \(\bullet_{\pm 1}\) as follows:
\[\text{[link graph]}\tag{4.31}\]
which engineers a theory \(U(1)_{k^{\prime}}\) with a charge-2 matter. Note that matter circles can tangle with trefoils in many other ways, depending on how the matter circles are inserted; these configurations are much more complicated than the examples in this note. ## 5 Open problems The surgery construction of 3d theories depends mainly on the topology of three-manifolds, which translates dualities into geometric transformations. There are many related open problems:
* One problem is to engineer superpotentials and find the corresponding geometric objects in three-manifolds. We hope the gauged SQED-XYZ duality [1], which encodes the simple cubic superpotential \(\mathcal{W}=XYZ\), could be interpreted as a combinatorial relation of matter circles. The surgery construction in this note should also be extended to non-abelian theories, which is straightforward for Lens spaces. For non-abelian theories, \(W^{\pm}\)-bosons should be carefully realized. Monopole operators in [12] should also be considered to complete the construction.
* The matter circles we have considered in this note are unknotted. Generically, matter circles could be knots, as suggested by the Ooguri-Vafa construction, intersecting the three-manifold along a knot, namely \(L_{K}\cap M_{3}=K\). Then we should call \(K\) the matter knot. If it is possible to transform the matter knot \(K\) into links of simple matter circles \(\bigcirc\), much about the nature of knots could be uncovered, and the encoded gauge theories could be read off. The RT invariants of \(M_{3}\backslash K\) could be useful to address many physical aspects of the underlying gauge theories. Some results from mathematical works such as [45] could be translated into physics.
* We expect that further development of the surgery construction could provide a systematic derivation of the knots-quivers correspondence [43; 44]. The expected correspondence between Wilson loops and matter defects may provide more details for the 3d-3d correspondence.
* There are multiple ways to insert matter circles into three-manifolds, and they can be deformed and combined. We hope the modular tensor category (MTC) can be used to describe the relations and fusions of matter circles. The mapping class group of the torus is also useful to analyze the images of matter circles, which is a problem that we have not managed to solve in this note.
* There are some gaps in connecting the surgery construction of 3d theories with the large-N transitions of Lens spaces [46; 47]. It seems that the fiber and base should be reversed to match them: during the large-N transition, the Lens space should correspond to the flavor symmetry and the Lagrangian brane \(L_{K}\) to the gauge group, which is opposite to the surgery construction and thus mysterious.
* We have not considered hyperbolic structures yet, which however are very useful in the DGG construction. Finding the role played by the hyperbolic structure and the metric would be an interesting direction.
* In [16; 38], the branched covering realization of 3d theories through 4d Seiberg-Witten curves is established, which seems similar to the surgery construction in many aspects. We hope to find the relations between these constructions.
###### Acknowledgments.
I would like to thank Satoshi Nawata for many discussions, and for insisting that the Ooguri-Vafa construction is the correct candidate for engineering matter when I was jumping between right and wrong. I am also grateful to Sung-Soo Kim and Fengjun Xu for helpful discussions, and to UESTC and BIMSA for hospitality while part of this work was done. The work of S.C. is supported by NSFC Grant No.11850410428 and NSFC Grant No.12305078. ## Appendix A Lens space Lens spaces are orbifolds of the three-sphere, \(L(k,1)=S^{3}/\mathbb{Z}_{k}\). In complex coordinates, the definition is \(|z_{1}|^{2}+|z_{2}|^{2}=r^{2}\) with the \(\mathbb{Z}_{k}\) action \((z_{1},z_{2})\mapsto(e^{\frac{2\pi i}{k}}z_{1},e^{\frac{2\pi i}{k}}z_{2})\). The homology is \(H_{1}(L(k,1))=\mathbb{Z}_{k}\), and \(L(2,1)=S^{3}/\mathbb{Z}_{2}=\mathbb{RP}^{3}\). For generic Lens spaces, there are equivalences \(L(p,q)=L(p,q+np)=L(-p,q)\), so \(L(k,1)=L(\frac{k}{nk+1},1)\). Lens spaces are special Lagrangian submanifolds inside the Calabi-Yau three-folds \(T^{*}L(k,1)\). As special examples of Seifert manifolds, Lens spaces are circle bundles over a two-sphere: \(\mathcal{O}(-k)\hookrightarrow L(k,1)\xrightarrow{\pi}\mathbb{P}^{1}\). Plumbing manifolds are given by gluing Lens spaces by Dehn surgeries, which are performed by extending circles of framing number \(k\) to solid tori. The Lens space is then denoted by this circle, and plumbing graphs denote the links for the surgeries. We use the example below to illustrate:
\[\text{[plumbing graph]}\tag{A.1}\]
where each black node denotes a Lens space \(L(k,1)\) and the lines carry the linking numbers. Because of orientations, there can be different signs. By extending this example, complicated plumbing graphs containing nodes and lines are links whose Dehn surgeries lead to generic three-manifolds. The self-linking number is known to be the framing number between the circle \(L\) and its slightly deformed copy \(L^{\prime}\). After deformation, these two circles have linking number \(L_{i}\cdot L_{i}:=L_{i}\cdot L^{\prime}_{i}=k_{i}\). The linking number between two circles is usually defined as \(L_{i}\cdot L_{j}=k_{ij}:=\frac{1}{2}\times(\text{crossing number})\). We only focus on integral surgeries, as the parity anomaly requires framing numbers to be integers [3]. **Others.** The geometry of the Lagrangian defect is \(S^{1}\times\mathbb{R}^{2}\), where \(\mathbb{R}^{2}\subset\mathbb{R}^{3}\) and \(\mathbb{R}^{3}\) is the fiber of \(T^{*}M_{3}\). The intersection is the matter circle \(S^{1}=(S^{1}\times\mathbb{R}^{2})\cap M_{3}\). The topology of the defect can be written as \(S^{1}\times\mathbb{R}^{2}=T^{2}\times\mathbb{R}_{+}\), where the \(T^{2}\) degenerates to an \(S^{1}\) at the origin of \(\mathbb{R}^{2}\). This is one half of the geometry \(S^{1}\times S^{2}\) and can be represented as a half-infinite line using torus geometry.
2301.09148
Separating signal from combinatorial jets in a high background environment
We study procedures for discriminating combinatorial jets in a high background environment, such as a heavy ion collision, from signal jets arising from a hard-scattering. We investigate a population of jets clustered from a combined PYTHIA+TennGen event, focusing on jets which can unambiguously be classified as signal or combinatorial jets. By selecting jets based on their kinematic properties, we investigate whether it is possible to separate signal and combinatorial jets without biasing the signal population significantly. We find that, after a loose selection on the jet area, surviving combinatorial jets are dominantly imposters, combinatorial jets with properties indistinguishable from signal jets. We also find that, after a loose selection on the leading hadron momentum, surviving combinatorial jets are still dominantly imposters. We use rule extraction, a machine learning technique, to extract an optimal kinematic selection from a random forest trained on our population of jets. In general, this technique found a stricter kinematic selection on the jet's leading hadron momentum to be optimal. We find that it is possible to suppress combinatorial jets significantly using this machine learning based selection, but that some signal is removed as well. Due to this stricter kinematic selection, we find that the surviving signal is biased towards quark-like jets. Since similar selections are used in many measurements, this indicates that those measurements are biased towards quark-like jets as well. These studies should motivate an increased emphasis on assumptions made when suppressing and subtracting combinatorial background and the biases introduced by methods for doing so.
P. Steffanic, C. Hughes, C. Nattrass
2023-01-22T16:20:52Z
http://arxiv.org/abs/2301.09148v3
# Separating signal from combinatorial jets in a high background environment ###### Abstract We study procedures for discriminating combinatorial jets in a high background environment, such as a heavy ion collision, from signal jets arising from a hard-scattering. We investigate a population of jets clustered from a combined PYTHIA+TennGen event, focusing on jets which can unambiguously be classified as signal or combinatorial jets. By selecting jets based on their kinematic properties, we investigate whether it is possible to separate signal and combinatorial jets without biasing the signal population significantly. We find that, after a loose selection on the jet area, surviving combinatorial jets are dominantly imposters, combinatorial jets with properties indistinguishable from signal jets. We also find that, after a loose selection on the leading hadron momentum, surviving combinatorial jets are still dominantly imposters. We use rule extraction, a machine learning technique, to extract an optimal kinematic selection from a random forest trained on our population of jets. In general, this technique found a stricter kinematic selection on the jet's leading hadron momentum to be optimal. We find that it is possible to suppress combinatorial jets significantly using this machine learning based selection, but that some signal is removed as well. Due to this stricter kinematic selection, we find that the surviving signal is biased towards quark-like jets. Since similar selections are used in many measurements, this indicates that those measurements are biased towards quark-like jets as well. These studies should motivate an increased emphasis on assumptions made when suppressing and subtracting combinatorial background and the biases introduced by methods for doing so. ## I Introduction A hot, dense, strongly interacting liquid of quarks and gluons called the Quark Gluon Plasma (QGP) is briefly created in high energy heavy ion collisions [1; 2; 3; 4]. Two of the key signatures of the formation of the QGP are jet quenching and hydrodynamical flow. There have been extensive measurements of jets in a QGP, including single particle spectra, jet spectra, fragmentation functions, and jet substructure [5]. While there have been some constraints on the properties of the medium from measurements of jets [6; 7], the era of quantitative measurements of jets is just beginning. A detailed understanding of jet quenching requires a quantitative understanding of the background. The paradigm the field uses to separate signal and background assumes that particles are either from jets or from background processes. While this approach is commonly used in heavy ion collisions, we note that similar approaches to a combinatorial background may be applicable to p+p collisions in a high pile-up environment. The signal is assumed to be a cluster of particles from a hard process. The background includes particles from all other processes, including other jets. Many background particles are correlated, either due to resonances or flow. A jet finder will cluster all particles into a jet, so the background impacts the signal both through background particles clustered with signal jets and through jets consisting exclusively of background particles, called combinatorial jets. In practice, there is some ambiguity in whether particles are from the signal or the background.
When partons interact with the medium, they may leave some of their energy and momentum in medium particles, for instance through a wake from the movement of the parton through a fluid [8]. It is unclear in that case if those particles should be considered signal or background. Furthermore, at low momenta, the definition of a jet itself becomes ambiguous, as there is no clear threshold for when a process is hard enough to result in a jet, and a jet finder will cluster particles with correlated momenta independently of their production mechanism. The assumption that particles are either from the signal or from the background works well for describing combinatorial jets, jets composed entirely of particles from the background. This can be seen by the agreement between the sum of the energies of all particles found in a random cone in a heavy ion collision and expectations for drawing random particles with momenta matching that observed in the data [9]. While there are some deviations between these observations and expectations from a random sample, there is even good agreement with PYTHIA Angantyr, which includes correlations from resonances and mini-jets [10]. The contribution from combinatorial jets is typically managed by focusing on high momentum jets or suppressing the background through kinematic selections such as requiring a minimum momentum for the highest momentum particle [11]. Combinatorial jets have also been described by mixed events and their contribution subtracted when it is limited by requiring a coincidence with a high momentum hadron \(180^{\circ}\) away [12]. The contribution of background particles in jets has typically been subtracted either through an iterative procedure to estimate background contributions [13; 14], or by estimating the background per unit area [11; 12]. Recently a shallow neural network was applied to estimate the background contribution to jet measurements to extend these measurements to lower momenta and larger R [15; 16]. Measurements of jets in heavy ion collisions are limited at low momenta both because the number of combinatorial jets becomes comparable to the number of signal jets and because measurements have large uncertainties when fluctuations in the background contributions to signal jets are comparable to the jet momenta. In this study, we use a model with a randomly generated background from TennGen [10], described in sec. II.1, and a signal from PYTHIA, described in sec. II.2. We define unambiguous samples of signal and combinatorial jets in sec. II.3, and characterize them by their properties, summarized in sec. II.4. We introduce the silhouette value in sec. II.5 as an alternate means to look at whether or not signal and combinatorial jets are distinguishable. In sec. II.6 we describe a machine learning approach to optimizing the separation of signal and combinatorial jets and in sec. II.7 we describe the leading subjet fraction, which we use to look for bias introduced by the kinematic selections used to separate signal and combinatorial jets. In sec. III we describe the kinematic selections identified, discuss the limitations in separating signal and combinatorial jets, and investigate possible biases imposed by these kinematic selections. ## II Method We use TennGen [10; 17], briefly described in sec. II.1, for a realistic background Pb+Pb event at \(\sqrt{\mathrm{s}_{\mathrm{NN}}}\) = 2.76 TeV with correlations due to flow but no other physics correlations.
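As a toy illustration of such a flow-only background (a minimal sketch with illustrative parameter values; TennGen itself samples momenta from blast-wave fits to measured spectra and uses measured, momentum-dependent flow coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_phi(v2, psi2, n):
    """Accept-reject sampling of azimuthal angles from the flow-modulated
    distribution dN/dphi ~ 1 + 2 v2 cos(2 (phi - psi2)) (elliptic flow only)."""
    out = np.empty(n)
    filled = 0
    while filled < n:
        cand = rng.uniform(0.0, 2.0 * np.pi, n)              # candidate angles
        pdf = 1.0 + 2.0 * v2 * np.cos(2.0 * (cand - psi2))   # unnormalized density
        keep = cand[rng.uniform(0.0, 1.0 + 2.0 * v2, n) < pdf]
        take = min(keep.size, n - filled)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

# One toy "background event": uniform eta in an ALICE-like acceptance and
# flow-modulated phi; v2 = 0.1 and psi2 = 0 are illustrative values only.
eta = rng.uniform(-0.9, 0.9, 1000)
phi = sample_phi(v2=0.1, psi2=0.0, n=1000)
```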
We embed a p+p collision produced with PYTHIA 6 [18] using the Perugia 2011c [19] tune at \(\sqrt{\mathrm{s}}\) = 2.76 TeV, briefly described in sec. II.2, in the TennGen event to generate a jet signal. We cluster the combined event with the anti-\(\mathrm{k}_{\mathrm{T}}\) jet finder, producing a population of jets with particles from both PYTHIA and TennGen. We can then classify jets as combinatorial or signal by how much momentum is from TennGen and PYTHIA particles, as described in sec. II.3. We use a number of jet observables, discussed in sec. II.4, to characterize our jet population and evaluate the effect of different kinematic selections on signal and combinatorial jets. We introduce silhouette values, used to quantify how similar an object is to other objects in its cluster, in sec. II.5 and use the distribution of silhouette values to evaluate the similarity between signal and combinatorial jets. We use a random forest, a type of machine learning algorithm, coupled with a decision tree to search for optimal selections that distinguish between signal and combinatorial jets. This algorithm is described in sec. II.6. We study the momentum fraction carried by the leading subjet, described in sec. II.7, to understand the bias towards quark-like jets imposed by kinematic selections. ### Background generation TennGen emulates a realistic background for jet studies in heavy ion collisions by throwing random particles which match the multiplicities [20], momentum distributions [21], and azimuthal distributions [22] of single particles. TennGen is described in greater detail in [10] and the source code is publicly available [17]. The measured single particle double differential spectra for \(\pi^{\pm}\), \(\mathrm{K}^{\pm}\), \(\mathrm{p}\) and \(\bar{\mathrm{p}}\) from [21] are fit to a Boltzmann-Gibbs Blast Wave distribution [23; 24]. This distribution is used to randomly select a momentum for each particle. The single particle flow coefficients [22] are used to determine the azimuthal distribution of particles with that momentum, and a random azimuthal angle is drawn from that distribution. The pseudorapidity \(\upeta_{\mathrm{i}}\) of the particle is randomly selected from a flat distribution in \(-0.9<\upeta_{\mathrm{i}}<0.9\) to match the \(\upeta\) acceptance of the ALICE detector. The multiplicity of each particle species is determined from ratios [20] and is scaled up assuming a constant charged particle multiplicity per unit pseudorapidity. The multiplicities are determined from measurements of the charged particle multiplicities in [25]. By construction, TennGen events contain no correlations other than those due to flow. As such, all particles in TennGen are considered background particles for measurements of jets. ### Signal generation PYTHIA [18] is a general purpose Monte Carlo event generator. In this study we generate proton-proton events at \(\sqrt{\mathrm{s}}\) = 2.76 TeV in PYTHIA using the Perugia 2011c [19] tune with various \(\mathrm{p}_{\mathrm{T}}^{\mathrm{hardmin.}}\). Jets are clustered from primary \(\pi^{0}\), \(\pi^{\pm}\), \(\mathrm{K}^{\pm}\), \(\mathrm{p}\) and \(\bar{\mathrm{p}}\) particles with \(|\upeta|<0.9\) using the anti-\(\mathrm{k}_{\mathrm{T}}\) algorithm implemented in FastJet v3.2.1 [26] with various jet resolution parameters R. We keep PYTHIA events which have one jet with at least 80% of the \(\mathrm{p}_{\mathrm{T}}^{\mathrm{hardmin.}}\) and embed them in a TennGen event.
This leads to a slightly different distribution of jets than if we were to use minimum bias PYTHIA events but does not qualitatively change the conclusions. Particles from PYTHIA are considered to be signal particles. There is a small underlying event in p+p, but we treat this as negligible compared to the jet signal. ### Signal and combinatorial jets We cluster all the particles from the combined TennGen and PYTHIA event with the anti-\(\mathrm{k}_{\mathrm{T}}\) jet finder. We calculate the fraction of each jet's momentum carried by PYTHIA particles. Jets which have less than 2\(\mathrm{\pi R}^{2}\) GeV/c from PYTHIA are classified as combinatorial jets. This allows up to the average momentum in a random cone from the underlying event in PYTHIA assuming an average momentum density of 2 GeV/c per unit area. Jets which contain at least 0.8 \(\rm p_{T}^{\rm hardmin.}\) from PYTHIA particles are classified as signal jets. This ensures that only PYTHIA jets are classified as signal jets, without any combinatorial jets classified incorrectly as signal jets. The remainder are not classified. This allows us to identify unambiguous samples of signal and combinatorial jets. The results presented in sec. III focus on \(\rm R=0.5\) and \(\rm p_{T}^{\rm hardmin.}\) = 40 GeV/c, and results for additional resolution parameters ranging from R = 0.2 to 0.6 and \(\rm p_{T}^{\rm hardmin.}\) ranging from 10 to 80 GeV/c are given in the appendices. ### Observables Our objective is to identify observables which may help discriminate between signal and combinatorial jets, could realistically be used in data, and either would lead to a negligible bias in the surviving population of signal jets or whose bias could be reproduced well in model calculations. In particular, our aim is to study low momentum jets to investigate approaches to decreasing the lower threshold for jet measurements and decrease the systematic uncertainties associated with background subtraction in this region. We started with observables which could be measured on an individual jet basis and eliminated observables using a combination of our knowledge of the strengths and weaknesses of these observables and statistical techniques. We only used observables which are reliably measurable in data and calculable in models with reasonable uncertainties. This excludes the \(\rm n^{th}\) leading particle's momentum for \(\rm n>1\), for instance, because it would be difficult to model accurately, and may make the model sensitive to fluctuations in data and models. We used the scikit-learn implementation of principal component analysis [27] to better understand which observables are redundant, and feature importance from our random forest [28] to remove observables which had little discriminatory power. The observables we chose are summarized in tab. 1. While a similar approach may lead to a different set of observables, we think that this is a realistic set of observables which could be used to discriminate between signal and combinatorial jets. #### ii.4.1 Area To calculate the area of a jet, we add many very soft particles ("ghosts") to the event, counting how many ghosts are clustered into our jet. The jet area is given by \[\rm A_{jet}=A_{g}\langle N_{g}\rangle \tag{1}\] where \(\rm A_{g}\) is the area of a single ghost and \(\langle N_{g}\rangle\) is the average number of ghosts clustered into our jet [29].
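As a minimal numerical illustration of eq. (1) (our own sketch for an idealized circular jet; the analysis itself uses the active-area ghosts of FastJet [29]):

```python
import numpy as np

def ghost_area(jet_eta, jet_phi, R, n_ghosts=200_000, eta_max=0.9, seed=0):
    """Monte Carlo estimate of an idealized (circular) jet area, eq. (1):
    scatter ghosts uniformly in the acceptance and count those within
    dR < R of the jet axis; A_jet = A_ghost * N_ghosts_in_jet."""
    rng = np.random.default_rng(seed)
    eta = rng.uniform(-eta_max, eta_max, n_ghosts)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_ghosts)
    dphi = np.abs(phi - jet_phi)
    dphi = np.minimum(dphi, 2.0 * np.pi - dphi)               # wrap around in phi
    dr2 = (eta - jet_eta) ** 2 + dphi ** 2
    a_ghost = (2.0 * eta_max) * (2.0 * np.pi) / n_ghosts      # area per ghost
    return a_ghost * np.count_nonzero(dr2 < R ** 2)

# A jet well inside the acceptance should give roughly pi * R^2 ~ 0.785 for R = 0.5
print(ghost_area(jet_eta=0.0, jet_phi=1.0, R=0.5))
```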
#### ii.4.2 Leading hadron momentum The leading hadron momentum, \(\rm p_{T}^{1}\), is the highest momentum jet constituent. While there is some model dependence in calculating a leading hadron momentum in theory calculations because hadronization is non-perturbative, such calculations are fairly robust since single hadron observables in p+p collisions agree well with pQCD calculations [30]. We opted not to include the subleading hadron momentum and beyond because they would make model calculations more sensitive to details of hadronization. #### ii.4.3 Jet width The jet width [31], \(\rm\lambda_{1}^{1}\), is given by \[\rm\lambda_{1}^{1}=\sum_{i=1}^{N}z_{i}\cdot\frac{\Delta R_{i,jet}}{R} \tag{2}\] where N is the number of constituents in the jet, \(\rm z_{i}\) is the momentum fraction carried by constituent i, and \(\rm\Delta R_{i,jet}\) is the distance in \(\eta\)-\(\rm\varphi\) space between constituent i and the jet axis. This provides a measure of how far constituents are from the jet axis on average. #### ii.4.4 Mean constituent transverse momentum The mean constituent transverse momentum including background, \(\rm\langle p_{T}\rangle\), is the average \(\rm p_{T}\) of the jet's constituents. We investigated the use of higher order moments of the momentum distribution, such as the standard deviation. These features provide useful information, but reproducing them in a model would require getting the single particle spectrum and the fragmentation functions correct to high precision. We concluded this would add too much model dependence. ### Silhouette values We borrow a technique from data science to characterize the overlap between our different populations. Our two populations are unambiguous signal and combinatorial jets. The silhouette value [32] describes, for each signal or combinatorial jet, whether it shares more characteristics with its own cluster or the other cluster. The silhouette value for the jet with index i is given by \[\rm S(i)=\frac{b(i)-a(i)}{max\{a(i),b(i)\}} \tag{3}\] where a(i) is the mean in-class distance, \[\rm a(i)=\frac{1}{N_{I}-1}\sum_{j\neq i}^{N_{I}}d(i,j) \tag{4}\] where the sum runs over jets of the same class as jet i, and \(\rm N_{I}\) is the number of jets in the same class as jet i; b(i) is the mean out-of-class distance, \[\rm b(i)=\frac{1}{N_{J}}\sum_{j}^{N_{J}}d(i,j) \tag{5}\] where the sum here runs over jets of the opposite class from jet i, and \(\rm N_{J}\) is the number of jets in the opposite class. The \(\rm d(i,j)\) in both equations represents the distance between jet i and jet j. This is typically the Euclidean distance computed in feature space. We standardize our data so that each feature lies in the range [0,1], ensuring that each feature contributes equally to the silhouette value, rather than features with larger ranges having a larger effect: \[\rm d(i,j)=\left\{\left(\frac{A_{jet,j}-A_{jet,i}}{A_{jet}^{max}-A_{jet}^{min}}\right)^{2}+\left(\frac{p_{T,j}^{1}-p_{T,i}^{1}}{p_{T}^{1,max}-p_{T}^{1,min}}\right)^{2}+\left(\frac{\lambda_{1,j}^{1}-\lambda_{1,i}^{1}}{\lambda_{1}^{1,max}-\lambda_{1}^{1,min}}\right)^{2}+\left(\frac{\langle p_{T}\rangle_{j}-\langle p_{T}\rangle_{i}}{\langle p_{T}\rangle^{max}-\langle p_{T}\rangle^{min}}\right)^{2}\right\}^{\frac{1}{2}}. \tag{6}\] Silhouette values have a range from -1 to 1. A jet with a positive silhouette value is more similar to its own class than the other class.
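A minimal sketch of this silhouette computation with scikit-learn (our own illustration; the feature values below are hypothetical), where the min-max scaling implements the standardization in eq. (6):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import silhouette_samples

# Hypothetical feature matrix: columns are [A_jet, pT_lead, width, mean_pT]
X = np.array([[0.78, 12.0, 0.21, 2.9],   # signal-like jet
              [0.75,  9.5, 0.25, 2.4],   # signal-like jet
              [0.31,  2.1, 0.42, 0.9],   # combinatorial-like jet
              [0.28,  1.8, 0.45, 0.8]])  # combinatorial-like jet
labels = np.array([1, 1, 0, 0])          # 1 = signal, 0 = combinatorial

# Scale each feature to [0, 1] so the Euclidean distance matches eq. (6)
X_scaled = MinMaxScaler().fit_transform(X)

# One silhouette value per jet, eq. (3); a negative value means the jet
# looks more like the opposite class than its own
print(silhouette_samples(X_scaled, labels))
```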
Conversely, a negative silhouette value indicates that the jet is more similar to jets from the other class than to jets from its own class. While it would be possible to extend the list of observables, we do not anticipate others would have any significant distinguishing power. We rejected a number of interesting observables that were either too complex to faithfully simulate or too difficult to measure in data. Silhouette values could help determine the effectiveness of any new observable to distinguish signal and combinatorial jets. ### Kinematic selection optimization We train a machine learning system to learn about the relationship between a jet's features and whether it is a signal jet or a combinatorial jet using the input features described in sec. II.4. We use a random forest, but a neural network or any other algorithm capable of classification could be used. We then train a decision tree on the predictions of the random forest. The decision tree is a classification algorithm that uses a series of kinematic selections to classify each jet. This technique is a type of rule extraction called the oracle method [33]. We extract the top-level node of our trained decision tree, the kinematic selection which best separates signal and combinatorial jets. We then evaluate the biases and background reduction of any such selection. The details of our specific machine learning system are mentioned below. #### ii.6.1 Decision trees We use the scikit-learn implementation of the decision tree [34], which is based on the Classification and Regression Trees algorithm described in [35]. A decision tree recursively partitions the feature space such that the samples with the same labels are grouped together. Each candidate split, \(\rm\vartheta\), consists of a feature and a threshold; in our case, a kinematic selection. The selection is evaluated according to an impurity function \(\rm H\) and its quality is determined by \[\rm G(Q_{m},\vartheta)=\frac{N_{m}^{left}}{N_{m}}H(Q_{m}^{left}(\vartheta))+\frac{N_{m}^{right}}{N_{m}}H(Q_{m}^{right}(\vartheta)) \tag{7}\] where \(\rm Q_{m}\) are the data at node m, \(\rm N_{m}\) is the total number of samples at node m, and the left (right) superscripts indicate that the data are below (above) the threshold. The algorithm selects the split which minimizes \(\rm G\). There are two primary choices for the impurity function \(\rm H\): Gini impurity, \[\rm\sum_{i=1}^{N_{c}}p_{i}(1-p_{i}) \tag{8}\] and entropy, \[\rm-\sum_{i=1}^{N_{c}}p_{i}log_{2}p_{i} \tag{9}\] where \(\rm N_{c}\) is the number of classes and \(\rm p_{i}\) is the probability of randomly picking an element of class i. We use Gini impurity in this study.
\begin{table} \begin{tabular}{|c c c|} \hline Symbol & Name & Definition \\ \hline \(\rm A_{jet}\) & Jet Area & Area covered by all jet constituents \\ \(\rm p_{T}^{1}\) & Leading Hadron Momentum & Momentum of leading jet constituent \\ \(\rm\lambda_{1}^{1}\) & Jet Width & \(\rm\sum_{i=1}^{N_{constit.}}z_{i}\cdot\Delta R_{i,jet}/R\) \\ \(\rm\langle p_{T}\rangle\) & Mean \(\rm p_{T}\) & \(\rm\frac{1}{N_{constit.}}\sum_{i=1}^{N_{constit.}}p_{T,i}\) \\ \hline \end{tabular} \end{table} Table 1: Observables used to characterize each jet population.
Decision trees can be prone to over-fitting the data, resulting in poor generalization. Additionally, their accuracy is quite dependent on appropriate hyper-parameter tuning [34].
We used scikit-learn v1.2.0 and the default parameters of the DecisionTreeClassifier except for the max_depth parameter, which was set to 3. #### ii.6.2 Random forests To overcome the problem of over-fitting, the random forest algorithm [36] is used. This is an ensemble method involving training hundreds of randomized decision trees and averaging their predictions for a more robust, general prediction. There are two sources of randomness in the algorithm: each decision tree sees only a random sample of the data, and at each split the decision tree can use either all of the features or a random subset of a chosen size. Each decision tree is trained independently from the others. The variance of a random forest is smaller than that of a single decision tree due to the injection of randomness and the resulting average over the decision trees. The prediction for each jet becomes the ground truth for the "oracle" decision tree. It should be noted that the scikit-learn implementation is not exactly as in [36], as described in [37]. We used scikit-learn v1.2.0 and the default parameters of the RandomForestClassifier except as noted in tab. 2. After the "oracle" decision tree is trained, we extract a kinematic selection, the top node of the tree, as well as the signal jet and combinatorial jet rejection rates after applying the selection. We then apply these selections to our data and study the potential biases that they introduce to determine their usefulness in analyses; a minimal code sketch of this pipeline is given below. ### Leading sub-jet momentum fraction The concept of quark or gluon jets is only defined at leading order, so any realistic distinction, even in a model calculation, is somewhat ad hoc. However, models predict differences between quark-like and gluon-like jets. Suppression of background by selecting on the kinematic properties of jets may bias the population of surviving jets towards or away from quark-like or gluon-like jets because quark-like jets fragment into fewer, harder particles which are closer on average to the jet axis than gluon-like jets [38; 39]. The observables in sec. II.4 are therefore expected to have different distributions for quark-like and gluon-like jets. We use the leading subjet momentum fraction to qualitatively evaluate whether kinematic selections impose a substantial bias towards quark-like jets. The jet constituents are reclustered with the anti-k\({}_{\mathrm{T}}\) jet finder with a smaller radius parameter, R = 0.1. The leading subjet momentum fraction, z\({}_{\mathrm{subjet}}\), is the fraction of the jet's total momentum in the leading subjet [40] \[\mathrm{z}_{\mathrm{subjet}}=\frac{\mathrm{p}_{\mathrm{T}}^{\mathrm{subjet}}}{\mathrm{p}_{\mathrm{T}}^{\mathrm{jet}}}. \tag{10}\] Studies of this observable indicate that quark-like jets have a higher z\({}_{\mathrm{subjet}}\) while gluon-like jets have a lower z\({}_{\mathrm{subjet}}\) [41; 42]. We use only PYTHIA particles to determine z\({}_{\mathrm{subjet}}\) in our sample. We note that the connection between z\({}_{\mathrm{subjet}}\) and the leading parton is more tenuous for smaller R and lower momenta, but nevertheless consider this a reasonable observable to test for biases towards quark-like jets. ## III Results We successively apply four different kinematic selections. Here we present results for R = 0.5 and p\({}_{\mathrm{T}}^{\mathrm{hardmin.}}\) = 40 GeV/c. Results from other selections can be found in the appendices.
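Before turning to the figures, the rule-extraction pipeline of sec. II.6 can be sketched as follows (our own minimal illustration; the features and labels are synthetic, while the non-default hyper-parameters follow tab. 2):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

feature_names = ["A_jet", "pT_lead", "width", "mean_pT"]
rng = np.random.default_rng(42)
X = rng.uniform(size=(1000, 4))   # hypothetical jet features
y = (X[:, 1] > 0.5).astype(int)   # hypothetical signal/combinatorial labels

# Step 1: random forest, using the non-default parameters of tab. 2
forest = RandomForestClassifier(n_estimators=200, max_depth=3,
                                min_samples_leaf=100,
                                min_weight_fraction_leaf=0.1,
                                max_samples=0.9, random_state=42)
forest.fit(X, y)

# Step 2: "oracle" decision tree trained on the forest's predictions
oracle = DecisionTreeClassifier(max_depth=3, random_state=42)
oracle.fit(X, forest.predict(X))

# Step 3: the top-level node of the tree is the extracted kinematic selection
tree = oracle.tree_
print(f"select jets with {feature_names[tree.feature[0]]} > {tree.threshold[0]:.3f}")
```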
Figure 1(a) shows the inclusive distribution of jet area versus \(\lambda_{1}^{1}\) for both signal and combinatorial jets. This shows the overlap between both jet populations. There are few signal jets at small areas, indicating that the large area region can be selected with little bias. We therefore apply the ALICE selection of A \(>\) 0.6\(\pi\)R\({}^{2}\). For jets with resolution parameters R \(>\) 0.3 and p\({}_{\mathrm{T}}^{\mathrm{hardmin.}}\) \(<\) 30 GeV/c there may be some bias imposed due to more signal jets in the low area region. Figure 2 shows the distribution of p\({}_{\mathrm{T}}^{1}\), \(\lambda_{1}^{1}\), and \(\langle\)p\({}_{\mathrm{T}}\rangle\) for signal and combinatorial jets before and after this selection, normalized by the total number of jets before the selection. Changes in the signal distributions are negligible except for resolution parameters R = 0.5 and 0.6 when p\({}_{\mathrm{T}}^{\mathrm{hardmin.}}\) \(<\) 20 GeV/c, while the surviving combinatorial jet distributions for \(\lambda_{1}^{1}\) and \(\langle\)p\({}_{\mathrm{T}}\rangle\) become more like those of signal jets. We call these combinatorial jets which look like signal jets "imposter jets." The selection on area is so effective because many combinatorial jets consist of a single particle; such jets have small areas with low \(\lambda_{1}^{1}\) and a \(\langle\)p\({}_{\mathrm{T}}\rangle\) distribution closer to the inclusive particle momentum distribution. The jets remaining after applying a kinematic selection in fig. 2 indicate a significant difference between p\({}_{\mathrm{T}}^{1}\) distributions for signal and combinatorial jets, while the \(\lambda_{1}^{1}\) distributions overlap significantly for signal and combinatorial jets. The differences for both p\({}_{\mathrm{T}}^{1}\) and \(\lambda_{1}^{1}\) increase with p\({}_{\mathrm{T}}^{\mathrm{hardmin.}}\) and decrease with increasing resolution parameter. The \(\langle\)p\({}_{\mathrm{T}}\rangle\) distributions indicate a separation
between signal and combinatorial jets. They follow the same trends in \(\rm p_{T}^{hardmin.}\) and R as in \(\rm p_{T}^{1}\) and \(\rm\lambda_{1}^{1}\), but the differences are not as large as those in the \(\rm p_{T}^{1}\) distributions. Furthermore, the \(\rm\langle p_{T}\rangle\) distribution would be difficult to recreate in simulation, meaning that any biases would be difficult to reproduce for comparisons between models and data.
\begin{table} \begin{tabular}{|c c c|} \hline Parameter name & This study & Default \\ \hline n\_estimators & 200 & 100 \\ max\_depth & 3 & None \\ min\_samples\_leaf & 100 & 1 \\ min\_weight\_fraction\_leaf & 0.1 & 0.0 \\ max\_samples & 0.9 & 1.0 \\ random\_state & 42 & None \\ \hline \end{tabular} \end{table} Table 2: Non-default parameters used for the RandomForestClassifier in this study.
Figure 1(b) shows that after this selection, the distributions of signal and background jets overlap significantly, so any further selection will suppress signal jets as well. We then investigate a selection of \(\rm p_{T}^{1}>3.0\) GeV/c. The distributions of A, \(\rm\lambda_{1}^{1}\), and \(\rm\langle p_{T}\rangle\) for surviving signal and combinatorial jets are shown in fig. 3. As for the selection on area, there is no apparent difference in the distributions for surviving signal jets. Surviving combinatorial jets are imposters. Figure 1(c) shows that the properties of the surviving jets, indeed, still overlap significantly. A selection on \(\rm p_{T}^{1}\) is effective at suppressing background, but it is not collinear safe and may introduce biases. Biases may be unavoidable, and the impact of a selection on \(\rm p_{T}^{1}\) is at least likely to be easier to reproduce in model calculations than other observables, such as \(\rm\langle p_{T}\rangle\). We then approach kinematic selections using rule extraction from a random forest. The selection on area is efficient, cutting little or no signal while eliminating significant background, but a tighter selection which eliminated significant signal might be difficult to reproduce in models. We therefore keep the area selection and use the random forest, as well as the decision tree, both described in sec. II.6, to identify the best kinematic selection among \(\rm p_{T}^{1}\), \(\rm\lambda_{1}^{1}\) and \(\rm\langle p_{T}\rangle\). The algorithm found that the optimal selection was \(\rm p_{T}^{1}>5.036\) GeV/c for jets with R = 0.5 and \(\rm p_{T}^{hardmin.}\) = 40 GeV/c. For other values of the resolution parameter and \(\rm p_{T}^{hardmin.}\) the trend is generally a tighter selection on \(\rm p_{T}^{1}\), ranging from 3.5 GeV/c to 5.2 GeV/c, given in tab. 4. The algorithm finds \(\rm\langle p_{T}\rangle\) to be the optimal selection for \(\rm p_{T}^{hardmin.}>60\) GeV/c, but we do not explore it in this study for the reasons mentioned above. We can see in fig. 1(d) that the remaining combinatorial jets are imposters. Figure 4 shows the impact of this selection on the distribution of A, \(\rm\lambda_{1}^{1}\), and \(\rm\langle p_{T}\rangle\) for signal and combinatorial jets. The suppression of combinatorial jets is much greater than that seen in fig. 2 or fig. 3. We have only considered rectilinear selections so far, but it is possible that some combination of observables described in sec. II.4 may be more effective at distinguishing signal and combinatorial jets. The silhouette values described in sec. II.5 are designed to help determine whether two sets are distinguishable or whether there is too much overlap.
Figure 1: Distribution of \(\rm A_{jet}\) vs. \(\rm\lambda_{1}^{1}\) for signal and combinatorial jets with R = 0.5 and \(\rm p_{T}^{hardmin.}\) = 40 GeV/c after each kinematic selection. The z-axis has a log scale.
Figure 5 shows the silhouette values for each of our selections. Table 3 lists the fraction of signal and combinatorial jets above and below zero. The inclusive distribution in fig. 5(a) shows that, before the area selection, there is a large population of combinatorial jets with a positive silhouette value (63%). These combinatorial jets are easier to distinguish from signal jets. However, there is also a significant population of imposters, those with a negative silhouette value (37%). Figure 5(b), (c), and (d) show the silhouette values for signal and combinatorial jets after the area selection, the p\({}_{\rm T}^{1}\) selection, and the selection chosen by the random forest. In these cases, the distributions of silhouette values for signal and combinatorial jets have similar shapes, indicating that either is just as likely to look like its own group or like the other group. The selection chosen by the machine learning algorithm suppresses combinatorial jets much more effectively, removing 99.9% of combinatorial jets. However, signal jets may be more susceptible to survivor bias, as this selection removes 1.6% of signal jets.
Figure 6 shows the distribution of z\({}_{\rm subjet}\) for each kinematic selection compared to the inclusive distribution for signal jets. The selections on A and on p\({}_{\rm T}^{1}>3.0\) GeV/c are consistent with one for all z\({}_{\rm subjet}\). For the kinematic selection chosen by the machine learning algorithm, low z\({}_{\rm subjet}\) signal jets are significantly suppressed. This suppression is more pronounced for jets with larger resolution parameters and lower p\({}_{\rm T}^{\rm hardmin.}\).
Figure 3: Distributions of jet area, jet width, and mean constituent momentum for jets with R = 0.5 and p\({}_{\rm T}^{\rm hardmin.}\) = 40 GeV/c before and after excluding jets with p\({}_{\rm T}^{1}<3\) GeV/c. The overall reduction of combinatorial jets was 81.6%.
Figure 4: Distributions of jet area, leading hadron momentum, jet width, and mean constituent momentum for jets with R = 0.5 and p\({}_{\rm T}^{\rm hardmin.}\) = 40 GeV/c before and after excluding jets with \(\rm A<0.6\pi R^{2}\) and p\({}_{\rm T}^{1}<5.036\) GeV/c. The overall reduction of combinatorial jets was 99.8%. This tighter selection rejects predominantly gluon-like signal jets, as shown in Figure 6.
## IV Conclusions We used the background generator TennGen combined with signal jets from PYTHIA to investigate ways that signal and combinatorial jets can be distinguished in heavy ion collisions. We use properties which could be reproduced in data, A, \(\mathrm{p_{T}^{1}}\), \(\lambda_{1}^{1}\), and \(\langle\mathrm{p_{T}}\rangle\), to describe each jet. We find that signal and combinatorial jets overlap inextricably. The silhouette values show that they are indistinguishable using properties which could realistically be used in data. Any kinematic selection to reduce the number of combinatorial jets leaves a population of imposter jets which look like signal jets. Most kinematic selections to reduce combinatorial jets, aside from a loose cut on the area, reduce the population of signal jets as well. While a loose selection on \(\mathrm{p_{T}^{1}}\) does not appear to impose as much bias, the tighter selection suggested by the machine learning system significantly biases the surviving jet population towards quark-like jets. If such a selection were applied in data, this indicates that the surviving signal jets would be biased towards quark-like jets. Measurements where corrections for kinematic selections are made with an unmodified simulation, such as PYTHIA, could be correcting for unmeasured gluon-like jets with measurements of quark-like jets. Complicated methods for distinguishing signal and combinatorial jets, such as black-box machine learning or correcting for imposter jets using unfolding, may have model-dependent assumptions. The possibility of such issues should be clearly elucidated in studies which use these methods. We call for a greater focus on any assumptions made when subtracting combinatorial background and on the biases introduced by methods for suppressing and subtracting this background. ## V Acknowledgements We are grateful to Mateusz Ploskon, Raghav Elayavalli, and Hannah Bossi for feedback on the manuscript. This work was supported in part by funding from the Division of Nuclear Physics of the U.S. Department of Energy under Grant No. DE-FG02-96ER40982. We also acknowledge support from the UTK and ORNL Joint Institute for Computational Sciences Advanced Computing Facility.
2310.10358
Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs
Large language models (LLMs) are increasingly applied for tabular tasks using in-context learning. The prompt representation for a table may play a role in the LLM's ability to process the table. Inspired by prior work, we generate a collection of self-supervised structural tasks (e.g. navigate to a cell and row; transpose the table) and evaluate the performance differences when using 8 formats. In contrast to past work, we introduce 8 noise operations inspired by real-world messy data and adversarial inputs, and show that such operations can impact LLM performance across formats for different structural understanding tasks.
Ananya Singha, José Cambronero, Sumit Gulwani, Vu Le, Chris Parnin
2023-10-16T12:51:24Z
http://arxiv.org/abs/2310.10358v1
# Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs ###### Abstract Large language models (LLMs) are increasingly applied for tabular tasks using in-context learning. The prompt representation for a table may play a role in the LLM's ability to process the table. Inspired by prior work, we generate a collection of self-supervised table structure understanding tasks (e.g. navigate to a cell and row; transpose the table) and evaluate the performance differences when using eight formats. In contrast to past work, we introduce eight noise operations inspired by real-world messy data and adversarial inputs, and show that these can impact LLM performance across formats for different structural understanding tasks. ## 1 Introduction Recent progress in large language models (LLMs) has enabled substantial gains when performing data-related tasks, such as table question answering [6], semantic type annotation [12], and data wrangling [8]--often with just in-context learning. However, data is also messy. Tabular data often arrives in a semi-structured format, with inconsistent shapes, missing entries, and unnormalized or inconsistently formatted values. This makes the task of processing and understanding tabular data particularly challenging for any system, including LLMs. For example, based on product telemetry, 21% of Excel files imported using LLM-based Dataverse Copilot1 were missing headers. Furthermore, in industrial settings, challenges such as privacy, compliance, and even potentially adversarial table inputs constrain how data can be handled and processed [3]. Footnote 1: [https://powerapps.microsoft.com/en-us/blog/introducing-an-easier-than-ever-experience-to-import-data-from-excel/](https://powerapps.microsoft.com/en-us/blog/introducing-an-easier-than-ever-experience-to-import-data-from-excel/) In this work we systematically explore the impact that the tabular representation format and real-world-inspired noise have on LLMs' ability to perform basic structural table understanding tasks [13] through in-context learning. Like prior work, we generate self-supervised structural table tasks to assess structural understanding. In contrast to prior investigations, we incorporate eight noise-inducing operations--such as renaming columns or transposing the table--that manipulate the table's structure in ways that emulate messy data [11] or even adversarial inputs. We evaluate both fact-finding and transformation tasks over seven public datasets, eight table representations commonly used in data science, and eight noise-inducing operations. In contrast to prior work, we find that HTML does not seem to provide the best performance at fact-finding or transformation tasks. We find that a dataframe-based format (DFLoader) obtains the highest overall pass@1 (79.79%) in fact-finding tasks and the highest overall F1 score (98.55%) for transformation tasks. We find that applying noise operations to tables can affect performance in fact-finding and transformation tasks. For example, introducing semi-structured content impacts data type detection (e.g. JSON format's pass@1 drops by 12.43%) and introducing sequential column naming can degrade performance for a column reordering task (e.g. comma-separated-value format's F1 score degrades by 67.33%). We believe future work can build on our findings by exploring the extent to which these structural table understanding tasks relate to downstream task performance.
Furthermore, such work should include (and extend) our noise operations to evaluate the impact of structural changes. In summary, our key contributions are:
1. Extending self-supervised table structure understanding tasks by incorporating noise operations inspired by real-world noise
2. An extensive evaluation over eight table formats and eight noise-inducing operations
3. Our data and code to facilitate future work on structural table understanding
## 2 Related Work Transformer architectures have led to state-of-the-art performance in NLP and other areas of machine learning. This has motivated a line of research developing transformer models designed for tabular tasks, such as table question answering. These models (e.g. TUTA [14], TAPAS [7]) are predominantly developed by training and fine-tuning on large corpora of data scraped from Wikipedia and introducing different attention mechanisms. Prior work [10] has carried out a detailed analysis of how these mechanisms work and how they affect table understanding tasks. In contrast, we focus on a general LLM (GPT3) rather than ones designed for table tasks and carry out our experiments using in-context learning. Importantly, we scope our experiments to understand the impact of the table representation format (subject to noise operations) on self-supervised table structure tasks. Prior work has used in-context learning to carry out tasks on tabular data. For example, TableLLM showed that LLMs can perform classification tasks over tabular datasets. Techniques like chain-of-thought [15] have been further refined in the context of tabular data [16; 5]. In contrast, we focus on self-supervised table structure tasks and consider the impact of table formats and the robustness to noise inspired by real-world data issues and adversarial behaviors. The closest related work is [13], which examines LLM performance on structural understanding tasks as a function of different tabular formats. Our work extends this line of research with other formats, new fact-finding and transformation tasks, and noise operations inspired by messy data. ## 3 Methodology To evaluate the extent to which different table representation formats and noise operations affect an LLM's ability to correctly answer structural table understanding tasks, we generate a collection of self-supervised tasks (i.e. where we can derive the task and answer from the table without the need for annotation). We now describe this approach in detail. Let \(T\) be a flat table with a header, \(F\) be a set of table representation formats (e.g. JSON), each of which transforms \(T\) into a corresponding string representation for the prompt. Let \(N\) be the set of noise operations (e.g. shuffle header names), each of which transforms \(T\) into \(T^{\prime}\). Let \(Q\) be the set of self-supervised tasks (e.g. look up the value at row X and column Y), each of which, given a table, generates a collection \(\{(t,a)\}\) of self-supervised task question \(t\) and answer \(a\) pairs. We create an evaluation benchmark for a given table of the form \(\{(f(n(T)),t,a)\mid q\in Q,f\in F,n\in N,(t,a)\in q(n(T))\}\). For each \((t,a)\), we then compare \(a\) to the LLM's answer given \(t\) and \(f(n(T))\). ### Table and Formats **Tables**: We scope our experiments to flat tables with a header row. Furthermore, each column must contain a single datatype (e.g. string, numeric, date).
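Before describing the formats, a simplified sketch of this benchmark construction may help (our own illustration; `fmt_json`, `shuffle_rows`, and `navigation_task` stand in for one element each of \(F\), \(N\), and \(Q\)):

```python
import pandas as pd

# One table format f in F, one noise operation n in N, one task generator q in Q
def fmt_json(df: pd.DataFrame) -> str:
    return df.to_json(orient="records")

def shuffle_rows(df: pd.DataFrame) -> pd.DataFrame:
    return df.sample(frac=1.0, random_state=0).reset_index(drop=True)

def navigation_task(df: pd.DataFrame):
    # Yields (question t, answer a) pairs derived from the table itself
    for r in range(len(df)):
        for c in df.columns:
            yield f"What value is in row {r} of column '{c}'?", df.loc[r, c]

T = pd.DataFrame({"name": ["ada", "bob"], "age": [36, 41]})
T_noisy = shuffle_rows(T)
for question, answer in navigation_task(T_noisy):
    prompt = fmt_json(T_noisy) + "\n" + question
    # ...send `prompt` to the LLM and compare its reply against `answer`
    print(question, "->", answer)
```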
**Formats**: We represent these tables with the following 8 popular formats, summarized in Figure 1: DFLoader, JSON, Data-Matrix, Markdown, Comma-Separated-Values, Tab-Separated-Values, HTML, and HTML-No-Space. DFLoader corresponds to the Python code snippet that defines the table using the Pandas DataFrame API. Data-Matrix format represents each row as a mixed-type list of values. HTML format represents the table using nested tags, and HTML-No-Space inlines the HTML by removing whitespace. Note that our tables include both headers and row indices.

Figure 1: Our evaluation considers 8 different table representation formats that are popular in the data science domain.

### Noise Operations

We explore the extent to which noise operations can impact the LLM's ability to correctly perform structural table tasks under varying table representation formats. We design noise operators that emulate real-world table challenges (e.g., uninformative sequential headers or merged cells) or even adversarial behavior (e.g. shuffled or arbitrary column names).

**Spatial Invariance**: Tables often need to be rearranged or transformed to be used. For example, long tables with many columns may need to be transposed to facilitate plotting or for better readability. Inspired by these challenges, we introduce the following noise operations:

* _ShuffleRows_: We randomly reorder table rows.
* _ShuffleColumns_: We randomize the order of columns within the table.
* _TransposeTable_: We transpose the table.

**Headers**: Table headers often play an important role in table understanding, providing pseudo-natural language information about their content and facilitating referencing. However, in practice, user tables may not always have informative or consistent headers, or adversarial actors may remove header information altogether. To simulate such cases we introduce the following noise operations:

* _ArbitraryColumnNames_: We arbitrarily rename headers to randomly drawn alphanumeric sequences.
* _SequentialColumnNames_: We rename headers to sequential entries of the form col_0, col_1, and so on.
* _ShuffleColumnNames_: We shuffle header names, while keeping data intact.

**Semi-structured Content**: Tables may contain columns that have semi-structured content (e.g. phone numbers) or users may need to start by parsing the table from a semi-structured representation. To induce such semi-structured data we use two noise operations:

* _SerializeRow_: We transform each row into a string of key-value pairs. The resulting table has only one column in it.
* _ColumnMerger_: We merge 2, 3, or 4 randomly chosen contiguous columns by adding (within each row) a -- between their values.

Figure 2 shows each of these noise operations applied to a table; a code sketch of several of them follows.

Figure 2: We apply eight different noise operations to test for the influence of spatial invariance, header row information, and the presence of semi-structured content on structural table task performance.
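The sketch below gives one plausible pandas rendering of a few of these operations. The function names and the choice of pandas primitives are our own; the paper does not prescribe an implementation.

```python
import pandas as pd

def transpose_table(df: pd.DataFrame) -> pd.DataFrame:
    # Spatial invariance: former headers become the row index and vice versa.
    return df.T

def sequential_column_names(df: pd.DataFrame) -> pd.DataFrame:
    # Header noise: replace headers with uninformative col_0, col_1, ...
    return df.set_axis([f"col_{i}" for i in range(df.shape[1])], axis=1)

def column_merger(df: pd.DataFrame, start: int, width: int) -> pd.DataFrame:
    # Semi-structured content: join `width` contiguous columns with a "--".
    cols = list(df.columns[start:start + width])
    merged = df[cols].astype(str).agg("--".join, axis=1)
    out = df.drop(columns=cols)
    out.insert(start, "--".join(cols), merged)
    return out

def serialize_row(df: pd.DataFrame) -> pd.DataFrame:
    # Semi-structured content: each row becomes one key-value string,
    # so only a single column remains.
    text = df.apply(lambda r: " ".join(f"{c}: {v}" for c, v in r.items()), axis=1)
    return text.to_frame(name="row")
```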
### Self-Supervised Structural Tasks

We employ self-supervised structural tasks [13], which can be automatically generated, to evaluate the extent to which formats and noise operations affect the LLM's ability to understand table structure. We consider the following structural fact-finding tasks:

* Navigation Test: Given row and column coordinates, retrieve the value at that location. The model succeeds if it retrieves the value at those coordinates.
* Column Lookup Test: Given a value, retrieve the name of a column that contains that value. The model succeeds if it retrieves the name of a column that contains that value.
* Row Lookup Test: Given a value, retrieve the row index for a row that contains it. The model succeeds if it retrieves the index of a row that contains that value.
* Data Type Lookup Test: Given a column name, determine the associated Pandas API datatype for the column values. The model succeeds if the datatype matches the ground truth.

In addition to fact-finding tasks, we introduce transformation tasks that require manipulating the whole table:

* Table Reconstruction Test: Given a table, we serialize it (applying the Serialize Rows operation previously described and then joining rows with the newline character). The model must parse the table and generate its output in one of our 8 table formats.
* Table Transpose Test: Given a table, the model is tasked with transposing the table.
* Table Column Reordering Test: Given a table and a new (random) column order, the model must reorder the columns to match the indicated order.

To measure the success of transformation tasks we compute precision, recall, and F1 score over the table values based on coordinates. Figure 3 shows an example for each task.

Figure 3: We generate self-supervised structural table understanding tasks: fact-finding tasks (e.g. navigation) and transformation tasks (e.g. table transposition).

## 4 Experimental Setup

Our evaluation is designed to answer the following research questions:

* How does the table format impact LLM performance for self-supervised structural table understanding tasks?
* How do noise operations impact LLM performance across different table formats?

### Experimental Setup

We use OpenAI's GPT3 (_text-davinci-003_ endpoint) [2]. Exploring cross-LLM behavior is left to future work. We generate responses with temperature 0 to encourage deterministic behavior. Our prompts have a token limit of 4097, as determined by the underlying LLM. For each (table, format, noise operation, structural task) we generate 100 tests2 for fact-finding tasks and 25 tests for transformation tasks. For each fact-finding test we generate 15 completions and for each transformation test we generate 5 completions.

Footnote 2: For the HTML format we generate 50 tests per (table, format, noise operation, structural task), due to token limits and throttling

We report average performance metrics over tests. For fact-finding tasks we compute pass@1 [4] for each test. For transformation tasks we compute cell-wise precision, recall, and F1 per completion and average them; computing an exact table match, as needed for pass@1, would not account for partial performance. When reporting statistical significance, we perform comparisons using the t-test (SciPy T-Test implementation) and perform Bonferroni correction for multiple comparisons [1]. We evaluate on 7 public datasets collected from the popular data-science website Kaggle [9]. We chose datasets that are widely used for both classification and regression tasks: AirQuality, HousingData, Diabetes, Wine Testing, Iris and Titanic. We remove all rows where null values are present to avoid creating spurious tasks.
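As one concrete reading of the coordinate-based cell matching described above, the sketch below scores a predicted table against the gold table by treating each (row position, column position, value) triple as a retrieved item. The exact matching convention (e.g. value normalization) is our assumption, not the paper's specification.

```python
import pandas as pd

def cell_f1(pred: pd.DataFrame, gold: pd.DataFrame):
    # Each (row position, column position, value) triple counts as one item,
    # so partially correct tables receive partial credit.
    pred_cells = {(i, j, str(pred.iat[i, j]))
                  for i in range(pred.shape[0]) for j in range(pred.shape[1])}
    gold_cells = {(i, j, str(gold.iat[i, j]))
                  for i in range(gold.shape[0]) for j in range(gold.shape[1])}
    tp = len(pred_cells & gold_cells)
    precision = tp / len(pred_cells) if pred_cells else 0.0
    recall = tp / len(gold_cells) if gold_cells else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f1
```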
## 5 Results

### RQ1: Impact of Formats on LLM Performance on Different Tasks

**Fact-Finding Tasks**: Table 1 summarizes pass@1 rates for different formats across our fact-finding tasks. We find that performance can vary substantially by format and task. For example, while Markdown is a popular format for data scientists sharing results, using this format for tabular representation results in the worst ColumnLookupTest performance -- 18.4 percentage points lower than the best-performing HTML format (p-value < \(\frac{0.01}{7}\)).

In contrast to prior work [13], we found that the HTML format underperforms alternatives like the JSON and DFLoader formats. However, HTML did result in the highest performance for one of our fact-finding tests: the ColumnLookupTest, where its average pass@1 was 6.38% higher than the next best format (p-value < \(\frac{0.01}{7}\)). A substantial downside of HTML as a table representation is its verbosity: in our experiments, using HTML results in up to half as many rows being included compared to other formats. Removing spaces in HTML improves this slightly but the challenge remains. The JSON format, a popular serialization format, outperformed alternatives in the NavigationTests: 5.86% higher than the Comma Separated format (p-value < \(\frac{0.01}{7}\)). We hypothesize that this performance stems from a combination of orderly structure and repeating navigation elements (specifically headers). As shown in Figure 1, every row is laid out on a separate line with an associated key (showing the row index), and each row contains a dictionary where keys are header names.

Our results further emphasize the brittleness of LLMs to minor changes in structure representation. For example, while there is relatively little difference between the DataMatrix format and the Comma Separated format, RowLookup performance for DataMatrix was 9.3% higher (p-value < \(\frac{0.01}{7}\)). On average, across our fact-finding tasks, we found that the DFLoader format, which is essentially a code snippet in Pandas, demonstrates competitive results and may be a suitable choice for prompts where the user does not yet know what kind of fact-finding knowledge is important for their task. Finally, we find that all of our formats perform relatively well in our DataTypeLookupTests, highlighting that different table formats may not play a substantial role in understanding the type of values (e.g. string versus numeric).

\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Table Format & ColumnLookupTests & DataTypeLookupTests & NavigationTests & RowLookupTests & Overall \\ \hline CommaSeparated & 64.43 & 95.00 & 65.57 & 78.14 & 75.78 \\ DFLoader & 72.71 & 95.29 & 68.29 & 82.86 & **79.79** \\ DataMatrix & 62.57 & 84.00 & 56.57 & **87.43** & 72.64 \\ JSON & 65.00 & **96.43** & **71.43** & 78.86 & 77.93 \\ Markdown & 61.43 & 85.86 & 48.71 & 73.29 & 67.32 \\ TabSeparated & 67.00 & 94.00 & 64.43 & 78.14 & 75.89 \\ HTML & **79.83** & 94.67 & 58.83 & 52.33 & 71.42 \\ HTMLNoSpace & 73.00 & 93.50 & 62.00 & 59.50 & 72.00 \\ \hline \hline \end{tabular}
\end{table} Table 1: Average Pass@1 for fact-finding tasks. DFLoader provides overall high pass@1 performance.

**Transformation Tasks**: Our transformation tasks require that a format be suitable for whole-table transformations. Our results are summarized in Table 2. Overall, we found that the DFLoader and JSON formats outperformed alternatives for all the table transformation tasks. We hypothesize that this stems from isolation and repetition of key structural elements, which enable use of local context to carry out whole-table tasks: DFLoader presents each column in a separate list, and JSON repeats headers locally. For example, TableTranspose over our JSON format can effectively be carried out per line, compared to transposition over a format like comma-separated values, which requires more complex retrievals (e.g. all header values are in the first row). Similarly to the fact-finding tasks, the Markdown format results in low performance across all our tasks, providing further evidence that such a format should not be used in prompts for tabular data. For example, Markdown's F1 score for column reordering is 49.67% lower than JSON's (p-value < \(\frac{0.01}{8}\)).

\begin{table}
\begin{tabular}{l c c c c} \hline \hline Table Format & TableColumnReorderTests & TableReconstructionTests & TableTransposeTests & Overall \\ \hline CommaSeparated & 95.33 & 74.33 & 99.00 & 89.55 \\ DFLoader & 99.33 & **98.00** & 98.33 & **98.55** \\ DataMatrix & 92.67 & 90.67 & 0.00 & 61.11 \\ JSON & **99.67** & 85.00 & **100.00** & 94.89 \\ Markdown & 50.00 & 24.33 & 34.00 & 36.11 \\ TabSeparated & 93.33 & 92.33 & 50.00 & 78.55 \\ HTML & 50.00 & 86.00 & 83.33 & 73.11 \\ HTMLNoSpace & 83.33 & 84.00 & 83.33 & 83.55 \\ \hline \hline \end{tabular}
\end{table} Table 2: F1 scores for transformation tasks. DFLoader and JSON format, with structural element isolation and repetition, enable high performance on average across transformation tasks.

### RQ2: Impact of Noise Operations on LLM's Performance on Structural Tasks

**Fact-Finding Tasks**: Table 3 presents results of the impact of different noise operations on a subset of formats for our fact-finding tasks, chosen based on RQ1 performance.
The first takeaway from these experiments is that different noise operations have a different impact depending on the format and the particular fact-finding task. Furthermore, this impact can be both positive and negative. For example, we find that transposing the input table and representing it as JSON results in an improvement of 20.86% (p-value < \(\frac{0.01}{8}\)) at the navigation tests compared to the original input. However, this same transformation substantially degrades column and row lookup tests. After inspecting generations, we found that the LLM's generations for these tasks seem to ignore the transposition and often reply with the former headers (now row indices) as column names and vice versa.

For both the DataMatrix and HTML formats, we found that introducing noise into the header names through operations like shuffling column names, sequential column renaming, and arbitrary column renaming resulted in degraded performance across our navigation and column lookup tests. For example, the DataMatrix format with sequential column naming resulted in 38% (p-value < \(\frac{0.01}{8}\)) and 27.14% (p-value < \(\frac{0.01}{8}\)) declines in navigation tests and column lookup tests, respectively.

Similarly, inducing semi-structured content can lower performance. Serializing rows results in worse performance for data type detection across our formats. For example, the JSON format's pass@1 score drops by 12.43% (p-value < \(\frac{0.01}{8}\)). Merging cells impacts column lookup tests negatively, while not impacting (or in some cases even improving) row lookup performance. For example, the DataMatrix format's pass@1 score drops by 8% (p-value < \(\frac{0.01}{8}\)) in column lookup tests after applying the column merger noise operation.
\begin{table}
\begin{tabular}{l l|r r r r} \hline \hline Table Format & Table Manipulation & NavigationTests & ColumnLookupTests & RowLookupTests & DataTypeLookupTests \\ \hline
\multirow{9}{*}{JSON} & OriginalData & 71.43 & 65.00 & 78.86 & 96.43 \\ \cline{2-6}
 & ShuffleRows & -40.57 & +1.43 & -6.57 & +0.14 \\
 & ShuffleColumns & 0.00 & +1.14 & -6.72 & -1.86 \\
 & ShuffleColumnNames & +1.57 & +1.43 & -6.57 & -8.86* \\
 & SequentialColumnNames & -1.72 & 92.57* & -4.29 & -1.01 \\
 & ArbitraryColumnNames & -4.43 & -2.14 & -10.43* & -40.57 \\
 & TransposeTable & +20.86* & -40.80* & -25.00* & -33.08* \\
 & ColumnMerger & -7.72* & -4.57 & -3.43 & -2.15 \\
 & SerializeRow & -3.86 & -10.88* & -15.86* & -12.43* \\ \hline
\multirow{9}{*}{DataMatrix} & OriginalData & 56.57 & 62.57 & 87.43 & 84.00 \\ \cline{2-6}
 & ShuffleRows & -17.43* & -4.00 & -31.72* & +2.29 \\
 & ShuffleColumns & -6.57 & -2.14 & -0.72 & +1.12 \\
 & ShuffleColumnNames & -20.57* & -23.14* & -1.86 & -15.00* \\
 & SequentialColumnNames & -38.00* & -27.14* & +2.28 & -7.43* \\
 & ArbitraryColumnNames & -17.17* & -23.71* & -1.57 & -2.43 \\
 & TransposeTable & -4.57 & -60.00* & -4.42 & -22.00* \\
 & ColumnMerger & -10.71* & -8.00* & -2.00 & +4.86 \\
 & SerializeRow & -22.57* & -87.22* & -39.57* & -1.00 \\ \hline
\multirow{9}{*}{HTML} & OriginalData & 58.83 & 79.83 & 52.33 & 94.67 \\ \cline{2-6}
 & ShuffleRows & -1.80 & -15.00 & -33.50 & +1.83 \\
 & ShuffleColumns & -1.16 & -2.33 & +4.00 & -0.34 \\
 & ShuffleColumnNames & -20.50* & -22.16* & +11.66* & -15.67* \\
 & SequentialColumnNames & -27.66* & -36.16* & +93.34* & -19.67* \\
 & ArbitraryColumnNames & -12.16* & -30.58 & -7.33 & -2.00 \\
 & TransposeTable & -25.08 & -20.80* & -9.83 & -49.92 \\
 & ColumnMerger & **+13.17** & -25.83* & +3.17 & -6.34* \\
 & SerializeRow & -24.84* & +3.50 & -1.33 & -18.67* \\ \hline \hline
\end{tabular}
\end{table} Table 3: Average pass@1 delta from original to noisy for fact-finding tasks. Statistically significant values (p-value < \(\frac{0.01}{8}\)) are marked with “*”.

**Transformation Tasks**: Table 4 presents our transformation task results after applying noise operations. We discuss multiple interesting trends.

First, we find that introducing sequence information into headers (through the sequential column renaming noise operation) can significantly impact performance on the column reordering task (which requires changing column order) for some formats. For example, for the comma-separated format, introducing sequential column renaming degrades the column reordering F1 score by 67.33% (p-value < \(\frac{0.01}{8}\)). Column name shuffling and arbitrary column renaming, which _do not_ introduce any form of sequential bias, reduce performance as well, but by a smaller margin.

Second, we find that table transpose performance can be significantly affected by _transposing_ the table initially. For example, transposing the table in JSON reduces the transpose task F1 score by 89% (p-value < \(\frac{0.01}{8}\)). This emphasizes that preprocessing may be necessary for tabular data, compared to relying on the model to perform such transformations itself for downstream tasks.

Finally, we find that introducing unstructured content can impact transformation tasks. For example, we find that the JSON format, which obtains high table transposition performance, drops to zero (p-value < \(\frac{0.01}{8}\)) when the column merging noise operation is applied.
\begin{table}
\begin{tabular}{l l r r r} \hline \hline Table Format & Table Manipulation & TableColumnReorderTests & TableReconstructionTests & TableTransposeTests \\ \hline
\multirow{9}{*}{JSON} & OriginalData & 99.67 & 85.00 & 100.00 \\ \cline{3-5}
 & ShuffleRows & -1.00 & -45.00* & -13.33* \\
 & ShuffleColumns & +0.33 & -19.00 & -40.07* \\
 & ShuffleColumnNames & -0.34 & -13.67 & -29.33* \\
 & SequentialColumnNames & +0.33 & -9.00 & -2.00 \\
 & ArbitraryColumnNames & +0.33 & -4.33 & -0.67 \\
 & TransposeTable & +0.00* & -26.31* & -89.00* \\
 & ColumnMerger & +0.33 & -48.31* & -100.00* \\
 & SerializeRow & +0.00* & -46.33* & -19.00* \\ \hline
\multirow{9}{*}{DFLoader} & OriginalData & 99.33 & 98.00 & 98.33 \\ \cline{3-5}
 & ShuffleRows & +0.67 & -16.33* & \\
 & ShuffleColumns & +0.67 & -34.00* & -34.33* \\
 & ShuffleColumnNames & +0.67 & -31.33* & -26.33* \\
 & SequentialColumnNames & +0.67 & -34.07* & -1.00 \\
 & ArbitraryColumnNames & +0.67 & -16.00* & -0.33 \\
 & TransposeTable & +2.33* & -98.00* & -88.00* \\
 & ColumnMerger & +0.67 & -98.00* & -88.00* \\
 & SerializeRow & +0.33* & -73.33* & -60.33* \\ \hline
\multirow{9}{*}{CommaSeparated} & OriginalData & 95.33 & 74.33 & 99.00 \\ \cline{2-5}
 & ShuffleRows & -7.33 & -41.66* & -20.33* \\
 & ShuffleColumns & +4.66 & -19.00 & -33.00* \\
 & ShuffleColumnNames & -32.00* & -9.66 & -47.00* \\
 & SequentialColumnNames & -67.33* & -13.00 & -24.33* \\
 & ArbitraryColumnNames & -25.66* & -4.34 & -21.67* \\
 & TransposeTable & +2.00 & -46.00* & -26.31* \\
 & ColumnMerger & +4.66 & -24.00* & -26.31* \\
 & SerializeRow & +0.00* & -57.60* & -98.00* \\ \hline
\multirow{9}{*}{TabSeparated} & OriginalData & 93.33 & 92.33 & 50.00 \\ \cline{2-5}
 & ShuffleRows & -2.00 & -57.00* & -34.67* \\
 & ShuffleColumns & -6.00 & -31.00* & -6.00* \\
 & ShuffleColumnNames & -59.33* & -29.66* & +0.00 \\
 & SequentialColumnNames & +0.00* & -27.00* & -7.33* \\
 & ArbitraryColumnNames & -53.33* & -13.00* & -2.00* \\
 & TransposeTable & -46.67* & -98.00* & -50.00* \\
 & ColumnMerger & -41.33* & -98.00* & -48.00* \\
 & SerializeRow & +0.00* & -91.00* & -50.00* \\ \hline \hline
\end{tabular}
\end{table} Table 4: Average F1 score delta from original to noisy for transformation tasks. Statistically significant values (p-value < \(\frac{0.01}{8}\)) are marked with “*”.

## 6 Conclusion

We evaluated LLM performance on self-supervised structural table understanding tasks using different formats and noise operations. Our results show that different formats obtain varying performance, and that noise operations can change results (both positively and negatively). Future work should consider cross-LLM performance, further exploring what format properties correlate with performance, and evaluating whether performance on table structure understanding tasks correlates with performance on downstream table tasks such as question answering or NL-to-code generation.
2303.10260
Online Linear Quadratic Tracking with Regret Guarantees
Online learning algorithms for dynamical systems provide finite time guarantees for control in the presence of sequentially revealed cost functions. We pose the classical linear quadratic tracking problem in the framework of online optimization where the time-varying reference state is unknown a priori and is revealed after the applied control input. We show the equivalence of this problem to the control of linear systems subject to adversarial disturbances and propose a novel online gradient descent based algorithm to achieve efficient tracking in finite time. We provide a dynamic regret upper bound scaling linearly with the path length of the reference trajectory and a numerical example to corroborate the theoretical guarantees.
Aren Karapetyan, Diego Bolliger, Anastasios Tsiamis, Efe C. Balta, John Lygeros
2023-03-17T22:02:48Z
http://arxiv.org/abs/2303.10260v2
# Online Linear Quadratic Tracking with Regret Guarantees

###### Abstract

Online learning algorithms for dynamical systems provide finite time guarantees for control in the presence of sequentially revealed cost functions. We pose the classical linear quadratic tracking problem in the framework of online optimization where the time-varying reference state is unknown _a priori_ and is revealed after the applied control input. We show the equivalence of this problem to the control of linear systems subject to adversarial disturbances and propose a novel online gradient descent based algorithm to achieve efficient tracking in finite time. We provide a dynamic regret upper bound scaling linearly with the path length of the reference trajectory and a numerical example to corroborate the theoretical guarantees.

Optimal Tracking Online Control

## 1 Introduction

Linear quadratic tracking (LQT) is the natural generalization of the optimal linear quadratic regulator (LQR) for the setting where the goal is not to drive the state to the origin but to a certain reference. The reference trajectory need not be time-invariant and, in the classic formulation of the problem, is known in advance. This is a reasonable assumption in many practical applications, such as aircraft tracking of a predetermined trajectory or precision control in industrial process engineering. However, in other scenarios, for example, in tracking the output of a secondary agent whose dynamics are unknown and/or whose measurements are imperfect, the prediction of the next reference point is non-trivial. In these cases the reference trajectory is only revealed sequentially, after the action has been taken, suggesting the need for an online or adaptive algorithm that will learn or adapt to the dynamics of the reference-generating agent.

In this letter, we study the LQT problem with an unknown reference trajectory. We pose the problem in the framework of online convex optimization (OCO) subject to the dynamics constraint of the system [1]. In particular, the tracking problem is recast into an equivalent regulation problem with a redefined state that evolves with linear dynamics subject to additive adversarial disturbances. In the spirit of OCO, we show why classical online gradient descent (OGD) may fail to achieve optimal tracking and propose a modified algorithm, called SS-OGD (steady state OGD), that is guaranteed to achieve the goal under mild conditions. Given the online nature of the algorithm, its performance is quantified through the means of dynamic regret, which compares the accumulated finite time cost of a given algorithm to that of an optimal benchmark that solves the LQT problem with _a priori_ knowledge of the reference trajectory. We provide dynamic regret bounds in terms of the path length of the reference trajectory.

The LQT problem for sequentially revealed adversarial reference states is studied in [2] with policy regret bounds. In cases where a window of predictions is available, a receding horizon gradient descent algorithm is suggested in [3] with a dynamic regret analysis. In a more recent line of work [4], the authors introduce a memory-based, gradient descent algorithm and, in [5], tackle the constrained tracking problem with policy regret guarantees. In [6], the authors analyze the output tracking scheme of an iterative learning controller and provide dynamic and static regret bounds.
A lower bound on the regret of the online LQT problem is given in [3] with the reference path length as the complexity term. We pose the tracking problem in the framework of nonstochastic control subject to adversarial disturbances, studied in [1], with logarithmic policy regret bounds achieved in [7].

In Section 2, we formally define the LQT problem and recast it into a regulation problem for linear systems subject to adversarial disturbances. In Section 3, we present the SS-OGD algorithm, and we provide dynamic regret bounds in Section 4. The performance of SS-OGD is compared to that of a certainty equivalence (CE) controller that assumes a constant reference, by applying both to the tracking control problem of a quadrotor model in Section 5.

_Notation_: The set of positive real numbers is denoted by \(\mathbb{R}_{+}\) and that of non-negative integers by \(\mathbb{N}\). For a matrix \(W\), the spectral radius and the spectral norm are denoted by \(\rho(W)\) and \(\|W\|\), respectively, and \(\lambda_{min}(W)\) denotes its lowest eigenvalue. We define \(\lambda_{W}:=\frac{1+\rho(W)}{2}\); one can show that if \(\rho(W)<1\), there exists a \(c_{W}\in\mathbb{R}_{+}\) such that for all \(k\geq 1\), \(\|W^{k}\|\leq c_{W}\lambda_{W}^{k}\). For a given vector \(x\), its Euclidean norm is denoted by \(\|x\|\), and the one weighted by some matrix \(Q\) by \(\|x\|_{Q}=\sqrt{x^{\top}Qx}\).

## 2 Problem Statement

Consider the discrete-time linear time-invariant (LTI) dynamical system given by

\[x_{t+1}=Ax_{t}+Bu_{t},\quad\forall t\in\mathbb{N}, \tag{1}\]

where \(x_{t}\in\mathbb{R}^{n}\) and \(u_{t}\in\mathbb{R}^{m}\) are the state and input vectors, respectively, and \(A\in\mathbb{R}^{n\times n},B\in\mathbb{R}^{n\times m}\) are _known_ system matrices. The goal of the optimal LQT problem is the tracking of a time-varying signal \(r_{t}\in\mathbb{R}^{n}\), such that the cost

\[\|x_{T}-r_{T}\|_{P}^{2}+\sum_{t=0}^{T-1}\|x_{t}-r_{t}\|_{Q}^{2}+\|u_{t}\|_{R}^{2}\]

is minimized for some weighting matrices \(Q\in\mathbb{R}^{n\times n}\) and \(R\in\mathbb{R}^{m\times m}\), and where \(P\in\mathbb{R}^{n\times n}\) is the solution of the discrete algebraic Riccati equation (DARE)2

Footnote 2: Given that the dynamics information and the cost matrices are known _a priori_, we consider the final cost matrix to be \(P\) for notational simplicity. For other values of the terminal cost matrix the results still hold, thanks to the exponentially converging state feedback matrices.

\[P=Q+A^{\top}PA-A^{\top}PB(R+B^{\top}PB)^{-1}B^{\top}PA. \tag{2}\]

The LQT problem can also be recast into an equivalent LQR formulation [8] by considering instead the dynamics

\[e_{t+1}=Ae_{t}+Bu_{t}+w_{t},\quad\forall t\in\mathbb{N}, \tag{3}\]

with \(e_{t}:=x_{t}-r_{t}\) and \(w_{t}:=Ar_{t}-r_{t+1}\) for all \(t\in\mathbb{N}\), and the corresponding cost function

\[J(e_{0},u):=\|e_{T}\|_{P}^{2}+\sum_{t=0}^{T-1}\|e_{t}\|_{Q}^{2}+\|u_{t}\|_{R}^{2}. \tag{4}\]

When the reference trajectory \(r_{t}\), \(t\in\mathbb{N}\), is known at the initial time, a closed-form solution can be obtained for the optimal controller that solves

\[u^{\star}=\operatorname*{arg\,min}_{u}\ J(e_{0},u)\quad\text{subject to (3)}\quad\forall\ 0\leq t<T. \tag{5}\]

This controller, often referred to as the optimal offline noncausal controller, can be represented as a linear feedback on the current state and the future reference [9, 10].
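For concreteness, the LQR quantities used throughout can be computed numerically from the DARE (2); the sketch below uses SciPy's solver with a toy system of our own choosing (the matrices are illustrative, not from the paper).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    # Solve the DARE (2) for P and form the optimal feedback gain
    # K = (R + B'PB)^{-1} B'PA used throughout the paper.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P, K

# Toy double-integrator system (illustrative values only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
P, K = lqr_gain(A, B, Q=np.eye(2), R=0.1 * np.eye(1))
assert max(abs(np.linalg.eigvals(A - B @ K))) < 1.0  # A - BK is Schur stable
```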
Departing from the classical formulation of tracking control, we assume that the reference signal is _unknown_ and is only revealed sequentially after the control input has been applied, similar to the adversarial tracking framework in [2]. In particular, for each time step \(0\leq t<T\):

1. The state \(x_{t}\) and the reference state \(r_{t}\) are observed,
2. The agent decides on an input \(u_{t}\),
3. The environment decides on the next reference \(r_{t+1}\), which, in turn, determines \(w_{t}\).

The error state then evolves according to (3), incurring the following cost for the agent:

\[c_{t}(e_{t},u_{t}):=\|Ae_{t}+Bu_{t}+w_{t}\|_{Q}^{2}+\|u_{t}\|_{R}^{2}. \tag{6}\]

Note that the online cost (6) depends on the current input \(u_{t}\) and the unknown disturbance \(w_{t}\), and is therefore unknown to the decision maker at timestep \(t\); it is revealed only at time \(t+1\), after the input \(u_{t}\) has been applied to the system. From this point of view, our problem formulation fits the online learning framework, with the extra challenge of inherent dynamics. The goal of the controller is, then, to minimize the online cumulative cost3

Footnote 3: For consistency we require \(c_{T-1}(e_{T-1},u_{T-1}):=\|Ae_{T-1}+Bu_{T-1}+w_{T-1}\|_{P}^{2}+\|u_{T-1}\|_{R}^{2}\). To forego unnecessary cluttering of the notation, the separate treatment of the last timestep is left implicit.

\[\sum_{t=0}^{T-1}c_{t}(e_{t},u_{t})=J(e_{0},u)-\|e_{0}\|_{Q}^{2}.\]

This is the same as the LQR cost without the initial state term, implying that the minimizers of both problems coincide. We quantify the finite-time performance of the algorithm through the means of dynamic regret. Consider a policy \(\pi:\mathcal{I}\to\mathbb{R}^{m}\), mapping from the available information set, \(\mathcal{I}\), to the control input space. Its dynamic regret, given a disturbance signal \(w\), is defined as

\[\mathcal{R}^{\pi}(w,e_{0})=J(e_{0},u^{\pi})-J(e_{0},u^{\star}), \tag{7}\]

where \(u^{\pi}\) is the input generated by \(\pi\) and \(u^{\star}\) is given by (5). We allow the trajectory \(r_{t},\ t\in\mathbb{N}\), to be arbitrary, as long as it remains bounded.

**Assumption 1** (Bounded trajectory).: _There exists \(\bar{R}\in\mathbb{R}_{+}\) such that \(\|r_{t}\|\leq\bar{R}\) for all \(t\in\mathbb{N}\)._

The more abruptly a trajectory changes, the harder it is to achieve good tracking performance, especially if the trajectory is unknown beforehand. To capture this inherent complexity of the problem, we use the well-established notion of path length [3].

**Definition 2.1** (Path Length).: _The path length of a reference trajectory \(r_{0:T}\in\mathbb{R}^{n(T+1)}\) is_

\[L(T)=\sum_{t=0}^{T-1}\|\Delta r_{t}\|,\]

_where \(\Delta r_{t}=r_{t+1}-r_{t}\)._

For more random and abrupt changes in the trajectory, the path length is higher, and one expects the performance of an online algorithm to deteriorate. Likewise, an efficient algorithm should improve as the path length decreases. This is captured quantitatively by showing at most a linear dependence of the algorithm's regret on the path length. Finally, we make the following standard assumptions for the LQR problem to be well-posed.
**Assumption 2** (LQR is well-posed).: _The system \((A,B)\) is stabilisable, the pair \((Q^{\frac{1}{2}},A)\) is detectable, and \(R\succ 0\)._

## 3 The SS-OGD Algorithm

We consider a control law of the following form:

\[u_{t}=-Ke_{t}+v_{t},\quad\forall 0\leq t<T, \tag{8}\]

where \(K=(R+B^{\top}PB)^{-1}B^{\top}PA\) is fixed to the optimal LQR gain, and \(v_{t}\) is a correction term that should account for the unknown disturbances; we will employ online learning techniques to update the latter term.

We first investigate the performance of online gradient descent based algorithms. Consider the following "naive" update:

\[v_{t}=v_{t-1}-\alpha\nabla_{v}c_{t-1}(e_{t-1},u_{t-1}), \tag{9}\]

where \(v_{t}\) is updated in the opposite direction of the gradient of the most recent cost. Here \(\alpha\in\mathbb{R}_{+}\) is the step size and the recursion starts from some \(v_{0}\in\mathbb{R}^{m}\). As the online objective is quadratic, the gradient is available in closed form and the update can be represented as \(v_{t}=v_{t-1}-2\alpha(Ru_{t-1}+B^{\top}Qe_{t})\). For the case of a constant reference signal and an underactuated system, this algorithm can converge to a point that is not necessarily the optimal one with respect to infinite horizon cost minimization. This is due to the greedy behavior of the update, which does not take into account future dynamics. In this section, we propose a simple modification of this myopic OGD update (9), called SS-OGD, that accounts for this shortcoming.

To motivate the SS-OGD update, we consider the steady state solution of (3) in closed loop with the affine control law (8) when we fix \(v_{i}=\bar{v}\) and \(r_{i}=\bar{r}\) for all \(i\geq t\). Defining \(S\coloneqq(I-A+BK)^{-1}B\), a closed-form solution for the steady state and input is given by4

Footnote 4: Note that \(\bar{x}\) and \(\bar{u}\) are both defined for a given \(\bar{v}\) and \(\bar{r}\). The dependence is omitted for simplicity.

\[\bar{x}=S\bar{v}+SK\bar{r},\qquad\bar{u}=(I-KS)(\bar{v}+K\bar{r}).\]

One can then find the \(\bar{v}\) which recovers the optimal steady state solution by minimizing the time-averaged infinite horizon steady state cost. This is equivalent to solving

\[\operatorname*{arg\,min}_{\bar{v}}\{c(\bar{x}-\bar{r},\bar{u})=\|\bar{x}-\bar{r}\|_{Q}^{2}+\|\bar{u}\|_{R}^{2}\}, \tag{10}\]

whose gradient is given by

\[\nabla_{\bar{v}}c(\bar{x}-\bar{r},\bar{u})=2\left((I-KS)^{\top}R\bar{u}+S^{\top}Q(\bar{x}-\bar{r})\right). \tag{11}\]

**Lemma 3.1**.: _Under Assumption 2, (10) is strictly convex for any \(K\in\mathbb{R}^{m\times n}\) for which \(\rho(A-BK)<1\)._

Proof.: If the matrix \(I-KS\) is singular, there exists a \(v\in\mathbb{R}^{m}\) such that \(v=KSv\). Then, for \(x=Sv\), at steady state \(x=Ax+B(KSv-Kx)=Ax\). Given the detectability condition of the pair \((Q^{\frac{1}{2}},A)\), for any unstable or marginally stable mode of \(A\), the matrix \(Q\succ 0\). This ensures that the matrix \(S^{\top}QS+(I-KS)^{\top}R(I-KS)\) is positive definite, which is equivalent to the strong convexity of (10).

Since \(r\) is, in general, not constant, we suggest a new OGD-like update rule on the bias term \(v_{t}\) that is a modified version of the gradient in (11). Specifically, the feedback on the steady state error, \(\bar{x}-\bar{r}\), is replaced with the measured error, \(x_{t}-r_{t}\), and the steady state input, \(\bar{u}\), with the latest applied input, \(u_{t-1}\). This results in the following update, named SS-OGD:

\[v_{t}=v_{t-1}-2\alpha\left((I-KS)^{\top}Ru_{t-1}+S^{\top}Qe_{t}\right). \tag{12}\]
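A minimal sketch of one SS-OGD iteration, i.e. the update (12) followed by the control law (8), is given below. The function signature is our own; in practice \(S\) depends only on known quantities and would be precomputed once.

```python
import numpy as np

def ss_ogd_step(v_prev, e_t, u_prev, A, B, K, Q, R, alpha):
    # One SS-OGD iteration: update (12) followed by the control law (8).
    n, m = B.shape
    S = np.linalg.solve(np.eye(n) - A + B @ K, B)   # S = (I - A + BK)^{-1} B
    M = np.eye(m) - K @ S                           # I - KS
    v_t = v_prev - 2 * alpha * (M.T @ R @ u_prev + S.T @ Q @ e_t)   # eq. (12)
    u_t = -K @ e_t + v_t                                            # eq. (8)
    return v_t, u_t
```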
The modifications from the standard OGD can be interpreted as incorporating the dynamics information in the update rule. As we show in the following, this ensures that, if the algorithm is stable and the reference signal is constant, SS-OGD converges in the limit to the same point as the solution of the LQR problem minimizing (4). Moreover, through the feedback on the state \(e_{t}\) and input \(u_{t-1}\), the update rule (12) incorporates proportional-integral (_PI_) control on the measured state. This is demonstrated on a quadrotor control example in Section 5, where, with the inherent integrator dynamics of the quadrotor, SS-OGD achieves a zero steady state error in tracking a position reference signal with a constant rate of change.

To study the SS-OGD update rule, we introduce the following evolution of the combined system-optimizer dynamics:

\[z_{t+1}=\tilde{A}z_{t}+\tilde{B}w_{t}, \tag{13}\]

where \(z_{t}\coloneqq[v_{t}^{\top}\,e_{t}^{\top}]^{\top}\), the matrices \(\tilde{A}\in\mathbb{R}^{p\times p}\) and \(\tilde{B}\in\mathbb{R}^{p\times n}\) are defined in Appendix A, and \(p\coloneqq m+n\).

**Assumption 3**.: _The step size \(\alpha>0\) is such that \(\rho(\tilde{A})<1\)._

Since all the variables in \(\tilde{A}\) are known _a priori_, we show that there always exists an \(\alpha\) satisfying this assumption and provide a sufficient condition in Appendix A. The following theorem shows that, for a constant \(w_{t}=\bar{w}\) for all \(0\leq t<T\), the SS-OGD update (12) converges to the solution of

\[\begin{aligned}(\hat{e}_{t},\hat{v}_{t})=\operatorname*{arg\,min}_{(e,v)}&\ \|e\|_{Q}^{2}+\left\|-Ke+v\right\|_{R}^{2}\\ \text{subject to}&\ e=(A-BK)e+Bv+w_{t},\end{aligned} \tag{14}\]

with \(r_{T+1}\coloneqq r_{T}\). The solution of (14) can be interpreted as the steady state and input that minimize the infinite horizon time-averaged cost (4).

**Theorem 3.2**.: _Under Assumptions 2 and 3, if \(w_{t}=\bar{w}\) for all \(t\in\mathbb{N}\), the steady state of (13) coincides with the solution of (14)._

The proof of the theorem is provided in Appendix B. As a corollary, for a constant signal \(r_{t}=\bar{r}\), the update converges to the solution of (10). Note that this is not always true for the naive OGD update (9), as its fixed point for a fixed disturbance is not necessarily the same as (14).
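Since the paper's Appendix A (with the explicit \(\tilde{A},\tilde{B}\)) is not reproduced here, the sketch below assembles the combined system-optimizer dynamics (13) directly from (3), (8), and (12) -- this assembly is our own derivation -- and scans for a step size satisfying Assumption 3.

```python
import numpy as np

def combined_dynamics(A, B, K, Q, R, alpha):
    # Assemble z_{t+1} = A~ z_t + B~ w_t for z = [v; e] by substituting the
    # control law (8) and the update (12) into the error dynamics (3).
    # The paper defines A~, B~ in an appendix not reproduced here, so this
    # should be read as our own assembly of the same recursion.
    n, m = B.shape
    F = A - B @ K                              # closed-loop state matrix
    S = np.linalg.solve(np.eye(n) - F, B)      # S = (I - A + BK)^{-1} B
    M = np.eye(m) - K @ S                      # I - KS
    A_tilde = np.block([
        [np.eye(m) - 2 * alpha * (M.T @ R + S.T @ Q @ B),
         2 * alpha * (M.T @ R @ K - S.T @ Q @ F)],
        [B, F],
    ])
    B_tilde = np.vstack([-2 * alpha * S.T @ Q, np.eye(n)])
    return A_tilde, B_tilde

def alpha_satisfying_assumption_3(A, B, K, Q, R, grid):
    # A~ depends only on known quantities, so a step size with rho(A~) < 1
    # can be found offline by a simple grid scan.
    for alpha in grid:
        A_tilde, _ = combined_dynamics(A, B, K, Q, R, alpha)
        if max(abs(np.linalg.eigvals(A_tilde))) < 1.0:
            return alpha
    return None
```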
## 4 Regret Analysis

To characterize the effectiveness of the algorithm for time-varying signals and to provide finite time guarantees, we analyze its dynamic regret and show that it scales with the path length. The next theorem summarizes this main result.

**Theorem 4.1**.: _Under Assumptions 1, 2 and 3, the dynamic regret of the SS-OGD algorithm scales with the path length:_

\[\mathcal{R}^{\rm SS-OGD}(w,e_{0})\leq\mathcal{O}\left(1+L(T)\right).\]

The proof of the theorem is provided in Section 4.1 after some auxiliary results.

**Lemma 4.2**.: _(Cost Difference Lemma [11]) For any two policies \(\pi_{1},\pi_{2}\) mapping states to inputs, the difference of their accumulated costs over a \(T\)-long horizon is given by_

\[J(e_{0},u^{\pi_{2}})-J(e_{0},u^{\pi_{1}})=\sum_{t=0}^{T-1}\mathcal{Q}_{t}^{\pi_{1}}(e_{t}^{\pi_{2}},u_{t}^{\pi_{2}})-J_{t}(e_{t}^{\pi_{2}},u^{\pi_{1}}),\]

_where \(u_{t}^{\pi_{2}}\) is the input generated by the policy \(\pi_{2}\) at time \(t\), \(e_{t}^{\pi_{2}}\) is the state at time \(t\) generated by applying the policy \(\pi_{2}\), \(\mathcal{Q}_{t}^{\pi_{1}}(e,u)=\|e\|_{Q}^{2}+\|u\|_{R}^{2}+J_{t+1}(Ae+Bu+w_{t},u^{\pi_{1}})\) is the Q-function for policy \(\pi_{1}\), and \(J_{i}(e_{i},u)\) is the cost-to-go at time step \(i\), with initial state \(e_{i}\) and control signal \(u\)._

The proof is omitted, as it is identical to the one for Markov decision processes [11]. The following result for a general policy \(\pi\), akin to the result in [12], follows.

**Lemma 4.3**.: _Given the system dynamics (3) and cost function (4), the dynamic regret of any policy \(\pi\) is given by_

\[\mathcal{R}^{\pi}(w,e_{0})=\sum_{t=0}^{T-1}(u_{t}^{\pi}-u_{t}^{\star})^{\top}(R+B^{\top}PB)(u_{t}^{\pi}-u_{t}^{\star}),\]

_where \(P\) is the solution of the DARE (2)._

Proof.: Let \(Q_{t}^{\star}(e,u)\) be the optimal Q-function, associated with the optimal control law \(u^{\star}\). Then, using Lemma 4.2, the dynamic regret of the policy \(\pi\) is given by

\[\mathcal{R}^{\pi}(w,e_{0})=\sum_{t=0}^{T-1}\mathcal{Q}_{t}^{\star}(e_{t}^{\pi},u_{t}^{\pi})-\min_{u}Q_{t}^{\star}(e_{t}^{\pi},u). \tag{15}\]

As \(J_{t}(x,u^{\star})\) can be represented as an extended quadratic function of \(x\), there exist \(f\in\mathbb{R}^{m}\) and \(g\in\mathbb{R}\)[13] such that

\[Q_{t}^{\star}(e,u)=\|e\|_{Q}^{2}+\|u\|_{R}^{2}+J_{t+1}(Ae+Bu+w_{t},u^{\star})=u^{\top}(R+B^{\top}PB)u+f^{\top}u+g.\]

The regret (15) is then a difference of two extended quadratic functions of the input, completing the proof.

For future reference, we also recall the Cauchy product inequality, defined for two finite series \(\{a_{i}\}_{i=1}^{T}\) and \(\{b_{i}\}_{i=1}^{T}\):

\[\sum_{i=0}^{T}\left|\sum_{j=0}^{i}a_{j}b_{i-j}\right|\leq\left(\sum_{i=0}^{T}\left|a_{i}\right|\right)\left(\sum_{j=0}^{T}\left|b_{j}\right|\right). \tag{16}\]

### Proof of Theorem 4.1

As Lemma 4.3 suggests, the dynamic regret depends on the stepwise control input difference:

\[\left\|u_{t}^{\rm SS-OGD}-u_{t}^{\star}\right\|=\left\|-Ke_{t}+v_{t}+Ke_{t}+\sum_{i=t}^{T-1}K_{w}^{i,t}w_{i}\right\|\leq\underbrace{\left\|v_{t}+\sum_{i=t}^{\infty}K_{w}^{i,t}w_{t}\right\|}_{s_{1,t}}+\underbrace{\left\|\sum_{i=t}^{T-1}K_{w}^{i,t}\Delta w_{i,t}\right\|}_{s_{2,t}}+\underbrace{\left\|\sum_{i=T}^{\infty}K_{w}^{i,t}w_{t}\right\|}_{s_{3,t}},\]

where \(\Delta w_{i,t}=w_{i}-w_{t}\) and, for all \(0\leq t\leq i<T\),

\[K_{w}^{i,t}=(R+B^{\top}PB)^{-1}B^{\top}\left((A-BK)^{\top}\right)^{i-t}P.\]

We proceed by bounding each of the above terms separately.

**Term \(\mathbf{s_{2,t}}\):** This captures the deviation of the artificial disturbance term from the one fixed at timestep \(t\). By noting that \(\Delta w_{i,t}\) can be represented as a telescopic sum,

\[s_{2,t}\leq c_{F}d\sum_{i=t}^{T-1}\lambda_{F}^{i-t}\sum_{j=t}^{i-1}\|w_{j+1}-w_{j}\|\leq\frac{c_{F}d}{1-\lambda_{F}}\sum_{j=t}^{T-2}\|w_{j+1}-w_{j}\|\lambda_{F}^{j-t},\]

where \(F:=A-BK\) and \(d=\|(R+B^{\top}PB)^{-1}B^{\top}\|\cdot\|P\|\).
Using (16),

\[\sum_{t=0}^{T-1}s_{2,t}\leq\frac{c_{F}d}{1-\lambda_{F}}\sum_{t=1}^{T-1}\sum_{j=t}^{T-1}\|w_{j}-w_{j-1}\|\lambda_{F}^{j-t}\leq\frac{c_{F}d}{1-\lambda_{F}}\sum_{j=1}^{T-1}\sum_{t=1}^{j}\|w_{j}-w_{j-1}\|\lambda_{F}^{j-t}\leq\frac{c_{F}d}{(1-\lambda_{F})^{2}}\sum_{j=1}^{T-1}\|w_{j}-w_{j-1}\|\leq\frac{c_{F}d\left(\|A\|+1\right)}{(1-\lambda_{F})^{2}}\cdot L(T).\]

**Term \(\mathbf{s_{3,t}}\):** This captures the effect of truncating the infinite horizon problem to a finite one:

\[s_{3,t}\leq c_{F}h\bar{R}\sum_{i=T}^{\infty}\lambda_{F}^{i-t}\leq\frac{c_{F}h\bar{R}\lambda_{F}^{T-t}}{1-\lambda_{F}},\qquad\sum_{t=0}^{T-1}s_{3,t}\leq\frac{c_{F}h\bar{R}\left(1-\lambda_{F}^{T}\right)}{(1-\lambda_{F})^{2}},\]

where \(h=\left\|\left(I-\tilde{A}\right)^{-1}\tilde{B}\right\|\left(\|A\|+1\right)\).

**Term \(\mathbf{s_{1,t}}\):** This captures the cost of performing a gradient step in the direction of the steady state solution instead of the full solution, for a fixed \(w_{t}\). Note that \(-\sum_{i=t}^{\infty}K_{w}^{i,t}w_{t}\) is the solution of the following infinite horizon optimization problem and is independent of the initial state [10, 14]:

\[\begin{aligned}\hat{v}_{t}=\operatorname*{arg\,min}_{v}&\ \lim_{T\to\infty}\frac{1}{T}\sum_{i=0}^{T}\|e\|_{Q}^{2}+\|-Ke+v\|_{R}^{2}\\ \text{subject to}&\ e=(A-BK)e+Bv+w_{t},\end{aligned}\]

which is equivalent to (14). Hence, by Theorem 3.2,

\[\sum_{i=t}^{\infty}K_{w}^{i,t}w_{t}=-\left[I\;0\right](I-\tilde{A})^{-1}\tilde{B}w_{t}=-\left[I\;0\right]\hat{z}_{t},\]

where \(\hat{z}_{t}=[\hat{v}_{t}^{\top}\;\hat{e}_{t}^{\top}]^{\top}:=(I-\tilde{A})^{-1}\tilde{B}w_{t}\) is the steady state of the SS-OGD dynamics (13) for a given \(w_{t}\). The term \(s_{1,t}\) thus captures the difference between the SS-OGD update term \(v_{t}\) and the steady state value \(\hat{v}_{t}\) for that timestep. We look at the evolution of the augmented state difference \(\varepsilon_{t}:=z_{t}-\hat{z}_{t}\); for all \(0<t\leq T\),

\[\varepsilon_{t}=\tilde{A}z_{t-1}+\tilde{B}w_{t-1}-\hat{z}_{t}=\tilde{A}z_{t-1}+(I-\tilde{A})\hat{z}_{t-1}-\hat{z}_{t}=\tilde{A}\varepsilon_{t-1}+\hat{z}_{t-1}-\hat{z}_{t}. \tag{17}\]
Thus, at a given time step \(0\leq t\leq T\),

\[\varepsilon_{t}=\tilde{A}^{t}\varepsilon_{0}+\sum_{i=0}^{t-1}\tilde{A}^{i}\left(\hat{z}_{t-i-1}-\hat{z}_{t-i}\right).\]

Then, under Assumption 3,

\[\|\varepsilon_{t}\|\leq c_{\tilde{A}}\lambda_{\tilde{A}}^{t}\left(\|e_{0}\|+\|v_{0}-\hat{z}_{0}\|\right)+c_{\tilde{A}}\left\|\left(I-\tilde{A}\right)^{-1}\tilde{B}\right\|\cdot\sum_{i=0}^{t-1}\lambda_{\tilde{A}}^{i}\left(\|A\|\|\Delta r_{t-i-1}\|+\|\Delta r_{t-i}\|\right)\leq c_{\tilde{A}}b\lambda_{\tilde{A}}^{t}+c_{\tilde{A}}h\sum_{i=0}^{t-1}\lambda_{\tilde{A}}^{i}\left(\|\Delta r_{t-i}\|+\|\Delta r_{t-1-i}\|\right).\]

Defining \(b=(h+1)\bar{R}+\|x_{0}\|+\|v_{0}\|\) and \(\bar{\varepsilon}=c_{\tilde{A}}\left(b+\frac{2\bar{R}h}{1-\lambda_{\tilde{A}}}\right)\), the following time-dependent and uniform bounds apply:

\[\|\varepsilon_{t}\|\leq c_{\tilde{A}}b\lambda_{\tilde{A}}^{t}+c_{\tilde{A}}h\sum_{i=0}^{t-1}\lambda_{\tilde{A}}^{i}\left(\|\Delta r_{t-i}\|+\|\Delta r_{t-1-i}\|\right)<\bar{\varepsilon}. \tag{18}\]

Using the above bound,

\[s_{1,t}\leq\|\varepsilon_{t}\|\leq c_{\tilde{A}}b\lambda_{\tilde{A}}^{t}+c_{\tilde{A}}h\sum_{i=0}^{t}\lambda_{\tilde{A}}^{i}\|\Delta r_{t-i}\|,\]

\[\sum_{t=0}^{T-1}s_{1,t}\leq\sum_{t=0}^{T-1}\|\varepsilon_{t}\|\leq\frac{c_{\tilde{A}}}{1-\lambda_{\tilde{A}}}\left(b+hL(T)\right).\]

Note that there exist \(s_{2},s_{3}\in\mathbb{R}_{+}\) such that \(s_{2,t}\leq s_{2}\), \(s_{3,t}\leq s_{3}\), and, from (18), \(s_{1,t}\leq\bar{\varepsilon}\) for all \(t\in\mathbb{N}\). Using Lemma 4.3 and denoting \(\bar{P}=4\|R+B^{\top}PB\|\),

\[\mathcal{R}(w,e_{0})<\bar{P}\sum_{t=0}^{T-1}\left(s_{1,t}^{2}+s_{2,t}^{2}+s_{3,t}^{2}\right)\leq\mathcal{O}(1+L(T)).\]

### Steady-State Benchmark

Given Theorem 3.2, one can also compare SS-OGD to the steady state optimal solution at each timestep. Consider the steady state control law

\[\hat{u}_{t}=-K\hat{e}_{t}+\hat{v}_{t} \tag{19}\]

for all \(0\leq t<T\), where \(\hat{e}_{t}\) and \(\hat{v}_{t}\) solve (14). This steady state controller can be interpreted as an optimal benchmark that is decoupled from the system dynamics, has access to the current cost \(c_{t}\), and solves for its optimal, steady state solution. The following lemma provides a side result on the regret of the SS-OGD algorithm with respect to the steady state controller (19), \(\mathcal{R}^{\rm SS-OGD}_{\rm SS}(w,e_{0}):=J(e_{0},u^{\rm SS-OGD})-J(e_{0},\hat{u})\).

**Lemma 4.4**.: _Under Assumptions 1, 2, and 3, the regret of the SS-OGD algorithm with respect to the steady state benchmark (19) scales with the reference path length:_

\[\mathcal{R}^{\rm SS-OGD}_{\rm SS}(w,e_{0})\leq\mathcal{O}\left(1+L(T)\right).\]

Proof.: The regret can be expressed as a function of the combined error state \(\varepsilon\) evolving according to (17). Defining \(\tilde{Q}_{i}:=\tilde{Q}\) for all \(0\leq i<T\) and \(\tilde{Q}_{T}\) as in (20) in Appendix A,

\[\mathcal{R}^{\rm SS-OGD}_{\rm SS}(w,e_{0})\leq\|\tilde{Q}\|\left(2h\bar{R}+\bar{\varepsilon}\right)\sum_{t=0}^{T}\|\varepsilon_{t}\|,\]

where we used the uniform bound in (18) and the fact that \(\|\hat{z}_{t}\|\leq h\bar{R}\) for all \(t\).
Using the bound in (18) again, together with the Cauchy product inequality (16), completes the proof:

\[\mathcal{R}^{\rm SS-OGD}_{\rm SS}(w,e_{0})\leq\frac{c_{\tilde{A}}\|\tilde{Q}\|\left(2h\bar{R}+\bar{\varepsilon}\right)}{1-\lambda_{\tilde{A}}}\left(b+hL(T)\right).\]

## 5 Numerical Example

The SS-OGD algorithm is implemented on a linearized model of CrazyFlie quadrotors [15] in closed loop with a _PI_ velocity controller [16], to track a reference trajectory in two dimensions. In particular, we consider the following model:

\[A=\begin{bmatrix}1.000&0&0.096&0&0&0.040\\ 0&1.000&0&0.096&-0.040&0\\ 0&0&0.894&0&0&0.703\\ 0&0&0&0.894&-0.703&0\\ 0&0&0&0.193&0.452&0\\ 0&0&-0.193&0&0&0.452\end{bmatrix},\quad B=\begin{bmatrix}0.004&0&0.106&0&0&0.193\\ 0&0.004&0&0.106&-0.193&0\end{bmatrix}^{\top},\]

where the state \(x:=\begin{bmatrix}p_{x}&p_{y}&v_{x}&v_{y}&\beta&\rho\end{bmatrix}^{\top}\) contains the horizontal position and velocity, as well as the roll and pitch angles of the quadrotor, and the input \(u:=\begin{bmatrix}v_{x}^{t}&v_{y}^{t}\end{bmatrix}^{\top}\) sets the target horizontal velocities. The cost matrices are taken to be \(Q=\operatorname{diag}\left(100,100,1,1,0,0\right)\) and \(R=0.1\cdot I\).

In the first experiment, the drone tracks the shape of the letters **IFA** for an _a priori_ unknown reference with a fixed \(\Delta r_{t}\) for all timesteps. SS-OGD's performance is compared to that of the CE controller (14) that assumes a constant reference, fixing \(r_{i+1}=r_{i}\) for all \(t\leq i<T\). The results are shown in Figure 1. Even though the CE controller appears to be tracking the reference better in the \((p_{x},p_{y})\) plot, the plot of position as a function of time reveals that it in fact lags behind the reference trajectory, resulting in around \(3\) times higher regret compared to SS-OGD. As the reference signal has a constant rate of change, the double integrator dynamics of the open loop transfer function from the error to the state allow SS-OGD to achieve perfect position tracking. Even though, for references that have a varying rate of change, this is no longer the case, SS-OGD still performs better than the CE controller.

In a second experiment, we calculate the empirical worst case regret as a function of \(T\). In particular, for each \(T\), \(60\) random reference signals are simulated and the highest value of regret is noted. The references are generated such that \(\|\Delta r_{t}\|\) decreases with a constant factor of \(0.99\). This ensures a finite path length in the limit, as visualized in Figure 2, in agreement with the upper bound in Theorem 4.1.

Figure 1: Tracking a 2-D shape with a linearized quadrotor model. The horizontal position plot (left panel) suggests that the CE controller tracks the reference signal better. However, the time plot (top right panel) shows the visible time lag of the CE controller; by contrast SS-OGD quickly converges to the reference. This leads to a much lower rate of regret for SS-OGD (bottom right panel).

Figure 2: Empirical regret of SS-OGD for an exponentially decreasing path length.

## 6 Conclusion

In this letter, we reformulate the LQT problem as an online control problem for linear systems subject to adversarial disturbances. Within this framework we propose a novel online gradient descent-based algorithm, called SS-OGD, that incorporates the known dynamics of the system into the online control update. We analyze the dynamic regret of the proposed algorithm and provide an upper bound that scales linearly with the path length of the reference trajectory.
2302.07435
Log Parsing with Prompt-based Few-shot Learning
Logs generated by large-scale software systems provide crucial information for engineers to understand the system status and diagnose problems of the systems. Log parsing, which converts raw log messages into structured data, is the first step to enabling automated log analytics. Existing log parsers extract the common part as log templates using statistical features. However, these log parsers often fail to identify the correct templates and parameters because: 1) they often overlook the semantic meaning of log messages, and 2) they require domain-specific knowledge for different log datasets. To address the limitations of existing methods, in this paper, we propose LogPPT to capture the patterns of templates using prompt-based few-shot learning. LogPPT utilises a novel prompt tuning method to recognise keywords and parameters based on a few labelled log data. In addition, an adaptive random sampling algorithm is designed to select a small yet diverse training set. We have conducted extensive experiments on 16 public log datasets. The experimental results show that LogPPT is effective and efficient for log parsing.
Van-Hoang Le, Hongyu Zhang
2023-02-15T02:57:05Z
http://arxiv.org/abs/2302.07435v1
# Log Parsing with Prompt-based Few-shot Learning

###### Abstract

Logs generated by large-scale software systems provide crucial information for engineers to understand the system status and diagnose problems of the systems. Log parsing, which converts raw log messages into structured data, is the first step to enabling automated log analytics. Existing log parsers extract the common part as log templates using statistical features. However, these log parsers often fail to identify the correct templates and parameters because: 1) they often overlook the semantic meaning of log messages, and 2) they require domain-specific knowledge for different log datasets. To address the limitations of existing methods, in this paper, we propose LogPPT to capture the patterns of templates using prompt-based few-shot learning. LogPPT utilises a novel prompt tuning method to recognise keywords and parameters based on a few labelled log data. In addition, an adaptive random sampling algorithm is designed to select a small yet diverse training set. We have conducted extensive experiments on 16 public log datasets. The experimental results show that LogPPT is effective and efficient for log parsing.

log parsing, few-shot learning, prompt-tuning, deep learning

## I Introduction

Large-scale software-intensive systems often produce a large volume of logs to record runtime status and events for troubleshooting purposes. Logs play an important role in the maintenance and operation of software systems, as they allow engineers to better understand the system's behaviours and diagnose problems. The rich information included in log data enables a variety of log analytics tasks, such as anomaly detection [1, 2, 3, 4], root cause analysis [5, 6], failure prediction [7, 8], and log compression [9, 10]. Among them, the first and foremost step is log parsing, which parses free-text raw log messages into a structured format [11]. The structured log data from log parsing are fed to various machine learning (ML) or deep learning (DL) models to perform many downstream analysis tasks.

Log parsing is the task of converting a raw log message into a specific log template. As shown in Figure 1, log messages are generated from logging statements in the source code. A log message usually contains a header that is automatically produced by the logging framework and includes information such as component and verbosity level. The log message body (log message for short) typically consists of two parts: 1) _Template_ - constant strings (or keywords) describing the system events; 2) _Parameters_ - dynamic variables, which vary during runtime and reflect system runtime information. For example, in the first log message in Figure 1, the header (i.e., "17/08/22 15:50:46", "INFO", and "BlockManager") can be easily distinguished through regular expressions. The log message consists of a template "Putting block <*> with replication took <*>" and the parameters "rdd_1_1" and "0".

Fig. 1: An example of log parsing from Spark
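To make the header/content split concrete, here is an illustrative regular expression for a Spark-style line like the one in Figure 1. The exact pattern and the reconstructed log line are our assumptions; a real deployment needs one such pattern per logging format.

```python
import re

# Hypothetical pattern: the header (datetime, level, component) is easy to
# split off with a regular expression, while the message body still mixes
# template keywords and parameters.
HEADER = re.compile(
    r"^(?P<datetime>\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>\w+) (?P<component>\w+): (?P<content>.*)$"
)

line = "17/08/22 15:50:46 INFO BlockManager: Putting block rdd_1_1 with replication took 0"
m = HEADER.match(line)
print(m.group("datetime"), "|", m.group("level"), "|", m.group("component"))
print(m.group("content"))  # template keywords and parameters remain mixed here
```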
To achieve automated log parsing, many data-driven approaches [12, 13, 14, 15] have been proposed over the years to extract the common parts that constantly occur among log messages as templates, and the dynamic parts that vary during runtime as parameters. Although making progress, existing log parsers still suffer from unsatisfactory accuracy, which may significantly affect follow-up analyses such as log-based anomaly detection [16]. For example, the existing state-of-the-art log parsers Drain [12] and AEL [17] only achieve an average Parsing Accuracy of 0.34 and 0.28, respectively, on 16 log datasets [18]. We have identified the following limitations of the existing log parsers:

* **Accuracy:** Existing log parsers extract common parts as templates using statistical features (e.g., word length, log length, frequency) and ignore the semantic meaning of log messages. Without considering the semantic information, traditional log parsers tend to misidentify parameters as keywords [19] in many cases (e.g., when encountering previously unseen log templates).
* **Robustness:** Existing log parsers are not robust across different types of logs because they require domain-specific knowledge for different datasets [20]. The domain-specific knowledge includes data pre-processing (e.g., defining regular expressions) and hyper-parameter settings (e.g., the number of clusters or similarity threshold). The accuracy of these log parsers can be significantly affected by this input domain-specific knowledge. For example, without the pre-processing step, the parsing accuracy can decline by 6.1%-73.5% [21]. When applying the existing log parsers to a new log dataset, due to different logging formats and behaviours, time-consuming adjustment of hyper-parameters and regular expressions is needed [19].

To overcome the above-mentioned limitations, in this paper, we propose LogPPT, a novel log parser with prompt-based few-shot learning. LogPPT is able to capture the semantic information of log messages to identify keywords and parameters by learning from only a few labelled log messages. First, we design an Adaptive Random Sampling algorithm that can sample a small and diverse set of log messages to label as the training data. The training data is a set of labelled logs that contain raw log messages and the corresponding ground truth templates. Second, to effectively train a model with a few labelled log data, we tune a pre-trained language model (e.g., RoBERTa [22]) to predict a specific virtual label token ("PARAM", an acronym for parameters) at the position of parameters in the log message in a few-shot learning manner. The embedding vector for the virtual label token "PARAM" is generated based on the word distribution from language model predictions and the unlabelled log dataset. After training, LogPPT can be directly applied to parse new log data. Our proposed method does not require any pre-processing step and uses the same set of hyper-parameter values for different datasets; it is thus robust across different logging formats and behaviours, and more generalisable than existing approaches.

We have evaluated LogPPT on 16 public log datasets [11]. LogPPT achieves over 0.9 average Group Accuracy [11] and Parsing Accuracy [18, 19] when using only 32 labelled samples. The experimental results show that LogPPT is effective and efficient. It outperforms state-of-the-art parsers by 16% on Group Accuracy [11] and about 84% on Parsing Accuracy [19]. Moreover, LogPPT is robust across different log datasets. To summarise, our main contributions are as follows:

* We propose LogPPT, a prompt-based few-shot log parser that can precisely capture the patterns of templates and parameters in log messages. LogPPT uses a novel prompt tuning method to effectively learn the semantic information from a few labelled log samples.
The proposed approach does not require manually-defined regular expressions for pre-processing and uses the same set of hyper-parameter values for every dataset, and thus can quickly adapt to new log datasets.

* We evaluate LogPPT on 16 public log datasets, and the results demonstrate that LogPPT outperforms existing approaches. The experimental results confirm the effectiveness and efficiency of our proposed method.

## II Background and Motivation

### _Log Parsing_

Log parsing is one of the first steps for log analysis tasks [11]. It is the process of extracting the static log template parts and the corresponding dynamic parameters (or variables) from free-text raw log messages. For example, Figure 1 shows an example of logs of the Spark system, where the Datetime, Component, and Level fields are the log header generated by the logging framework and are generally easy to extract. The log template "Putting block <*> with replication took <*>" associated with parameters (e.g., "rdd_1_1", "0"), in contrast, is often difficult to identify. The goal of log parsing is to convert each log message into a specific log template and extract the corresponding parameters [11, 19].

The straightforward way of log parsing relies on handcrafted regular expressions or grok patterns to extract log templates and parameters [11]. However, manually writing regular expressions to parse a huge volume of logs is time-consuming and error-prone [11]. Some studies [23, 24] extract the log templates from logging statements in the source code to compose regular expressions for log parsing. However, this is not applicable in practice since the source code is often unavailable, especially for third-party libraries [11]. Therefore, regular expression matching often serves as a pre-processing step to (1) separate headers and content (which contains log templates and dynamic parameters) from raw log messages, and (2) abstract some special information such as IP addresses and IDs to improve parsing accuracy.

To achieve the goal of automated log parsing, many data-driven approaches have been proposed to identify log templates as the frequent part of log messages. Data-driven log parsing approaches can be divided into three main groups: 1) Frequent pattern mining. Some approaches, including SLCT [25], LFA [15], and Logram [14], find frequent patterns which emerge constantly across the entire log dataset. They leverage the token position or \(n\)-gram information to extract log templates based on frequent pattern mining. 2) Similarity-based clustering. These approaches apply various clustering algorithms to group similar logs and consider logs under the same group to belong to the same template. Representative methods include LKE [26], LogSig [27], and LenMa [28], which compute distances between two log messages or their signatures to cluster them based on similarity. 3) Heuristics-based parsing. AEL [17], Spell [13], and Drain [12] are heuristics-based log parsing methods that leverage unique characteristics of log messages to extract common templates efficiently.

Although making progress, traditional log parsers are still criticized for unsatisfactory parsing accuracy due to the omission of semantic information or improper evaluation metrics. Recent studies [18, 19] show that traditional approaches focus more on grouping logs and fail to identify the correct templates and parameters.
For example, in Figure 1, some tokens (such as "rdd_0_1" and "0") are identified as keywords by traditional log parsers because they do not vary in different log messages. However, these tokens should be classified as parameters considering their semantic meanings. Besides, existing log parsers are not robust across different log datasets. They require domain-specific knowledge to define regular expressions for the pre-processing of different log data [20]. For example, on the HDFS dataset [21, 29], \(block\_id\) (e.g., "blk_-6670958622368987959") information is abstracted from logs by using the regular expression "blk_-?\d+". For a new dataset such as BGL [21, 30], this regular expression must be changed to match the \(core\_id\) such as "core.2275" (i.e., "blk_-?\d+" \(\rightarrow\) "core.\d+"). Moreover, existing log parsers require specific hyper-parameters (e.g., number of clusters or similarity threshold) for different datasets to optimize the performance. For example, Drain [12] uses a low _similarity threshold_ of 0.2 for the HealthApp dataset and a high _threshold_ of 0.6 for the Proxifier dataset [11]. Due to different logging formats and behaviours, when facing a new log dataset, existing log parsers have to adjust the hyper-parameters and reconfigure the regular expressions for pre-processing [19].

### _Language Models_

#### II-B1 Pre-training and Fine-tuning

Pre-trained models have been shown effective in many natural language processing (NLP) tasks. These language models (LMs), such as BERT [31] and T5 [32], are generally pre-trained using the Masked Language Modelling (MLM) objective. During the pre-training phase, the model learns to predict randomly masked input tokens. Based on the idea that a log is actually a natural language sequence [2], some studies [16, 33, 34] have leveraged pre-trained language models such as BERT [31] to analyse log data. Language models are pre-trained on a large-scale unlabelled corpus and then fine-tuned to perform downstream tasks.

**Fine-tuning** a pre-trained model for downstream tasks [31, 35] is a prevalent paradigm in the NLP field that further trains the model in a supervised way. As shown in Figure 2(a), a straightforward way to apply fine-tuning for log parsing is to convert the log parsing task into a token classification problem. The model can easily extract keywords and form log templates by classifying whether a token in a log message is a keyword or a parameter (binary classification) using an additional classifier. However, the inconsistency between the pre-training objectives and the fine-tuning objective (i.e., classification) restrains the use of the rich knowledge distributed in pre-trained models [36, 37], leading to sub-optimal results. Besides, the performance of fine-tuning significantly depends on the scale of the downstream data.

#### II-B2 Prompt Tuning

Recently, prompt tuning [37, 38, 39, 40] has been proposed to close the gap between pre-training and downstream tasks. Figure 2(b) illustrates the concept of prompt tuning. Instead of designing a new training objective for each downstream task, prompt tuning rewrites the input by adding a natural language instruction such as "[S] is a [MASK]" to reuse the masking objective for downstream tasks. Formally, standard prompt tuning employs a prompt template \(T_{prompt}(.)\) to convert the input \(X\) to the prompt input \(X_{prompt}=T_{prompt}(X)\). The prompt template is a textual string with unfilled slots to fill the input \(X\) and a label slot [MASK].
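To make this formalism concrete, the following minimal Python sketch — our illustration, not code from any released parser — builds the prompt input \(X_{prompt}\) for a single token of a log message, using the log-parsing template and verbalizer described next.

```python
# A minimal sketch (ours, not the implementation of any released log
# parser) of how standard prompt tuning rewrites an input for a masked
# language model. The template and label-word sets mirror the
# log-parsing example discussed in the text.

PROMPT_TEMPLATE = "{X} {S} is a <mask>"   # i.e., "[X] [S] is a [MASK]"
VERBALIZER = {                            # label word -> token class
    "const": "keyword",
    "keyword": "keyword",
    "parameter": "parameter",
}

def build_prompt(log_message: str, token: str) -> str:
    """Fill the unfilled slots [X] and [S] of the prompt template."""
    return PROMPT_TEMPLATE.format(X=log_message, S=token)

log = "Putting block rdd_1_1 with replication took 0"
# Standard prompt tuning must enumerate every token, querying the masked
# LM once per token -- the inefficiency noted later in this section.
for token in log.split():
    prompt = build_prompt(log, token)
    # The masked LM fills "<mask>" with a label word (e.g., "parameter"
    # for "rdd_1_1"); VERBALIZER then maps it to the token's class.
```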
For log parsing, a standard prompt template consists of three unfilled slots to fill in the input log message, the token to be identified, and the label for the token being processed. For example, in Figure 2(b), the prompt template is in the form of "[X] [S] is a [MASK]", where [X], [S], and [MASK] are the unfilled slots for the input log message, token, and label, respectively. The LM then tries to fill the label slot [MASK] with label words such as _keyword_ or _variable_. After that, a verbalizer is used to map each predicted label word to a class for the input token. In Figure 2(b), the verbalizer contains the label word sets "_[const, keyword]_" for keywords and "_[parameter]_" for parameters. By enumerating over all tokens in a log message, we can extract the corresponding template and parameters.

According to the flexibility of the prompt template, standard prompt tuning techniques can be categorized into two types: hard prompt and soft prompt. We briefly introduce each prompt type in the following.

**Hard Prompt.** Hard prompt or discrete prompt [37, 38] is a technique that modifies the input by adding fixed natural language instructions. Hard prompt templates usually correspond to natural language phrases [41], in which each token in the prompt template is meaningful and understandable. Although hard prompts have shown promising performance [38], the template design and the label word choices are challenging because they require task-specific knowledge.

**Soft Prompt.** Soft prompt [42, 43] is an alternative to hard prompt. Instead of using fixed discrete words as in hard prompts, soft prompts use _virtual tokens_, which are in the form of continuous vectors and can be learnt during the tuning stage, to construct prompt templates. The soft prompt is proposed to remove the constraint of manually selecting a prompt template in the hard prompt.

Although achieving promising results in various NLP tasks, standard prompt tuning is insufficient for the log parsing task because (1) it needs to enumerate all span candidates, which is inelegant and time-consuming [40], and (2) it is sensitive to noise (see Section VI-A for details). In this paper, we apply prompt tuning to achieve the goal of log parsing with a few labelled training data. However, instead of using standard prompt tuning, we leverage the paradigm of template-free prompt [39] for log parsing, which does not require prompt templates as the instruction. In the template-free prompt [39], an additional _virtual label token_ is generated and plays the role of the prompt instruction as in standard prompt tuning. Then, the model learns to predict the _virtual label token_ at the positions of parameters and the original token at the positions of keywords using a custom MLM objective. The template-free prompt tuning method [39] addresses the major limitations of standard prompts by (1) relaxing the burden of manually selecting prompt templates [39] and (2) performing one-pass decoding to process all tokens simultaneously, which is more efficient compared to the time-consuming enumeration process of standard prompts [39, 40].

Fig. 2: An illustration of fine-tuning and prompt tuning for log parsing

## III Approach

In this section, we describe the proposed LogPPT approach. To overcome the limitations of existing approaches, we train a model to capture the patterns of templates and parameters based on the context information of log messages, using the rich knowledge of language models pre-trained on large corpora.
Specifically, we apply the paradigm of prompt tuning [39] to enable few-shot log parsing and to better transfer the knowledge from pre-trained language models to log parsing. To make the best use of prompt tuning, it is essential to select an optimized labelled training set for our method. Therefore, we introduce an Adaptive Random Sampling algorithm to effectively select a small number of samples for training. The overview of the proposed approach is shown in Figure 3. In the following, we first present the problem formulation in Section III-A. Then, we describe the few-shot data sampling method in Section III-B. Section III-C describes the training process, which consists of three modules, including a pre-trained language model, a virtual label token generation module, and a training objective. Finally, we describe how to apply LogPPT for online parsing in Section III-D.

### _Problem Definition_

In this work, we transform the log parsing task into a parameter recognition problem where only a small number of labelled examples are used for training, by adopting a novel prompt tuning method [39]. Specifically, for a new dataset \(\mathcal{D}\), we tune a pre-trained language model, \(\mathcal{M}\), to recognise keywords and parameters in a log message through prompt tuning. The model takes as input a raw log message consisting of \(n\) tokens, \(X=\{x_{1},x_{2},\dots,x_{n}\}\), and predicts a virtual label token "PARAM" at the position of parameters. For keywords, the model is trained to predict the original tokens. Formally, the model \(\mathcal{M}\) is trained to generate the output \(Y=\{y_{1},y_{2},\dots,y_{n}\}\), where:

\[y_{i}=\mathcal{M}(x_{i})=\begin{cases}\text{``PARAM''}&\text{if $x_{i}$ is a parameter}\\ x_{i}&\text{if $x_{i}$ is a keyword}\end{cases} \tag{1}\]

For example, as shown in Figure 3, the model is trained to predict the parameter "blgio91" as the label token "PARAM". For keywords such as "failed", the model predicts the original words. "PARAM" is a specific virtual token that does not have any linguistic meaning. It indicates parameters in log messages and guides the model to recognise those parameters based on their relations with the "PARAM" token. The embedding vector of "PARAM" is calculated based on the most frequent parameters in log messages. "PARAM", therefore, is generated using both labelled training data and unlabelled data to better represent the meaning of parameters in log messages. In the online parsing (inference) phase, all tokens with \(y_{i}=\text{``PARAM''}\) are considered parameters, and the other tokens are included in the log template.

### _Few-shot Data Sampling_

During the training phase, our proposed method requires a small amount of labelled log data as the training dataset. To collect accurately labelled samples with low manual effort, we propose a simple yet effective approach to select a small number (\(K\)) of labelled samples. Firstly, training log messages are cleaned by applying some commonly-used pre-processing techniques [2, 16], such as removing all non-character tokens and stop words, and splitting camel case. Then, we propose to use an Adaptive Random Sampling algorithm from Adaptive Random Testing [44] to obtain a diverse and evenly distributed sample set. Algorithm 1 describes the adaptive random sampling based algorithm for few-shot data selection. The algorithm iteratively selects one log message per iteration based on their similarities until \(\mathcal{S}\) contains \(K\) samples.
From lines 5-8, \(\eta\) random candidate logs from \(\widehat{L}\) are selected and stored in \(\widehat{C}\). Then, for each candidate in \(\widehat{C}\), the algorithm finds its nearest neighbour in \(\mathcal{S}\) and calculates the similarity between them (lines 9-16). At line 17, the algorithm finds the candidate \(c\) in \(\widehat{C}\) which has the smallest similarity with its nearest neighbour (i.e., the smallest \(\Delta_{c}\)) and inserts it into the sample set \(\mathcal{S}\). The outer loop repeats until \(\mathcal{S}\) contains \(K\) elements. From lines 20-23, the algorithm collects the templates for all original log messages in \(\mathcal{S}\) from user feedback and returns \(\mathcal{D}_{train}\) as the final output.

```
Data:   D: Log dataset
        K: The number of collected samples
Result: D_train: a set of K labelled samples
1   L ← pre-process(D)      // L_i = {cln, org}: cleaned and original logs
2   D_train ← ∅             // initialize the training set
3   S ← {l | l ∈ L and l.cln is the shortest cleaned log}
4   while K > 1 do
5       C ← ∅               // initialize candidate set
6       for i = 1 → η do    // η = 32
7           C.add({random c ∈ L | c.cln ∉ C & c.org ∉ S})
8       end for
        /* compute the similarities between logs in C and S */
9       Δ ← ∅
10      for c = {cln, org} ∈ C do
11          δ ← 0
            /* find the nearest neighbour of c in S and calculate the
               similarity between c and its nearest neighbour */
12          foreach l = {cln, org} ∈ S do
13              δ ← MAX(δ, similarity(c.cln, l.cln))
14          end foreach
15          Δ.add(δ)
16      end for
        /* select the candidate with the smallest similarity (i.e., the
           longest distance) to its nearest neighbour in S */
17      S.add({c ∈ C | Δ_c is smallest})
18      K ← K − 1
19  end while
        /* label the sample set S of K samples as the training set */
20  foreach s = {cln, org} ∈ S do
21      D_train.add({s.org, template(s.org)})
22  end foreach
23  return D_train
```
**Algorithm 1** Few-shot Data Sampling
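As a cross-check of the listing above, the following is a minimal Python rendering of Algorithm 1. It is an illustrative sketch rather than LogPPT's released implementation, and the token-overlap `jaccard` similarity is our assumption — the algorithm only requires some string-similarity measure.

```python
import random

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two cleaned log messages.
    (An assumption; Algorithm 1 only requires a similarity measure.)"""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def adaptive_random_sampling(logs, K, eta=32):
    """Select K diverse log messages to label (sketch of Algorithm 1).
    `logs` is a list of (cleaned, original) pairs after pre-processing."""
    # Seed the sample set with the shortest cleaned log (line 3).
    sample = [min(logs, key=lambda l: len(l[0]))]
    while K > 1:
        # Draw eta random candidates not already sampled (lines 5-8).
        pool = [l for l in logs if l not in sample]
        candidates = random.sample(pool, min(eta, len(pool)))
        # For each candidate, the similarity to its nearest neighbour in
        # the sample set (lines 9-16).
        deltas = [max(jaccard(c[0], s[0]) for s in sample)
                  for c in candidates]
        # Keep the candidate least similar to anything sampled (line 17).
        sample.append(candidates[deltas.index(min(deltas))])
        K -= 1
    # The originals in `sample` are then labelled by the user (lines 20-23).
    return [org for _, org in sample]
```

This greedy farthest-point style of selection favours structurally distinct messages, which is how a small \(K\) can still cover a diverse set of templates.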
### _Prompt-Tuning for Log Parsing_

In this work, we take advantage of prompt tuning, which has recently set the state of the art for many NLP tasks, by applying the entity-oriented LM objective [39]. The essence behind this idea is that (1) most keywords in log statements are valid, readable words that can be looked up in a dictionary [33], and thus are easier for the language model to predict; and (2) parameters, in contrast, are constantly changing and are hard for the language model to predict. In view of this, we transform the log parsing task into a label token prediction problem. Specifically, for parameters, we force the model to predict the virtual label token "PARAM", while for keywords, the model is trained to predict the original words.

#### III-C1 Pre-trained Language Model

Pre-trained language models [22, 31, 35, 45] have been shown to be effective in many NLP tasks. These models are pre-trained on large-scale unlabelled corpora and then usually fine-tuned on downstream tasks. Recent studies [16, 33] demonstrate that these pre-trained models can be applied to understand the semantic meanings of log messages, thus favouring many downstream log analytics tasks.

In this paper, we choose RoBERTa [22] as the studied pre-trained model since it is one of the most widely-used models. RoBERTa is an encoder-only model and uses the same transformer architecture as BERT [31]. Different from BERT, RoBERTa is trained to predict the mask token with a large byte-level Byte-Pair Encoding (BPE) vocabulary [46]. One of the main reasons we choose RoBERTa over BERT is that the use of BPE allows RoBERTa to tokenize any input text without introducing any "unknown" tokens, by tokenizing out-of-vocabulary words into subwords. This makes RoBERTa more suitable for log parsing because parameters created by developers are far beyond the scale of common English words and constantly changing, which would incur the out-of-vocabulary problem [19]. Several studies have also found that RoBERTa is effective for log analysis [16, 33, 47].

#### III-C2 Virtual Label Token Generation

Given an input sequence \(X=\{x_{1},x_{2},...,x_{n}\}\), we adopt the template-free prompt tuning method [39] to predict a virtual label token "PARAM" at each position \(i\) where \(x_{i}\) is a parameter, via the pre-trained language model \(\mathcal{M}\). Since all parameters are converted to the same token, it is essential to find a pivot token that can properly represent the parameters. From the training set \(\mathcal{D}_{train}=\{(X_{i},Y_{i})\}_{i=1}^{K}\), we leverage the pre-trained language model \(\mathcal{M}\) to get the probability distribution of predicting each token \(t\) at each position \(i\). Specifically, we feed each sample \((X,Y)\) into \(\mathcal{M}\) and get the probability distribution \(p(\widehat{x}_{i}=t|X)\) of predicting each token \(t\) in the log message \(X\). Then, for each position \(i\) which is indicated as a parameter, we select the top-\(k\) predicted tokens for \(x_{i}\) to form the initial parameter indication set \(\mathcal{V}_{ini}\). This step aims to select the top-\(k\) tokens having a similar meaning to the original parameter tokens to enrich the parameter indication set. From the initial label-word set \(\mathcal{V}_{ini}\), we then search for the most frequent words in the unlabelled data. Specifically, we calculate the frequency \(\phi(x=t|D)\) of each token \(t\in\mathcal{V}_{ini}\) and select the most frequent words by ranking:

\[\mathcal{V}=\underset{t}{\text{argmax}}\ \phi(x=t|D),\forall t\in\mathcal{V}_{ini} \tag{2}\]

After obtaining the set \(\mathcal{V}\), we assign the embedding vector for the virtual label token "PARAM" by calculating the mean vector of all tokens in \(\mathcal{V}\), and add the token to the language model \(\mathcal{M}\).

#### III-C3 Training

Given the input log message \(X=\{x_{1},x_{2},\ldots,x_{n}\}\), we construct a target sequence \(Y=\{y_{1},y_{2},\ldots,y_{n}\}\) by replacing each parameter at position \(j\) with the virtual label token "PARAM" and maintaining the original words at keyword positions, following Equation 1. Then, the LM is trained to maximize the probability \(P(Y|X)\) of the target sequence \(Y\):

\[\mathcal{L}=-\frac{1}{K}\sum_{i=1}^{K}\Biggl{(}\frac{1}{n}\sum_{j=1}^{n}\log P(x_{j}=y_{j}|X_{i})\Biggr{)} \tag{3}\]

where \(K\) is the number of labelled training samples. Note that we reuse the whole pre-trained model during the tuning process. The entity-oriented objective is similar to the LM-based (i.e., mask token prediction) objective, which reduces the gap between pre-training and fine-tuning, thus allowing our model to keep the knowledge learned by the pre-trained LM model.
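To make Sections III-C2 and III-C3 concrete, the sketch below shows one way the "PARAM" embedding could be initialised with the Hugging Face transformers library. It is a minimal illustration under our own naming, not LogPPT's released code; in particular, `labelled_logs`, the parameter-position bookkeeping, and the choice of the roberta-base checkpoint are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative sketch of virtual-label-token generation (Sec. III-C2);
# variable names and the roberta-base checkpoint are assumptions.
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

tok.add_tokens(["PARAM"])              # register the virtual label token
model.resize_token_embeddings(len(tok))
param_id = tok.convert_tokens_to_ids("PARAM")

def initial_indication_set(labelled_logs, topk=8):
    """Top-k LM predictions at every labelled parameter position (V_ini).
    `labelled_logs` yields (text, parameter_token_positions) pairs, where
    positions index into the tokenized sequence (a simplification)."""
    v_ini = []
    for text, param_positions in labelled_logs:
        ids = tok(text, return_tensors="pt")
        with torch.no_grad():
            logits = model(**ids).logits[0]       # (seq_len, vocab_size)
        for pos in param_positions:
            v_ini += logits[pos].topk(topk).indices.tolist()
    return v_ini

def assign_param_embedding(v_selected):
    """Set the 'PARAM' embedding to the mean embedding of the tokens kept
    after the frequency ranking over the unlabelled data [Eq. (2)]."""
    emb = model.get_input_embeddings().weight
    with torch.no_grad():
        emb[param_id] = emb[torch.tensor(v_selected)].mean(dim=0)
```

Training then proceeds with the ordinary MLM head against the target sequence of Eq. (1), so no task-specific classifier is added — which is the point of the entity-oriented objective.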
### _Online Parsing_ During online parsing (inference), we directly feed the log messages into the trained model, which will first tokenize the input to a set of tokens and then predict their corresponding target tokens. If a token is predicted as "PARAM", it will be integrated into the parameter list; otherwise, it will be kept in the log template. Finally, we follow [18] to post-process log templates by replacing consecutive parameters with a single parameter. Note that we only need a one-pass decoding process to parse a log message, which is efficient when scaling to a large volume of logs. ## IV Experimental Design ### _Research Questions_ We evaluate our approach by answering the following research questions (RQs): **RQ1:** How effective is LogPPT? **RQ2:** How efficient is LogPPT? **RQ3:** How do different modules contribute to LogPPT? **RQ4:** How does LogPPT perform with different tuning techniques? ### _Datasets_ We conduct experiments based on datasets initially collected from the _LogPai_ benchmark [11, 48], which consists of log data of 16 different systems, including distributed systems, supercomputers, operating systems, mobile systems, server applications, and standalone software. To determine the ground truth log templates, Zhu et al. [11] randomly sampled 2,000 log messages for each dataset and manually labelled them. However, recent studies [18, 19] point out that there are multiple errors from these original datasets. Therefore, Khan et al. [18] applied some heuristic rules such as Double Space or User-defined String to fix incorrect templates in the original datasets. In this study, we use the corrected version of these 16 datasets from [18] in our evaluation. ### _Baselines_ We compare our proposed method with five state-of-the-art methods, including AEL [17], LenMa [28], Spell [13], Drain [12], and Logram [14]. These approaches apply many techniques such as similarity-based clustering (i.e., LenMa), frequency-based mining (i.e., AEL and Logram), or heuristics-based searching (i.e., Drain and Spell). We choose these five approaches in our evaluation since they have their source code publicly available; and a prior study [11] finds that these approaches have the highest accuracy and efficiency among all the evaluated log parsers. We adopt the implementation of these methods from their replication packages [49, 50]. For a fair comparison, we extend baseline methods to include the labelled data from the data sampling phase. We transform the message-level labels into token-level labels by splitting log messages using default separators of each method. ### _Evaluation Metrics_ Following recent studies [11, 18, 19, 20], we apply three metrics in our evaluation, including: **Group Accuracy (GA)**: Group Accuracy [11] is the most commonly used metric for log parsing. Group Accuracy considers template identification as a clustering process in which log messages with different log events are clustered into different groups [18]. The GA metric is defined as the ratio of "correctly parsed" log messages over the total number of log messages, where a log message is considered "correctly parsed" if and only if it is grouped with other log messages consistent with the ground truth. However, recent studies [18, 19] show that GA only accounts for how the parsed templates support the log message grouping activity instead of considering whether the templates and parameters are correctly identified or not. 
**Parsing Accuracy (PA):** The Parsing Accuracy (or Message-Level Accuracy [19]) metric is defined as the ratio of "correctly parsed" log messages over the total number of log messages, where a log message is considered to be "correctly parsed" if and only if every token of the log message is correctly identified as template or variable. This metric is much stricter than Group Accuracy since any incorrectly parsed token will lead to a wrong parsing result for the whole log message. We find this metric especially useful, compared to Group Accuracy, when evaluating the performance of log parsers on unseen log events. For example, for those log events that only appear once, GA always considers them as correctly identified since they belong to the correct groups. In contrast, PA could mark this identification as incorrect if some variables are incorrectly recognised as keywords.

**Edit Distance (ED):** Edit Distance was proposed in [20]. Different from GA and PA, Edit Distance is used to evaluate the template extraction in terms of string comparison. Specifically, Edit Distance (or Levenshtein edit distance) is computed by counting the minimum number of operations required to transform one template into the other [20]. The score of Edit Distance for a dataset is computed as the median edit distance over all pairs of parsed templates and ground truth templates. By computing the distance between parsed templates and ground truth templates, this metric measures the accuracy of log parsers in terms of meaning similarity (i.e., lexical similarity in our evaluation) between parsed results and ground truth. Note that the smaller the distance between two templates, the more similar they are.

### _Implementation and Environment_

We conduct our experiments on a GPU server equipped with an NVIDIA Tesla V100 GPU and CUDA 10.2. We implement LogPPT with Python 3.8 and PyTorch 1.7. Following recent studies on prompt tuning [36, 38], during the training process we utilize the AdamW [51] optimizer and set the initial learning rate to \(5e^{-5}\). We set the batch size to 8 and train the model for 200 steps. The AdamW optimizer is used with a linearly decaying schedule with 10% warm-up steps. During the online parsing phase, we set the batch size to 32. In the Virtual Label Token Generation module, we calculate the embedding of the virtual label token "PARAM" from the 8 most frequent label tokens in our experiments. We also evaluate the performance of LogPPT with different numbers of frequent label tokens. We provide the results on our project webpage1 due to space constraints. The results show that the performance of the proposed method is robust to the number of label tokens: it achieves consistently good results when choosing at least four label tokens. In the Few-shot Data Sampling module, we set \(K=32\) as the default. We also experiment with different values of \(K\) (from 4 to 128) in the experiments.

Footnote 1: [https://github.com/LogIntelligence/LogPPT](https://github.com/LogIntelligence/LogPPT)
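For concreteness, the sketch below shows how a predicted token sequence is turned into a template (the post-processing of Sec. III-D) and how the three metrics above can be computed over a dataset. It is an illustrative re-implementation in plain Python, not the official benchmark scripts, so details such as the character-level edit distance are assumptions.

```python
def to_template(tokens, preds):
    """Rebuild a template: 'PARAM' predictions become '<*>', and
    consecutive parameters collapse into a single '<*>' (Sec. III-D)."""
    out = []
    for token, p in zip(tokens, preds):
        if p == "PARAM":
            if not out or out[-1] != "<*>":
                out.append("<*>")
        else:
            out.append(token)
    return " ".join(out)

def group_accuracy(pred, truth):
    """GA: a message is correct iff the set of messages sharing its
    predicted template equals the set sharing its ground-truth template."""
    group = lambda ts: {t: frozenset(i for i, x in enumerate(ts) if x == t)
                        for t in set(ts)}
    gp, gt = group(pred), group(truth)
    return sum(gp[p] == gt[t] for p, t in zip(pred, truth)) / len(truth)

def parsing_accuracy(pred, truth):
    """PA: fraction of messages whose predicted template matches exactly."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def edit_distance(a, b):
    """Levenshtein distance between two templates (character level)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]
```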
## V Experimental Results

### _RQ1: Parsing Effectiveness_

#### V-A1 Accuracy

In this RQ, we compare LogPPT with five state-of-the-art methods (including AEL [17], LenMa [28], Spell [13], Drain [12], and Logram [14]) on all 16 log datasets. Firstly, we compare the results of LogPPT with the baselines using \(K=32\) labelled samples. The results in terms of three metrics (Group Accuracy, Parsing Accuracy, and Edit Distance) are shown in Table I.

From the results, we can see that our model outperforms the baseline methods on almost all datasets in the three evaluation metrics. Specifically, in terms of Group Accuracy (GA), LogPPT exceeds the most powerful log parser (Drain) by 15.8% (0.923 versus 0.797 on average) and achieves the best results on 12 out of 16 datasets. It is worth noting that LogPPT achieves an accuracy of over 0.9 on 12 datasets and a perfect accuracy of 1.0 on four of them, which is significantly superior to existing log parsers. In terms of Parsing Accuracy (PA), LogPPT surpasses the baselines by at least 83.9%, achieving an accuracy of 0.916 on average. LogPPT also achieves the best parsing accuracy on 14 out of 16 datasets. The high Parsing Accuracy suggests that LogPPT is able to accurately recognise the templates and corresponding parameters of log messages. The experimental results confirm that LogPPT is effective in grouping logs into the same templates and identifying correct log templates and parameters.

Inspired by recent studies [18, 20], we also evaluate our proposed LogPPT in terms of Edit Distance (ED) to measure the similarity between identified templates and their corresponding ground truth. It can be seen that LogPPT achieves the best average edit distance of 1.130, which is about 7 times better than Drain. Besides, LogPPT outperforms the baseline approaches on 15 out of 16 datasets and achieves a comparable result on the Apache dataset (0.024 versus 0). The experimental results on Edit Distance show that the parsed templates produced by LogPPT have high textual similarity with the ground truth. The main reason for the high accuracy of LogPPT is that it is capable of learning from the semantic information of log messages, and thus is able to precisely identify the templates and parameters of log messages.

#### V-A2 Robustness

Our proposed LogPPT explicitly aims at supporting a broad range of diverse log datasets, as employing a general log parser in production environments requires robust performance [11]. Existing log parsers are sensitive to pre-processing steps, which involve domain-specific knowledge, and therefore show low robustness against different logging formats and behaviours [11, 21]. We next analyze and compare the robustness of LogPPT against different types of logs with that of the baselines. Figure 4 shows the accuracy distribution of each log parser across different log datasets. From the results, we can see that LogPPT outperforms the baselines in terms of robustness across different log types. Existing methods require different regular expressions for pre-processing and different hyper-parameter values, and thus perform inconsistently on different datasets. For example, Drain uses different _similarity thresholds_ (e.g., 0.2 for HealthApp and 0.6 for Proxifier) and different regular expressions (e.g., "blk_-?\d+" for HDFS and "core.\d+" for BGL) for different datasets. In contrast, LogPPT does not require manually defined regular expressions and achieves the smallest variance over different datasets. LogPPT is robust and performs well on most of the datasets (accuracy higher than 0.9) in terms of group and parsing accuracy. For example, LogPPT yields a median of 0.99 for GA robustness and 0.94 for PA robustness, which exceeds the second best log parser (i.e., Drain) by 6.9% and 98.5%, respectively.
Besides, LogPPT uses the same set of hyper-parameter values for every dataset in the training phase and does not require re-adjustment for each dataset. Overall, the experimental results confirm that LogPPT is robust and can be applied to different log datasets with low effort.

Fig. 4: Accuracy Distribution of Log Parsers with 32-shot

Our method requires a small amount (\(K\)) of labelled data, sampled by an adaptive random sampling algorithm, as the training set. Therefore, to evaluate the sensitivity of our proposed LogPPT to the amount of labelled data, we conduct an experiment using different numbers of training log messages (i.e., different _shots_). Figure 5 shows the performance of LogPPT with different numbers of shots. The experimental results show that the model's performance witnesses a severe drop when less data is used for training. The low results are reasonable since pre-trained models require task-specific data to better adapt to downstream tasks [36]. However, we observe that LogPPT achieves a good balance between Group Accuracy and Parsing Accuracy. Also, LogPPT performs better than the baselines in terms of Parsing Accuracy and Edit Distance even with only four labelled training samples. Moreover, it is noticeable that LogPPT consistently achieves good results when \(K\geq 16\).

Fig. 5: Results of LogPPT with different shots (\(K\))

In summary, LogPPT significantly outperforms the existing approaches in all three evaluation metrics. The experimental results confirm that LogPPT is capable of recognising log templates and the corresponding parameters.

#### V-A3 Accuracy with Unseen Logs

Unseen log events occur frequently in logs. In this study, we consider the log events appearing only once in a dataset as previously unseen log events. LogPPT can accurately recognise the templates and corresponding parameters of unseen log events, as reflected by the high Parsing Accuracy. To further evaluate the ability of LogPPT in parsing unseen logs, we measure the Parsing Accuracy of LogPPT on unseen log data and compare it with the baseline methods. Specifically, for every dataset, we extract those log messages whose corresponding log templates appear only once according to the ground truth, and then calculate the Parsing Accuracy on these log messages. Table II shows the results. There are 42.64 unseen log events on average across the 16 studied datasets. LogPPT achieves the best accuracy of 0.599 when parsing unseen log data, which exceeds existing log parsers by 58.9% (LenMa) to 517.5% (Logram).

### _RQ2: Runtime Performance Evaluation_

Besides effectiveness, efficiency is another critical metric for log parsers, which need to handle large-scale log data. To measure the efficiency of our proposed LogPPT, we record the running time it needs to finish the entire parsing process and compare it with the baseline methods. Specifically, we conduct this experiment on the BGL and HDFS datasets, as they are relatively large. Figure 6 reports the results. We can see that the running time of LogPPT increases slowly with the increase of log data volume. With the use of GPU acceleration, our model can perform faster than or comparably with traditional log parsers. For example, LogPPT takes about 107 seconds to process one million log messages, which is just slightly slower than Drain (94s), Spell (95s), and AEL (84s), and much faster than LenMa and Logram (which cannot finish within 1,000 seconds).
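The unseen-event evaluation of Section V-A3 above reduces to a simple filter over the ground truth. The following minimal sketch (ours, not the official evaluation script) illustrates it.

```python
from collections import Counter

def unseen_parsing_accuracy(pred_templates, true_templates):
    """Parsing Accuracy restricted to 'unseen' events: messages whose
    ground-truth template occurs exactly once in the dataset (Sec. V-A3)."""
    counts = Counter(true_templates)
    pairs = [(p, t) for p, t in zip(pred_templates, true_templates)
             if counts[t] == 1]
    if not pairs:
        return float("nan")   # no unseen events in this dataset
    return sum(p == t for p, t in pairs) / len(pairs)
```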
### _RQ3: Ablation Study_

In this section, we evaluate the effectiveness of the major components and parameters in our proposed model. Specifically, we exclude the Virtual Label Token Generation module and let the pre-trained model automatically assign the embedding for the virtual label token "PARAM". To measure the contribution of the Adaptive Random Sampling module, we remove it from our model and randomly sample the log messages for labelling. We repeat this random process five times to avoid random bias and report the average results in Table III. We can see that LogPPT performs worse in terms of parsing accuracy and edit distance without the Virtual Label Token Generation and Adaptive Random Sampling modules. For example, without the Virtual Label Token Generation module, LogPPT only achieves a parsing accuracy of 0.835, which is 8.8% worse than the complete LogPPT, while it can still achieve an acceptable group accuracy (0.879). The reason is that without the Virtual Label Token Generation module, the model cannot find the pivot word that best represents parameters in log messages. Consequently, many parameters are misidentified, leading to worse parsing accuracy and edit distance. On the other hand, log messages are highly imbalanced across different log templates, so a naive random sampling technique cannot guarantee the quality of the training set. Therefore, the results significantly decline when we remove the Adaptive Random Sampling module (a 23.1% decrease in terms of Parsing Accuracy). In summary, this comparison demonstrates the usefulness of the proposed Adaptive Random Sampling and Virtual Label Token Generation modules of LogPPT.

### _RQ4: Comparison with Different Tuning Techniques_

LogPPT applies a novel prompt tuning method (i.e., template-free prompt [39]), which relaxes the burden of manually selecting prompt templates and improves efficiency compared to other prompt tuning methods. In this section, we evaluate the performance of this prompt tuning method. To this end, we replace our prompt tuning module with four different prompt tuning methods (introduced in Section II-B) and a fine-tuning technique. We then compare the performance of LogPPT with that of the variants.

* **FT (fine-tuning)**: We add a binary classification layer on top of the pre-trained RoBERTa model and fine-tune the model to perform log parsing as a binary token classification problem.

* **HardPT\({}_{\mathbf{M}}\)** (hard prompt tuning with manual label words): We use a standard hard prompt [52] with the prompt template "[X] [S] is a [MASK]", where [X], [S], and [MASK] are the unfilled slots for the input log message, token, and label, respectively. The model learns to predict the label word at the [MASK] position. In this setting, we use fixed manual sets of label words, including "_[const, keyword]_" for keyword tokens and "_[variable, parameter]_" for parameter tokens.

* **HardPT\({}_{\mathbf{S}}\)** (hard prompt tuning with soft label words): We use the same standard hard prompt template as in the above setting. However, we use trainable tokens [53] as the label words in this setting.

* **SoftPT\({}_{\mathbf{M}}\)** (soft prompt tuning with manual label words): In this setting, we follow recent works and use a soft prompt template of "[X] [S] [SOFT] [SOFT] [MASK]", where [X], [S], and [MASK] are the unfilled slots for the input log message, token, and label, respectively, and [SOFT] is a trainable token. The embeddings of these [SOFT] tokens are optimized during the tuning stage.
We use manual label word sets for this setting as in **HardPT\({}_{\mathbf{M}}\)**.

* **SoftPT\({}_{\mathbf{S}}\)** (soft prompt tuning with soft label words): We use the same soft prompt template of "[X] [S] [SOFT] [SOFT] [MASK]" as in the **SoftPT\({}_{\mathbf{M}}\)** setting. For the label words, we adopt the same setting as **HardPT\({}_{\mathbf{S}}\)** and use trainable tokens [53] as the label words.

Table IV shows the results. We can see that LogPPT with our proposed prompt tuning method achieves the best results among all studied methods. For example, with the _16-shot_ setting, LogPPT outperforms the others by 6.0%-74.5% in terms of Parsing Accuracy. Our proposed method significantly outperforms other prompt tuning methods because it can leverage both the semantic and position information of tokens in log messages. Standard prompt tuning methods overly focus on leveraging the semantic meaning of a token and overlook the contextual information, which is important in log parsing. Fine-tuning, on the other hand, can achieve better results than standard prompt tuning because it can use the positional information during the training stage. With more labelled training data, fine-tuning can achieve quite similar results to LogPPT.

Next, we evaluate the parsing time of different tuning methods. As shown in Figure 7, the parsing times of LogPPT and the fine-tuning approach are similar because they only need one-pass decoding to parse a log message. The other prompt tuning methods, in contrast, need to enumerate all tokens in a log message, which is a time-consuming process. For example, with soft prompts, the model cannot finish parsing one million log lines within 1,000 seconds. In summary, our proposed method is more effective and efficient compared to other tuning techniques and can achieve high accuracy with a few shots of training data.

Fig. 6: Running time of different log parsers under different volumes

## VI Discussion

### _Why does LogPPT Work?_

There are several reasons that make LogPPT perform better than the related approaches. First, LogPPT predicts keywords and parameters using the semantic information of log messages by tuning a pre-trained language model. Thus, compared to traditional methods using only superficial features, LogPPT is able to identify the keywords and parameters more precisely. Besides, LogPPT does not require domain-specific knowledge to define regular expressions for each dataset, and is thus easy to apply to a new log dataset. Second, compared to other few-shot learning techniques, LogPPT applies an effective and efficient prompt tuning method, which avoids the complex design of prompt instructions and also boosts the few-shot performance. LogPPT leverages both the semantic and positional information of tokens in log messages, and can thus handle noise in log data better than other prompt tuning methods. For example, the log message from Proxifier, "open through **proxy proxy**.cse.cuhk.edu.hk:5070", contains two "proxy" tokens with different roles. Standard prompt tuning methods fail to distinguish these tokens and predict the same label for them, because they only consider the semantic meaning of tokens and ignore the position information, which is important for log parsing. In contrast, our method utilizes both the semantic and position information of a token in log messages and parses this message correctly (100% parsing accuracy).

### _Threats to Validity_

We have identified the following major threats to validity.
**Data Quality.** In this paper, we used public log datasets for our evaluation. The ground truth templates of all log messages, including log templates and corresponding parameters, are provided within the datasets. Although these datasets are commonly used by many related works [11, 14, 20], they may also contain a small proportion of errors. To reduce this threat, we leverage the latest version of the benchmark datasets [18], which are corrected with automatic and manually-defined rules.

**Tool Comparison.** In our evaluation, we compared our results with related approaches. These approaches achieved the best results in a recent benchmark [11] and are used in both industry and academia. We adopt the implementations from their replication packages. We apply the parameters and settings (e.g., number of log templates, similarity threshold, etc.) optimized by the previous work [11].

**Labelling Effort.** Our proposed method relies on a small number of labelled log data. To reduce the labelling effort, we propose to use an Adaptive Random Sampling algorithm to select a diverse set of \(K\) log messages (\(K\) from 4 to 128) and obtain the templates from user feedback.

## VII Related Work

**Log Analysis with Language Models:** Log analysis is a research area that has attracted a lot of attention due to its practical importance. Typical applications of log analysis include anomaly detection [1, 54, 55, 56], failure prediction [7, 8], root cause analysis [5, 6], etc. Recently, inspired by the success of pre-trained models in NLP, many studies have been proposed to apply pre-trained language models to log analysis. SwissLog [33] and NeuralLog [16] utilize the pre-trained BERT [31] model for log-based anomaly detection. Ott et al. [57] studied the use of different pre-trained models such as BERT [31] and XLNet [58] for log anomaly detection. Setianto et al. [59] proposed to fine-tune the GPT-2 [45] model for log parsing.

**Data-driven Log Parsing:** Log parsing has become an active research topic in recent years [12, 13, 20, 60]. Recently, to address the limitations of traditional log parsers and improve the parsing accuracy, some approaches [34, 19] proposed to use token classification for log parsing. LogStamp [34] converts the log parsing task into a sequence labelling problem. It leverages the BERT [31] model to classify words in log messages. These approaches, however, adopt a traditional log parser to generate pseudo labels for log messages as the training data, which can introduce considerable noise into the training data. Liu et al. [19] proposed UniParser, a unified log parser for heterogeneous log data. UniParser is trained with labelled data across multiple log sources to capture the common patterns of templates and parameters. Although effective, UniParser requires a noticeable amount of labelled data to train a classification model, which is not always available in practice. Besides, UniParser requires handcrafted rules to split raw log messages into tokens, which is not suitable for some special datasets [19]. Our LogPPT can effectively leverage semantic information from a few labelled data by using a pre-trained language model. LogPPT does not require any domain-specific knowledge to pre-process log data, and thus can adapt to new log datasets with low effort. Besides, by using a novel prompt tuning method, LogPPT can effectively learn the semantic patterns from a few labelled data.

Fig. 7: Parsing time of different tuning methods
## VIII Conclusion

Log parsing is the foundational step to enabling automated log analytics. To overcome the limitations of existing log parsers, we propose a log parser with prompt-based few-shot learning, namely LogPPT, to capture the patterns of templates and parameters. LogPPT utilises a novel prompt tuning method to recognise keywords and parameters from a few labelled log data selected by an adaptive random sampling algorithm. We have evaluated LogPPT on public log datasets. The results show that LogPPT is effective and efficient, outperforming the state-of-the-art log parsers. In the future, we will deploy LogPPT in a production environment to further evaluate its scalability and effectiveness in practice.

**Data Availability:** Our source code and experimental data are publicly available at [https://github.com/LogIntelligence/LogPPT](https://github.com/LogIntelligence/LogPPT).

## Acknowledgment

This work is supported by Australian Research Council (ARC) Discovery Projects (DP200102940, DP220103044). We also thank the anonymous reviewers for their insightful and constructive comments, which significantly improved this paper.
2302.11257
Defining eccentricity for gravitational wave astronomy
Eccentric compact binary mergers are significant scientific targets for current and future gravitational wave observatories. To detect and analyze eccentric signals, there is an increasing effort to develop waveform models, numerical relativity simulations, and parameter estimation frameworks for eccentric binaries. Unfortunately, current models and simulations use different internal parameterisations of eccentricity in the absence of a unique natural definition of eccentricity in general relativity, which can result in incompatible eccentricity measurements. In this paper, we adopt a standardized definition of eccentricity and mean anomaly based solely on waveform quantities, and make our implementation publicly available through an easy-to-use Python package, gw_eccentricity. This definition is free of gauge ambiguities, has the correct Newtonian limit, and can be applied as a postprocessing step when comparing eccentricity measurements from different models. This standardization puts all models and simulations on the same footing and enables direct comparisons between eccentricity estimates from gravitational wave observations and astrophysical predictions. We demonstrate the applicability of this definition and the robustness of our implementation for waveforms of different origins, including post-Newtonian theory, effective one body, extreme mass ratio inspirals, and numerical relativity simulations. We focus on binaries without spin-precession in this work, but possible generalizations to spin-precessing binaries are discussed.
Md Arif Shaikh, Vijay Varma, Harald P. Pfeiffer, Antoni Ramos-Buades, Maarten van de Meent
2023-02-22T10:10:45Z
http://arxiv.org/abs/2302.11257v3
# Defining eccentricity for gravitational wave astronomy

###### Abstract

Eccentric compact binary mergers are significant scientific targets for current and future gravitational wave observatories. To detect and analyze eccentric signals, there is an increasing effort to develop waveform models, numerical relativity simulations, and parameter estimation frameworks for eccentric binaries. Unfortunately, current models and simulations adopt different internal parameterisations of eccentricity in the absence of a unique natural definition of eccentricity in general relativity, which can result in incompatible eccentricity measurements. In this paper, we present a standard definition of eccentricity and mean anomaly based solely on waveform quantities. This definition is free of gauge ambiguities, has the correct Newtonian limit, and can be applied as a postprocessing step when comparing eccentricity measurements from different models. This standardization puts all models and simulations on the same footing and enables direct comparisons between eccentricity estimates from gravitational wave observations and astrophysical predictions. We demonstrate the applicability of our definition for waveforms of different origins, including post-Newtonian theory, effective one body, extreme mass ratio inspirals, and numerical relativity simulations. We focus on binaries without spin-precession in this work, but possible generalizations to spin-precessing binaries are discussed. We make our implementation publicly available through an easy-to-use Python package, gw_eccentricity.

## I Introduction

The gravitational wave (GW) detectors LIGO [1] and Virgo [2] have observed a total of \(\sim\!90\) compact binary coalescences so far [3], including binary black holes (BHs) [4], binary neutron stars (NSs) [5], and BH-NS binaries [6]. One of the key goals of GW astronomy is to understand how such compact binaries form in nature. The astrophysical source properties inferred from the GW signals carry valuable clues about the origin of these binaries. In particular, the spins of the compact objects and the eccentricity of the orbit are powerful GW observables for this purpose.

If the spins are aligned with the orbital angular momentum, the orbital plane remains fixed throughout the evolution. If, on the other hand, the spins are tilted, they interact with the orbit, causing the orbital plane to precess on a timescale of several orbits [7; 8]. Spin-precession leaves a direct imprint on the GW signal and can be used to distinguish between possible binary formation mechanisms. For example, while isolated binaries formed in galactic fields are expected to have aligned spins [9], binaries formed via random encounters in dense stellar clusters can have randomly oriented spins [9]. To reliably extract this astrophysical information from GW signals, accurate waveform models [10; 11; 12; 13; 14; 15] and GW data analysis methods [16; 17; 18] that capture the effects of spin-precession have been developed.

By contrast, orbital eccentricity leads to bursts of GW radiation at every pericenter (point of closest approach) passage [19; 20], which appear as orbital-timescale modulations of the GW amplitude and frequency [21]. The eccentricity of GW signals carries information about the binary formation mechanism that is complementary to what can be learned from spin-precession alone.
For example, isolated galactic-field binaries are expected to become circularized via GW emission [19; 20] before they enter the LIGO-Virgo frequency band [9]. Because eccentric signals are considered less likely for LIGO-Virgo, most analyses to date (e.g. Ref. [3]) ignore eccentricity. However, binaries formed via random encounters in dense clusters can merge before they circularize, thereby entering the LIGO-Virgo band with a finite eccentricity [9]. Similarly, in hierarchical triple systems, the tidal effect of the tertiary can excite periodic eccentricity oscillations of the inner binary [22], resulting in high-eccentricity mergers in the LIGO-Virgo band [23]. LIGO-Virgo observations can be used to ascertain whether the assumptions of small eccentricity are valid, and to measure any nonzero eccentricity that may be present. Therefore, eccentricity measurements and/or upper limits from GW signals are highly sought after, and several groups have already analysed the observed signals to obtain information on eccentricity [24; 25; 26; 27; 28; 29; 30; 31]. As LIGO-Virgo, now joined by KAGRA [32], continue to improve [33], and with next-generation ground-based detectors expected in the 2030s [34; 35; 36; 37], future observations will enable stronger constraints on eccentricity.

The case for eccentric signals is stronger for the future space-based GW observatory LISA, which will see the earlier inspiral phase of some of the BH mergers observed by LIGO-Virgo [38; 39; 40], at which point they may still have larger eccentricity. Furthermore, mergers of supermassive black hole binaries observed by LISA may have significant eccentricity if triple dynamics played a role in overcoming the final parsec problem [41]. Finally, LISA will observe the mergers of stellar mass compact objects with supermassive black holes, the so-called extreme mass-ratio inspirals (EMRIs). EMRIs are expected to primarily form through dynamical capture, leading to high eccentricities when entering the LISA band [39].

Driven by these observational prospects, there has been an increasing effort to develop waveform models [42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60], gravitational self-force calculations [61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73], numerical relativity (NR) simulations [74; 75; 76; 77; 78; 79], and source parameter estimation methods [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73] that include the effects of eccentricity. In addition to these efforts, one important obstacle needs to be overcome in order to reliably extract eccentricity from GW signals: eccentricity is not uniquely defined in general relativity [21], and therefore most waveform models and simulations use custom internal definitions that rely on gauge-dependent quantities like binary orbital parameters or compact object trajectories. As a result, the eccentricity inferred from GW signals can be riddled with ambiguity and can even be incompatible between different models [85]. Such ambiguities propagate into any astrophysical applications, including using eccentricity to identify the binary formation mechanism. To resolve this problem, there is a need for a standardized definition of eccentricity for GW applications. A good definition of eccentricity should have the following features:
(A) To fully describe an eccentric orbit, two parameters are required: eccentricity and mean anomaly [21; 48; 87], where the mean anomaly is the fraction of the orbital period (expressed as an angle) that has elapsed since the last pericenter passage. The definition should include both eccentricity and mean anomaly. 1

Footnote 1: While mean anomaly is the most convenient choice in our experience, other choices for the second parameter [88], like the “true anomaly”, are also possible.

(B) To avoid gauge ambiguities, eccentricity and mean anomaly should be defined using only observables at future null-infinity, like the gravitational waveform.

(C) In the limit of large binary separation, the eccentricity should approach the Newtonian value, which is uniquely defined.

(D) In the limit of large mass ratio, the eccentricity should approach the test particle eccentricity on a Kerr geodesic.

(E) The standardized definition should be applicable over the full range of allowed eccentricities for bound orbits \((0-1)\). It should return zero for quasicircular inspirals and limit to one for marginally bound "parabolic" trajectories.

(F) The eccentricity and mean anomaly computation should be cheap and robust across the binary parameter space and be applicable to a broad range of waveform models and NR simulations. Thus, most models/simulations can continue to rely on their internal eccentricity definitions, as it is most convenient to conduct source parameter estimation using the internal definitions. However, if the computation is cheap and robust, one can convert posterior samples from the internal definition to the standardized one as a postprocessing step, thus putting all models and simulations on the same footing.

(G) Because the eccentricity and mean anomaly vary during a binary's evolution, one must pick a point in the evolution at which to measure them. This is generally taken to be the point where the GW frequency reaches a certain reference value \(f_{\rm ref}\) (typically 20 Hz [3]). However, because eccentricity causes modulations in the GW frequency, the same \(f_{\rm ref}\) can occur at multiple points. Therefore, the standardized definition should also prescribe how to select an unambiguous reference point for eccentric binaries.

(H) As current GW detectors are only sensitive to frequencies above a certain \(f_{\rm low}\) (typically 20 Hz [3]), when using time-domain waveforms one typically discards all times below \(t_{\rm low}\), chosen so that the GW frequency crosses 20 Hz at \(t_{\rm low}\). Once again, because the GW frequency is nonmonotonic, the standardized definition should prescribe how to select \(t_{\rm low}\) for eccentric binaries.

In this paper, we present a standardized eccentricity and mean anomaly definition that meets all of the above criteria. 2 Over the last few years, there have been several similar attempts to standardize the definition of eccentricity [82; 48; 86], or to map between different definitions [85], but these approaches either ignore the mean anomaly, or do not have the correct limits at large separation or large mass ratio [76]. More recently, Ref. [76] introduced a new definition that has the correct limits, which we adopt in this work. We rigorously test and demonstrate the robustness of our implementation on eccentric waveforms spanning the full range of eccentricities and different origins: post-Newtonian (PN) theory, NR, effective one body (EOB), and EMRIs.

Footnote 2: As described in Sec. IV.1, criterion (D) in the above list is only approximately satisfied.
While we focus on eccentric binaries without spin-precession for simplicity, we include a discussion of how our methods can be extended to spin-precessing eccentric systems. In addition, we describe how \(f_{\rm ref}\) and \(t_{\rm low}\) should be generalized for eccentric binaries, along with a discussion of the benefit of using dimensionless reference points [89]. Our computation is very cheap, and our implementation can be used directly during source parameter estimation or as a postprocessing step. We make our implementation publicly available through an easy-to-use Python package, gw_eccentricity [90].

This paper is organized as follows. In Sec. II, we describe the standardized eccentricity and mean anomaly definitions, along with a discussion of how to generalize \(f_{\rm ref}\) and \(f_{\rm low}\). In Sec. III, we provide implementation details, along with different choices for capturing the eccentricity modulations in waveforms. In Sec. IV, we demonstrate the robustness of our implementation on waveforms of different origins and over the full range of eccentricities. We finish with some concluding remarks in Sec. V.

## II Defining eccentricity

### Notation and conventions

The component masses of a binary are denoted as \(m_{1}\) and \(m_{2}\), with \(m_{1}\geq m_{2}\), total mass \(M=m_{1}+m_{2}\), and mass ratio \(q=m_{1}/m_{2}\geq 1\). The dimensionless spin vectors of the component objects are denoted as \(\mathbf{\chi}_{1}\) and \(\mathbf{\chi}_{2}\), and have a maximum magnitude of 1. For binaries without spin-precession, the direction of the orbital angular momentum \(\mathbf{L}\) is fixed, and is aligned to the \(z\)-axis by convention. For these binaries, the spins are constant and are aligned or anti-aligned with \(\mathbf{L}\), meaning that the only nonzero spin components are \(\chi_{1z}\) and \(\chi_{2z}\).

The plus (\(h_{+}\)) and cross (\(h_{\times}\)) polarizations of GWs can be conveniently represented by a single complex time series \(\mathpzc{h}=h_{+}-i\,h_{\times}\). The complex waveform on a sphere can be decomposed into a sum of spin-weighted spherical harmonic modes \(\mathpzc{h}_{\ell m}\), so that the waveform along any direction (\(\iota,\varphi_{0}\)) in the binary's source frame is given by
\[\mathpzc{h}(t,\iota,\varphi_{0})=\sum_{\ell=2}^{\infty}\sum_{m=-\ell}^{m=\ell}\mathpzc{h}_{\ell m}(t)\ _{-2}Y_{\ell m}(\iota,\,\varphi_{0}), \tag{1}\]
where \(\iota\) and \(\varphi_{0}\) are the polar and azimuthal angles on the sky in the source frame, and \({}_{-2}Y_{\ell m}\) are the spin\(=-2\) weighted spherical harmonics. Unless the total mass and/or distance are explicitly specified, we work with the waveform at future null-infinity scaled to unit total mass and distance for simplicity. We also shift the time array of the waveform such that \(t=0\) occurs at the peak of the amplitude of the dominant \((2,2)\) mode (see Footnote 3). We note, however, that the implementation in gw_eccentricity [90] handles waveforms in arbitrary units and time conventions.

Footnote 3: When generalizing to spin-precessing binaries, this should be replaced by the total waveform amplitude, defined in Eq. (5) of Ref. [10].
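To make Eq. (1) concrete, the following minimal sketch (not part of gw_eccentricity; the function names are ours) evaluates the waveform along a given direction keeping only the dominant \((2,\pm 2)\) modes. The closed-form spin\(=-2\) weighted \(\ell=2\) harmonics and the nonprecessing reflection symmetry \(\mathpzc{h}_{2,-2}=\bar{\mathpzc{h}}_{2,2}\) used here are standard results.

```python
import numpy as np

def sYlm_l2(iota, varphi0):
    """Standard closed forms of the spin -2 weighted (l=2, m=+/-2) harmonics."""
    prefac = np.sqrt(5.0 / (64.0 * np.pi))
    Y22 = prefac * (1.0 + np.cos(iota)) ** 2 * np.exp(2j * varphi0)
    Y2m2 = prefac * (1.0 - np.cos(iota)) ** 2 * np.exp(-2j * varphi0)
    return Y22, Y2m2

def h_on_sky(h22, iota, varphi0):
    """Eq. (1) truncated to the (2, +/-2) modes.

    For nonprecessing binaries, h_{2,-2}(t) = conj(h_{2,2}(t)), which we
    assume here. Returns (h_plus, h_cross), using h = h_plus - i h_cross.
    """
    Y22, Y2m2 = sYlm_l2(iota, varphi0)
    h = h22 * Y22 + np.conj(h22) * Y2m2
    return h.real, -h.imag
```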
### Defining eccentricity using the waveform

Because eccentricity is not uniquely defined in general relativity, a wide variety of definitions of eccentricity exists. At Newtonian order, eccentricity can be uniquely defined as [91]
\[e_{\rm Newt}=\frac{r^{\rm a}-r^{\rm p}}{r^{\rm a}+r^{\rm p}}, \tag{2}\]
where \(r^{\rm a}\) and \(r^{\rm p}\) are the separations at apocenter (point of furthest approach) and pericenter (point of closest approach), respectively.

Starting at 1PN order, the Keplerian parametrization can be extended to the so-called _quasi-Keplerian_ parametrization, in which three different eccentricity parameters are defined: the radial \(e_{r}\), temporal \(e_{t}\) and angular \(e_{\phi}\) eccentricities, each of which has the same Newtonian limit [21]. These quantities can be defined in terms of the conserved energy and angular momentum, but depend on the gauge used [49]. In the EOB formalism, initial conditions for the dynamics are prescribed in terms of an eccentricity parameter defined within the quasi-Keplerian parametrization [46; 47; 92; 93; 94]. Thus, the gauge dependence of the eccentricity parameter also extends to the EOB waveforms [46; 47]. For EMRIs, one typically uses an eccentricity definition based on the turning points of the underlying geodesics [42; 43; 55; 56; 57; 58; 59]. This is inherently dependent on the gauge used for the background spacetime, and picks up further gauge ambiguities at higher orders in the mass ratio. For NR waveforms, the compact object trajectories are used to define eccentricity, typically by fitting to analytical PN (or Newtonian) expressions [95; 96; 97; 98]. This also inherently depends on the gauge employed in the simulations.

A more convenient definition of eccentricity that can be straightforwardly applied to waveforms of all origins was proposed in Ref. [99]:
\[e_{\Omega_{\rm orb}}(t)=\frac{\sqrt{\Omega_{\rm orb}^{\rm p}(t)}-\sqrt{\Omega_{\rm orb}^{\rm a}(t)}}{\sqrt{\Omega_{\rm orb}^{\rm p}(t)}+\sqrt{\Omega_{\rm orb}^{\rm a}(t)}}, \tag{3}\]
where \(\Omega_{\rm orb}^{\rm p}(t)\) is an interpolant through the orbital frequency \(\Omega_{\rm orb}(t)\) evaluated at pericenter passages, and likewise for \(\Omega_{\rm orb}^{\rm a}(t)\) at apocenter passages. Because eccentricity causes a burst of radiation at each pericenter passage, the times corresponding to pericenters are identified as local maxima in \(\Omega_{\rm orb}(t)\), while apocenters are identified as local minima. Eq. (3) was used, for example, in Ref. [100] to analyze generic spin-precessing and eccentric binary BH waveforms. Unfortunately, because \(\Omega_{\rm orb}\) is computed using the compact object trajectories, Eq. (3) is also susceptible to gauge choices, especially for NR simulations. Nevertheless, Eq. (3) has the important quality that it can be applied to waveforms of all origins. Furthermore, Eq. (3) has the correct Newtonian limit. This is easily seen using Kepler's second law, \(\Omega_{\rm orb}\propto 1/r^{2}\), where \(r\) is the binary separation [91; 101]. Using this relation in Eq. (3), one finds that \(e_{\Omega_{\rm orb}}\) matches \(e_{\rm Newt}\) from Eq. (2).

The main limitation of Eq. (3) is that \(\Omega_{\rm orb}\) is gauge-dependent. To remove such dependence, one must turn to the waveform at future null-infinity, which is where our detectors are approximated to be with respect to the source. The emitted GWs can be obtained at future null-infinity, for example, by evolving Einstein's equations along null slices [102; 103; 104; 105; 106; 107; 108]. While the waveform at future null-infinity is unique up to Bondi-Metzner-Sachs (BMS) transformations, this freedom can be fixed using BMS charges [109].
In the rest of this paper, we assume this freedom has been fixed, but our method can also be applied to waveforms specified in any given frame. For a gauge-independent definition of eccentricity, we seek an analogue of Eq. (3) that depends only on the waveform \(\mathpzc{h}_{\ell m}\). The simplest possible generalization [46; 82; 86] is to replace the trajectory-dependent orbital frequency \(\Omega_{\rm orb}(t)\) in Eq. (3) with the frequency of the dominant \((2,2)\) mode, \(\omega_{22}(t)\):
\[e_{\omega_{22}}(t)=\frac{\sqrt{\omega_{22}^{\rm p}(t)}-\sqrt{\omega_{22}^{\rm a}(t)}}{\sqrt{\omega_{22}^{\rm p}(t)}+\sqrt{\omega_{22}^{\rm a}(t)}}, \tag{4}\]
where \(\omega_{22}^{\rm p}(t)\) and \(\omega_{22}^{\rm a}(t)\) are interpolants through \(\omega_{22}(t)\) evaluated at pericenters and apocenters, respectively. \(\omega_{22}\) is obtained from \(\mathpzc{h}_{22}\) as follows:
\[\mathpzc{h}_{22}(t)=A_{22}(t)\,e^{-i\phi_{22}(t)}, \tag{5}\]
\[\omega_{22}(t)=\frac{\mathrm{d}\phi_{22}(t)}{\mathrm{d}t}, \tag{6}\]
where \(A_{22}\) is the amplitude and \(\phi_{22}\) the phase of \(\mathpzc{h}_{22}\). In Eq. (4), the pericenter and apocenter times can be chosen to correspond to local maxima and minima, respectively, in \(\omega_{22}(t)\). This procedure is illustrated in the bottom-left panel of Fig. 1. It is not guaranteed that the local extrema of \(\omega_{22}\) coincide with the local extrema of \(\Omega_{\rm orb}\). Instead, we can _define_ the local extrema of \(\omega_{22}\) to correspond to pericenters and apocenters. Other choices for assigning pericenter/apocenter times and their impact on the eccentricity will be discussed in Sec. III.

Because of its simplicity and gauge-independent nature, Eq. (4) has been applied to parameterize eccentric waveforms as well as in GW data analysis [46; 82; 86; 87]. However, as shown in Ref. [76], this definition of eccentricity does not have the correct Newtonian limit at large separations. In particular, in the small eccentricity limit at Newtonian order, one obtains [76]
\[\lim_{e_{t}\to 0}e_{\omega_{22}}^{\rm 0PN}=\frac{3}{4}e_{t}+\mathcal{O}(e_{t}^{3}), \tag{7}\]
where \(e_{t}\) is the temporal eccentricity used in PN theory, which matches the Newtonian eccentricity at Newtonian order [21]. This discrepancy can be resolved by using the following transformation [76]:
\[e_{\rm gw}=\cos(\Psi/3)-\sqrt{3}\,\sin(\Psi/3), \tag{8}\]
where
\[\Psi=\arctan\left(\frac{1-e_{\omega_{22}}^{2}}{2\,e_{\omega_{22}}}\right). \tag{9}\]
Eq. (8) has the correct Newtonian limit over the full range of eccentricities [76], and we adopt this definition in this work. As we will show in Sec. IV.1, \(e_{\rm gw}\) also approximately matches the geodesic eccentricity in the extreme mass ratio limit, while \(e_{\omega_{22}}\) does not.
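The following minimal sketch illustrates Eqs. (4), (8) and (9), assuming \(\omega_{22}(t)\) is available as a clean, densely sampled array and taking pericenters/apocenters to be the local extrema of \(\omega_{22}\) (the Frequency method of Sec. III). The actual gw_eccentricity implementation handles many additional subtleties.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def e_gw_from_omega22(t, omega22):
    """Sketch of Eqs. (4), (8), (9): e_gw(t) from the (2,2)-mode frequency."""
    i_peri, _ = find_peaks(omega22)   # local maxima -> pericenters
    i_apo, _ = find_peaks(-omega22)   # local minima -> apocenters

    omega_p = CubicSpline(t[i_peri], omega22[i_peri])  # omega22^p(t)
    omega_a = CubicSpline(t[i_apo], omega22[i_apo])    # omega22^a(t)

    # e_gw(t) is only available where both interpolants are defined
    t_out = t[(t >= max(t[i_peri][0], t[i_apo][0]))
              & (t <= min(t[i_peri][-1], t[i_apo][-1]))]

    sp, sa = np.sqrt(omega_p(t_out)), np.sqrt(omega_a(t_out))
    e_om22 = (sp - sa) / (sp + sa)                      # Eq. (4)
    psi = np.arctan2(1.0 - e_om22**2, 2.0 * e_om22)     # Eq. (9)
    e_gw = np.cos(psi / 3.0) - np.sqrt(3.0) * np.sin(psi / 3.0)  # Eq. (8)
    return t_out, e_gw
```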
The top-left panel of Fig. 1 shows an example evaluation of \(e_{\rm gw}(t)\) for an NR simulation produced using the Spectral Einstein Code [110; 111] (SpEC), developed by the Simulating eXtreme Spacetimes (SXS) collaboration [112]. As expected, \(e_{\rm gw}\) monotonically decreases as the binary approaches the merger (\(t=0\)). However, while the waveform itself covers the full range of times shown, \(e_{\rm gw}(t)\) does not. This is because \(e_{\rm gw}(t)\) depends on the \(\omega_{22}^{\rm p}(t)\) and \(\omega_{22}^{\rm a}(t)\) interpolants in Eq. (4), which do not span the full time range, as shown in the bottom-left panel of Fig. 1. \(\omega_{22}^{\rm p}(t)\) is only defined between the first and last available pericenters, and \(\omega_{22}^{\rm a}(t)\) is only defined between the first and last available apocenters. Therefore, the first available time for \(e_{\rm gw}(t)\) is the maximum of the times of the first pericenter and first apocenter. Similarly, the last available time for \(e_{\rm gw}(t)\) is the minimum of the times of the last pericenter and last apocenter. Furthermore, we find that \(e_{\rm gw}(t)\) near the merger can become nonmonotonic, which is not surprising as it becomes hard to define an orbit in this regime. To avoid this nonmonotonic behavior, we discard the last two orbits of the waveform before computing \(e_{\rm gw}\). As a result, the last available time for \(e_{\rm gw}\) is the minimum of the times of the last pericenter and last apocenter in the remaining waveform, which falls at about two orbits before the peak amplitude. In addition, to successfully build the \(\omega_{22}^{\rm p}(t)\) and \(\omega_{22}^{\rm a}(t)\) interpolants in Eq. (4), we require at least two orbits in the remaining waveform. Therefore, the full waveform should include at least \(\sim\!4-5\) orbits to reliably compute \(e_{\rm gw}\).

#### ii.2.1 Extending to spin-precessing and frequency-domain waveforms

Eqs. (4) and (8) use only the \((2,2)\) mode, as it is the dominant mode of radiation [113; 114; 115], at least for binaries without spin-precession, in which the direction of the orbital angular momentum is fixed (taken to be along \(\hat{z}\) by convention). On the other hand, for spin-precessing binaries, the orbital angular momentum direction varies, and the power of the \((2,2)\) mode leaks into the other \(\ell=2\) modes, meaning that there need not be a single dominant mode of radiation. For this reason, we restrict ourselves to binaries without spin-precession in this work. We expect that our method can be generalized to spin-precessing binaries by using \(\mathpzc{h}_{22}\) in the coprecessing frame [111; 116; 117], which is a non-inertial frame that tracks the binary's spin-precession so that \(\hat{z}\) is always along the instantaneous orbital angular momentum. Alternatively, one could replace \(\omega_{22}\) in Eq. (4) with a frame-independent angular velocity [118] that incorporates information from all available waveform modes.

We also restrict ourselves to time-domain waveforms in this work. One main difficulty for frequency-domain waveforms [53; 54] is the identification of the frequencies at which pericenters and apocenters occur. This is complicated by the fact that, even for the \((2,2)\) mode, eccentricity excites higher harmonics that make it difficult to identify local extrema in the frequency domain (see e.g. Fig. 3 of Ref. [53]). Alternatively, one could simply apply an inverse Fourier transform to first convert the frequency-domain waveform to the time domain, although this can be computationally expensive for long signals.

### Defining mean anomaly using the waveform

To fully describe an eccentric orbit, two parameters are required: eccentricity and mean anomaly [21; 48; 87], where the latter is the fraction of the orbital period (expressed as an angle) that has elapsed since the last pericenter passage. Similar to \(e_{\rm gw}\), we seek a definition of mean anomaly that depends only on the waveform at future null-infinity.
This can be achieved by generalizing the Newtonian definition of mean anomaly to [48; 76; 87]
\[l_{\rm gw}(t)=2\pi\,\frac{t-t_{i}^{\rm p}}{t_{i+1}^{\rm p}-t_{i}^{\rm p}}, \tag{10}\]
defined over the interval \(t_{i}^{\rm p}\leq t<t_{i+1}^{\rm p}\) between any two consecutive pericenter passages \(t_{i}^{\rm p}\) and \(t_{i+1}^{\rm p}\). \(l_{\rm gw}\) grows linearly in time over the range \([0,2\pi)\) between \(t=t_{i}^{\rm p}\) and \(t=t_{i+1}^{\rm p}\). In Newtonian gravity, the period of the orbit \(T=t_{i+1}^{\rm p}-t_{i}^{\rm p}\) remains constant, while in general relativity, radiation reaction causes \(T\) to decrease over time, making \(l_{\rm gw}(t)\) a stepwise linear function whose slope increases as the binary approaches the merger. As the times corresponding to pericenter passages are already determined when calculating \(e_{\rm gw}\), computing \(l_{\rm gw}\) is straightforward. This procedure is illustrated in the right panel of Fig. 1.

Figure 1: Eccentricity and mean anomaly measured using the waveform from an equal-mass nonspinning eccentric NR simulation (SXS:BBH:2312 [48; 110]). _Left:_ Time evolution of the eccentricity \(e_{\rm gw}\) (upper panel) and frequency of the \((2,2)\) waveform mode \(\omega_{22}\) (lower panel). \(\omega_{22}^{\rm p}(t)\) and \(\omega_{22}^{\rm a}(t)\) are interpolants through \(\omega_{22}(t)\) evaluated at the pericenters (blue circles) and apocenters (pink squares), respectively. Eq. (8) is used to compute \(e_{\rm gw}(t)\) given \(\omega_{22}^{\rm p}(t)\) and \(\omega_{22}^{\rm a}(t)\). _Right:_ Time evolution of the mean anomaly \(l_{\rm gw}\) (upper panel) and \(\omega_{22}\) (lower panel). The vertical dashed gray lines denote the pericenter times. \(l_{\rm gw}(t)\) grows linearly in time from \(0\) to \(2\pi\) between successive pericenters (Eq. (10)).

We stress that the mean anomaly cannot be absorbed into a time or phase shift [48], and is instead an intrinsic property of the binary, like the component masses, spins and \(e_{\rm gw}\). This can be seen from the bottom-right panel of Fig. 1, showing \(\omega_{22}(t)\). Consider the first pericenter, occurring at \(t\simeq-8500M\), for which \(l_{\rm gw}=0\). First, because \(\omega_{22}\) is insensitive to phase shifts, one cannot apply a phase shift to change the mean anomaly at \(t\simeq-8500M\) away from \(l_{\rm gw}=0\). Similarly, one cannot apply a time shift so that the mean anomaly at \(t\simeq-8500M\) is changed, without simultaneously also changing the frequency at that time (because the time shift also applies to \(\omega_{22}(t)\)). In other words, to change the mean anomaly at a fixed time before the merger, one also needs to change the frequency at a fixed time before the merger, which results in a different physical system. Ignoring mean anomaly in waveform models and/or parameter estimation can result in systematic biases in the recovered source parameters [48; 88; 119].
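A minimal numerical sketch of Eq. (10), assuming the pericenter times have already been located (the function name is ours):

```python
import numpy as np

def mean_anomaly(t, t_peri):
    """Eq. (10): piecewise-linear mean anomaly from pericenter times.

    Valid for t_peri[0] <= t < t_peri[-1]; grows from 0 to 2*pi between
    consecutive pericenters, resetting to 0 at each pericenter passage.
    """
    t = np.asarray(t)
    i = np.searchsorted(t_peri, t, side="right") - 1  # last pericenter index
    i = np.clip(i, 0, len(t_peri) - 2)
    return 2.0 * np.pi * (t - t_peri[i]) / (t_peri[i + 1] - t_peri[i])
```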
### Generalizing the reference frequency \(f_{\text{ref}}\)

Binary parameters like the component spin directions and the orientation with respect to the observer, as well as eccentricity and mean anomaly, can vary during a binary's evolution. Therefore, when measuring binary parameters from a GW signal, one needs to specify at which point of the evolution the measurement should be done. This is typically chosen to be the point at which the GW frequency crosses a reference frequency \(f_{\text{ref}}\), with a typical choice of \(f_{\text{ref}}=20\) Hz [3], as that is approximately where the sensitivity band of current ground-based detectors begins.

For quasicircular binaries without spin-precession, the GW frequency increases monotonically, and \(f_{\text{ref}}\) can be uniquely associated with a reference time \(t_{\text{ref}}\). For spin-precessing, quasicircular binaries, while \(\omega_{22}\) in the inertial frame can be nonmonotonic, one can use the frequency computed in the coprecessing frame, which is always monotonically increasing [10; 111]. Unfortunately, no such frame exists for eccentric binaries, and \(\omega_{22}\) becomes nonmonotonic if the eccentricity is sufficiently high (see Fig. 1). Therefore, unique specification of a reference point via a frequency \(f_{\text{ref}}\) requires a generalization of \(\omega_{22}\) that is monotonically increasing and approaches \(\omega_{22}\) in the quasicircular limit. In the following we discuss two different ways to accomplish this, and point out why the second is superior.

#### ii.4.1 Mean of \(\omega_{22}^{\text{p}}(t)\) and \(\omega_{22}^{\text{a}}(t)\)

A simple method to compute a monotonically increasing frequency for eccentric binaries is to take the mean of the interpolants through the frequencies at pericenters (\(\omega_{22}^{\text{p}}(t)\)) and apocenters (\(\omega_{22}^{\text{a}}(t)\)), both of which are monotonically increasing functions of time:
\[\omega_{22}^{\text{mean}}(t)=\frac{1}{2}\left[\omega_{22}^{\text{p}}(t)+\omega_{22}^{\text{a}}(t)\right], \tag{11}\]
with the reference time defined as \(\omega_{22}^{\text{mean}}(t_{\text{ref}})=2\pi f_{\text{ref}}\). As \(\omega_{22}^{\text{p}}(t)\) and \(\omega_{22}^{\text{a}}(t)\) are already constructed when computing \(e_{\text{gw}}\), there is no additional computational cost. Furthermore, as \(\omega_{22}^{\text{p}}\) and \(\omega_{22}^{\text{a}}\) approach \(\omega_{22}\) in the quasicircular limit, so does \(\omega_{22}^{\text{mean}}\). This method was used to set the reference frequency in Ref. [86]. Figure 2 shows examples of \(\omega_{22}^{\text{mean}}(t)\) for waveforms produced using the SEOBNRv4EHM [46] eccentric EOB model, for three different values of the model's internal eccentricity parameter \(e_{\text{eob}}\), defined at a time \(t_{0}=-4.93\) s before the peak amplitude.

Figure 2: Different methods to construct a monotonically increasing frequency to replace \(\omega_{22}(t)\), in order to set the reference frequency \(f_{\text{ref}}\) for eccentric binaries. We consider two different approaches: (i) \(\omega_{22}^{\text{mean}}(t)\), the mean of \(\omega_{22}^{\text{p}}(t)\) and \(\omega_{22}^{\text{a}}(t)\), and (ii) \(\langle\omega_{22}\rangle(t)\), an interpolant through the orbit averaged \(\omega_{22}\) (Eq. (12)). We show SEOBNRv4EHM waveforms with three different eccentricities; the binary parameters are given in the figure text. While the two approaches agree for small eccentricities, they deviate significantly at large eccentricities. We adopt \(\langle\omega_{22}\rangle(t)\) as it captures the correct frequency scale in an orbit-averaged sense (Sec. II.4).

#### ii.4.2 Orbit averaged \(\omega_{22}\)

Alternatively, one can use the orbit average of \(\omega_{22}\) in fixing the reference point. Between any two consecutive pericenters \(t_{i}^{\text{p}}\) and \(t_{i+1}^{\text{p}}\) we define
\[\langle\omega_{22}\rangle_{i}^{\text{p}}=\frac{1}{t_{i+1}^{\text{p}}-t_{i}^{\text{p}}}\,\int_{t_{i}^{\text{p}}}^{t_{i+1}^{\text{p}}}\omega_{22}(t)\,\text{d}t=\frac{\phi_{22}(t_{i+1}^{\text{p}})-\phi_{22}(t_{i}^{\text{p}})}{t_{i+1}^{\text{p}}-t_{i}^{\text{p}}}, \tag{12}\]
and associate \(\langle\omega_{22}\rangle_{i}^{\text{p}}\) with the midpoint between \(t_{i}^{\text{p}}\) and \(t_{i+1}^{\text{p}}\):
\[\langle t\rangle_{i}^{\text{p}}=\frac{1}{2}\left(t_{i}^{\text{p}}+t_{i+1}^{\text{p}}\right). \tag{13}\]
Applying this procedure to all consecutive pairs of pericenter
times, we obtain the set \(\{(\langle t\rangle_{i}^{\rm p},\,\langle\omega_{22}\rangle_{i}^{\rm p})\}\). Similarly, using all consecutive pairs of apocenter times \(t_{i}^{\rm a}\) and \(t_{i+1}^{\rm a}\), we obtain the set \(\{(\langle t\rangle_{i}^{\rm a},\,\langle\omega_{22}\rangle_{i}^{\rm a})\}\). Taking the union of these two datasets, we build a cubic spline interpolant in time to obtain \(\langle\omega_{22}\rangle(t)\). The resulting orbit averaged frequency \(\langle\omega_{22}\rangle(t)\) is also monotonically increasing and reduces to \(\omega_{22}(t)\) in the quasicircular limit. The reference time associated with a reference frequency is now determined via
\[\langle\omega_{22}\rangle(t_{\rm ref})=2\pi f_{\rm ref}. \tag{14}\]
This method was used in Refs. [76; 119]. Compared to \(\omega_{22}^{\rm mean}(t)\), \(\langle\omega_{22}\rangle(t)\) has the added costs of computing orbit averages and constructing a new interpolant. The orbit averages are very cheap to compute as they can be written in terms of phase differences (Eq. (12)). The cost of the interpolant scales with the number of orbits, but it is generally also cheap to construct.

Figure 2 also shows \(\langle\omega_{22}\rangle(t)\) for the same SEOBNRv4EHM waveforms. While \(\omega_{22}^{\rm mean}(t)\) and \(\langle\omega_{22}\rangle(t)\) agree at small eccentricities, they deviate significantly at large eccentricities. Unlike \(\omega_{22}^{\rm mean}(t)\), \(\langle\omega_{22}\rangle(t)\) has the additional property, albeit only in an orbit-averaged sense, that at the time \(t_{\rm ref}\) where \(\langle\omega_{22}\rangle(t_{\rm ref})=2\pi f_{\rm ref}\), one GW cycle occurs over a time scale of \(1/f_{\rm ref}\). This also explains why, for the high eccentricity case in Fig. 2 (bottom panel), \(\langle\omega_{22}\rangle\) follows the general trend of \(\omega_{22}\) more closely than \(\omega_{22}^{\rm mean}\). For these reasons, we will adopt \(\langle\omega_{22}\rangle\) and Eq. (14) in the rest of the paper.
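A sketch of Eqs. (12)-(14), assuming the pericenter/apocenter times and a callable phase \(\phi_{22}(t)\) are available; the spline and root-finding choices are illustrative assumptions rather than the exact gw_eccentricity internals.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def orbit_averages(t_extrema, phi22):
    """Eq. (12): orbit averages of omega22 from phase differences,
    assigned to interval midpoints (Eq. (13))."""
    t = np.asarray(t_extrema)
    avg = np.diff(phi22(t)) / np.diff(t)
    return 0.5 * (t[:-1] + t[1:]), avg

def t_ref_from_f_ref(t_peri, t_apo, phi22, f_ref):
    """Eq. (14): invert the monotonic <omega22>(t) to obtain t_ref.

    Assumes 2*pi*f_ref lies within the range covered by <omega22>(t).
    """
    tp, wp = orbit_averages(t_peri, phi22)
    ta, wa = orbit_averages(t_apo, phi22)
    t_all = np.concatenate([tp, ta])
    order = np.argsort(t_all)
    avg = CubicSpline(t_all[order], np.concatenate([wp, wa])[order])
    return brentq(lambda t: avg(t) - 2.0 * np.pi * f_ref,
                  t_all[order][0], t_all[order][-1])
```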
### Selecting a good reference point

Given a reference frequency \(f_{\rm ref}\), Sec. II.4 describes how it can be used to pick a reference time, \(t_{\rm ref}\), in the binary's evolution. Another important choice is what frequency to use for \(f_{\rm ref}\). Most current analyses for ground-based detectors use \(f_{\rm ref}=20\) Hz [3], but we argue that this may not be suitable for eccentric binaries. Setting \(f_{\rm ref}=20\) Hz means that the reference time is chosen to be the point where the observed GW frequency (or its orbit average) at the detector crosses 20 Hz. However, the observed GW signals are redshifted because of cosmological expansion, and the observed GW frequency depends on the distance between the source and the detector. Two identical binaries placed at different distances would therefore reach an observed frequency of 20 Hz at different points in their evolution. Because the eccentricity varies during the evolution, the measured eccentricities for these binaries will be different when they reach \(f_{\rm ref}=20\) Hz at the detector! This is particularly problematic for applications like constraining the astrophysical distribution of eccentricities of GW sources, as the same source can be mistaken to have two different eccentricities.

All binary parameters that vary during a binary's evolution, like spin directions, could be prone to this problem. However, because spin tilts vary over spin-precession time scales spanning many orbits, this has not been a significant issue so far when constraining the astrophysical spin distribution [120], with the exception of Ref. [121], where this effect was found to be important when modeling the full 6D spin distribution. Eccentricity, on the other hand, can change rapidly on an orbital time scale, especially in the late stages near the merger (see Fig. 1).

One way to avoid this problem is to use the GW frequency defined in the source frame instead of the detector frame. However, this requires assuming a cosmological model to compute the redshift between the two frames. This can be problematic for applications such as independently extracting cosmological parameters like the Hubble parameter from GW signals [122]. Alternatively, one can use a dimensionless reference frequency \(Mf_{\rm ref}\) or time \(t_{\rm ref}/M\) as proposed by Ref. [89], where \(M\) is the total mass in the detector frame. Both of these choices have the benefit of not depending on the distance to the source, as the total mass measured in the detector frame is also redshifted and exactly cancels out the redshift of \(f_{\rm ref}\) and \(t_{\rm ref}\). Ref. [89] proposed reference points of \(t_{\rm ref}/M=-100\) (where \(t=0\) is at the peak of the GW amplitude) and \(Mf_{\rm ref}=6^{-3/2}\) (the Schwarzschild innermost stable circular orbit (ISCO) frequency), as these always occur close to the merger for comparable mass binaries, and certain spin parameters like the orbital-plane spin angles are best measured near the merger.
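As a concrete illustration of the redshift cancellation above: converting a dimensionless reference frequency \(Mf\) to Hz uses the detector-frame (redshifted) total mass, so the redshift factor drops out of \(Mf\) itself. A minimal sketch (the numerical value of \(GM_{\odot}/c^{3}\) is standard):

```python
# Both the detector-frame total mass and the detector-frame frequency carry
# the same (1+z) redshift factor, so the dimensionless product M*f is
# redshift-invariant.
T_SUN = 4.925490947e-6  # G * M_sun / c^3 in seconds (standard value)

def f_in_hz(dimensionless_Mf, total_mass_detector_frame_Msun):
    """Convert a dimensionless frequency M*f to Hz for a given
    detector-frame total mass (in solar masses)."""
    return dimensionless_Mf / (total_mass_detector_frame_Msun * T_SUN)
```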
For measuring eccentricity, an earlier dimensionless time or frequency may be more appropriate, as eccentricity can be radiated away before the binary approaches merger. A more straightforward approach could be to set the reference point at a fixed number of orbits before a fixed dimensionless time (\(t_{\rm ref}/M\)) or dimensionless orbit-averaged frequency (\(M\langle\omega_{22}\rangle\)). Here, we define one orbit as the period between two pericenter passages, as measured from the waveform. As the number of orbits defined with respect to a dimensionless time/frequency is also unaffected by the redshift, this serves the same purpose as a dimensionless time/frequency. The number of orbits also scales more naturally to EMRI systems, while a dimensionless time/frequency may not. A similar approach was recently adopted by Ref. [84]. Another advantage of using a fixed number of orbits before a dimensionless time/frequency is that, by using pericenters to define the number of orbits, we can always measure eccentricity at a fixed mean anomaly of \(l_{\rm gw}=0\). This can make it simpler to report posteriors for eccentric GW signals by reducing the dimensionality by one. Similarly, this can make it easier to connect GW observations to astrophysical predictions for GW populations, as the predictions would just need to be made at a single mean anomaly value. However, we stress that mean anomaly would still need to be included as a parameter in waveform models and parameter estimation; it is only when computing the eccentricity from the waveform predictions in postprocessing that this simplification occurs.

To summarize, while the most appropriate choice will need to be determined by analyzing eccentric GW signals in a manner similar to Ref. [89], we propose that the reference point be chosen to be a fixed number of orbits (e.g. 10) before a fixed dimensionless time (e.g. \(t_{\text{ref}}/M=-100\)) or a fixed dimensionless orbit-averaged frequency (e.g. \(M\langle\omega_{22}\rangle=2\pi\,6^{-3/2}\), the Schwarzschild ISCO frequency). While not all GW signals will enter the detector frequency band with \(\sim 10\) orbits to go before the merger, this can be achieved by always generating GW templates with at least 10 orbits when analyzing the GW signals. One important question that remains is whether using a reference point that falls outside the detector band leads to systematic biases or complications during parameter estimation. We expect that as long as the number of orbits by which the reference point falls outside the band is small, such effects should be small, but we leave this investigation to future work.

### Truncating eccentric time domain waveforms

GW detectors are most sensitive over certain frequency bands (\(\sim 20\) Hz to \(\sim 10^{3}\) Hz for LIGO-Virgo), and waveform predictions need to include all physical GW frequencies present in this region. For frequency domain waveform models, this is achieved by evaluating the model starting at an initial frequency \(f_{\text{low}}=20\) Hz. On the other hand, time-domain waveform models need to be evaluated starting at an initial time \(t_{\text{low}}\), chosen so that the GW signal at earlier times does not contain any frequencies above \(f_{\text{low}}\). In other words, the part of the time domain waveform that is not included (\(t<t_{\text{low}}\)) does not contribute to the GW signal in the detector frequency band.

For quasicircular waveform models with only the \((2,2)\) mode, \(t_{\text{low}}\) can be chosen to be the time when
\[\omega_{22}(t_{\text{low}})=2\pi\,f_{\text{low}}. \tag{15}\]
Because \(\omega_{22}(t)\) is a monotonically increasing function for quasicircular binaries, frequencies \(>f_{\text{low}}\) only occur at times \(>t_{\text{low}}\). This is no longer the case for eccentric binaries, as \(\omega_{22}(t)\) can be nonmonotonic. An example is shown in Fig. 3, where we see that \(\omega_{22}(t)/(2\pi)\) crosses \(f_{\text{low}}=20\) Hz at several different times. One could choose the earliest of these crossings as \(t_{\text{low}}\), but this only works if the original waveform is long enough to include all such crossings. If the original waveform only includes a subset of the crossings, this approach cannot guarantee that the discarded waveform only contains frequencies \(<f_{\text{low}}\).

To ensure all frequencies above \(f_{\text{low}}\) are included, we need to generalize Eq. (15) to eccentric binaries. A seemingly natural choice is to replace \(\omega_{22}(t)\) in Eq. (15) with the monotonically increasing \(\langle\omega_{22}\rangle(t)\) from Eq. (12):
\[\langle\omega_{22}\rangle(t_{\text{low}})=2\pi\,f_{\text{low}}. \tag{16}\]
The pink dashed line in Fig. 3 shows \(\langle\omega_{22}\rangle/(2\pi)\), and the frequencies retained when setting \(t_{\text{low}}\) using Eq. (16) are also marked in pink.
However, in this approach the section colored in blue is discarded, even though it still includes some frequencies above \(f_{\text{low}}=20\) Hz. Instead, we propose that \(t_{\text{low}}\) should be set using the interpolant through pericenter frequencies, \(\omega_{22}^{\text{p}}(t)\), which is already constructed when evaluating Eqs. (4) and (8):
\[\omega_{22}^{\text{p}}(t_{\text{low}})=2\pi\,f_{\text{low}}. \tag{17}\]
Because \(\omega_{22}^{\text{p}}(t)\) represents the upper envelope of \(\omega_{22}(t)\), this approach guarantees that the discarded waveform (\(t<t_{\text{low}}\)) does not contain any frequencies \(>f_{\text{low}}\). This is demonstrated in Fig. 3, where we see that the blue section is included if Eq. (17) is used to set \(t_{\text{low}}\).

Figure 3: How to truncate time domain eccentric waveforms while retaining all frequencies above \(f_{\text{low}}=20\) Hz. The orange, blue and pink curves show different sections of \(\omega_{22}(t)\) for an eccentric SEOBNRv4EHM waveform (with binary parameters shown in the title). If we discard all times below the point where the orbit-averaged frequency \(\langle f_{22}\rangle\equiv\langle\omega_{22}\rangle/(2\pi)\) (pink dashed curve) crosses \(f_{\text{low}}=20\) Hz, only the pink section is retained and the blue section is discarded, even though it contains some frequencies above 20 Hz. On the other hand, using \(f_{22}^{\text{p}}\equiv\omega_{22}^{\text{p}}/(2\pi)\) (blue dashed curve) to pick this time ensures that the discarded region (orange) contains no frequencies above 20 Hz.

So far, we only considered the \((2,2)\) mode when determining \(t_{\text{low}}\). The frequency of the \((\ell,m)\) waveform mode (Eq. (1)) can be approximated during the inspiral as \(\omega_{\ell m}(t)\sim(m/2)\,\omega_{22}(t)\) [21]. Therefore, for models containing higher modes, Eq. (17) should be replaced with
\[\omega_{22}^{\text{p}}(t_{\text{low}})=\left(\frac{2}{m_{\text{max}}}\right)\,2\pi\,f_{\text{low}}, \tag{18}\]
where \(m_{\text{max}}\) is the largest \(m\) among all included modes.
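A sketch of Eqs. (17) and (18), assuming the pericenter-frequency interpolant \(\omega_{22}^{\rm p}(t)\) from the eccentricity computation is available as a callable (function and argument names are ours):

```python
import numpy as np
from scipy.optimize import brentq

def t_low_from_f_low(omega22_p, t_first, t_last, f_low, m_max=2):
    """Eqs. (17)-(18): choose t_low so that the discarded data (t < t_low)
    contain no frequencies above f_low, using the upper envelope
    omega22^p(t). m_max is the largest m among the included modes."""
    target = (2.0 / m_max) * 2.0 * np.pi * f_low
    if omega22_p(t_first) >= target:
        return t_first  # the whole waveform is already above f_low
    return brentq(lambda t: omega22_p(t) - target, t_first, t_last)
```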
### Summary

Our procedure to compute the eccentricity and mean anomaly from the waveform can be summarized as follows:

1. Find the times corresponding to the pericenters and apocenters, which we denote as \(\{t_{i}^{\rm p}\}\) and \(\{t_{i}^{\rm a}\}\), respectively. In the example in Fig. 1, \(\{t_{i}^{\rm p}\}\) and \(\{t_{i}^{\rm a}\}\) are identified as the local maxima and minima, respectively, of \(\omega_{22}\), but other methods for locating these times will be discussed in Sec. III.
2. Evaluate \(\omega_{22}(t)\) at \(\{t_{i}^{\rm p}\}\) and \(\{t_{i}^{\rm a}\}\) to get the frequencies at pericenters and apocenters, and construct interpolants in time, \(\omega_{22}^{\rm p}(t)\) and \(\omega_{22}^{\rm a}(t)\), using these data. We use cubic splines for interpolation (see Footnote 4).
3. Obtain \(e_{\omega_{22}}(t)\) using \(\omega_{22}^{\rm p}(t)\) and \(\omega_{22}^{\rm a}(t)\) in Eq. (4). Finally, apply the transformation in Eq. (8) to obtain the eccentricity \(e_{\rm gw}(t)\).
4. Use the pericenter times \(\{t_{i}^{\rm p}\}\) in Eq. (10) to compute the mean anomaly \(l_{\rm gw}(t)\).
5. To get the eccentricity and mean anomaly at a reference frequency \(f_{\rm ref}\), first use the orbit averaged frequency \(\langle\omega_{22}\rangle(t)\) (Eq. (12)) to get the corresponding \(t_{\rm ref}\). However, instead of using a fixed \(f_{\rm ref}\) in Hz, a fixed dimensionless frequency or time, or a fixed number of orbits before a dimensionless frequency/time, might be a better choice for eccentric binaries (Sec. II.5).
6. Use \(\omega_{22}^{\rm p}(t)\) (Eq. (18)) to truncate time-domain signals at a given start frequency \(f_{\rm low}\), so that the discarded waveform does not contain any frequencies above \(f_{\rm low}\).

Footnote 4: When the number of pericenters or apocenters is not sufficient to build a cubic spline, the order of the spline is reduced accordingly.

## III Methods to locate pericenters and apocenters

In Sec. II and Fig. 1, the pericenter and apocenter times are taken to correspond to local extrema in \(\omega_{22}(t)\). Identifying these times is a crucial step in our definitions of eccentricity and mean anomaly, as well as in the generalizations of \(f_{\rm ref}\) and \(f_{\rm low}\). In this section, we explore several different alternatives for identifying the pericenter/apocenter times, along with their benefits and drawbacks. Instead of \(\omega_{22}(t)\), these methods use the extrema of various other waveform quantities (like the amplitude) to set the pericenter/apocenter times. Therefore, the pericenter/apocenter times can depend on the method used, and each of these alternatives should be viewed as a new _definition_ of eccentricity and mean anomaly. However, all of these methods satisfy the criteria listed in Sec. I for a good definition of eccentricity, and as we will show in Sec. IV, the differences between the different methods are generally small. We denote the waveform quantity whose extrema are used as \(U(t)\). Given \(U(t)\), we use the find_peaks routine within SciPy [123] to locate the extrema.

### Frequency and amplitude

The most straightforward choice for \(U(t)\) is
\[U(t)=\omega_{22}(t), \tag{19}\]
as considered in Fig. 1. The local maxima in \(U(t)\) are identified as the pericenters, while the local minima are identified as apocenters. We refer to this method as the Frequency method. Because \(\omega_{22}(t)\) relies on a time derivative (see Eq. (6)), it can be noisy in some cases, especially for NR waveforms. Such noise can lead to spurious extrema in \(\omega_{22}(t)\) that can be mistaken for pericenters/apocenters. Such problems can be avoided by locating the extrema of the amplitude of the \((2,2)\) mode, i.e.
\[U(t)=A_{22}(t). \tag{20}\]
We refer to this method as the Amplitude method and recommend it over the Frequency method.

The simplicity of the Frequency and Amplitude methods comes with the drawback that these methods fail for small eccentricities, as illustrated in Fig. 4. The top two rows show \(\omega_{22}\) and \(A_{22}\) for an eccentric SEOBNRv4EHM [46] waveform. While local extrema can be found at early times, as eccentricity is radiated away, the prominence of the extrema decreases until local extrema cease to exist. The onset of this breakdown is signaled by the pericenters and apocenters converging towards each other, as seen in the figure insets. This occurs because at small eccentricity, the secular growth in \(\omega_{22}\) and \(A_{22}\) dominates over the modulations due to eccentricity. We find that for eccentricities \(e_{\rm gw}\lesssim 10^{-2}\ldots 10^{-3}\) (see Sec. IV), the Frequency and Amplitude methods can fail to measure the eccentricity. This breakdown point can be approximately predicted by the following order-of-magnitude estimate.

#### iii.1.1 Estimating the breakdown point of the Frequency method

The inspiral rate of a binary in a quasicircular orbit at Newtonian order is given by (e.g. Ref. [21])
\[\frac{\mathrm{d}\omega_{22}^{\rm circ}}{\mathrm{d}t}=\frac{192}{5}\,\nu\,\frac{1}{M^{2}}\left(\frac{M\omega_{22}^{\rm circ}}{2}\right)^{11/3}, \tag{21}\]
where \(\nu=q/(1+q)^{2}\) is the symmetric mass ratio. For small eccentricities, eccentricity induces an oscillatory component in the frequency,
\[\omega_{22}(t)\approx\omega_{22}^{\rm circ}(t)+A\sin(\omega_{r}t), \tag{22}\]
where \(\omega_{r}\) denotes the radial oscillation frequency. The amplitude \(A\) of the oscillations can be related to eccentricity by substituting into Eq. (4) and expanding to first order in \(A\), yielding \(A=2e_{\omega_{22}}\omega_{22}^{\rm circ}\). For a given short time interval, we take \(A\) to be constant. Extrema in \(\omega_{22}(t)\) correspond to zeros of the time derivative
\[\frac{\mathrm{d}\omega_{22}}{\mathrm{d}t}\approx\frac{\mathrm{d}\omega_{22}^{\mathrm{circ}}}{\mathrm{d}t}+A\omega_{r}\cos(\omega_{r}t). \tag{23}\]
Such zeros exist only if the oscillatory component dominates over the inspiral part, \(A\omega_{r}\gtrsim\mathrm{d}\omega_{22}^{\mathrm{circ}}/\mathrm{d}t\), i.e. for sufficiently large eccentricities:
\[e_{\omega_{22}}\gtrsim\frac{48}{5}\,\nu\left(\frac{M\omega_{22}}{2}\right)^{5/3}\frac{\omega_{22}}{2\omega_{r}}. \tag{24}\]
Here we have dropped the subscript "circ", as \(\omega_{22}^{\mathrm{circ}}\approx\omega_{22}\) at leading order in the assumed small eccentricity. Neglecting pericenter advance, i.e. setting \(\omega_{22}/(2\omega_{r})=1\), and noting that for small eccentricity \(e_{\omega_{22}}\approx(3/4)\,e_{\mathrm{gw}}\) (Eq. (7)), we find that local extrema in \(\omega_{22}(t)\) are only present if
\[e_{\mathrm{gw}}\gtrsim\frac{192}{15}\,\nu\left(\frac{M\omega_{22}}{2}\right)^{5/3}. \tag{25}\]
The systems considered in this paper have \(\omega_{22}\sim 0.02/M\dots 0.1/M\) (e.g. Figs. 1 or 4), so that for comparable mass binaries, Eq. (25) predicts a breakdown of the Frequency method for \(e_{\mathrm{gw}}\sim 10^{-3}\dots 10^{-2}\) (for instance, \(\nu=1/4\) and \(M\omega_{22}=0.05\) give \(e_{\mathrm{gw}}\gtrsim 7\times 10^{-3}\)). This motivates us to consider alternative methods to detect local extrema that also work for small eccentricities. In the following, we will consider different methods that first subtract the secular growth in \(\omega_{22}\) or \(A_{22}\), and use the remainder as \(U(t)\).

### Residual frequency and residual amplitude

We begin with a simple extension of the Frequency method, which we refer to as the ResidualFrequency method:
\[U(t)=\Delta\omega_{22}(t)\equiv\omega_{22}(t)-\omega_{22}^{\mathrm{circ}}(t), \tag{26}\]
and likewise the ResidualAmplitude method:
\[U(t)=\Delta A_{22}(t)\equiv A_{22}(t)-A_{22}^{\mathrm{circ}}(t), \tag{27}\]
where \(\omega_{22}^{\mathrm{circ}}\) and \(A_{22}^{\mathrm{circ}}\) are the frequency and amplitude of the \((2,2)\) mode for a quasicircular counterpart of the eccentric binary. We define the quasicircular counterpart as a binary with the same component masses and spins, but with zero eccentricity. The time array of the quasicircular waveform is shifted so that its peak time coincides with that of the eccentric waveform. Once again, the local maxima in \(U(t)\) are identified as the pericenters, while the local minima are identified as apocenters.
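A minimal sketch of the ResidualAmplitude idea, Eq. (27), assuming the eccentric and quasicircular amplitudes are already peak-aligned and sampled on a common time grid:

```python
import numpy as np
from scipy.signal import find_peaks

def residual_amplitude_extrema(A22_ecc, A22_circ):
    """Eq. (27): pericenters/apocenters as extrema of the residual
    amplitude Delta A22 = A22 - A22^circ.

    Returns (pericenter indices, apocenter indices) into the time grid."""
    dA = np.asarray(A22_ecc) - np.asarray(A22_circ)
    i_peri, _ = find_peaks(dA)   # local maxima -> pericenters
    i_apo, _ = find_peaks(-dA)   # local minima -> apocenters
    return i_peri, i_apo
```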
Eqs. (26) and (27) are motivated by the observation [48] that the quasicircular counterpart waveform captures the secular trend of the eccentric waveform when the peak times of the waveforms are aligned. This is demonstrated for an example eccentric SEOBNRv4EHM waveform in Fig. 5. The quasicircular counterpart falls approximately at the midpoint between the peaks and troughs of the amplitude and frequency of the eccentric waveform. We find this to be the case for the full range of eccentricities, and for waveforms of all origins.

Figure 5: Comparison of the amplitude (top) and the frequency (bottom) of an eccentric SEOBNRv4EHM waveform to those of its quasicircular counterpart. The binary parameters are shown in the figure text. Both waveforms are aligned so that \(t=0\) occurs at the peak of \(A_{22}\). The quasicircular counterpart captures the secular growth in the amplitude and frequency of the eccentric waveform.

For an eccentric waveform model, the quasicircular counterpart can be easily generated by evaluating the model with eccentricity set to zero while keeping the other parameters fixed. For NR waveforms, one can use a quasicircular waveform model; in this paper, we use the IMRPhenomT [14] quasicircular waveform model. Similarly to how the different methods to locate extrema are part of the eccentricity definition, the choice of quasicircular model should also be considered to be a part of the definition. The impact of the choice of the quasicircular model on eccentricity is generally small and will be explored further in Sec. IV.4.

By first subtracting the secular growth in the eccentric waveform, the ResidualFrequency and ResidualAmplitude methods can detect local extrema even for small eccentricities. The bottom two rows of Fig. 4 show an example where these methods succeed while the Frequency and Amplitude methods fail. Once again, between ResidualFrequency and ResidualAmplitude, we recommend ResidualAmplitude as it is less prone to numerical noise for NR waveforms.

Figure 4: Limitations of the Amplitude and Frequency methods in identifying pericenters (blue circles) and apocenters (pink squares) for a low eccentricity waveform. These methods (top two rows) detect only the first few pericenters/apocenters and fail once sufficient eccentricity is radiated away. On the other hand, the ResidualAmplitude and ResidualFrequency methods (bottom two rows) can detect all of the pericenters/apocenters present. The waveform is generated using SEOBNRv4EHM and the binary parameters are given in the title.

While the ResidualFrequency and ResidualAmplitude methods are robust and straightforward to implement, their main drawback is that they require the evaluation of a quasicircular waveform, which increases the computational expense. We next consider methods that model the secular trend without relying on additional waveform evaluations.

### Frequency fits and amplitude fits

The ResidualAmplitude and ResidualFrequency methods described in Sec. III.2 have the disadvantage that they require a quasicircular reference waveform for subtraction. Such a reference waveform may not be available, or deviations in the reference waveform may lead to differences in the recovered eccentricity (see Sec. IV.4). The FrequencyFits method avoids the need for a reference waveform by self-consistently fitting the envelopes \(\omega_{22}^{\rm p}(t)\) (for pericenters) and \(\omega_{22}^{\rm a}(t)\) (for apocenters) that appear in Fig. 1, an idea introduced in Lewis _et al._ [100]. To simplify the explanation, we will first describe this method as applied to locating pericenters. The idea lies in considering a _local_ stretch of data \(\omega_{22}(t)\) for \(t\in[t_{L},t_{R}]\), in which we identify the times \(T_{\alpha}\) (labeled by \(\alpha\)) as local maxima of the envelope-subtracted frequency (Eq. (28)), while self-consistently constructing the envelope fit \(\omega_{22}^{\rm fit,p}(t)\) through \(\omega_{22}(t)\) evaluated at \(T_{\alpha}\). The fit \(\omega_{22}^{\rm fit,p}(t)\), the local maxima times \(T_{\alpha}\), and the interval \([t_{L},t_{R}]\) are iteratively refined, and the central \(T_{\alpha}\) is identified as a pericenter time.
To make this idea precise, we start by choosing a time \(\hat{t}\), which will roughly correspond to the middle of the fitting interval. We now seek to determine a fitting function \(\omega_{22}^{\rm fit,p}(t)\) through the pericenter frequencies, valid in a time interval \([t_{L},t_{R}]\) encompassing \(\hat{t}\), as well as times \(T_{\alpha}\in[t_{L},t_{R}]\), \(\alpha=0,\ldots,2N\) (with \(N=3\), as explained after Eq. (31)). These quantities are determined in a self-consistent manner such that the following conditions are all satisfied:

1. \(T_{\alpha}\) are local maxima of the envelope-subtracted frequency \(U(t)\) given by
\[U(t)=\omega_{22}(t)-\omega_{22}^{\rm fit,p}(t). \tag{28}\]
2. \(\omega_{22}^{\rm fit,p}(t)\) is a fit through the \(2N+1\) evaluations of \(\omega_{22}(t)\) at the times \(T_{\alpha}\), i.e. through \((T_{\alpha},\omega_{22}(T_{\alpha}))\) in the interval \([t_{L},t_{R}]\),
\[\omega_{22}^{\rm fit,p}(T_{\alpha})\approx\omega_{22}(T_{\alpha}),\quad\alpha=0,\ldots,2N. \tag{29}\]
3. The time interval \([t_{L},t_{R}]\) contains precisely \(2N+1\) local maxima of \(U(t)\), where the first \(N\) are before \(\hat{t}\), and the others after.

If these conditions are met, then the extremum in the middle, \((T_{N},\omega_{22}(T_{N}))\), is identified as a pericenter passage and included in the overall list of pericenters for the inspiral. This procedure is illustrated in Fig. 6. The top panel shows \(\omega_{22}(t)\) in orange, for a configuration with eccentricity so small that \(\omega_{22}(t)\) does not have extrema. The locations of the identified local maxima \((T_{\alpha},\,\omega_{22}(T_{\alpha}))\) are indicated by blue circles, with the middle one (corresponding to \(T_{N}\)) being filled. The lower panel shows the envelope-subtracted function, whose maxima determine the \(T_{\alpha}\).

Figure 6: Illustration of the FrequencyFits method. _Left:_ The blue circles indicate the \(2N+1=7\) extrema through which the fitting function Eq. (30) passes. The lower panel shows the envelope-subtracted data from which the extrema \(T_{\alpha}\) are determined. The solid blue circle indicates the central extremum, whose parameters are used for the eccentricity definition. The pink square and the pink dashed line show the analogous construction for the apocenter passages. _Right:_ Enlargement of the region around the solid markers in the upper panel on the left. The waveform is generated using SEOBNRv4EHM, and the binary parameters are given in the title.

In practice, the fitting function is chosen to have the functional form
\[\omega_{22}^{\rm fit,p}(t;\,A,\,n,\,t_{\rm merg})=A(t_{\rm merg}-t)^{n}, \tag{30}\]
with fit parameters \(\{A,n,t_{\rm merg}\}\). The form of Eq. (30) is inspired by the leading order PN behavior of a quasicircular binary inspiral, which has the form of Eq. (30) with exponent \(-3/8\) [21]. In addition, Eq. (30) ensures monotonicity by construction. To reduce correlations between the parameters \(A\) and \(n\), the fitting function is reparameterized by \(\{f_{0},f_{1},t_{\text{merg}}\}\), where \(f_{0}\) and \(f_{1}\) represent the function value and first time-derivative at a time \(t_{\text{mid}}\),
\[f_{0}=A(t_{\text{merg}}-t_{\text{mid}})^{n}, \tag{31a}\]
\[f_{1}=-nA(t_{\text{merg}}-t_{\text{mid}})^{n-1}=-n\frac{f_{0}}{t_{\text{merg}}-t_{\text{mid}}}. \tag{31b}\]
Equations (31) are readily inverted to yield
\[n=-\frac{f_{1}(t_{\text{merg}}-t_{\text{mid}})}{f_{0}}, \tag{32a}\]
\[A=f_{0}(t_{\text{merg}}-t_{\text{mid}})^{-n}. \tag{32b}\]
The fit for \(\{f_{0},f_{1},t_{\text{merg}}\}\) is performed with the curve_fit routine of the SciPy [123] library. Because there are three free parameters, at least three local maxima are needed to perform the fit; we choose \(2N+1=7\) maxima for increased robustness.
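A sketch of the reparameterized envelope fit, Eqs. (30)-(32), using SciPy's curve_fit as described above; the initial-guess heuristics are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def make_envelope_model(t_mid):
    """Eq. (30), reparameterized by (f0, f1, t_merg) through Eq. (32)."""
    def model(t, f0, f1, t_merg):
        n = -f1 * (t_merg - t_mid) / f0        # Eq. (32a)
        A = f0 * (t_merg - t_mid) ** (-n)      # Eq. (32b)
        return A * (t_merg - t) ** n           # Eq. (30)
    return model

def fit_envelope(T_alpha, omega_at_T, t_mid):
    """Fit the envelope through the extrema (T_alpha, omega22(T_alpha))."""
    model = make_envelope_model(t_mid)
    # Illustrative initial guess: function value/slope near t_mid from the
    # data, merger time slightly past the last extremum.
    p0 = [np.interp(t_mid, T_alpha, omega_at_T),
          np.mean(np.gradient(omega_at_T, T_alpha)),
          T_alpha[-1] + 0.1 * (T_alpha[-1] - T_alpha[0])]
    popt, _ = curve_fit(model, T_alpha, omega_at_T, p0=p0)
    return model, popt
```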
The concrete choice for \(t_{\text{mid}}\) is found to be not critical; we choose the time in the middle of the entire waveform to be analyzed. To analyze an entire waveform, we proceed from the start of the waveform toward the merger. At the first, "cold" initialization at the start of the waveform, we choose \(t_{L}\) to be the start of the waveform, \(\hat{t}\) to be \(N\) orbits later (as judged by the accumulated \(\phi_{22}\)), and \(t_{R}\) to be \(2N\) orbits later. We initialize a first guess for \(\omega_{22}^{\text{fit,p}}\) through a fit to \(\omega_{22}(t)\) during the first 10 orbits of the waveform. In order to satisfy the conditions 1 to 3 self-consistently, an iterative procedure is applied: local maxima of \(U(t)\) are calculated using find_peaks, and the interval \([t_{L},t_{R}]\) is adjusted to achieve the desired number of extrema on either side of \(\hat{t}\) (see Footnote 5). Now an improved \(\omega_{22}^{\text{fit,p}}\) is computed by fitting to the extrema, Eq. (29), and the procedure is iterated until the changes in the extrema \(T_{\alpha}\) and the fitting parameters \(\{f_{0},f_{1},t_{\text{merg}}\}\) fall below a tolerance, typically \(10^{-8}\). At the initial cold start, this typically takes 3-5 iterations.

Footnote 5: For the very first application of this procedure at the start of the waveform, \(t_{L}\) cannot be reduced to before the start of the waveform, so if needed we increase \(\hat{t}\) instead.

We then shift the analysed region by one pericenter passage at a time, i.e. \(\hat{t}\rightarrow(T_{N}+T_{N+1})/2\), \(t_{L}\rightarrow(T_{0}+T_{1})/2\), \(t_{R}\to T_{2N}+1.5\times(T_{2N}-T_{0})/(2N)\), and repeat the iterative procedure to satisfy conditions 1 to 3, using the current \(\omega_{22}^{\text{fit,p}}\) as the initial guess. Because of the improved guess for \(\omega_{22}^{\text{fit,p}}\), each successive pericenter passage needs only 2-3 iterations to converge. We stop the procedure when \(t_{L}\) reaches the end of the waveform, or when all three conditions can no longer be simultaneously satisfied. For instance, in rare cases, the iterative procedure settles into a limiting cycle, which switches between two different results for the interval \([t_{L},t_{R}]\), the extrema \(T_{\alpha}\), and the fit \(\omega_{22}^{\text{fit,p}}\).

Equation (28) identifies local maxima of \(\omega_{22}(t)-\omega_{22}^{\text{fit,p}}(t)\), i.e. pericenter passages. To identify apocenter passages, we _change the sign_ of the right-hand side of Eq. (28), while keeping the remainder of the algorithm unchanged. The algorithm will then generate a fit to the apocenter points, \(\omega_{22}^{\text{fit,a}}\), as indicated in pink in Fig. 6.

The procedure outlined above also works if we fit the amplitude \(A_{22}\) in place of \(\omega_{22}\), since at leading post-Newtonian order, the amplitude also has the form of Eq. (30) with exponent \(-1/4\) [21].
We refer to the method of finding the pericenters/apocenters by fitting to \(A_{22}\) as AmplitudeFits. Once again, FrequencyFits is more prone to numerical noise as it relies on \(\omega_{22}\). Therefore, we recommend AmplitudeFits over FrequencyFits.

## IV Robustness Tests

In this section, we check the robustness of our eccentricity definition and the different methods to locate pericenters/apocenters by putting our implementation through various tests.

### The large mass ratio limit of \(e_{\text{gw}}\)

In Sec. I, we noted that one of the desired features of an ideal eccentricity definition is that, in the limit of large mass ratio, it should approach the test particle eccentricity on a Kerr geodesic. The geodesic eccentricity \(e_{\text{geo}}\) typically used for EMRI calculations [124; 125] is given by
\[e_{\text{geo}}=\frac{r^{\text{a}}-r^{\text{p}}}{r^{\text{a}}+r^{\text{p}}}, \tag{33}\]
where \(r^{\text{p}}\) and \(r^{\text{a}}\) are the pericenter and apocenter separations along the geodesic in Boyer-Lindquist coordinates. To test the test particle limit of \(e_{\text{gw}}\), we compare \(e_{\text{gw}}\) and \(e_{\text{geo}}\) for EMRI waveforms with \(q=\infty\) and nonspinning BHs, but with varying eccentricities in the range \(e_{\text{geo}}\in[0,0.5]\). In the \(q\to\infty\) limit, there is no orbital evolution, and the waveform is that of a test particle following a geodesic. For our comparisons, we use the waveforms computed within this framework in Ref. [76] using a frequency domain Teukolsky code. Because there is no orbital evolution, these waveforms each have a constant value of eccentricity \(e_{\text{geo}}\) and orbit averaged frequency \(\langle\omega_{22}\rangle\).

Figure 7 shows the differences \(|e_{\text{geo}}-e_{\text{gw}}|\) and \(|e_{\text{geo}}-e_{\omega_{22}}|\), evaluated at different values of \(e_{\text{geo}}\) and \(\langle\omega_{22}\rangle\). While \(e_{\text{gw}}\) does not exactly match \(e_{\text{geo}}\) in the test particle limit, the differences for \(e_{\text{gw}}\) lie in the range \(\sim[10^{-6},6\!\times\!10^{-3}]\), whereas the differences for \(e_{\omega_{22}}\) lie in the range \(\sim[5\!\times\!10^{-4},10^{-1}]\). Therefore, \(e_{\text{gw}}\) is an improvement over \(e_{\omega_{22}}\) in two ways: \(e_{\text{gw}}\) has the correct Newtonian limit (as shown by Ref. [76]) and is closer to \(e_{\text{geo}}\) in the test particle limit, by about two orders of magnitude.

### Applicability for waveforms of different origins

Another criterion for the eccentricity definition identified in Sec. I is that it should be robust and applicable for waveforms of different origins, such as analytical PN waveforms [49; 50; 51; 52; 53; 54], numerical waveforms from NR simulations [74; 75; 76; 77; 78; 79], semi-analytical EOB waveforms calibrated to NR [74; 75; 76; 77; 44], and EMRI waveforms [42; 55; 56; 57; 58; 59; 61; 62; 63; 64; 65; 66; 67; 73] obtained by solving the Teukolsky equation.

In Fig. 8 we show examples of our \(e_{\text{gw}}\) implementation in gw_eccentricity [90] applied to waveforms of four different origins: PN (EccentricTD [51]), EOB (SEOBNRv4EHM [46]), NR (SpEC [48; 110]), and EMRI (Ref. [76]). The binary parameters are arbitrarily chosen to cover a wider parameter space and are shown in the figure text. In each of the four subplots in Fig. 8, the lower panel shows the real part of \(\mathpzc{h}_{22}\), and the upper panel shows the measured \(e_{\text{gw}}\).
We consider three different methods to locate the pericenters/apocenters: Amplitude, ResidualAmplitude, and AmplitudeFits; \(e_{\text{gw}}\) is consistent between the three methods. For the ResidualAmplitude method, for the PN, EOB and EMRI cases, we use the same model evaluated at zero eccentricity for the quasicircular counterpart. For NR, we use the IMRPhenomT [14] model.

Figure 8: Demonstration of the measurement of eccentricity using the gw_eccentricity [90] package for waveforms of different origins: PN, EOB, NR and EMRI. The binary parameters are indicated in the figure text. In each subplot, the lower panel shows the real part of \(\mathpzc{h}_{22}\), and the upper panel shows the measured eccentricity. We consider three different methods for identifying the pericenters/apocenters: Amplitude, ResidualAmplitude and AmplitudeFits.

In addition to Fig. 8, we have tested our implementation in gw_eccentricity [90] against eccentric SpEC NR waveforms from Refs. [48; 76]. When testing against eccentric NR simulations from the RIT catalog [126], we are able to compute \(e_{\text{gw}}\) whenever the waveform contains at least \(\sim 4-5\) orbits before the merger, for reasons explained in Sec. II.2. Finally, we have conducted extensive robustness tests using the SEOBNRv4EHM model in different regions of the parameter space, including converting \(e_{\text{eob}}\) posterior samples to \(e_{\text{gw}}\) samples in a postprocessing step after parameter estimation.

Figure 7: Comparison of \(e_{\text{gw}}\) and \(e_{\omega_{22}}\) to the geodesic eccentricity \(e_{\text{geo}}\) in the \(q\to\infty\) limit, as a function of the orbit averaged frequency \(\langle\omega_{22}\rangle\). In the left panel, the colors show the absolute difference between \(e_{\text{geo}}\) and \(e_{\text{gw}}\) measured using Eq. (8) with the Amplitude method. The right panel shows the same for \(e_{\omega_{22}}\). \(e_{\text{gw}}\) is closer to \(e_{\text{geo}}\) than \(e_{\omega_{22}}\) is, by about two orders of magnitude.

### Smoothness tests

In this section, we demonstrate that our implementation of \(e_{\text{gw}}\) varies smoothly as a function of the internal definitions of eccentricity used by waveform models. Specifically, we generate 50 waveforms using the SEOBNRv4EHM model [46], with the model's internal eccentricity parameter varying from \(e_{\text{eob}}=10^{-7}\) to \(e_{\text{eob}}=0.9\), while keeping the other parameters fixed at \(q=4\) and \(\chi_{1z}=\chi_{2z}=-0.6\). The eccentricity \(e_{\text{eob}}\) refers to the start of each waveform, which we choose to be at \(t_{0}=-20000M\) before the peak waveform amplitude (see Footnote 7). In addition to testing whether \(e_{\text{gw}}\) varies smoothly, this test also demonstrates that our implementation in gw_eccentricity [90] works over a wide range of eccentricities. Both of these features are important for applications like converting posterior samples for \(e_{\text{eob}}\) to the standardized \(e_{\text{gw}}\).

Footnote 7: To achieve the desired length of the inspiral, we adjust the start frequency of the SEOBNRv4EHM model accordingly.

For simplicity, we restrict our consideration to the three preferred methods from Sec. III: Amplitude, ResidualAmplitude and AmplitudeFits. The Frequency, ResidualFrequency and FrequencyFits methods perform similarly to the Amplitude, ResidualAmplitude and AmplitudeFits methods, respectively, but can be prone to numerical noise.
#### iv.2.1 \(e_{\text{gw}}\) vs \(e_{\text{eob}}\) at initial time

We first compare \(e_{\text{eob}}\) (which is defined at \(t_{0}=-20000M\)) to \(e_{\text{gw}}\) at its first available time (which we denote as \(\widehat{t}_{0}\)). As described in Sec. II.2, the first available time for \(e_{\text{gw}}(t)\) is the maximum of the times of the first pericenter and first apocenter, as starting at this time, both \(\omega_{22}^{\text{p}}(t)\) and \(\omega_{22}^{\text{a}}(t)\) interpolants in Eq. (4) can be defined. For our dataset of SEOBNRv4EHM waveforms, this time varies from \(\widehat{t}_{0}=-19250M\) for \(e_{\text{eob}}=10^{-7}\) to \(\widehat{t}_{0}=-15250M\) for \(e_{\text{eob}}=0.9\). However, because the difference between \(\widehat{t}_{0}\) and \(t_{0}\) is always a fraction of an orbit, and eccentricity does not change significantly over one orbit, comparing \(e_{\text{gw}}\) at \(\widehat{t}_{0}\) to \(e_{\text{eob}}\) at \(t_{0}\) is reasonable. The ideal outcome for this test is that the eccentricity measured from the waveform, \(e_{\text{gw}}\), matches the model's eccentricity definition \(e_{\text{eob}}\).

Figure 9 shows how \(e_{\text{gw}}\) at \(\widehat{t}_{0}\) varies with \(e_{\text{eob}}\) at \(t_{0}\), for the Amplitude, ResidualAmplitude and AmplitudeFits methods. For sufficiently high eccentricities (\(e_{\text{eob}}\gtrsim 5\times 10^{-3}\)), all three methods follow the expected trend of \(e_{\text{gw}}=e_{\text{eob}}\). However, the Amplitude method starts to deviate from this trend for smaller eccentricities, before completely breaking down for \(e_{\text{eob}}\lesssim 10^{-3}\). This is expected, as local extrema do not exist in \(A_{22}\) for such low eccentricities (see Sec. III). By contrast, the ResidualAmplitude and AmplitudeFits methods follow the \(e_{\text{gw}}=e_{\text{eob}}\) trend all the way down to \(e_{\text{eob}}=10^{-5}\). For smaller \(e_{\text{eob}}\), the SEOBNRv4EHM model itself ceases to produce waveforms whose eccentricity modulations decrease with decreasing \(e_{\text{eob}}\). For most practical applications, this is not problematic for SEOBNRv4EHM, as \(e_{\text{eob}}=10^{-5}\) is very small. However, this exercise highlights how (in addition to testing our implementation) tests like this can help identify the limitations of eccentric waveform models.

In this spirit, we repeat this test for several different eccentric waveform models in Fig. 10. For an equal-mass nonspinning binary, we show how \(e_{\text{gw}}\) at \(\widehat{t}_{0}\) varies with the internal definitions of eccentricity (defined at \(t_{0}=-20000M\)) used by the SEOBNRv4EHM [46], TEOBResumS-DALI [47, 127], SEOBNRE [44, 45], and EccentricTD [51] models. For simplicity, we only consider the ResidualAmplitude method, where the quasicircular counterpart is obtained by evaluating the same model at zero eccentricity. Figure 10 also shows the dependence of \(e_{\text{gw}}\) on the internal definition of eccentricity for a few eccentric equal-mass nonspinning NR simulations produced with the SpEC code [48, 75, 110] (with SXS IDs 2267, 2270, 2275, 2280, 2285, 2290, 2294 and 2300). In this case, we use the IMRPhenomT model [14] for the quasicircular counterpart. The internal eccentricity for these simulations is computed using the orbital trajectories, following the method of Refs. [95, 96]; we refer to this as the "SpEC metadata eccentricity", as the same method is used to report eccentricity in the metadata files accompanying the simulations [75, 110].
However, because the publicly available SpEC metadata files [110] report eccentricity at different times for different simulations, we recompute the eccentricity at a fixed time \(t_{0}\) using the same methods as Refs. [95, 96]. Because the NR simulations are typically short, we choose \(t_{0}=1500M\) after the start of the simulations, and \(\widehat{t}_{0}\) (where \(e_{\rm gw}\) is plotted) is once again the first available time for \(e_{\rm gw}(t)\). Before computing \(e_{\rm gw}(t)\), the initial parts of the NR waveforms (\(t<t_{0}\)) are discarded to avoid spurious transients due to imperfect NR initial data.

Figure 10: \(e_{\rm gw}\) vs the internal definition of eccentricity, for waveforms of different origin, for equal-mass nonspinning binaries with varying eccentricity. For the NR waveforms (SpEC), we compute the internal eccentricity at \(t_{0}=1500M\) after the start of the simulation, while for the rest we use \(t_{0}\equiv-20000M\) before peak waveform amplitude. In both cases, \(\widehat{t}_{0}\) is the first available time for \(e_{\rm gw}(t)\). The inset shows the same but on a linear scale, and focuses on the \(e_{\rm gw}\leq 0.4\) region.

Figure 9: \(e_{\rm gw}\) vs \(e_{\rm eob}\) at the initial time, for SEOBNRv4EHM waveforms with varying \(e_{\rm eob}\), but keeping the other binary parameters fixed (given in figure title). \(e_{\rm eob}\) is the model's internal eccentricity, specified at \(t_{0}=-20000M\). \(e_{\rm gw}\) is evaluated at its first available time, \(\widehat{t}_{0}\). We consider three different methods for locating pericenters/apocenters: Amplitude, ResidualAmplitude, and AmplitudeFits. The Amplitude method breaks down for small eccentricities (\(e_{\rm eob}\lesssim 10^{-3}\)), while the ResidualAmplitude and AmplitudeFits methods follow the expected \(e_{\rm gw}=e_{\rm eob}\) trend down to \(e_{\rm eob}=10^{-5}\).

In agreement with Fig. 9, we find that the SEOBNRv4EHM model follows the \(e_{\rm gw}=e_{\rm eob}\) trend for \(e_{\rm eob}\gtrsim 10^{-5}\) in Fig. 10. While TEOBResumS-DALI follows the same trend at higher eccentricities, it deviates significantly from this trend at \(e_{\rm eob}\lesssim 5\times 10^{-3}\), and breaks down at \(e_{\rm eob}\lesssim 10^{-4}\). This behavior of TEOBResumS-DALI was also noted in Ref. [85] and suggests that the model may need improvement in this region. Next, both the SEOBNRE and EccentricTD models fall away from the \(y=x\) line in Fig. 10, suggesting that the internal definitions of these models may need modifications. Finally, the SpEC metadata eccentricity has a scatter around the \(y=x\) line. This behavior is not surprising, as the SpEC metadata eccentricity is not meant to be precise and is known to be sensitive to factors like the length of the time window used when fitting the orbital trajectories to PN expressions [75, 96]. Furthermore, because the orbital trajectories in NR simulations are gauge-dependent, the eccentricity reported in the SpEC metadata can also be gauge-dependent. To get a precise and gauge-independent eccentricity estimate from NR, one must use waveform-defined quantities like \(e_{\rm gw}\).

Figure 10 also shows that for the same \(e_{\rm gw}\), different models have different internal values of eccentricity. Therefore, the eccentricity inferred from GW signals via Bayesian parameter estimation using two different models can also be different, highlighting the need for using a waveform-defined eccentricity like \(e_{\rm gw}\).
In particular, posterior samples obtained using different models can be put on the same footing by evaluating \(e_{\rm gw}\) and \(l_{\rm gw}\) using the model's waveform prediction. This approach was recently taken in Ref. [86], albeit restricted to only \(e_{\rm gw}\).

#### iv.2.2 Smoothness of the time evolution of \(e_{\rm gw}\)

We now consider a more stringent smoothness test: using the same dataset of 50 SEOBNRv4EHM waveforms, we test whether the time evolution of \(e_{\rm gw}\) changes smoothly when varying \(e_{\rm eob}\) at \(t_{0}=-20000M\). Figure 11 shows \(e_{\rm gw}(t)\) for the Amplitude, ResidualAmplitude and AmplitudeFits methods. Even though the waveform data start at \(t_{0}=-20000M\), \(e_{\rm gw}(t)\) is only available for \(t\geq\widehat{t}_{0}\), the maximum of the times of the first pericenter and apocenter. In Fig. 9 only eccentricities at the first available time, \(e_{\rm gw}(\widehat{t}_{0})\), are considered, while in Fig. 11 we consider the full time evolution.

In Fig. 11, we once again find that the Amplitude method breaks down for small eccentricities \(e_{\rm gw}\lesssim 10^{-3}\dots 10^{-2}\), especially close to the merger, as eccentricity is continuously radiated away. The Amplitude method fails when the local extrema in \(A_{22}\) cease to exist, which is why the curves with smaller initial \(e_{\rm gw}\) are shorter. By contrast, the ResidualAmplitude and AmplitudeFits methods continue to compute the eccentricity until \(e_{\rm gw}\sim 10^{-5}\). While the ResidualAmplitude method successfully computes \(e_{\rm gw}(t)\) up to the last available orbit (we discard the last two orbits before the merger as explained in Sec. II.2), the AmplitudeFits method misses some extrema near the merger, especially when the eccentricity becomes small. However, as we will see below, the ResidualAmplitude method can depend on the choice of the quasicircular waveform in the same region.

In most regions of Fig. 11, we find that the time evolution of \(e_{\rm gw}\) varies smoothly with \(e_{\rm eob}\). However, for the ResidualAmplitude and AmplitudeFits methods, for small eccentricities and near the merger, we find that \(e_{\rm gw}(t)\) can be noisy. Rather than a limitation of these methods, this behavior arises from the SEOBNRv4EHM model itself.

Figure 11: \(e_{\rm gw}(t)\) for SEOBNRv4EHM waveforms with varying \(e_{\rm eob}\), but keeping the other binary parameters fixed (given in figure title). The method used to locate pericenters/apocenters is indicated in the figure text. The colors indicate the value of \(e_{\rm eob}\), defined at \(t_{0}=-20000M\). The Amplitude method breaks down for small eccentricities \(e_{\rm gw}\lesssim 10^{-3}\dots 10^{-2}\), especially as one approaches the merger. The ResidualAmplitude and AmplitudeFits methods continue to compute the eccentricity until \(e_{\rm gw}\sim 10^{-5}\). The features at \(e_{\rm gw}\sim 10^{-5}\) arise from the waveform model itself (see Fig. 12).

Figure 12 focuses on one of the noisy \(e_{\rm gw}(t)\) curves from the middle panel of Fig. 11. The bottom panel of Fig. 12 shows the corresponding \(\Delta\omega_{22}(t)\) from Eq. (26), which helps highlight the modulations due to eccentricity. The fall in \(e_{\rm gw}(t)\) is associated with an abrupt fall in the amplitude of the eccentricity modulations in \(\Delta\omega_{22}(t)\).
Such jumps in \(\Delta\omega_{22}(t)\) at small eccentricities arise from a transition function in SEOBNRv4EHM [46] that windows out the eccentric corrections as one approaches the merger (see Sec. II B of Ref. [46]). The undesired behavior seen in Fig. 12 is shown in Ref. [119] not to cause significant biases in parameter estimation, and can be resolved in future versions of SEOBNRv4EHM. Nevertheless, Fig. 11 once again highlights the importance of such smoothness tests, not only to check our implementation of \(e_{\rm gw}\) but also to identify potential issues in waveform models.

### Dependence of \(e_{\rm gw}\) on extrema finding methods

For the final robustness test, we consider how strongly \(e_{\rm gw}\) depends on the method used to locate extrema. We will only consider the ResidualAmplitude and AmplitudeFits methods for simplicity. From Figs. 9 and 11, we already see that \(e_{\rm gw}\) is broadly consistent between different methods. We now quantify the differences in Fig. 13, for the same dataset of 50 SEOBNRv4EHM waveforms from Sec. IV.3. The top-left panel of Fig. 13 shows \(e_{\rm gw}(t)\) for these waveforms when using the ResidualAmplitude method, and the colors represent the instantaneous absolute difference with respect to the \(e_{\rm gw}(t)\) obtained from the AmplitudeFits method. Here, we use SEOBNRv4EHM evaluated at zero eccentricity for the quasicircular counterpart required for ResidualAmplitude. The gray region represents the parts where ResidualAmplitude can compute \(e_{\rm gw}(t)\), but AmplitudeFits cannot. However, we note that this only occurs for small eccentricities \(e_{\rm gw}\lesssim 5\times 10^{-3}\), and close to the merger. This region also coincides with the region where SEOBNRv4EHM exhibits the noisy behavior discussed in Fig. 12.

Next, the top-right panel of Fig. 13 illustrates the difference in \(e_{\rm gw}(t)\) between different choices of quasicircular counterpart for the ResidualAmplitude method. The curves once again represent \(e_{\rm gw}(t)\) evaluated using ResidualAmplitude with the quasicircular counterpart obtained from SEOBNRv4EHM (the same model used to produce the eccentric waveforms). The colors represent the instantaneous absolute difference with respect to the \(e_{\rm gw}(t)\) obtained from the ResidualAmplitude method with the quasicircular counterpart obtained from the IMRPhenomT model instead. The gray region represents the parts where ResidualAmplitude using SEOBNRv4EHM for the quasicircular counterpart can compute \(e_{\rm gw}(t)\), but ResidualAmplitude using IMRPhenomT cannot. Once again, this occurs only for small eccentricities and near the merger. In this regime, the small differences between SEOBNRv4EHM (in the quasicircular limit) and IMRPhenomT, especially near the merger, become important, and IMRPhenomT does not accurately capture the secular growth in SEOBNRv4EHM.

In the regions where both the ResidualAmplitude and AmplitudeFits methods successfully compute \(e_{\rm gw}(t)\) in the top-left panel of Fig. 13, the biggest differences are of order \(10^{-2}\). These differences occur either for small eccentricities near the merger, or for very large eccentricities (\(e_{\rm gw}\sim 0.9\)). At such high eccentricities, the waveform is characterized by sharp bursts at pericenter passages alternating with wide valleys that include the apocenter passages (see bottom panel of Fig. 2, for example). As a result, it is easy to identify the pericenter times but not the apocenter times for these waveforms.
This can be resolved by only identifying the pericenter times and _defining_ the apocenter times to be the midpoints between consecutive pericenters. The assumption employed here is that the radiation reaction is not so strong that the times taken for the first and second halves of an orbit differ significantly. While this assumption is broken near the merger, we already discard the last two orbits before the merger when computing \(e_{\rm gw}\) (Sec. II.2). The bottom panels of Fig. 13 show the same as the top panels, but when identifying the midpoints between pericenters as apocenters. We find that the largest differences between ResidualAmplitude and AmplitudeFits, as well as the largest differences between ResidualAmplitude with different quasicircular counterparts, are now an order of magnitude smaller. This suggests that identifying the midpoints between pericenters as apocenters may be a more robust choice than directly locating apocenters, especially for large eccentricities. We provide this as an option in gw_eccentricity [90].

Figure 12: Tracing the noisy features in Fig. 11 to the behavior of the SEOBNRv4EHM model at small eccentricities. The top panel shows \(e_{\rm gw}\) for the case with \(e_{\rm eob}=1.05\times 10^{-5}\) at \(t_{0}=-20000M\), from the middle panel of Fig. 11. The bottom panel shows the corresponding \(\Delta\omega_{22}\) (Eq. (26)), which helps highlight the modulations due to eccentricity. The drop in \(e_{\rm gw}\) occurs at the same time as an abrupt drop in the eccentricity modulations in \(\Delta\omega_{22}\) that arises from a transition function in SEOBNRv4EHM.

To summarize, the different choices for locating extrema in Fig. 13 lead to broadly consistent results for \(e_{\mathrm{gw}}(t)\), with the only notable differences occurring for: (i) small eccentricities (\(e_{\mathrm{gw}}\lesssim 5\times 10^{-3}\)) and near the merger, where the SEOBNRv4EHM model also has known issues (see Fig. 12), and (ii) large eccentricities (\(e_{\mathrm{gw}}\sim 0.9\)), where locating apocenters is problematic. As discussed in Sec. III, such differences are expected, and the different methods to locate extrema should be regarded as different definitions of eccentricity. However, identifying the midpoints between pericenters as apocenters, rather than directly locating apocenters, can lead to more consistent results between different methods.

## V Conclusion

We present standardized definitions of eccentricity (\(e_{\mathrm{gw}}\)) and mean anomaly (\(l_{\mathrm{gw}}\)) that are computed directly from the gravitational waveform (Sec. II). Our method is free of gauge ambiguities, has the correct Newtonian limit, and is applicable for waveforms of all origins, over the full range of allowed eccentricities for bound orbits (\(0-1\)). However, as our method relies on computing the frequency at pericenter and apocenter passages, it requires waveforms with at least \(\sim\!4-5\) orbits. Our method can be applied directly during source parameter estimation or as a postprocessing step to convert posterior samples from the internal definitions used by models and simulations to the standardized ones. This puts all models and simulations on the same footing, while also helping connect GW observations to astrophysical predictions for GW populations. Finally, we propose how the reference frequency \(f_{\mathrm{ref}}\) and start frequency \(f_{\mathrm{low}}\), which are used in GW data analysis, should be generalized for eccentric binaries (Secs. II.4, II.5, II.6).
One key aspect of computing \(e_{\mathrm{gw}}\) and \(l_{\mathrm{gw}}\) is identifying the times of pericenter and apocenter passages from the waveform. We provide different methods for this purpose, which should be treated as different variants of the eccentricity definition. Among the provided methods (see Sec. III), the Amplitude method is applicable when eccentricity is sufficiently high (\(e_{\rm gw}\gtrsim 10^{-3}\ldots 10^{-2}\)), while ResidualAmplitude and AmplitudeFits are applicable for smaller eccentricities as well. We demonstrate the robustness of our implementation by testing against waveforms of different origins, including PN, EOB, EMRIs and NR (Sec. IV.2). We further conduct smoothness tests that have the added benefit of identifying noisy features in waveform models (Sec. IV.3). Finally, we make our implementation publicly available through an easy-to-use Python package, gw_eccentricity [90].

This work focuses on systems without spin-precession, and the most important next step is to generalize our methods to spin-precessing eccentric binaries. We leave this to future work but discuss potential approaches.

Figure 13: Differences in \(e_{\mathrm{gw}}(t)\) due to different methods used to locate pericenters and apocenters, for the same SEOBNRv4EHM waveforms as Fig. 11. _Top-left:_ The curves show \(e_{\mathrm{gw}}(t)\) obtained using the ResidualAmplitude method with the quasicircular counterpart also obtained from SEOBNRv4EHM. The colors represent the absolute difference with respect to the \(e_{\mathrm{gw}}(t)\) obtained using the AmplitudeFits method, and the gray region shows the parts where the second method fails to compute \(e_{\mathrm{gw}}(t)\). _Top-right:_ Same, but now the colors show the difference with respect to the \(e_{\mathrm{gw}}(t)\) obtained with the ResidualAmplitude method with the quasicircular counterpart obtained from the IMRPhenomT model. In both top panels, the different choices for locating pericenters/apocenters lead to broadly consistent results for \(e_{\mathrm{gw}}(t)\), with the only notable differences occurring for: (i) small eccentricities (\(e_{\mathrm{gw}}\lesssim 5\times 10^{-3}\)) and near the merger, where the SEOBNRv4EHM model also has known issues (see Fig. 12), and (ii) large eccentricities (\(e_{\mathrm{gw}}\sim 0.9\)), where locating apocenters is problematic. The bottom panels show the same as the top panels, but when identifying the midpoints between pericenters as apocenters. This leads to more consistent results between different methods, and the largest differences in \(e_{\mathrm{gw}}\) decrease by an order of magnitude.

###### Acknowledgements.

We thank Peter James Nee for useful discussions and Geraint Pratten, Isobel Romero-Shaw, Teagan Clarke, Paul Lasky, Eric Thrane and Aditya Vijaykumar for comments on the manuscript. M.A.S.'s research was supported by the Department of Atomic Energy, Government of India and the National Research Foundation of Korea under grant No. NRF-2021R1A2C2012473. M.A.S. acknowledges travel support from the Infosys Exchange Scholars program to visit AEI, Potsdam, and hospitality by AEI, Potsdam, where a part of the work was completed. V.V. acknowledges support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 896869. M.v.d.M. is supported by VILLUM FONDEN (grant no. 37766) and the Danish Research Foundation. This material is based upon work supported by NSF's LIGO Laboratory, which is a major facility fully funded by the NSF.
Most of the numerical calculations reported in this paper, as well as the development of gw_eccentricity [90], were performed using the Alice cluster at ICTS-TIFR.
2301.10419
Deconstructing Pedestrian Crossing Decision-making in Interactions with Continuous Traffic: an Anthropomorphic Model
As safe and comfortable interactions with pedestrians could contribute to automated vehicles' (AVs) social acceptance and scale, increasing attention has been drawn to computational pedestrian behavior models. However, very limited studies characterize pedestrian crossing behavior based on specific behavioral mechanisms, as those mechanisms underpinning pedestrian road behavior are not yet clear. Here, we reinterpret pedestrian crossing behavior based on a deconstructed crossing decision process at uncontrolled intersections with continuous traffic. Notably, we explain and model pedestrian crossing behavior as they wait for crossing opportunities, optimizing crossing decisions by comparing the visual collision risk of approaching vehicles around them. A collision risk-based crossing initiation model is proposed to characterize the time-dynamic nature of pedestrian crossing decisions. A simulation tool is established to reproduce pedestrian behavior by employing the proposed model and a social force model. Two datasets collected in a CAVE-based immersive pedestrian simulator are applied to calibrate and validate the model. The model predicts pedestrian crossing decisions across all traffic scenarios well. In particular, by considering the decision strategy that pedestrians compare the collision risk of surrounding traffic gaps, model performance is significantly improved. Moreover, the collision risk-based crossing initiation model accurately captures the timing of pedestrian crossing initiations within each gap. This work concisely demonstrates how pedestrians dynamically adapt their crossings in continuous traffic based on perceived collision risk, potentially providing insights into modeling coupled human-AV interactions or serving as a tool to realize human-like pedestrian road behavior in virtual AVs test platforms.
Kai Tian, Gustav Markkula, Chongfeng Wei, Yee Mun Lee, Ruth Madigan, Toshiya Hirose, Natasha Merat, Richard Romano
2023-01-25T05:58:26Z
http://arxiv.org/abs/2301.10419v1
# Deconstructing Pedestrian Crossing Decision-making in Interactions with Continuous Traffic: an Anthropomorphic Model

###### Abstract

As safe and comfortable interactions with pedestrians could contribute to automated vehicles' (AVs) social acceptance and scale, increasing attention has been drawn to computational pedestrian behavior models. However, very limited studies characterize pedestrian crossing behavior based on specific behavioral mechanisms, as those mechanisms underpinning pedestrian road behavior are not yet clear. Here, we reinterpret pedestrian crossing behavior based on a deconstructed crossing decision process at uncontrolled intersections with continuous traffic. Notably, we explain and model pedestrian crossing behavior as they wait for crossing opportunities, optimizing crossing decisions by comparing the visual collision risk of approaching vehicles around them. A collision risk-based crossing initiation model is proposed to characterize the time-dynamic nature of pedestrian crossing decisions. A simulation tool is established to reproduce pedestrian behavior by employing the proposed model and a social force model. Two datasets collected in a CAVE-based immersive pedestrian simulator are applied to calibrate and validate the model. The model predicts pedestrian crossing decisions across all traffic scenarios well. In particular, by considering the decision strategy that pedestrians compare the collision risk of surrounding traffic gaps, model performance is significantly improved. Moreover, the collision risk-based crossing initiation model accurately captures the timing of pedestrian crossing initiations within each gap. This work concisely demonstrates how pedestrians dynamically adapt their crossings in continuous traffic based on perceived collision risk, potentially providing insights into modeling coupled human-AV interactions or serving as a tool to realize human-like pedestrian road behavior in virtual AVs test platforms.

Pedestrian-AV interaction, Pedestrian road crossing, Decision-making model, Traffic flow, Simulation.

## I Introduction

Continued advances in vehicle automation have brought us great anticipation that society will adopt highly automated vehicles (AVs) in the near future. However, this vision faces many unresolved challenges. One of them is to achieve smooth interaction between AVs and other road users. The consensus suggests that in the transition from manual to fully automated driving, there will be mixed traffic with AVs and other road users on the road [1]. A typical case is the expansion of AV deployment from a few confined areas that pose little risk to other road users to a broad range of operational design domains, which could inevitably increase conflicts with other road users [2]. Failures in interactions between AVs and other road users may hinder the large-scale adoption and social acceptance of AVs [3, 4]. This, therefore, leads to the research context of this study, which is to promote safe and smooth communication and interaction in traffic [1, 3, 4]. Pedestrians are generally regarded as the most vulnerable road users in modern transport systems, due to their lack of protective equipment and slow movement compared to other road users [5]. Given that pedestrians' actions and intentions are nondeterministic, and that their behavior is diverse and dynamic, moving through this complicated environment is a challenge for AVs [6]. Moreover, AVs' own behavior can also affect pedestrian road behavior, which introduces further uncertainties into interactions.
In particular, the issues mentioned above become more pronounced at uncontrolled intersections, where pedestrian behavior is more unpredictable and safety problems are more common than on other, controlled road sections, as there are no traffic signals to coordinate the interaction process [7]. Additionally, most existing automated driving systems regard the driving task as a pure collision-free motion planning problem and view pedestrians in some contexts as rigid road obstacles instead of social beings [5, 8]. Against the above background, if AVs cannot properly understand the behavior of pedestrians, they may not improve traffic efficiency and safety as expected, but rather create traffic dilemmas and additional issues [9]. Accordingly, much attention has been drawn to one pressing issue, namely computational models for pedestrian road behavior [6, 10, 11, 12, 13], which may help AVs to better anticipate pedestrian intentions or serve as a tool to implement realistic pedestrian behavior in simulated scenarios, and thus be used in the validation and development of AVs [3, 14]. Existing computational models for pedestrian behavior, particularly for pedestrian road-crossing decisions, have been developed based on a wide range of theories and hypotheses, such as cognitive models [10, 15], data-driven approaches [16], discrete choice models [12], as well as game theoretical models [17]. However, those approaches have not yet bridged several gaps, as identified and discussed below.

Firstly, few of these approaches are based on specific behavioral or psychological theories, such as pedestrian visual perception. Instead, external physical factors, like time to collision (TTC), have often been used. For example, [18, 19] developed a pedestrian crossing decision-making model based on the vehicle deceleration distance. [14, 20] applied a minimum TTC as the threshold for pedestrian crossing decisions. Although TTC or distance from the vehicle has become the most used decision cue in crossing decision models [18], growing evidence has shown that the impacts of vehicle kinematics on pedestrians are multi-dimensional. For instance, at the same TTC condition, a higher vehicle speed induces more pedestrians to cross the street compared to a lower one [21]. Therefore, the TTC or distance may not properly carry the risk information that pedestrians actually perceive. As our previous research has shown, pedestrian crossing behavior is highly correlated with their perceived visual cues [22]. Hence, existing models devote little effort to characterizing the information pedestrians perceive, e.g., anthropomorphic visual cues [1, 10].

Moreover, few computational models specifically characterize pedestrian decisions in the traffic flow scenario. In real situations, pedestrians usually face a fleet of vehicles and accept one traffic gap after rejecting some gaps. Thus, decision-making in continuous traffic may not only be based on the collision risk, but also involve many trade-offs between safety and time efficiency [23]. Several previous studies indicated that with increased waiting time, pedestrians tended to accept crossing opportunities with higher risk [7]. [14] developed a model which hypothesized that pedestrians would change their obedience to the law when they waited a long time. However, there is much evidence that pedestrians who tended to wait were more cautious and less likely to accept risky gaps [24, 25, 26].
A meta-study uncovered these conflicting results and noted that there was insufficient evidence to support a linear relationship between waiting time and pedestrians' willingness to make risky crossings [27]. On the one hand, the available findings support that pedestrians may dynamically adjust their crossing decision-making strategies in continuous traffic. On the other hand, it is unreasonable to assume that pedestrians always tend to accept more dangerous crossing opportunities as waiting time increases. Instead, we should treat each case on its own merits. Therefore, it is necessary to look into the details of pedestrian crossing behavior when interacting with traffic flow.

Finally, very few models pay attention to the time-dynamic nature of pedestrian crossing decision-making. According to cognitive decision-making theory, pedestrian crossing initiation time (or onset time) is a variable due to the noisy evidence in the human cognitive system [28]. In addition, it has been shown that pedestrian crossing initiation time can be affected by many factors. For instance, pedestrians may initiate quickly when facing a vehicle with a low speed [21] or with a small time gap from the approaching vehicle [29]. Accordingly, existing empirical observations highlight the time-dynamic nature of pedestrian crossing decision-making. Recently, a class of emerging models [10, 11, 15], namely evidence accumulation models, has modeled pedestrian crossing decisions and their timing in detail by simulating the cognitive process underlying crossing decision-making. However, given the complexity of those models, they focused more on the details of the cognitive process, and it is unclear whether it would be feasible to extend them to cover additional factors, such as vehicle kinematics.

Regarding the above discussion, several research questions in existing computational models of pedestrian crossing behavior can be summarized:

* There is a lack of computational models that characterize pedestrian crossing decisions based on anthropomorphic behavioral theory.
* The decision pattern of pedestrians crossing the road when interacting with the traffic flow remains unclear.
* There is a lack of computational models that concisely consider the time-dynamic nature of road crossing decisions and relate them to vehicle kinematics.

In this study, a decision-making model for pedestrians interacting with continuous traffic at uncontrolled intersections is proposed to address the above questions. The main contributions of this paper are as follows:

* We formally apply our findings [22] and extend them to a relatively complex traffic scenario, demonstrating that pedestrian crossing decisions are dynamic and intrinsically linked to their perceived collision risk. Specifically, a visual collision risk model is introduced as the main decision cue accounting for pedestrian crossing decisions. Moreover, a novel decision strategy is proposed to interpret pedestrian crossing decisions in continuous traffic flow. In addition, a crossing initiation time model is developed and associated with the collision cue model to account for pedestrians' dynamic crossing initiation time.
* Two different datasets collected in a highly immersive pedestrian simulator are applied to calibrate and validate the model.
* A simulation tool is established to reproduce pedestrian crossing decisions in a customized traffic scenario based on the proposed model.
## II Methodology

### _Deconstructing the crossing decision-making process_

During the decision-making process for road-crossing, several cognitive stages may be involved to establish pedestrian situation awareness [1, 30]. Normally, pedestrians' perceived collision cues are the basis of their decisions; these cues include vehicle distance, speed, TTC, and more. Based on those visual cues, pedestrians comprehend traffic situations and decide whether to cross the road or not by combining some prior knowledge and strategies. Finally, there is a reaction process before pedestrians start to move. Therefore, according to the deconstructed three-stage cognitive process, we propose a collision cue-based framework for road-crossing decision-making tasks (Fig. 1), assuming that the crossing decision-making model consists of three constituent parts: visual collision cue, decision, and crossing initiation.

Fig. 1: A simplified framework for the pedestrian road-crossing decision-making process.

### _Visual collision cue model_

Modeling pedestrian-vehicle interaction is challenging, partly because existing pedestrian models lack psychological underpinnings. According to psychological theory, when moving through the environment, people rely on their visual perception of the space around them [31, 32]. The road crossing task is a typical case that highly demands pedestrians to use visual cues to evaluate the collision risk from approaching vehicles and guide their movements. Relevant behavioral research has shown that the human visual system is sensitive to changes in some visual cues, which may be the source of collision perception. Specifically, one group of cues may provide reliable collision time information, such as Tau [33]. Other cues, like the visual angle and its first temporal derivative [32], effectively translate motion information into visual cues through images that expand on the retina. Although most daily naturalistic road crossings involve all of the above visual cues (and possibly others), Delucia [32] has suggested that humans may rely on collision time-related cues when the scenarios include robust optical information or occur at a near distance. Conversely, when the optical information in the task is impoverished or occurs at a long distance, the visual angle and its first temporal derivative may play a dominant role. In light of this conceptual framework, we have previously identified that the first temporal derivative of the visual angle, \(\dot{\theta}\), is a critical collision cue for making crossing decisions at uncontrolled intersections. We have demonstrated that \(\dot{\theta}\) not only explains pedestrian crossing decisions well across a wide range of traffic scenarios from two different datasets, but also reasonably characterizes the impacts of vehicle speed and traffic gap on pedestrians [22]. Therefore, in this study, we formalized the pedestrian crossing decision model based on our previous findings. Typically, \(\dot{\theta}\) refers to the change rate of the visual angle subtended by an approaching vehicle, \(\theta\) (Fig. 2a) [31]. The following equations specify its physical model:

\[\theta=2\tan^{-1}\frac{w}{2Z}\Rightarrow\dot{\theta}\left(Z,v,w\right)=\frac{wv}{Z^{2}+w^{2}/4} \tag{1}\]

where \(v\) denotes the vehicle speed, and \(Z\) and \(w\) are the distance to and width of the vehicle. To better interpret the collision cue model, an example is shown in Fig. 2.
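Before turning to that example, note that Eq. (1) is simple enough to evaluate directly. The following minimal Python sketch is illustrative only (distances in meters, speeds in m/s, \(\dot{\theta}\) in rad/s):

```python
def theta_dot(Z, v, w=1.95):
    """Visual collision cue of Eq. (1): rate of change of the visual angle
    subtended by a vehicle of width w [m] at distance Z [m] and speed v [m/s]."""
    return (w * v) / (Z ** 2 + w ** 2 / 4.0)

# The Fig. 2 scenario: a 1.95 m wide vehicle approaching at 30 km/h vs 60 km/h.
for v_kmh in (30.0, 60.0):
    v = v_kmh / 3.6  # km/h -> m/s
    print(f"{v_kmh:.0f} km/h: theta_dot = {theta_dot(100.0, v):.2e} rad/s at 100 m, "
          f"{theta_dot(10.0, v):.2e} rad/s at 10 m")
```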
Suppose that a vehicle (\(w=1.95\) m) approaches the pedestrian at two different constant speeds (30 km/h and 60 km/h) from 100 m. \(\dot{\theta}\) is approximately an inverse-square function of the distance and TTC from the approaching vehicle (Fig. 2b, c), showing that \(\dot{\theta}\) increases slowly at long distances and rapidly at close distances, which agrees qualitatively with the observation that pedestrians usually feel safe to cross for long distance or big time gap conditions but not when the vehicle is close [21]. Further, it can be noticed that speed effects vary across the distance (Fig. 2b) and TTC dimensions (Fig. 2c). When \(\dot{\theta}\) is a function of distance and speed, it increases with speed, which is opposite to the results in Fig. 2c, suggesting that pedestrians may perceive a higher collision threat from a vehicle with a higher speed at the same distance. However, an approaching vehicle with a slower speed gives pedestrians a bigger collision threat under the same TTC. The results tie well with previous experimental observations on pedestrian crossing behavior [21, 34, 35].

Fig. 2: (a) Visual collision cue model in the road crossing scenario. Collision cues as a function of (b) the distance from and speed of the vehicle or (c) the TTC from and speed of the vehicle.

### _Decision model_

Regarding crossing decisions at uncontrolled intersections, pedestrians typically make crossing decisions by judging and selecting the appropriate gaps between two consecutive vehicles, called gap acceptance behavior [7]. Our previous study has proven that \(\dot{\theta}\) is significantly negatively correlated with pedestrian gap acceptance behavior, and a collision cue-based binary choice logit model predicts pedestrian gap acceptance well across different vehicle speeds and traffic gap experimental scenarios [22]. Furthermore, evidence from experimental observations indicated that individuals' judgments toward traffic gaps are not necessarily entirely static over time, especially in traffic streams [24, 25, 36]. Due to certain learning or comparison strategies, pedestrians may estimate different utilities for approaching vehicles with the same collision cues, thus adjusting their crossing decisions to balance safety and efficiency. We, therefore, propose the following assumptions for crossing decision-making in the traffic flow:

(i) Pedestrians make decisions mainly based on the collision cues, i.e., \(\dot{\theta}\), provided by approaching vehicles.

(ii) Pedestrians are unwilling to accept a current gap with a collision cue equal to or greater than the maximum collision cue previously rejected. For example, if pedestrians reject a \(0.02\) rad/s cue, they would be more likely to reject the same or a bigger one upstream of traffic. The rule is given by:

\[X_{1}=\left\{\begin{array}{ll}1,&\dot{\theta}_{c}\geq\dot{\theta}_{mr}\\ 0,&\dot{\theta}_{c}<\dot{\theta}_{mr}\end{array}\right. \tag{2}\]

where \(X_{1}\) is the dummy variable for the rule, and \(\dot{\theta}_{c}\) and \(\dot{\theta}_{mr}\) represent the collision cues for the current gap and the maximum rejected gap, respectively.

(iii) If pedestrians find that the gap next to the current gap has a smaller collision cue than the current gap, they may prefer to wait for that gap rather than accept a current gap with a greater collision threat, given the rule:

\[X_{2}=\left\{\begin{array}{ll}1,&\dot{\theta}_{c}\geq\dot{\theta}_{f}\\ 0,&\dot{\theta}_{c}<\dot{\theta}_{f}\end{array}\right. \tag{3}\]
where \(X_{2}\) is the dummy variable for the decision rule, and \(\dot{\theta}_{f}\) represents the collision cue of the gap following the current one. Therefore, the utility function of the decision model is formulated as:

\[V=\rho_{0}\ln(\dot{\theta})+\rho_{1}X_{1}+\rho_{2}X_{2}+\rho_{3} \tag{4}\]

where \(\rho_{0}\) to \(\rho_{3}\) are estimated coefficients. In this study, every \(\dot{\theta}\) refers only to the \(\dot{\theta}\) value of the approaching vehicle at the time when the rear end of the previous vehicle has just passed the pedestrian (Fig. 3a). Regarding the \(\ln\) transformation, we have previously proven that it can efficiently increase the accuracy of model fitting [22]. Since crossing decisions at uncontrolled intersections are assumed to be a binary choice task, a logistic function is applied [7]. Then, a decision model for crossing tasks in the traffic flow is given by:

\[p(\dot{\theta},X_{1},X_{2})=\frac{1}{1+\exp{(-V)}} \tag{5}\]

where \(p\) is the probability of gap acceptance. Equation (5) without the terms \(X_{1}\) and \(X_{2}\) degenerates to the model we proposed in [22].

### _Crossing initiation model_

In real traffic, the time at which pedestrians start to cross the road is a variable [28]. As illustrated in Fig. 3a, the crossing initiation time, \(t_{int}\), is typically defined as the duration between the time when the rear end of the previous car passes the pedestrian's position, \(t_{pass}\), and the time when pedestrians start their movements [21]. Emerging cognitive models [10, 11, 28] have shown that the crossing initiation time distribution may arise from an underlying evidence accumulation process, but of a form that requires costly stochastic simulation to estimate the distribution. However, the skewed, lognormal-like shape of the distribution is similar to those arising from simpler evidence accumulation processes, which can be written in a closed mathematical form, such as the Ex-Gaussian, Shifted Wald (SW), and Weibull [37]. Considering the similarities of those methods, we only apply the SW distribution instead of trying all of them. The SW distribution is a simple and concise distribution modeling tool, which can fully qualify the crossing initiation time distribution with three parameters: \(b\) (deviation around the mode), \(\gamma\) (tail magnitude) and \(\tau\) (onset of the distribution). Its density is defined as:

\[x\sim\mathrm{SW}(b,\gamma,\tau):\quad f(x\mid b,\gamma,\tau)=\frac{b}{\sqrt{2\pi(x-\tau)^{3}}}\cdot\exp{\left(\frac{-[b-\gamma(x-\tau)]^{2}}{2(x-\tau)}\right)} \tag{6}\]

An illustration of the distributional effect of changing each of the \(\gamma\) and \(\tau\) parameters is shown in Figs. 3b and 3c. The tail becomes heavier as \(\gamma\) decreases (Fig. 3b). Changes in \(\tau\) control the position of the distribution (Fig. 3c) [37].

Fig. 3: Illustration of the initiation model. (a) The initiation time \(t_{int}\) is the duration between \(t_{pass}\) and the time when the pedestrian starts crossing. \(t_{sg}\) denotes the actual gap to the approaching vehicle when pedestrians initiate. (b) The shapes of the initiation model obtained by changing \(\gamma\). (c) The positions of the initiation model obtained by changing \(\tau\).
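For reference, the SW density in Eq. (6) can be written down directly; a minimal Python sketch (illustrative only), with the density set to zero for \(x\leq\tau\):

```python
import numpy as np

def shifted_wald_pdf(x, b, gamma, tau):
    """Shifted Wald density of Eq. (6): b controls the spread around the
    mode, gamma the tail magnitude, and tau the onset of the distribution."""
    x = np.asarray(x, dtype=float)
    s = x - tau
    pdf = np.zeros_like(s)
    ok = s > 0  # the density is zero for x <= tau
    pdf[ok] = (b / np.sqrt(2.0 * np.pi * s[ok] ** 3)
               * np.exp(-(b - gamma * s[ok]) ** 2 / (2.0 * s[ok])))
    return pdf
```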
According to our assumptions in Fig. 1, the crossing initiation time model is affected by collision cues, so we define the initiation time model as follows:

\[\begin{split} t_{int}\sim\mathrm{SW}(b,\gamma,\tau)\\ \text{with }\gamma=\beta_{1}\ln(\dot{\theta})+\beta_{2};\ \tau=\beta_{3}\ln(\dot{\theta})+\beta_{4}\end{split} \tag{7}\]

where \(t_{int}\) is the crossing initiation time and \(\beta_{1}\) to \(\beta_{4}\) are estimated coefficients. The idea behind these equations is that the strength of collision cues could affect the distribution pattern of pedestrian initiation times. For a more intensive collision threat, if pedestrians choose to cross, they tend to do so more quickly, so the distribution is concentrated and has a short tail. In contrast, when the collision threat is small, pedestrians tend to start crossing slowly, so the distribution is more likely to have a long tail [38]. Accordingly, the SW model is not only a practical distribution model but also provides notable psychological significance for our decision model. In addition, \(b\) is assumed to be a coefficient not influenced by collision cues. Furthermore, since response time data are routinely assumed to be normally distributed in many studies [21, 39], another crossing initiation time model based on the Gaussian distribution is proposed as a comparison to the SW model, defined by the following equations:

\[\begin{split} t_{int}\sim\mathcal{N}(\mu,\sigma),\\ \text{with }\mu=\beta_{1}\ln(\dot{\theta})+\beta_{2};\ \sigma=\beta_{3}\ln(\dot{\theta})+\beta_{4}\end{split} \tag{8}\]

where \(\mu\) and \(\sigma\) are the parameters of the Gaussian model, \(\mathcal{N}\).

### _Pedestrian road-crossing decision-making model in traffic flow_

Finally, a pedestrian road-crossing decision-making model based on the SW distribution in the traffic flow (SW-PRD) is established by employing (5) and (7):

\[\begin{split}& f_{SW}(t_{\text{int}})=\sum_{n=1}^{N}P_{n}\cdot \mathrm{SW}\left(b,\gamma\left(\dot{\theta}_{n}\right),\tau\left(\dot{\theta}_{n}\right)\right)\\ & P_{n}=p\left(\dot{\theta}_{n},X_{1,n},X_{2,n}\right)\cdot(1-P_{n-1})\\ & P_{0}=0\end{split} \tag{9}\]

where \(n\) is the position number of the gap in the traffic flow, and \(\dot{\theta}_{n}\), \(X_{1,n}\) and \(X_{2,n}\) represent the decision variables for the \(n\)th traffic gap. \(P_{n}\) is the recursive probability that pedestrians accept the \(n\)th gap, calculated from \(p\) and \(P_{n-1}\). Similarly, a road-crossing decision model based on the Gaussian distribution (G-PRD) is given by:

\[f_{G}(t_{\text{int}})=\sum_{n=1}^{N}P_{n}\cdot\mathcal{N}\left(\mu\left(\dot{\theta}_{n}\right),\sigma\left(\dot{\theta}_{n}\right)\right) \tag{10}\]

### _Simulation tool_

In this subsection, an agent-based simulation tool is proposed using the established models to reproduce pedestrian crossing behavior at uncontrolled intersections with traffic flow. The framework mainly includes three parts: the decision model, the environment model, and the pedestrian kinematics model. Regarding the traffic environment, as intersections on multi-lane roads are often separated by refuges [40], pedestrians actually cross one lane at a time. Therefore, a single-lane road with an uncontrolled intersection is considered. On the other hand, the model could be extended to a multi-lane situation, but the impacts of refuges should then be further considered [41]. A fleet of vehicles travels on the lane at a constant speed, wherein the vehicle quantity, speed, and traffic gaps can be customized.
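Before turning to the remaining components of the tool, the recursive gap-acceptance mixture of Eqs. (5) and (9), which drives the decision-generation step described below, is compact enough to sketch directly. The following minimal Python version is illustrative only: it assumes the per-gap cues \(\dot{\theta}_{n}\) (evaluated at the respective \(t_{pass}\)) are already available, conditions each \(X_{1}\) on all earlier gaps having been rejected, and, as a convention, sets \(X_{2}=0\) for the last gap, which has no following gap:

```python
import numpy as np

def gap_acceptance_probs(theta_dots, rho):
    """Recursive acceptance probabilities P_n of Eq. (9).
    theta_dots: collision cues of the successive gaps; rho = (rho0..rho3)."""
    P, P_prev, max_rejected = [], 0.0, -np.inf
    for n, td in enumerate(theta_dots):
        X1 = 1.0 if td >= max_rejected else 0.0  # assumption (ii)
        X2 = 1.0 if (n + 1 < len(theta_dots)
                     and td >= theta_dots[n + 1]) else 0.0  # assumption (iii)
        V = rho[0] * np.log(td) + rho[1] * X1 + rho[2] * X2 + rho[3]  # Eq. (4)
        p = 1.0 / (1.0 + np.exp(-V))  # Eq. (5)
        P_n = p * (1.0 - P_prev)      # Eq. (9), with P_0 = 0
        P.append(P_n)
        P_prev = P_n
        max_rejected = max(max_rejected, td)  # reaching gap n+1 implies gap n was rejected
    return P
```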
Afterward, a basic social force model is applied as the pedestrian kinematics model [42], which considers the driving force towards the destination and the repulsive force from the boundary of the crosswalk. Finally, according to the information provided by the traffic environment and the kinematics model, each pedestrian's road crossing decision is generated through the PRD models. The detailed process of the simulation tool is provided in the supplementary file (Appendix A-A). A demonstration video of the simulation tool is also provided; please see the attachment.

## III Model calibration and validation

In this study, two empirical datasets collected in a simulated environment, i.e., a CAVE-based highly immersive pedestrian simulator, were applied to calibrate and validate the PRD models. The following sections provide detailed information on the two datasets and the calibration and validation methods.

### _Empirical data_

_Dataset one._ A virtual road scene with a 3.5 m wide single lane and a 1.85 m wide pavement was created in the simulator. Two consecutive vehicles of 1.95 m in width were driven in the middle of the road at the same constant speed. Three vehicle speeds were selected, namely 25 mph, 30 mph, or 35 mph. The first vehicle came into view 96 m away from the pedestrian, and the second vehicle maintained a specific time gap behind the first vehicle, i.e., 2 s, 3 s, 4 s, or 5 s (Fig. 4a). Sixty participants were instructed to cross the road between the two cars if they felt comfortable and safe to do so; otherwise, they could reject the gap. Three experimental blocks were created, and each of the 12 scenarios (4 time gaps \(\times\) 3 speeds) was presented in random order and repeated once in each experimental block. Therefore, each participant experienced 72 trials, and 4270 trials of data were obtained in total. The virtual environment and simulation process mentioned above were designed and controlled by the Unity3D platform. Internal code automatically recorded the positions and velocities of vehicles and participants on each time step. Two main metrics were applied: gap acceptance, \(u\), and crossing initiation time, \(t_{int}\). The gap acceptance data were the binary crossing decisions made by participants, i.e., \(u=1\) means pedestrians accepted the gap, while \(u=0\) indicates they rejected the gap. The crossing initiation time was defined as described in Section II-D and Fig. 3a. For more detailed information about this dataset, please refer to [38].

Fig. 4: Schematic diagrams and photos of traffic scenarios in the simulated experiments: the crossing scenarios and traffic of the (a) first dataset and (b) second dataset.

_Dataset two._ To explore pedestrians' road crossing decisions in traffic flow, pedestrians were asked to cross a one-lane road with continuous traffic in the simulator (Fig. 4b). The size of the time gaps between every two consecutive vehicles varied, which provided pedestrians with different opportunities to make crossing decisions (Fig. 4b). Four traffic scenarios with different sequences of gap sizes (in seconds) were designed as follows:

* Scenario one: 1 1 1 3 3 3 6 1 1 6;
* Scenario two: 1 1 1 1 3 3 7 1 1 3 8;
* Scenario three: 1 1 1 3 1 3 1 3 5 4 8;
* Scenario four: 2 3 1 1 3 1 1 1 5 4 7;

Among these scenarios, the one-second and two-second time gaps between vehicles were considered dangerous crossing opportunities that very few pedestrians would accept.
For the three-second and four-second gaps, decisions were expected to differ significantly between participants due to their heterogeneity (e.g., age and gender). The time gaps longer than four seconds were considered safe gaps that most pedestrians were expected to confidently accept. In all scenarios, a range of compact, midsize, van, and SUV vehicles were driven at 30 mph. Since the type of the approaching vehicle was randomly selected, in the analyses here, the width of the vehicle was calculated by averaging the widths of all vehicles in the corresponding gap in each scenario. 60 participants completed four crossing tasks in each of the four scenarios and repeated them once more (4 crossing tasks \(\times\) 4 scenarios \(\times\) 2 repetitions). We, therefore, collected data from 1920 trials. All the trials that participants experienced were in a randomized order. Similar to the first dataset, two main metrics were used: gap acceptance, \(u\), and crossing initiation time, \(t_{int}\). For more detailed information about this dataset, please refer to [25].

### _Data processing and parameter estimation_

With regard to data processing, both datasets were divided into a training set and a validation set. Regarding dataset one, as the controlled experimental variables were vehicle speed and time gap size, we separated the training and validation sets by choosing the data from different combinations of experimental variables (as illustrated in Section III-A, there were 12 different combinations). To have enough data in the training and validation sets, data from 10 combinations were grouped into the training set, while the rest of the data belonged to the validation set. Moreover, in order to make sure the validation data were sufficiently different, the 2 combinations were not adjacent to each other in terms of speed or time gap size. Accordingly, the validation set included data in the 4 s 25 mph and 5 s 35 mph conditions, approximately accounting for \(23\%\) of the initiation time data and \(14\%\) of the gap acceptance data (the data sizes of the two metrics were not the same, as there was no initiation time data for participants who rejected the gap). The remaining data of all other conditions were grouped into the training set. Similarly, with respect to dataset two, the data from traffic scenario four were used as the validation set, accounting for \(24\%\) of the gap acceptance data and \(25\%\) of the initiation time data.

A Maximum Likelihood Estimation (MLE) method was used to calibrate the parameters in the models. Firstly, regarding the decision model (5), since it assumes that crossing decisions are drawn from a Bernoulli distribution, its likelihood function is given by:

\[\begin{split}\mathcal{L}_{1}(\omega)=\prod_{i=1}^{n}& p\left(\Theta\mid\omega\right)^{u_{i}}\left(1-p\left(\Theta\mid\omega\right)\right)^{1-u_{i}}\\ &\rho_{0},\rho_{1},\rho_{2},\rho_{3}\in\omega\\ &\dot{\theta}_{i},X_{1,i},X_{2,i}\in\Theta\end{split} \tag{11}\]

where \(\omega\) includes all the estimated parameters \(\rho_{0},\rho_{1},\rho_{2},\rho_{3}\), \(\Theta\) denotes \(\dot{\theta}_{i},X_{1,i},X_{2,i}\) for the \(i\)th trial, and \(n\) is the size of the dataset.
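Eq. (11) can be maximized numerically by minimizing its negative logarithm. The Python sketch below is illustrative only (the study itself used MATLAB; scipy.optimize.minimize stands in for 'fminunc'), and the data arrays `u`, `theta_dot`, `X1`, `X2` are assumed inputs:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(rho, u, theta_dot, X1, X2):
    """Negative log of Eq. (11): Bernoulli likelihood of the logit model (5)."""
    V = rho[0] * np.log(theta_dot) + rho[1] * X1 + rho[2] * X2 + rho[3]
    p = 1.0 / (1.0 + np.exp(-V))
    eps = 1e-12  # guard against log(0)
    return -np.sum(u * np.log(p + eps) + (1.0 - u) * np.log(1.0 - p + eps))

# Given observed trials (arrays of equal length):
# res = minimize(neg_log_lik, x0=np.zeros(4), args=(u, theta_dot, X1, X2))
# rho_hat = res.x
```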
With respect to the initiation models, their likelihood functions are given by the following equations based on (7) and (8):

\[\begin{split}\mathcal{L}_{2}(\Delta)=\prod_{j=1}^{m}&\mathrm{SW}\left(t_{int,j},\dot{\theta}_{j}\mid\Delta\right)\\ &\beta_{1},\beta_{2},\beta_{3},\beta_{4},b\in\Delta\end{split} \tag{12}\]

\[\mathcal{L}_{3}(\Delta)=\prod_{j=1}^{m}\mathcal{N}\left(t_{int,j},\dot{\theta}_{j}\mid\Delta\right) \tag{13}\]

where \(\Delta\) is the summary of the estimated parameters of the crossing initiation models, \(t_{int,j}\) is the \(j\)th crossing initiation time datum, and the data size is \(m\). According to the MLE method, the maximization problem is equivalent to minimizing the negative log-likelihood. Thus, the optimal parameter estimates are achieved when the negative log-likelihood functions, e.g., \(-\ln\left(\mathcal{L}_{1}(\omega)\right)\), are minimized. We applied the built-in 'fminunc' function in MATLAB to find the solution to the above minimization problems [43]. Furthermore, there were some differences in the model estimates based on the two datasets. Firstly, since traffic flow scenarios were not considered in dataset one, the models based on this dataset did not include the parameters \(\rho_{1},\rho_{2}\). Regarding dataset two, for comparison purposes, we manipulated the SW-PRD model so that it had the proposed decision rules for traffic flow, whereas the G-PRD model did not. The estimated parameters based on the two datasets are presented in Table I and Table II. In addition, the parameters of the social force model are adopted from [42].

### _Validation methods_

After calibration, the predictions were compared with the validation set to verify the ability of the models. Two evaluation methods were applied to compare the performance of the proposed models, namely the BIC and the K-S test. The BIC is given by:

\[\text{BIC}=k\ln(n)-2\ln(L) \tag{14}\]

where \(k\) is the number of parameters in the model, \(n\) is the size of the dataset, and \(L\) is the maximum likelihood. The preferred model is the one with the minimum BIC [44]. The K-S test is a nonparametric test, which is used to evaluate the goodness-of-fit of the predicted results by quantifying the distance between the empirical and predicted distributions [45]. The main equation of the K-S test is:

\[D_{n,m}=\sup|\boldsymbol{F}_{n}(x)-\boldsymbol{F}_{m}(x)| \tag{15}\]

where \(\sup\) denotes the supremum function, \(\boldsymbol{F}_{n}(x)\) and \(\boldsymbol{F}_{m}(x)\) are the distribution functions of the observed data and the predicted result, and \(n\) and \(m\) represent the sizes of the samples. The K-S test rejects the null hypothesis that the two samples are drawn from the same probability distribution if \(D_{n,m}\) is bigger than the selected threshold. In addition, the R-squared, \(R^{2}\), and Root Mean Square Error (RMSE) are also used in the model discussion.
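For the K-S comparison of Eq. (15) between observed and predicted initiation time samples, a two-sample test is readily available in scipy; a minimal sketch (illustrative only; the input arrays are assumed):

```python
from scipy.stats import ks_2samp

def ks_check(observed_tint, predicted_tint, alpha=0.05):
    """Two-sample K-S test, Eq. (15): reject the null hypothesis that both
    samples come from the same distribution when the p-value is below alpha."""
    stat, pvalue = ks_2samp(observed_tint, predicted_tint)
    return stat, pvalue, pvalue < alpha
```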
## IV Results and Analysis

In this Section, we first discuss the calibration results of the SW-PRD and G-PRD models. Afterward, the validation results of the two models are compared using the BIC and K-S test. Finally, the model with better performance is compared to the two entire datasets, and the reproduced crossing behavior patterns are discussed in detail. Additionally, regarding the first dataset, as it does not include traffic flow scenarios, we focus on the impacts of speed and time gap on pedestrian crossing behavior, while the effect of traffic is discussed using the results based on the second dataset. ### _Calibration results_ _Dataset one_. The parameters of the SW-PRD and G-PRD models were calibrated using the first dataset. Note that, as the first dataset did not include traffic flow scenarios, these two models did not implement decision strategies in traffic: \(\rho_{1}\) and \(\rho_{2}\) were not included in the models, and the decision models in the SW-PRD and G-PRD models were identical. The calibration results are shown in Table. I, where the maximum log-likelihood and BIC of the SW-PRD model based on the training set are -108.43 and 252.37, significantly better than those of the G-PRD model, i.e., -176.69 and 381.79, indicating that the SW-PRD model describes pedestrian crossing initiation time better than the G-PRD model on the calibration set. Moreover, the estimated effect of \(\dot{\theta}\) via \(\rho_{0}\) is significantly negative (Est. \(=-2.14,\text{C.I.}=[-2.28,-1.98]\)), showing that pedestrian crossing gap acceptance decreases as the risk of collision increases. Additionally, the estimated effect of \(\beta_{3}\) in the SW-PRD model is significantly correlated with \(\dot{\theta}\) (Table. I), suggesting that pedestrian crossing initiation time is negatively related to the collision risk. _Dataset two_. The calibration results based on the second dataset are shown in Table. II. As the SW-PRD model implemented the decision strategies in traffic flow, it included \(\rho_{1}\) and \(\rho_{2}\), whereas the G-PRD model did not. Meanwhile, as both the decision model and the initiation time model differed between the SW-PRD and G-PRD models, we calculated the respective log-likelihoods of the decision and initiation time models to facilitate the comparison of the results. Again, the SW-PRD model fits the data better than the G-PRD model: it has larger log-likelihoods for both the decision and crossing initiation time models, and its BIC is smaller than that of the G-PRD model. In particular, concerning the SW-PRD model, besides the significant effect of \(\rho_{0}\) (Est. \(=-2.92,\text{C.I.}=[-3.16,-2.68]\)), \(\rho_{1}\) and \(\rho_{2}\) also significantly affect pedestrian gap acceptance (Est. \(=-1.29,\text{C.I.}=[-1.56,-1.02];\text{Est.}=-0.50,\text{C.I.}=[-0.84,-0.15]\)), consistent with our assumed crossing decision strategies in traffic flow. In addition, although the effect of \(\beta_{3}\) in the SW-PRD model is not significant, the positive effect of \(\beta_{1}\) reduces the tail magnitude of the distribution of crossing initiation time as \(\dot{\theta}\) increases and thus can reduce pedestrians' crossing initiation time. ### _Validation results_ The calibration results indicate that the SW-PRD model fits the training sets better than the G-PRD model. In this section, the validation sets of the two datasets are compared with the predicted results of the two models. _Dataset one_. Regarding the validation results, as shown in Table. III, the SW-PRD model has better BIC values and K-S scores for all conditions. Specifically, in the 35 mph 5 s condition, the K-S test rejects the null hypothesis and indicates that the results of the G-PRD model differ from the observed data at a \(5\%\) significance level. As shown in Fig. 5a, the G-PRD model tends to overestimate the initial parts of the data, whereas the SW-PRD model does not. _Dataset two_. The predicted results are compared to the validation set of the second dataset.
The log-likelihoods of the crossing initiation time models of SW-PRD and G-PRD are presented separately for reasons explained previously (Table. IV). Both the SW-PRD and G-PRD models accurately capture the timing of pedestrian crossing decisions in the traffic flow, i.e., the peak locations of the initiation time distribution (Fig. 5b). The predicted peak shapes of both models are close to the data. However, the SW-PRD model performs relatively better than the G-PRD model, as the log-likelihood of its crossing initiation time model is larger (Table. IV). The overall predictions of the SW-PRD model are closer to the data than those of the G-PRD model. Specifically, the SW-PRD model has a better BIC value and log-likelihood than the G-PRD model (Table. IV). Also, the K-S test supports that the predicted density function of the SW-PRD model is similar to the empirical distribution. In contrast, the predicted result of the G-PRD model is rejected by the K-S test at a \(5\%\) significance level (Table. IV). As shown in Fig. 5b, and consistent with the empirical data, the SW-PRD model predicts a decrease in the gap acceptance from the first 3 s gap (at \(t_{pass_{2}}\)) to the second 3 s gap (at \(t_{pass_{5}}\)). By contrast, the G-PRD model calculates a constant value for both 3 s gaps, resulting in a significant underestimation of gap acceptance in the first 3 s gap. In general, the SW-PRD model has better performance than the G-PRD model on the validation set of dataset two.

Fig. 5: Validation results. Probability density functions and data based on datasets (a) one and (b) two. The vertical dash-dotted lines in (b) indicate the time when the rear end of the vehicle passes the pedestrian's position. The size of the time gap (in seconds) between every two vehicles is indicated at the top of the diagram.

In the following sections, we discuss the predicted pedestrian crossing behavior patterns in detail by comparing the predicted results with the two full datasets to provide a complete understanding of the proposed crossing decision-making model. Since SW-PRD performs better than G-PRD on all datasets, the results in the following sections are generated by the SW-PRD model. ### _Dataset one: Speed and time gap effects_ The SW-PRD model predictions of crossing gap acceptance for each speed and time gap condition are compared with the observed data in Fig. 6a. According to the empirical data, crossing gap acceptance increased with vehicle speed and traffic gap, aligning well with previous studies [21, 35]. The SW-PRD model reproduces these behavioral patterns very well (\(R^{2}=0.890\), \(RMSE=0.050\)), suggesting that pedestrians might adapt their crossing decisions based on the changes in collision cues. Fig. 7a shows a comparison between the predicted crossing initiation time and the observed data. In line with the literature [34], the empirical data showed that pedestrian crossing initiation time correlated with vehicle kinematics, i.e., it decreased as traffic gaps and vehicle speeds decreased. This behavioral pattern can be understood as a distance-dependent phenomenon whereby a reduction in vehicle speed and time gap leads to a reduction in spatial distance, resulting in an increase in the perceived risk of collision [22]. Hence, if pedestrians choose to cross, they tend to do so more quickly.
Based on our modeling results, the proposed SW-PRD model captures this pattern with a good fit (\(R^{2}=0.890\), \(RMSE=0.050\)), again indicating that visual collision cues are associated with pedestrian crossing behavior. Moreover, a more detailed comparison between predictions and data is shown in Fig. A.2 in Appendix A-B. It can be noticed that the SW-PRD model predicts pedestrian crossing behavior both qualitatively and quantitatively: it not only describes the distributions of pedestrian crossing initiation along the time axis but also captures the variation in the mean crossing initiation time. ### _Dataset two: Impacts of traffic flow_ Predicted gap acceptances of the SW-PRD model in the traffic flow are compared to the observed data in Fig. 6b. Firstly, it can be noticed that pedestrians in the traffic flow did not accept gaps of the same size equally. For instance, regarding the \(4\)th gap and the \(5\)th gap in traffic scenario one (the size of both traffic gaps is 3 s), the probability of crossing gap acceptance dropped significantly from \(27.9\%\) to \(10.5\%\). When pedestrians faced the \(6\)th gap, the decreasing trend became even stronger: the probability of crossing gap acceptance was \(8.1\%\), more than three times smaller than the value of the \(4\)th gap. Further looking at the predictions, the SW-PRD model reproduces this behavioral pattern across all traffic scenarios with reasonable goodness-of-fit (Fig. 6b). Fig. 7b plots the predicted crossing initiation time as a function of the time gap and compares it with the observed data. The SW-PRD model fits the crossing initiation time data well (\(R^{2}=0.850\), \(RMSE=0.038\)). Consistent with empirical observations and similar to the first dataset [29], the SW-PRD model predicts a smaller initiation time as the time gap decreases, again suggesting that pedestrians attempted to compensate for crossing risk in unsafe traffic gaps by initiating faster. Furthermore, as shown in Fig. A.3 in Appendix A-B, detailed model predictions are compared with the observed data. Across all traffic scenarios, the SW-PRD model accurately predicts the level, shape and location of the peaks of the crossing initiation time distribution, showing that the model has a good ability to characterize pedestrian crossing decisions in a continuous flow of traffic.

Fig. 6: Predicted gap acceptance of the SW-PRD model for both datasets. The data and the predicted results are represented in black and blue respectively. (a) For dataset one, the proportion of gap acceptance is plotted as a function of vehicle speed and gap size (gap sizes are indicated by different line styles). (b) For dataset two, the proportion of gap acceptance for each gap of each traffic scenario is presented.

Fig. 7: Predicted crossing initiation time of the SW-PRD model for both datasets. Error bars and the edges of the blue areas indicate the \(2.5\%\) and \(97.5\%\) percentiles of the data and predicted results. (a) For dataset one, the crossing initiation time is plotted as a function of vehicle speed and gap size. (b) For dataset two, the crossing initiation time is a function of gap size.

## V Discussion and conclusion

This study demonstrates a novel approach to characterizing pedestrian crossing decision-making at uncontrolled intersections with continuous traffic.
We hypothesized that the crossing behavior could be understood as depending on three stages of information processing (perceive, decide, execute), and thus proposed a model with three corresponding constituent parts: visual collision cue, crossing decision, and crossing initiation. A summary of the detailed research results follows. In our previous study [22], we showed that the visual collision cue, \(\dot{\theta}\), could capture the effects of vehicle kinematics on pedestrian crossing decisions in single gaps and explain why pedestrians tended to rely on distance from vehicles to make crossing decisions [21, 35]. In this study, this finding is formally applied to model crossing decisions and extended to a more complicated traffic scenario, i.e., a continuous flow of traffic. The modeling results support that \(\dot{\theta}\) is capable of characterizing the risk perceived by pedestrians, at least at uncontrolled intersections with constant-speed traffic. Moreover, regarding our third hypothesis, i.e., that pedestrian crossing initiation is time-dynamic and influenced by vehicle kinematics, we relate the proposed crossing initiation time model to \(\dot{\theta}\). The modeling results support our hypothesis and show that pedestrians dynamically adjust their initiation time based on vehicle kinematics. Both the SW and Gaussian distributions can reasonably describe pedestrian initiation time, whilst the SW distribution has relatively better goodness-of-fit than the Gaussian distribution, which further indicates that the distribution of crossing initiation time is right-skewed. Notably, to accurately reproduce pedestrian crossing behavior in continuous traffic flow, we further hypothesized that pedestrians compare the risks of the gaps around them before making decisions, which is supported by the fact that the proposed crossing decision strategy for continuous traffic scenarios significantly improves the performance of the model. The study thus concludes with the following findings. Firstly, pedestrians may have a reduced tendency to accept a gap if they see an upcoming larger gap. Secondly, pedestrians may have a greater tendency to reject a gap if they have already rejected a gap of that size or larger. Although no other studies have yet found these patterns of crossing behavior, some empirical observations provide indirect support. [46] showed that drivers who rejected the bigger traffic gap tended to incur a longer delay. [26] indicated that pedestrians who tended to reject crossing opportunities would be more cautious and tend to accept longer gaps. Moreover, [24] found that pedestrians who missed the first opportunity to cross the road would not compensate for their loss by accepting a shorter second opportunity to cross. The above studies reinforce our hypothesis that pedestrians who tend to wait for safer crossing opportunities are more cautious and more likely to optimize their crossing strategies by comparing crossing opportunities. Unlike several previous studies, which simply assumed pedestrians tend to accept smaller gaps as waiting time increases [7, 14], the novelty here is that we show there may be other patterns in pedestrian crossing behavior related to waiting for a crossing opportunity, which may explain the non-significant effect of waiting time on pedestrian crossing decisions found in the meta-study [27].
Furthermore, this finding is interesting in that it reminds us that there may be a complex changing pattern in pedestrians' strategy toward waiting for crossing opportunities. Future research can further attempt to disentangle the effects of waiting time and traffic flow. Overall, this work provides a new concept that pedestrian crossing decisions are dynamic and intrinsically linked to perceived collision risk, and can be reinterpreted through a three-stage crossing decision-making process. The proposed model shows good predictive performance on different simulator datasets, and it could therefore be interesting to test the model on naturalistic traffic datasets as a next step. Furthermore, the idea of the deconstructed process may drive further study involving more complicated perceptual, decision, and initiation models. Regarding the practical implications of this study, there are many possible ways to extend these concepts and models to further improve research in pedestrian-AV interactions. First, as an increasing number of studies have been keen on using pedestrian behavior models to promote safe and efficient interactions [47], the proposed decision model may provide predictive information to help automated driving systems better anticipate pedestrian crossing intentions and initiations. Early work is emerging where researchers attempt to plan and coordinate the actions of AVs and pedestrians toward common goals by considering the visual collision risk of pedestrians [6]. Another possible application case is future traffic scenarios involving AV platoons and pedestrians, where AV platoons may need to take into account the dynamic pedestrian crossing decisions along the length of the platoon and adapt the decision strategy of each AV. Moreover, there is an urgent need to train and evaluate AVs to perform well also in safety-critical interactions with human road users. However, due to the low frequency of critical traffic scenarios in real life, i.e., corner cases, and for safety reasons, both academia and industry have agreed on using simulation methods as a complementary way to validate AVs. Reliable simulation results rely on the behavioral authenticity of simulated road users [14]. Hence, another practical contribution of this study is that the model can serve as a module in microscopic transport simulation tools or virtual testing platforms to realize naturalistic pedestrian road-crossing decisions. However, several limitations of this study need to be addressed in the future. Since the results and model cover only scenarios with single-lane, constant-speed traffic, the model cannot be directly generalized to other scenarios without further development. For example, in situations with yielding vehicles, the collision cue model used in this study alone may not provide sufficient information to model crossing decisions. In addition, compared to crossing behavior in pedestrian simulators, in real traffic pedestrians can flexibly adjust their behaviors and are affected by many potential factors. The pedestrian simulator allows exact experimental control of conditions but therefore naturally provides a less variable environment, and the virtual nature of the task may also affect the observed behavior. Hence, an important piece of future work is to apply the model to a reliable naturalistic dataset.
Furthermore, the model is developed based on current theories of human collision perception and does not assert that pedestrians use exactly the applied visual cues and perception strategy. As collision perception theory is further developed, the model can be improved accordingly.
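As a closing illustration of the SW (shifted Wald) distribution used throughout for crossing initiation times, the sketch below evaluates a shifted Wald density by shifting SciPy's inverse-Gaussian distribution. The parameter values are arbitrary placeholders, and the mapping to the \(\beta_{1},...,\beta_{4},b\) parameterization of (7) is not reproduced here.

```python
import numpy as np
from scipy.stats import invgauss

def shifted_wald_pdf(t, mu, lam, shift):
    # Wald (inverse Gaussian) density with onset shift `shift`:
    # scipy's invgauss has mean m*scale and shape `scale`, so setting
    # m = mu/lam and scale = lam yields mean mu and shape lam.
    return invgauss.pdf(t, mu / lam, loc=shift, scale=lam)

# Right-skewed initiation-time density starting 0.3 s after gap onset.
t = np.linspace(0.0, 4.0, 9)
print(shifted_wald_pdf(t, mu=1.0, lam=2.0, shift=0.3))
```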
2302.09418
M-SENSE: Modeling Narrative Structure in Short Personal Narratives Using Protagonist's Mental Representations
Narrative is a ubiquitous component of human communication. Understanding its structure plays a critical role in a wide variety of applications, ranging from simple comparative analyses to enhanced narrative retrieval, comprehension, or reasoning capabilities. Prior research in narratology has highlighted the importance of studying the links between cognitive and linguistic aspects of narratives for effective comprehension. This interdependence is related to the textual semantics and mental language in narratives, referring to characters' motivations, feelings or emotions, and beliefs. However, this interdependence is hardly explored for modeling narratives. In this work, we propose the task of automatically detecting prominent elements of the narrative structure by analyzing the role of characters' inferred mental state along with linguistic information at the syntactic and semantic levels. We introduce a STORIES dataset of short personal narratives containing manual annotations of key elements of narrative structure, specifically climax and resolution. To this end, we implement a computational model that leverages the protagonist's mental state information obtained from a pre-trained model trained on social commonsense knowledge and integrates their representations with contextual semantic embeddings using a multi-feature fusion approach. Evaluating against prior zero-shot and supervised baselines, we find that our model is able to achieve significant improvements in the task of identifying climax and resolution.
Prashanth Vijayaraghavan, Deb Roy
2023-02-18T20:48:02Z
http://arxiv.org/abs/2302.09418v1
# M-Sense: Modeling Narrative Structure in Short Personal Narratives Using Protagonist's Mental Representations ###### Abstract Narrative is a ubiquitous component of human communication. Understanding its structure plays a critical role in a wide variety of applications, ranging from simple comparative analyses to enhanced narrative retrieval, comprehension, or reasoning capabilities. Prior research in narratology has highlighted the importance of studying the links between cognitive and linguistic aspects of narratives for effective comprehension. This interdependence is related to the textual semantics and mental language in narratives, referring to characters' motivations, feelings or emotions, and beliefs. However, this interdependence is hardly explored for modeling narratives. In this work, we propose the task of automatically detecting prominent elements of the narrative structure by analyzing the role of characters' inferred mental state along with linguistic information at the syntactic and semantic levels. We introduce a Stories dataset of short personal narratives containing manual annotations of key elements of narrative structure, specifically climax and resolution. To this end, we implement a computational model that leverages the protagonist's mental state information obtained from a pre-trained model trained on social commonsense knowledge and integrates their representations with contextual semantic embeddings using a multi-feature fusion approach. Evaluating against prior zero-shot and supervised baselines, we find that our model is able to achieve significant improvements in the task of identifying climax and resolution. 1 MIT Media Lab 75 Amherst Street, Cambridge, MA, 02139 USA [email protected], [email protected] ## Introduction Narratives are the fundamental means by which people organize, understand, and explain their experiences in the world around them. Researchers in the field of psychology maintain that the default mode of human cognition is a narrative mode [1]. Humans share their personal experiences by picking specific events or facts and weaving them together to make meaning. These are referred to as personal narratives, a form of autobiographical storytelling that gives shape to experiences. Polkinghorne (1988) suggested that personal narratives, like other stories, follow broad characteristics involving: (a) typically a beginning, middle, and end, (b) specific plots with different characters and settings, or events. Often, characters learn something or change as a result of the situation or a conflict and resolution, but not always. Some of these characteristics provide the basis for the organizational framework of a story, commonly referred to as the narrative structure or the storyline. The growing amount of personal narrative text information in the form of social media posts, comments, life stories, or blog posts presents new challenges in keeping track of the storyline or events that form the defining moments of the narrative. Several recent works [11, 12, 13, 14, 15] have made efforts to advance the research in narrative comprehension. However, the development of computational models that automatically detect and interpret different structural elements of a narrative remains an open problem.
Discovery of structural elements of a narrative has many applications in: (a) retrieval of narratives based on similar dramatic events or concepts instead of keywords [14, 15, 16], (b) linking related stories that form a narrative thread towards theme generation [1], (c) summarization of stories [12, 13] and (d) story ending prediction or generation [11, 12, 13, 14], (e) commonsense reasoning [1, 15], to list a few. Several narrative theories have been proposed such as Freytag [16], Prince [17], Bruner [18, 19], Labov & Waletzky [19], to name a few. These theories explain different elements of a narrative structure containing typical orderings between them. Certain elements of the narrative structure are correlated across different narrative theories. For example, Bruner's 'breach in canonicity' [1] could correspond to (a) Freytag's 'climax' - referring to the 'turning point' of the fortunes of the protagonist (Abrams and Harpham, 2014) or (b) Labov's 'most reportable event' (MRE) - describing the event that has the greatest effect upon the goals, motivations and emotions of the characters (participants) in the narrative (Labov and Waletzky, 1997; Labov, 2006). Shorter narratives tend to consist mostly of complicating actions that culminate in the MRE or climax and instances of events that reach a 'resolution' stage indicated by a swift drop in dramatic tension, while the other structural elements are more likely to occur in longer narratives. Figure 1 (Left) shows Freytag's pyramid containing the key elements of the narrative structure and Figure 1 (right) contains highlights of climax and resolution for a sample personal narrative.

Figure 1: (Left) Freytag's Pyramid. (Right) Highlights of climax and resolution for a sample personal narrative.

Thus, our work aims to leverage computational approaches at the intersection of information retrieval, NLP, and aspects of psychology, and model the key elements of narrative structure - MRE and resolution. As an operating definition, we consider an MRE to be contained in a sentence(s) based on the following criteria - it is an explicit event that can be reported as the summary of the story and occurs at the highest tension point of the story. Similarly, an event qualifies as 'resolution' if it usually occurs after the MRE and resolves the dramatic tension in the narrative. Recently, Papalampidi et al. (Papalampidi, Keller, and Lapata, 2019) introduced a dataset consisting of movie screenplays and plot synopses annotated with turning points. Few attempts have been made at annotating elements of high-level narrative structures (Li et al., 2017) and automatically extracting them from free text. Ouyang et al. (Ouyang and McKeown, 2015)'s study on predicting MRE in narratives is the closest work to the problem considered in this paper. While most of these methods rely on syntactic, semantic, surface-level affect, or narrative features obtained using hand-engineering or pre-trained semantic embedding methods to model narrative structure, we investigate the role of a protagonist's psychological states in capturing the pivotal events in the narrative and their relative importance in identifying the elements of narrative structure - Climax and Resolution. We find a basis for this study in prior theoretical frameworks (Murray, 2003; Ryan, 1986; Ouyang and McKeown, 2014; Lehnert, 1981; Schafer, 2016) that emphasize (a) how narrative structure organizes the use of psychological concepts (e.g.
intentions, desires and emotions) and mediates all the human interactions and their social behavior, and (b) how protagonist's mental states (both implicit and explicit inferences, also imputed by readers) and psychological trajectory correlate with the classic dramatic arc of stories. Thus, to obtain the protagonist's mental states, we refer to a recent work (Vijayaraghavan and Roy, 2021; Sap et al., 2019; Rashkin et al., 2018) that learns to embed characters' mental states using an external memory module. Our contributions are summarized below: * A Stories1 dataset of short personal narratives containing manual annotations of key elements of narrative structure - climax and resolution. Footnote 1: Short for **ST**ructures **O**f **R**eddit **PE**rsonal **S**tories * An end-to-end neural network for modeling narrative structure, referred to as M-sense2, that allows for integration of protagonist's mental state representations with linguistic information via multi-feature fusion. Footnote 2: Short for **M**ental **S**tate **E**nriched **N**arrative **S**tructur**E**
We employ the ensuing mental state embeddings in tandem with contextual semantic embeddings towards our primary objective of identifying elements of high-level narrative structure - climax and resolution. We also conduct a detailed analysis of the outcome and the contribution of the protagonist's psychological state trajectory to our task. ## Dataset Collection Figure 2 presents our data collection pipeline. First, we collect Reddit posts from two communities: r/offmychest and r/confession using the PushShift API3. Next, we filter the collected data to retain only those posts that do not contain tags like "[Deleted]", "NSFW"4 or "over_18". Finally, we further narrow down the aggregated posts using a Bert-based story classifier. The pipeline is described in the Appendix. Footnote 3: [https://pushshift.io/](https://pushshift.io/) Footnote 4: NSFW – not safe for work ### Annotation Here, we explain the annotation process involved in the construction of our Stories dataset. Table 1 shows the descriptive statistics of our dataset. **Setup.** We created a user interface for MTurk workers to make the annotation procedure convenient for capturing key elements of the narrative structure - climax and resolution. The user interface allows the workers to highlight parts of the text that qualify as climax and resolution using red and green colors respectively. Three annotators were mainly involved in the annotation process. Each worker is presented with a sampled text from the Reddit personal narrative corpus. Additionally, the workers are provided with an option of selecting checkboxes: "No Climax" or "No Resolution". This caters to those personal stories that don't contain a climax or resolution. **Agreements.** Once the data is collected using our annotation setup, we measure the inter-annotator agreement (IAA) at the sentence level. For sentence-level agreement, we use the following metrics: (i) Fleiss's kappa \((\kappa)\) [12], (ii) mean annotation distance (\(\mathcal{D}\)), i.e., the distance between two annotations for each category, normalized by story length [1]. **Analysis.** We study the appearance of climax and resolution sentences by estimating their mean position normalized by the story length. We present the distribution of the position of both the structural elements in Figure 3. While the average position for climax (0.61) coincides with the peak, we observe that the resolution contents occur later in the story. Table 2 shows the sentence-level IAA measures for each narrative element. We observe that substantial agreement is achieved for both the climax and resolution. Clearly, we obtain higher agreement values for resolution than the climax. Figure 4(a) displays sample annotations (e.g. multi-sentence or non-contiguous highlights; no resolution) from our Stories dataset. ## M-sense: Modeling Narrative Structure In this paper, we explore different modeling and analysis methods for understanding narratives and automatically extracting text segments that act as key elements of narrative structure, particularly climax and resolution.

\begin{table} \begin{tabular}{l|c} \multicolumn{2}{l}{**Dataset Statistics**} \\ \hline \#Total Narratives & 63,258 \\ \#Annotated Narratives & 2,382 \\ \#Total Sentences & 42,614 \\ \#Climax Sentences & 5,173 \\ \#Resolution Sentences & 4,502 \\ \hline \end{tabular} \end{table} Table 1: Statistics of our annotated Stories dataset.

\begin{table} \begin{tabular}{l c c} \multicolumn{1}{c}{**Metric**} & **Climax** & **Resolution** \\ \hline Percentage Agreement & 0.736 & 0.807 \\ Fleiss's Kappa \((\kappa)\) & 0.646 & 0.756 \\ Mean Annotation Distance \((\%\mathcal{D})\) & 1.764 & 1.590 \\ \end{tabular} \end{table} Table 2: Sentence-level inter-annotator agreement.

Figure 3: Distributions of mean climax & resolution sentence positions. Figure 2: Illustration of our data collection pipeline.
The models are provided a narrative text \(T\) with \(L\) sentences, \(T=[S_{1},S_{2},...,S_{L}]\), as input. Here, each sentence \(S_{i}\) contains \(N_{i}\) words \(\{w_{1}^{i},w_{2}^{i},..,w_{N_{i}}^{i}\}\) from vocabulary \(\mathcal{V}\). Towards automatic detection of structural elements, we formulate it as a sentence labeling task where the goal is to predict a label \(\hat{y}_{i}\in\{None,Climax,Resolution\}\) for each sentence \(S_{i}\), based on the story context. Beyond linguistic features extracted from narratives, we focus on a dominant aspect in which a narrative is formed or presented, that is, an account of characters' mental states - motives and emotions. Thus, we leverage transfer learning from pretrained models trained to infer characters' mental states from a narrative. We implement a multi-feature fusion based learning model, M-sense, that potentially encapsulates syntactic, semantic, and characters' mental state features towards our overall goal of predicting climax and resolution in short personal narratives. Our M-sense model consists of the following components: **Ensemble Sentence Encoders**, which computes per-sentence linguistic & mental state embeddings. **Fusion layer**, which integrates the protagonist's mental state information with the extracted linguistic features. **Story Encoder**, which maps the fused encodings into a sequence of bidirectionally contextualized embeddings. **Interaction layer**, which estimates state transitions across sequential context windows to identify the boundaries. **Classification layer**, which involves linear layers to eventually calculate the label probabilities. ### Ensemble Sentence Encoders In this work, we aim to exploit both linguistic and mental state features for an enhanced model for narratives. **Extracting Linguistic Representations** Pretrained general purpose sentence encoders usually capture a hierarchy of linguistic information such as low-level surface features, syntactic features and high-level semantic features. Given a narrative text with \(L\) sentences \(T=[S_{1},S_{2},...,S_{L}]\), this component outputs hidden representations for sentences \(H_{sents}=[h^{1},h^{2},...,h^{L}]\) using different encoding methods. In our M-sense model, we use a token-level Bert-based sentence encoder (more details in the Appendix). Here, each sentence in the narrative is prepended with a special \([CLS]\) token and appended with a \([SEP]\) token.
We apply both position and segment embeddings and feed them to the pre-trained Bert model as: \[H=[h^{1}_{[CLS]},..,h^{1}_{N_{1}},h^{1}_{[SEP]},..,h^{i}_{[CLS]},..,h^{i}_{N_{i}},..,h^{L}_{[SEP]}]\\ =\textsc{Bert}(T) \tag{1}\] The hidden representation of the \(i^{th}\)\([CLS]\) token from the top Bert layer is extracted as the semantic embedding of the \(i^{th}\) sentence. However, we drop the subscript \([CLS]\) from \(h^{i}_{[CLS]}\) and denote the output semantic embeddings as: \(H^{xSem}_{sents}=[h^{1},h^{2},...,h^{L}]\). **Incorporating Protagonist's Mental Representation** Prior studies have established how the progression of a story is as much a reflection of a sequence of a protagonist's motivation and emotional states as it is the workings of an abstract grammar [1, 2, 3]. We follow a recent work [2] that implements a Nemo model, a variant of a Transformer-based encoder-decoder architecture, to embed and explain characters' (or entities') mental states. We extract the embeddings of intents and emotional reactions of the protagonist for a given sentence in the narrative, conditioning on the prior story context. Figure 4b contains the overview of the Nemo architecture. The computation of mental state embeddings is facilitated by a knowledge enrichment module that consolidates commonsense knowledge about social interactions and an external memory module that tracks entities' mental states. Using prior context \((S_{<i})\), entity \((e_{j})\) and mental state attribute information (\(m\in\{xIntent,xReact\}\) representing intent and emotional reaction respectively), we use the encoder, StoryEntEnc\((\cdot)\), in this trained model to obtain the entity-aware mental state representation of the current sentence \(S_{i}\). The encoding process in the Nemo model is given by: \[(\hat{H}^{i}_{xIntent},\tilde{H}^{i}_{xReact})=\textsc{StoryEntEnc}(S_{i},S_{<i},e_{j},m);\\ \forall m\in\{xIntent,xReact\} \tag{2}\] where \(e_{j}\in\mathcal{E}\) is the entity and \((\hat{H}^{i}_{xIntent},\tilde{H}^{i}_{xReact})\) is the resulting entity-aware intent and emotion representation of the \(i^{th}\) sentence given the story context. In this work, we use the narrator ("I" or "self" in the personal narratives) as the protagonist. We only utilize the hidden representations of the \([CLS]\) token from both \(\hat{H}^{i}_{xIntent}\) and \(\tilde{H}^{i}_{xReact}\) for subsequent processing steps. We denote these intent and emotion representations as: \(H^{xIntent}_{sents}=[\hat{h}^{1},..,\hat{h}^{L}]\) and \(H^{xReact}_{sents}=[\tilde{h}^{1},..,\tilde{h}^{L}]\) respectively. ### Transformer-based Fusion Layer Given multiple sentence-level embeddings, we apply a fusion strategy to derive a unified sentence embedding for our classification task. Let \(h^{ik};\forall k\in\{1,...,K\}\) denote the different per-sentence latent vectors. In our case, \(K=3\) and \(h^{i1}=h^{i};h^{i2}=\hat{h}^{i};h^{i3}=\tilde{h}^{i}\) are the embeddings related to the semantics (\(xSem\)), intents (\(xIntent\)) and reactions (\(xReact\)) of the \(i^{th}\) sentence respectively. Drawing ideas from the literature on multimodal analysis [10], we treat the multiple latent vectors as a sequence of features by first concatenating them together. We introduce a special token \([FUSE]\)5 that accumulates the latent features from different sentence encodings.
The final hidden representation of the \([FUSE]\) token obtained after feeding this sequence to a Transformer layer is the fused output sentence representation: \(h^{i}_{fuse}=\textsc{Tf}(\|^{K}_{k=0}\ h^{ik})\), where \(\textsc{Tf}\) refers to the transformer encoder layer and \(h^{i0}\) (i.e. when \(k=0\)) is set to the trainable \([FUSE]\) vector. Footnote 5: \([FUSE]\) is similar to the commonly used \([CLS]\) token. ### Story Encoder We apply Transformer layers on top of the sentence representations to extract narrative-level features. We refer to this as the _Inter-sentence Transformer_. Intuitively, the transformer layer attends to possibly different sentences in the narrative, and produces context-aware sentence embeddings. This is given as: \[\begin{array}{c}\hat{H}^{l}=LayerNorm(\hat{C}^{l-1}+\textsc{Mha}(\hat{C}^{l-1}))\\ \hat{C}^{l}=LayerNorm(\hat{H}^{l}+\textsc{Ffl}(\hat{H}^{l}))\\ C_{sents}=[c^{1},c^{2},...,c^{L}]=\hat{C}^{n_{L}}\end{array} \tag{3}\] where \(\hat{C}^{0}=PE(H_{sents})\), \(PE\) refers to the positional encoding, \(LayerNorm\) refers to the layer normalization operation, \(\textsc{Mha}\) is the multi-head attention operation and \(\textsc{Ffl}\) is a feed-forward layer [2]. The superscript \(l\) indicates the depth of the stacked Transformer layers. The output from the topmost layer, \(l=n_{L}\), is our contextualized sentence embedding sequence \(C_{sents}\). ### Interaction layer In this layer, we compute the transition of state across sentences by measuring similarity metrics in the embedding space between sequential context windows and concatenating them with the contextualized embeddings. Choosing windows of size \(s\), we compute the left \((c^{i}_{left})\) and right \((c^{i}_{right})\) context information for the \(i^{th}\) sentence by computing the mean sentence embedding within each window. Finally, we get the interaction-feature enhanced context-aware embeddings: \(E_{sents}=[e^{1},e^{2},...,e^{L}]\). ### Classification layer The resulting embeddings are mapped to a \(C\)-dimensional output using a softmax-based classification layer. Here, \(C=3\) is the number of labels. This step is given as: \(\hat{y}_{i}=softmax(f_{s}(e^{i}))\). ## Experiments We conduct experiments to study the following research questions: **RQ1:** How does our model compare with other baselines for identifying climax and resolution in short narratives? **RQ2:** How do various model components contribute to the overall performance? To what extent do mental state representations play a role in our classification task? ### Overall Predictive Performance (RQ1) #### Baselines We compare our model with a set of carefully selected zero-shot (see the Appendix) & supervised baselines, shown as follows. **Random baseline**, which assigns labels (Climax, Resolution or None) to sentences randomly. **Distribution baseline**, which picks the sentences that lie on the peaks of the empirical distributions for climax and resolution in our training set, as explained earlier. **Heuristic baseline**, which labels sentences as climax or resolution based on heuristics: we use the sentence that is the closest semantic neighbour of the post title as the climax, while the last sentence in the narrative is labeled as the resolution (as explained in the Appendix). A recent work [20] has explored surprise as a measure of suspense in narratives. [1]'s surprise is defined as the amount of change from the previous sentence to the current sentence in the narrative (see the Appendix).
We encode the sentences in the story using the following approaches and eventually compute suspense measures for our classification task. **GloVeSim** [2], **Bert** [1], **Use** [1], which compute semantic embeddings using average word vectors (using GloVe) or Transformer-based models. **StoryEnc** [1], which uses the hierarchical RNN based language model to encode sentences in the story. **StoryEntEnc** [21], which encodes the sentences in the story from the protagonist's perspective. Here, we denote intent and emotion embeddings as \((E_{int}=H^{xIntent}_{sents})\) and \((E_{emo}=H^{xReact}_{sents})\) respectively. **Cam and Tam** [1] consist of bidirectional LSTM models, with the latter having an additional interaction layer to compute boundaries between the topics in each story. **M-sense-Fusion**, which is a variant of our M-sense model without mental state embeddings. **M-sense**, which is our complete model incorporating protagonist's mental representation. #### Results Table 3 outlines the results of our evaluation. We report the performance of the simple baselines, of which the distribution baseline turns out to be the strongest. The heuristic baseline performs slightly better than the random baseline. This suggests that the Reddit post title contains relevant signal to predict the climax, while the last-sentence heuristic for resolution is only as good as a random classifier.

Figure 4: (a) Sample annotations of climax (Red) and Resolution (Green) by one of the annotators. (b) Illustration of our M-sense model. Note that \(h^{i1}=h^{i};h^{i2}=\hat{h}^{i};h^{i3}=\tilde{h}^{i}\) relate to semantics \((xSem)\), intents \((xIntent)\) and emotional reactions \((xReact)\) of the \(i^{th}\) sentence respectively.

Applying suspense-based approaches with different sentence embedding methods yields relative improvements over the simple baselines in terms of both evaluation metrics. As expected, sentence-level \(\text{\sc Bert}/\text{\sc Use}\) performs worse than its token-level counterpart. We attribute this variation in performance to the lack of any story context information for computing the latent embedding, thereby affecting the assessment of state changes in the narrative. However, sentence-level \(\text{\sc Use}\)'s ability to produce better similarity estimates gives it a slight advantage over sentence-level \(\text{\sc Bert}\). Notably, sentence representations obtained from models trained on stories \((\text{\sc StoryEnc},\text{\sc StoryEntEnc})\) recorded comparable or improved results relative to other sentence embedding methods. Strikingly, computing surprise using protagonist mental state embeddings exhibits an overall enhanced classification capability. We find that the intent embedding \((E_{int})\) helps achieve the best zero-shot performance for detecting climax. A competitive outcome for resolution is obtained using the protagonist's emotion representation \((E_{emo})\). We compare our complete M-sense model with the best performing prior models such as \(\text{\sc Cam},\text{\sc Tam}\) (Papalampidi, Keller, and Lapata, 2019) applied to similar tasks. As we can see, supervised fine-tuning approaches easily beat the earlier results obtained using zero-shot methods. Finally, our M-sense model achieves an absolute improvement of \(\sim 20.07\%\) and \(\sim 22\%\) for climax and resolution prediction respectively.

\begin{table} \begin{tabular}{l|c c|c c} \hline \multicolumn{1}{c}{**Models**} & \multicolumn{2}{c}{\(F_{1}\uparrow\)} & \multicolumn{2}{c}{\(D\downarrow\)} \\ \hline & **C** & **R** & **C** & **R** \\ \hline Random & 0.196 & 0.143 & 29.05 & 30.57 \\ Distribution & 0.274 & 0.315 & **15.79** & **14.42** \\ Heuristic & 0.217 & 0.147 & 23.74 & 26.82 \\ \hline GloVeSim & 0.312 & 0.344 & 12.06 & 11.65 \\ \(\text{\sc Bert}_{tok}\) & 0.408 & 0.441 & 9.37 & 8.09 \\ \(\text{\sc Bert}_{sent}\) & 0.352 & 0.366 & 10.88 & 9.73 \\ \(\text{\sc Use}_{sent}\) & 0.379 & 0.391 & 10.42 & 9.58 \\ StoryEnc & 0.410 & 0.438 & 8.81 & 7.46 \\ \(E_{int}\) & **0.437** & 0.462 & **8.19** & 6.94 \\ \(E_{emo}\) & 0.429 & **0.475** & 8.43 & **6.67** \\ \hline Tam & \(0.565_{\pm 0.022}\) & \(0.609_{\pm 0.008}\) & \(5.90_{\pm 3.18}\) & \(5.02_{\pm 2.82}\) \\ Cam & \(0.578_{\pm 0.019}\) & \(0.604_{\pm 0.0032}\) & \(6.58_{\pm 4.02}\) & \(5.44_{\pm 3.05}\) \\ M-sense & **0.694\({}^{*}_{\pm 0.0027}\)** & **0.743\({}^{*}_{\pm 0.0015}\)** & **4.15\({}_{\pm 1.84}\)** & **3.20\({}_{\pm 1.06}\)** \\ \hline \end{tabular} \end{table} Table 3: Evaluation Results of different models to detect Climax and Resolution. We report the \(F_{1}\) score per class & percent distance \((D)\) for these models. We use \(\uparrow,\downarrow\) to indicate whether higher/lower values mean better performance respectively. \({}^{*}\) refers to significance \((p<0.05)\) over TAM using a paired T-Test.

### Ablation Study (RQ2) To evaluate the contributions of each component in our M-sense model, we conduct an ablation study using the validation set.
For this study, we compare our best performing M-sense model with alternative modeling choices for each of the components. Table 4 shows the results of our study. We modify one component at a time and report the performance using the \(F_{1}\) metric. This involves either replacing a component (denoted by "w/") or removing a component (denoted by "w/o", i.e., without the component). E.g., "w/ Sentence-level \(\text{\sc Bert}\)" refers to replacing the token-level \(\text{\sc Bert}\) in our M-sense model with sentence-level \(\text{\sc Bert}\) as our sentence encoder; "w/o \(E_{emo}\)" indicates the removal of the protagonist's emotion state embedding from the fusion layer. _Influence of Mental State Embeddings_: In this study, we examine the necessity of a fusion layer and probe the influence of the protagonist's mental state embeddings on our classification task. Notably, the results in Table 4 validate the benefits of introducing the fusion layer and demonstrate the relative performance gains obtained with intent and emotion embeddings. In the absence of a fusion layer, we observe that the performance drop is \(\sim 11\%\) and \(\sim 13\%\) for predicting climax and resolution respectively. The loss of the protagonist's intent information impacts the climax prediction more. This is analogous to the effect emotion information has on resolution prediction.

\begin{table} \begin{tabular}{l|c c} \hline \multicolumn{1}{c|}{**Model Variants**} & \multicolumn{2}{c}{\(F_{1}\uparrow\)} \\ \hline \hline & **C** & **R** \\ \hline **M-sense** & **0.688** & **0.738** \\ \hline **Sentence Encoder Variants** & & \\ w/ Sentence-level \(\text{\sc Bert}\) & 0.665 & 0.709 \\ w/ Sentence-level \(\text{\sc Use}\) & 0.677 & 0.726 \\ \hline **Story Encoder Variants** & & \\ w/o Story Encoder & 0.620 & 0.653 \\ w/ Inter-Sentence \(\text{\sc Rnn}\) & 0.659 & 0.705 \\ \hline **Interaction Layer Variant** & & \\ w/o Interaction Layer & 0.654 & 0.716 \\ \hline **Fusion Layer Variants** & & \\ w/o Fusion Layer & 0.614 & 0.640 \\ w/o \(E_{int}\) & 0.638 & 0.703 \\ w/o \(E_{emo}\) & 0.652 & 0.687 \\ \hline \hline \end{tabular} \end{table} Table 4: We report the \(F_{1}\) score per class with non-default modeling choices for each component of our model.

### Analysis and Discussion _Effect of Story Length_: Here, we compare the performance of different sentence encoders with and without the fusion layer for detecting climax in narratives of varying length. Figure 5 shows the results of this analysis. We observe that the token-level \(\text{\sc Bert}\) outperforms the sentence-level \(\text{\sc Bert}\) and \(\text{\sc Use}\) encoders for narratives containing up to \(13-14\) sentences, but the performance gradually degrades beyond 14 sentences. The sentence-level \(\text{\sc Use}\) encoder produces stable and relatively better outcomes for longer narratives (story length \(>14\)). With the introduction of mental state representations through the fusion layer, the \(F_{1}\) score improved significantly irrespective of the sentence encoder used. _Error Analysis_: In order to estimate why our model augmented with mental state representations performs better, we conduct an error analysis between our full M-sense model and the model without mental representation fusion (M-sense\(-Fusion\)). For those narratives where the latter model fails to predict correctly, we gauge the patterns emerging out of the
following analysis: (a) using VADER6 (Hutto and Gilbert 2014), a normalized, weighted composite sentiment score is computed for each sentence in the narrative, and (b) using state classification (Rashkin et al. 2018; Vijayaraghavan and Roy 2021), we assess the Maslow motivation or intent categories associated with sentences predicted as climax or resolution in the narrative and analyze them for any pattern related to the ground-truth climax/resolution sentences. For predicting resolution, M-sense\(-Fusion\) makes \(\sim 28\%\) more mistakes than the M-sense model for narratives with homogeneous endings (i.e., narratives having same-sentiment sentences in the neighbourhood of the resolution, close to the end of the story); the M-sense\(-Fusion\) model is unable to discern clearly and predicts a different sentence as the resolution. Based on our analysis (b), there is a clear pattern that M-sense gains significantly over M-sense\(-Fusion\) when the ground-truth climax sentences belong to the "Esteem" and "Love/Belonging" categories. Our attention analysis results are shown in the Appendix. Footnote 6: [https://github.com/cjhutto/vaderSentiment](https://github.com/cjhutto/vaderSentiment) ## Task: Modeling Movie Turning Points Given that our work is primarily focused on modeling narrative structure in personal narratives, we analyze how such a model can be applied to identifying climax and resolution in movie plot synopses. Papalampidi et al. (2019) introduced the Tripod dataset, a corpus of movie synopses annotated with turning points (TPs). By testing our model on this dataset, we evaluate its performance on an out-of-domain dataset. The dataset identifies five major turning points in the movie synopses and screenplays, referring to them as critical events that prevent the narrative from drifting away. By their definitions of each of these categories (Papalampidi et al. 2019), TP4 and TP5 align clearly with our usage of climax and resolution from prior narrative theories.
Due to this alignment, it is relevant to use our model to predict these two categories in the Tripod dataset. However, we focus on the movie plot synopses in this work and use the cast information collected from IMDb as a part of this dataset. We first apply our M-sense model trained on our Stories corpus directly and evaluate its zero-shot performance (referred to as Zs). We assume the protagonist in the movie to be the top character from the IMDb cast information. Though this may not always be true, it measures how our model fares on this dataset for predicting TP4 and TP5. Further, we use a sentence-level Use-based sentence encoder, as some of the wiki plot synopses are longer than what can be accommodated by our token-level Bert model. Additionally, we also fine-tune our model with the training set of the Tripod dataset. This is denoted by M-sense\((FT)\).
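To make the \([FUSE]\)-token fusion step concrete, the following is a minimal PyTorch-style sketch of fusing the \(K=3\) per-sentence embeddings (semantic, intent, reaction) with a single Transformer encoder layer. The dimensionality, head count, and the assumption that all three embeddings share one dimension are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FuseLayer(nn.Module):
    """Fuse K per-sentence feature vectors via a learnable [FUSE] token."""
    def __init__(self, d_model=768, nhead=8):
        super().__init__()
        self.fuse_token = nn.Parameter(torch.randn(1, 1, d_model))
        self.encoder = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)

    def forward(self, feats):
        # feats: (batch, K, d_model), e.g. K=3 stacked embeddings
        # h^{i1} (xSem), h^{i2} (xIntent), h^{i3} (xReact).
        fuse = self.fuse_token.expand(feats.size(0), -1, -1)
        seq = torch.cat([fuse, feats], dim=1)  # (batch, K+1, d_model)
        out = self.encoder(seq)
        return out[:, 0]  # hidden state at the [FUSE] position

# Toy usage: fuse three 768-d embeddings for a batch of 4 sentences.
layer = FuseLayer()
h_fused = layer(torch.randn(4, 3, 768))
print(h_fused.shape)  # torch.Size([4, 768])
```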
2303.09731
Exorcising ''Wraith'': Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks
Automated driving systems rely on 3D object detectors to recognize possible obstacles from LiDAR point clouds. However, recent works show the adversary can forge non-existent cars in the prediction results with a few fake points (i.e., appearing attack). By removing statistical outliers, existing defenses are however designed for specific attacks or biased by predefined heuristic rules. Towards more comprehensive mitigation, we first systematically inspect the mechanism of recent appearing attacks: their common weaknesses are observed in crafting fake obstacles which (i) have obvious differences in the local parts compared with real obstacles and (ii) violate the physical relation between depth and point density. In this paper, we propose a novel plug-and-play defensive module which works alongside a trained LiDAR-based object detector to eliminate forged obstacles where a major proportion of local parts have low objectness, i.e., to what degree it belongs to a real object. At the core of our module is a local objectness predictor, which explicitly incorporates the depth information to model the relation between depth and point density, and predicts each local part of an obstacle with an objectness score. Extensive experiments show that our proposed defense eliminates at least 70% of the cars forged by three known appearing attacks in most cases, while, for the best previous defense, fewer than 30% of the forged cars are eliminated. Meanwhile, under the same circumstances, our defense incurs less overhead for AP/precision on cars compared with existing defenses. Furthermore, we validate the effectiveness of our proposed defense on simulation-based closed-loop control driving tests in the open-source system of Baidu's Apollo.
Qifan Xiao, Xudong Pan, Yifan Lu, Mi Zhang, Jiarun Dai, Min Yang
2023-03-17T02:20:47Z
http://arxiv.org/abs/2303.09731v1
# Exorcising "Wraith": Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks ###### Abstract Automated driving systems rely on 3D object detectors to recognize possible obstacles from LiDAR point clouds. However, recent works show the adversary can forge non-existent cars in the prediction results with a few fake points (i.e., _appearing attack_). By removing statistical outliers, existing defenses are however designed for specific attacks or biased by predefined heuristic rules. Towards more comprehensive mitigation, we first systematically inspect the mechanism of recent appearing attacks: their common weaknesses are observed in crafting fake obstacles which (i) have obvious differences in the local parts compared with real obstacles and (ii) violate the physical relation between depth and point density. In this paper, we propose a novel plug-and-play defensive module which works alongside a trained LiDAR-based object detector to eliminate forged obstacles where a major proportion of local parts have low _objectness_, i.e., to what degree it belongs to a real object. At the core of our module is a _local objectness predictor_, which explicitly incorporates the depth information to model the relation between depth and point density, and predicts each local part of an obstacle with an _objectness_ score. Extensive experiments show that our proposed defense eliminates at least 70% of the cars forged by three known appearing attacks in most cases, while, for the best previous defense, fewer than 30% of the forged cars are eliminated. Meanwhile, under the same circumstances, our defense incurs less overhead for AP/precision on cars compared with existing defenses. Furthermore, we validate the effectiveness of our proposed defense on simulation-based closed-loop control driving tests in the open-source system of Baidu's Apollo. ## 1 Introduction In automated driving systems (ADS), multiple deep neural networks (DNNs) are jointly deployed to provide key functionalities of localization, perception and planning, stimulating the recent development of automated transportation [36, 8, 33]. The robustness of each DNN module is of key importance to the security of the whole ADS. A typical example is the _perception_ module, which relies on a vector of _object detectors_, based on multiple sources like cameras and LiDARs [7], to predict the categories and locations of the obstacles around the ADS [32, 12]. As LiDAR point clouds (PCs) contain richer location information than the images from cameras, most commercial ADS, including Google's Waymo One [5, 6] and Baidu's Apollo [1, 2], set LiDARs as the main sensors and rely on the detection results of _LiDAR-based object detectors_ for obstacle perception [37, 38, 27, 32]. Despite taking PCs instead of images as the model input, LiDAR-based object detectors still share the common vulnerability to _adversarial examples_ [42, 48, 12]. In general, the attacker can spoof the LiDAR sensors with a limited number of perturbed/crafted points to mislead the detector's prediction. As a popular attack class, the _appearing attack_ aims at forging non-existent cars in the detection results to cause traffic jams and emergency braking [39, 11] (Fig. 1). Despite the severity, existing defenses [41, 15, 24] either have strong prior assumptions on the undergoing attacks, or are biased by predefined heuristic rules, insufficient for handling complex driving scenarios (§3.3).
**Our Work.** In this paper, we propose a novel plug-and-play defense for 3D object detectors, which, instead of constructing a more robust model, adopts a _local objectness predictor_ (LOP) module to detect and eliminate forged obstacles from the original detection results. In general, our LOP is designed as a point-wise PC classifier [29, 34, 35, 45] which learns to predict an _objectness_ score for each local part of a detected object, i.e., the confidence of whether the local part belongs to a real object.

Figure 1: Appearing attacks on LiDAR-based object detectors in ADS can cause severe traffic accidents by forging cars.

By systematizing recent appearing attacks, we develop the following defensive insights: 1. Recent appearing attacks focus on increasing the confidence score of a fake detection result without considering the local difference between a real and a forged obstacle. Although an increased confidence score raises the likelihood that a non-existent obstacle is detected by a 3D object detector, most appearing attacks leave the fake and the real obstacles locally distinguishable when inspected at the granularity of pillars or voxels (§4.1). 2. Constrained by the physical capability of the attack apparatus, appearing attacks are usually unable to forge a fake obstacle without violating some physical laws, especially the inimitable relation between the depth and the point density of real obstacles [14]. To pose real-world threats, the forged obstacles have to be close to the victim ADS, because otherwise they can be easily bypassed after the victim's re-routing. Yet, constrained by the attack apparatus (e.g., a laser transmitter [41]), the attacker can only forge a limited number of points near the victim during one scan of the LiDAR, which can hardly reach the normal point density of a real car at a close distance (§4.2). Concurrent to our defense, Hau et al. [24] also notice the importance of physical laws in detecting forged obstacles, and present a set of hand-crafted rules to eliminate the anomaly. Our work goes a step further by showing that stronger robustness can be achieved if we exploit learning-based techniques to model the complicated physical laws. In fact, modeling the relation between the depth and the point density is rather challenging with hand-crafted rules. For example, although most real cars with smaller depth tend to have larger point density, real cars occluded by others may have smaller depth and smaller point density simultaneously (Fig.4). To address this challenge, we implement the LOP as a DNN-based point-wise PC classification model and explicitly incorporate the depth information of each point into its feature vector. This substantially improves the modeling capability compared with using the original input features for statistical outlier detection. Moreover, another technical challenge is that no explicit annotation is available for supervising the training of the LOP in standard 3D object detection datasets. Inspired by a recent observation that a single part of the input already contains rich semantic information for a PC model to predict its related object's category and location [15], we construct a self-supervised learning task where the LOP learns to predict whether a pillar intersects with any bounding box of real objects based on the features of its interior points.
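To make this self-supervised task concrete, below is a minimal sketch of the label-generation step, using the 1 m × 1 m pillar footprint from §4.1.2 and the IoU threshold \(T_{\text{IoU}}\) reported in §5.1. For simplicity it assumes axis-aligned ground-truth footprints, whereas KITTI boxes are generally rotated, so a faithful implementation would first rotate each box into axis alignment.

```python
import numpy as np

def pillar_objectness_labels(points, gt_boxes_xy, pillar_size=1.0, iou_thresh=1e-6):
    """Annotate each occupied pillar with a 0/1 objectness label.

    points:      (n, 4) array of (x, y, z, intensity) from one LiDAR scan.
    gt_boxes_xy: iterable of axis-aligned ground-truth footprints (x1, y1, x2, y2).
                 (Axis alignment is a simplifying assumption for this sketch.)
    """
    labels = {}
    # Map every point to the (i, j) index of its pillar on the x-y plane.
    pillar_ids = np.floor(points[:, :2] / pillar_size).astype(int)
    for i, j in {tuple(p) for p in pillar_ids}:
        px1, py1 = i * pillar_size, j * pillar_size
        px2, py2 = px1 + pillar_size, py1 + pillar_size
        best_iou = 0.0
        for (x1, y1, x2, y2) in gt_boxes_xy:
            ix = max(0.0, min(px2, x2) - max(px1, x1))
            iy = max(0.0, min(py2, y2) - max(py1, y1))
            inter = ix * iy
            union = (px2 - px1) * (py2 - py1) + (x2 - x1) * (y2 - y1) - inter
            best_iou = max(best_iou, inter / union)
        # A pillar overlapping any real object is a positive sample.
        labels[(i, j)] = int(best_iou > iou_thresh)
    return labels
```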
During the detection, we first divide the input 3D space into equal-sized pillars, then the LOP predicts an objectness score for each pillar intersected with a predicted object's bounding box. By majority voting on the local objectness predictions, our defense determines whether the object is real or fake (§4.3). **Our Contributions.** In summary, the key contributions of this work are as follows: \(\bullet\) We systematize the limitations of recent appearing attacks in violating physical invariants, and propose a learning-based defense which detects forged obstacles with an anomalous relation between depth and point density for the mainstream LiDAR-based object detectors. \(\bullet\) We propose the design of our local objectness predictor (LOP), which learns to predict the confidence of whether a local object part belongs to a real object, and allows plug-and-play integration with different defense targets for enhancing robustness against popular appearing attacks. \(\bullet\) Extensive evaluation on mainstream 3D detectors (i.e., PointPillars [27], PointRCNN [38] and PV-RCNN [37]), on the KITTI dataset [19] and on real-world PC data we collected from a driving test of the D-KIT Advanced with Velodyne-128 [4], validates the advantages of our proposed defense under three popular attacks. For example, with the same-level trade-off in model utility, our proposed defense eliminates at least 70% of the cars forged by most appearing attacks, while the best baseline method eliminates fewer than 30%. \(\bullet\) Moreover, we empirically validate that the effectiveness of our proposed LOP is robust to the architecture design of the LOP and the type of the defense target (including fusion models), which further implies that our defense is more general-purpose than existing defenses. Besides, we also provide a preliminary study on the robustness of the LOP against adaptive attacks. \(\bullet\) We further implement and evaluate the effectiveness of the LOP in Apollo 6.0.0, an end-to-end open-source self-driving system, with closed-loop control in LGSVL simulation tests, which validates the system-level usefulness of our proposed defense in both benign and adversarial scenarios. ## 2 Background **Basics of LiDAR.** As one of the main sensors deployed in an automated driving system (ADS), a LiDAR (Light Detection and Ranging) scans the surrounding environment and generates a point cloud (PC) \(X=\{(x_{i},y_{i},z_{i},int_{i})\}\in R^{n\times 4}\), consisting of \(n\) points with \((x_{i},y_{i},z_{i})\) as the \(i\)-th point's location and \(int_{i}\) as the \(i\)-th point's intensity, during each detection [7, 39]. Technically, the LiDAR first emits laser rays consecutively in both horizontal and vertical directions, then captures the reflected lasers, records their time of flight and light intensity, and further computes the depth and 3D coordinates of the points related to these reflected lasers. Finally, the LiDAR collects this information to generate the raw PC, which represents the object surfaces in the surrounding environment, and sends this raw PC to the ADS for downstream processing. **3D Object Detectors.** DNN-based 3D object detectors empower modern ADS to perceive and detect objects in the surrounding environment (i.e., _obstacle perception_). Technically, a 3D object detector usually takes a PC as input and returns, for each perceived object, its category and _bounding box_, a rectangle or cuboid which bounds the detected object to represent its location in the PC [21].
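As a concrete illustration of this representation, the following tiny snippet (with made-up coordinates) builds a PC tensor of the shape described above and recomputes each point's depth from its coordinates:

```python
import numpy as np

# A LiDAR scan as described above: n points, each (x, y, z, intensity).
pc = np.array([
    [12.3, -0.8, 0.4, 0.27],
    [ 5.1,  2.0, 0.1, 0.55],
], dtype=np.float32)                      # shape (n, 4)

# Per-point depth (range), measured via time of flight in a real sensor
# but recoverable from the coordinates: dep = sqrt(x^2 + y^2 + z^2).
depth = np.linalg.norm(pc[:, :3], axis=1)
print(depth)                              # approx. [12.33  5.48]
```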
In most cases, 3D object detectors can be regarded as the combination of three modules: the preprocessing, the backbone and the prediction modules. A typical preprocessing module first divides the points of the PC into a number of sets (e.g., voxels or pillars) based on specific rules and then calculates their statistical information [52, 27], or uses DNN models, such as PointNet [34] or DGCNN [45], to generate the feature vectors for each point [38]. Then, the backbone module, implemented with 2D/3D convolutional neural networks (CNNs) [26, 28], extracts the PC's features and generates the global feature map. Finally, the prediction module in one-stage 3D object detectors like VoxelNet [52] and PointPillars [27] directly predicts the bounding box and category of each obstacle based on the global feature map. In contrast, in two-stage 3D object detectors like PointRCNN [38] and PV-RCNN [37], the prediction module first predicts proposal bounding boxes of objects based on the global feature map and generates a local feature map for each object by combining the global feature map with the related proposal bounding box; at the second stage, it predicts the final bounding box and category of each obstacle based on each local feature map. **Adversarial Example.** In general, given a machine learning model \(F\) and a normal sample \(x\) with label \(y\), an adversarial example \(x^{\prime}\) is generated from \(x\) by adding a slight perturbation to mislead the victim model's prediction, while causing no modification to either the model's architecture or its parameters [42, 49, 23]. According to the attack goal, adversarial examples can be further categorized as untargeted or targeted. By definition, an untargeted attack aims at misleading the victim model into \(F(x^{\prime})\neq y\), while a targeted attack aims at misleading the victim model into \(F(x^{\prime})=y^{\prime}\), where \(y^{\prime}\) is a target label specified by the attacker. According to [13], the targeted adversarial attack can be represented as the optimization problem: \[\operatorname*{argmin}_{x^{\prime}}\;\|x-x^{\prime}\|_{p}\qquad\text{s.t.}\quad F(x^{\prime})=y^{\prime}\text{ and }x^{\prime}\in X \tag{1}\] where the objective \(\min\|x-x^{\prime}\|_{p}\) restricts the region of perturbation (i.e., the attack budget) and \(X\) denotes the input space. In the context of ADS, to cause severe safety issues, several recent adversarial attacks focus on conducting LiDAR spoofing to forge a non-existent object in the detection results of a LiDAR-based object detector; these are called _appearing attacks_, on which we provide a detailed review in Section 3.2. ## 3 Security Settings ### Threat Model \(\bullet\)**Attacker's Goal**. In general, the direct goal of an appearing attack is to forge fake cars in the detection results of the LiDAR-based object detector in ADS. To refine the attack goal above, we first analyze the following two attack scenarios of an appearing attack. **Attack Scenario 1.**_(On the Highways)_ As shown in the top part (a) of Fig.1, an attacker can spoof the LiDAR of the victim ADS when it passes by. Upon detecting a forged car immediately in front, the victim will make a stop decision and decrease its speed to 0 km/h within seconds. The unpredictable emergency braking may leave no reaction time for other vehicles behind. This may lead to a rear-end collision or even more severe traffic accidents.
**Attack Scenario 2.**_(At the Traffic Lights)_ Similarly, as shown in the bottom part (b) of Fig.1, the attacker conducts LiDAR spoofing when the victim ADS stops at a red light. With a fake car forged ahead, the victim will keep immobile even after the traffic signal turns green, blocking other vehicles behind it and causing a traffic jam. As the two attack scenarios show, to cause a real-world threat, the forged cars are required to be not only recognized by LiDAR-based object detectors with sufficiently large confidence scores, but also close enough to result in the re-routing of the victim. Therefore, we further refine the attack goal to expect the cars to be forged at a close distance to the victim. Specifically, in this work we require a forged car to be within \(5\sim 10\) meters of the victim to pose a sufficient threat [41, 11]. \(\bullet\)**Attacker's Capability.** Following the threat model in recent attacks [41, 11], our defense mainly aims at mitigating an attacker satisfying the following assumptions: **Assumption 1.**_(Prior Knowledge)_ The attacker knows the architecture and the parameters of the LiDAR-based object detector deployed on the victim ADS (i.e., _white-box_). **Assumption 2.**_(Number of Added Points)_ The attacker can inject at most 200 points (according to [41]) into the input PC of the victim 3D object detector in one scan of the LiDAR. **Assumption 3.**_(Features of Added Points)_ The attacker is allowed to inject points at any location and with arbitrary light intensity, an assumption imposed for a more generic defense. \(\bullet\)**Attack Process.** Before the attack starts, the attacker deploys physical equipment to receive the lasers emitted by the victim ADS's LiDAR and shoot lasers back to the LiDAR. The LiDAR-based 3D object detector of the victim then takes the infected PC and predicts a non-existent car. Finally, the victim re-routes to avoid the non-existent car, which may lead to severe collision accidents. ### Recent Appearing Attacks Next, we review the recent appearing attacks on LiDAR-based object detectors. In one of the earliest works, Shin et al. propose a spoofing attack which randomly injects points into a certain area regardless of the LiDAR-based object detector of the victim ADS, which is sufficient to forge a non-existent car [39]. Inspired by Shin's work, Cao et al. standardize the attack pipeline of adversarial spoofing attacks, and propose an appearing attack, Adv-LiDAR, which aims at breaking Apollo's detection system [11]. By modeling the preprocessing and postprocessing modules in Apollo's LiDAR-based object detector, Adv-LiDAR successfully uses traditional adversarial attack techniques to forge non-existent cars. However, Sun et al. later prove that other 3D object detectors such as PointPillars and PointRCNN are not affected by the adversarial samples generated by Adv-LiDAR, and then suggest a more general black-box appearing attack based on the intrinsic physical nature of LiDARs [41]. Another attack, by Yang et al., shares the same attack goal as the above appearing attacks but uses a different attack process and physical equipment [48]. Specifically, they use a physical object which is specially designed to trick the 3D object detector into predicting it as a car with a falsely enlarged bounding box, thereby fabricating a non-existent part of this object in the model's perception. For completeness, we also include this attack among the appearing attacks in our experiments.
### Previous Defenses \(\bullet\)**Rationale behind Defenses by Elimination**. To eliminate the forged vehicles crafted by appearing attacks, a defense would unavoidably remove a small ratio of detected real objects from the prediction of 3D object detectors. However, we argue this would hardly cause as substantial damage to the ADS as the mistake of detecting forged vehicles. This is mainly because: (i) As described in the attacker's goal, obstacles which appear near the ADS have the most decisive effect on the vehicle's future planning. Therefore, the incorrect elimination of a real obstacle far from the vehicle may have limited influence on the decision-making of the ADS [41]. (ii) In ADS, the _multi-object tracking_ (MOT) module which follows the perception module takes the predictions from the LiDAR-based object detectors as input, and maintains and predicts the trajectories of nearby objects [17, 31, 46]. By design, MOT usually creates an object trajectory for a newly predicted object which is constantly detected for 6 frames, while it removes an overdue object trajectory which remains unmatched with any predicted objects for 60 consecutive frames in a common visual perception system [53] of 30 FPS. This mechanism guarantees that it is much easier for an ADS to create a fake object in its perception due to a successful appearing attack than to forget a real object due to the occasional misprediction of the LiDAR-based detector itself or the incorrect elimination of some real objects by such a defense. Therefore, it is reasonable to tolerate a small ratio of false alarms from defenses by elimination and to recognize the importance of defending against appearing attacks by slightly trading off the recall of LiDAR-based object detectors. However, the existing defense methods that could counter appearing attacks retain limitations in their design, which makes it hard for them to maintain good performance across different scenes. To make this clear, we further analyze these defense methods and discuss their limitations accordingly. \(\bullet\)**Limitations of Universal Defenses.** SRS (Simple Random Sampling) and SOR (Statistical Outlier Removal) are two universal defense methods for PC models. Both are attack-agnostic and counter adversarial attacks by removing suspect points from the input PC. **(1) SRS.** SRS is in essence a random method which uses no auxiliary information [51]. Formally speaking, given a raw input PC \(X\) with \(n\) points, SRS randomly samples \(M(M<n)\) points from \(X\) by \(P(X)=\{\mathbb{I}_{x}|x\in X,\ \mathbb{I}_{x}\sim Bernoulli(0.5)\},\) where \(\mathbb{I}_{x}\) indicates the existence of each point \(x\) in \(X\). **(2) SOR.** For a raw input PC \(X\), SOR computes the average of the \(k\)-nearest neighbors' (kNN) distances for each point in \(X\), and computes the mean \(\mu\) and the standard deviation \(\sigma\) of these distances. Then, it recognizes those points which fall outside the range of \([\mu-\alpha\cdot\sigma,\mu+\alpha\cdot\sigma]\) as noise and removes them from \(X\), where \(\alpha=1.1\) is its hyper-parameter [51] (a runnable sketch of SOR follows at the end of this subsection). \(\bullet\)**Limitations of Specific Defenses.** CARLO (oCclusion-Aware hieRarchy anomaLy detectiOn), SVF (Sequential View Fusion) and Shadow-Catcher are three specific heuristic defense methods for 3D object detectors. All three target the black-box appearing attack proposed in Sun's work [41], and perform defense by removing suspect points from the input PC or deleting suspect objects from the final prediction.
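Before turning to the specific defenses, here is a minimal sketch of the SOR baseline described above (SRS amounts to a simple Bernoulli subsampling of the rows); the scipy kNN implementation is our choice for illustration, not necessarily the one used in [51].

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=2, alpha=1.1):
    """SOR baseline: drop points whose mean kNN distance is anomalous.

    points: (n, >=3) array; only the xyz columns are used for neighbor search.
    """
    tree = cKDTree(points[:, :3])
    # Query k+1 neighbors because the nearest neighbor of a point is itself.
    dists, _ = tree.query(points[:, :3], k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    mu, sigma = mean_knn.mean(), mean_knn.std()
    # Keep only points inside [mu - alpha*sigma, mu + alpha*sigma].
    keep = (mean_knn >= mu - alpha * sigma) & (mean_knn <= mu + alpha * sigma)
    return points[keep]
```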
**(1) CARLO.** CARLO is a heuristic defense algorithm proposed by Sun et al. to detect the cars forged by their black-box appearing attack [41]. For each object predicted by the 3D object detector, CARLO computes an anomalous ratio \(r\) in one of the following two ways: (1) **FSD (Free Space Detection)**, which defines \(r=\sum_{c\in S^{c}}FC(c)/|S^{c}|\), where \(S^{c}\) is the set of all the cells in this object's bounding box, and \(FC(c)\) is a \(0/1\) function indicating whether there are input points in the cell \(c\); and (2) **LPD (Laser Penetration Detection)**, which defines \(r=|S\downarrow^{p}|/|S\downarrow^{p}\bigcup S^{p}\bigcup S\uparrow^{p}|\), where the superscript \(p\) indicates that the corresponding set is composed of points. Specifically, \(S\downarrow^{p}\) contains the input points in the space behind this object, \(S^{p}\) contains the input points inside this object's bounding box, and \(S\uparrow^{p}\) contains the input points in the space between this object and the LiDAR. Then, CARLO compares each \(r\) with a fixed threshold \(R\). For those objects with \(r>R\), CARLO recognizes them as fake objects and erases them from the prediction. **(2) SVF.** Similarly, SVF is another defense algorithm suggested by Sun et al., but its core idea is more similar to SOR: removing outliers from the raw input PC. As an extra end-to-end network, SVF turns the raw input PC into a front-view (FV) representation and uses LU-Net [9], a PC segmenter, to calculate a segmentation score for each point. SVF then concatenates these scores with the input features of their related points to regenerate the input PC, and passes this augmented PC to the 3D object detector as input. **(3) Shadow-Catcher.** As concurrent work to ours, Shadow-Catcher [24] also exploits physical laws to improve the robustness of the 3D object detectors in self-driving systems. However, Shadow-Catcher is mainly based on hand-crafted rules to determine the forged obstacles, while our work proposes the first learning-based defense scheme that models the complicated physical relation between the depth and density of real objects for defensive purposes. Specifically, Shadow-Catcher computes an anomaly score for each detected object based on the distances of the points inside its bounding box to four key lines related to its bounding box, then compares this score with a preset threshold to determine whether the perceived obstacle is forged. As a final remark, most of the previous defenses were initially designed for mitigating specific appearing attacks. In this sense, the performance of previous defenses against each popular attack has not been examined in a systematic way, which we accomplish in our evaluation. ## 4 Defense with Local Objectness Predictor **Methodology Overview.** As shown in Fig.2, the pipeline of our proposed defense can be divided into three stages: training sample generation, objectness predictor construction and fake object elimination. In the training sample generation stage, we construct a learning task for our local objectness predictor (LOP), which consists of pairs of the points inside a small local pillar and the corresponding objectness label, annotated in a fully self-supervised way which requires nothing beyond a standard training dataset for LiDAR-based object detectors. Then, in the objectness predictor construction stage, we train the LOP to predict the objectness score for each pillar, i.e., the confidence of whether a local part belongs to a real object.
Finally, in the fake object elimination stage, we use our trained LOP to predict an objectness score for each small pillar intersected with the bounding boxes of the predicted objects, and determine whether these objects are real by majority voting. Below, we elaborate on the insights and the technical designs in each stage of our defense. ### Training Sample Generation #### 4.1.1 Insight: Global Objectness \(\neq\) Local Objectness By inspecting the design of recent appearing attacks, we observe that most attacks focus on increasing the confidence scores of the forged obstacles, which represent the likelihood that the detected objects are real. Equivalently, according to our definition of objectness, the confidence score can, to some extent, be interpreted as a _global objectness score_ of the predicted obstacle. As most LiDAR-based object detectors by design keep those objects with higher confidence, or global objectness, scores in their final predictions, increasing the confidence score is the most direct way for the attacker to successfully forge a non-existent obstacle. However, increasing the global objectness score of a forged obstacle does not necessarily lead to a higher objectness score for each local part. With the following experiments, we observe that most of the recent appearing attacks have ignored the local difference, i.e., the spatial distance between two corresponding point subsets, of a real and a forged obstacle, which leaves an exploitable trace for the defender. \(\bullet\)**A Pilot Study.** As described in Section 3, the mainstream appearing attacks all focus on forging cars, so we mainly validate the above observation on cars. We first randomly sample one real car from the training set of KITTI [19] and \(1,000\) forged cars crafted by three mainstream appearing attacks [41, 48, 11] (described later in Section 5). Next, we translate the interior points of each ground-truth car and each forged car into its local coordinate system, rotated by the heading angle to an identical orientation. Then, for the point set \(S\) of the real car and \(S^{\prime}\) of each forged car, we measure the distance between them using the chamfer distance [47] and the average squared L2 distance of kNN as metrics. Specifically, we collect the points belonging to the real car as \(S_{R}\). For each forged car, we first collect the points belonging to it as \(S_{F}\). Then, we split the point space into equal-sized pillars \(p_{j}\) (as in Fig.2), and generate a point subset \(S_{F,j}=S_{F}\cap p_{j}\) for each pillar. Finally, we calculate the global difference and local difference as follows: \[D_{\text{global}}=D(S_{R},S_{F}) \tag{2}\] \[D_{\text{avg\_local}}=\frac{1}{|\{S_{F,j}\}|}\sum D(S_{R},S_{F,j}) \tag{3}\] \[D_{\text{half\_max\_local}}=\frac{1}{N_{\text{half}}}\text{Top}_{N_{\text{half}}}(\{D(S_{R},S_{F,j})\}) \tag{4}\] where \(S_{R},S_{F}\) denote the two specific point sets defined above, \(S_{F,j}\) denotes the point sets gathered from the separated pillars of \(S_{F}\), \(D\in\{D_{C},D_{k}\}\) denotes the metric that we use to measure the distance between two point sets, \(N_{\text{half}}=\lceil|\{S_{F,j}\}|/2\rceil\) is half of the number of point subsets \(S_{F,j}\), and \(\text{Top}_{k}(V)\) denotes the sum of the largest \(k\) values in \(V\).
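A minimal sketch of how Eqs. (2)-(4) can be computed is given below, using one common symmetric form of the chamfer distance; the pillar footprint is assumed to match the 1 m grid of §4.1.2.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer(a, b):
    """One common symmetric chamfer distance between point sets (n,3), (m,3)."""
    da = cKDTree(b).query(a)[0]   # each point in a -> nearest point in b
    db = cKDTree(a).query(b)[0]
    return da.mean() + db.mean()

def local_differences(S_R, S_F, pillar_size=1.0):
    """Eqs. (2)-(4): global, average-local and half-max-local differences."""
    D_global = chamfer(S_R, S_F)
    # Split the forged car's points into pillars on the x-y plane.
    ids = np.floor(S_F[:, :2] / pillar_size).astype(int)
    local = [chamfer(S_R, S_F[(ids == p).all(axis=1)])
             for p in np.unique(ids, axis=0)]
    local = np.sort(local)[::-1]                  # descending
    n_half = int(np.ceil(len(local) / 2))
    # Eq. (4) is the mean of the largest half of the local distances.
    return D_global, np.mean(local), np.mean(local[:n_half])
```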
As shown in Fig.3, the local differences of the forged cars are usually larger than the global differences in both the chamfer distance and the average squared L2 distance of kNN (with all p-values less than \(1.0\times 10^{-11}\) in Kolmogorov-Smirnov tests). We further compare the local difference and global difference for each forged car, and find that if we choose \(D_{\text{avg\_local}}\) as the local difference, \(55.7\%\) of the forged cars have a larger local difference on the chamfer distance metric, and \(54.5\%\) have a larger local difference on the average squared L2 distance of kNN metric. If we choose \(D_{\text{half\_max\_local}}\) as the local difference, \(87.4\%\) of the forged cars have a larger local difference on the chamfer distance metric, and \(87.5\%\) on the average squared L2 distance of kNN metric. Similar results are observed when we repeat the experiment above on several other real cars randomly sampled from the training set of KITTI. In summary, the experimental results imply that _the local features do provide the defender with a trace to distinguish between real and forged cars_. In fact, our insight also conforms to a recent work on enhancing the precision of LiDAR-based object detectors [15], which suggests that, with an appropriate strategy of spatial division, even a small part of a real object contains rich enough spatial and semantic information to predict the category, bounding box and confidence score of its related object. #### 4.1.2 Technical Designs To facilitate the modeling of local object features, in the first stage we prepare a dataset \(D_{\text{obj}}\) consisting of pairs of the points in each pillar and an automatically annotated objectness label, based on a standard training dataset for LiDAR-based object detectors (e.g., KITTI [19]). Formally, we denote the training dataset as \(D=\{(X_{t},\{\mathbf{b}_{k}\}_{k=1}^{N_{t}})\}_{t=1}^{N}\), where \(N_{t}\) denotes the number of ground-truth objects in the PC \(X_{t}\), and \(\mathbf{b}_{k}\) denotes the bounding box of the \(k\)-th ground-truth object in \(X_{t}\). First, we split the full \(L\times W\times H\) 3D region which covers the input point clouds into a number of pillars \(\{p_{j}\}\) of equal size \(l\times w\times H\), where \(l=1m,w=1m\) in our implementation. Then for each pillar \(p_{j}\), we generate an input-output pair, represented as \((pc_{j},obj_{j})\), as follows: **Generating Input \(pc_{j}\).** We directly collect the interior points of each pillar from the input PC \(X_{t}\) to form the input feature \(pc_{j}\), i.e., \(pc_{j}=X_{t}\cap p_{j}\), composed of the features \(x_{i}\) of the points inside \(p_{j}\). To normalize the generated input, we constrain the size of \(pc_{j}\) to \(M_{pc}\), where \(M_{pc}\) is a fixed hyper-parameter. For those \(pc_{j}\) with a larger size, we randomly sample \(M_{pc}\) interior points as the input. Otherwise, \(pc_{j}\) is padded with \(\vec{0}\) until the size constraint is satisfied. **Generating Label \(obj_{j}\).** We first calculate the 2D Intersection over Union (IoU), the ratio of the area of the intersection region to that of the union region, between \(p_{j}\) and each ground-truth bounding box \(\mathbf{b}_{k}\) on the x-y plane. For each pillar \(p_{j}\), we keep the maximal IoU value over all ground-truth bounding boxes. Finally, we compare the maximal IoU value with a fixed threshold \(T_{\text{IoU}}\).
If this value is greater than \(T_{\text{IoU}}\), we annotate \(obj_{j}=1\) to indicate that the pillar \(p_{j}\) contains a local part of a real object, or \(obj_{j}=0\) otherwise. Iterating over all the PC inputs with the pillars, we finish the collection of the training set \(D_{\text{obj}}=\{(pc_{j},obj_{j})\}\). As an analogy to the training task of masked word prediction for pretrained language models [18], this process works in a fully self-supervised manner without any additional information.

Figure 3: The local and global differences of PCs between real and forged cars (the grey bars denote the overlapping region).

Figure 2: The pipeline of our proposed defense. The input space is split into a number of equal-sized pillars (in the form of blue boxes). The red box in 1 represents the bounding box of a ground-truth object during training, while the green box in 3 represents that of a predicted object from the 3D object detector during testing.

### Objectness Predictor Construction #### 4.2.1 Insight: The Inimitable Depth-Density Law Meanwhile, we find that, because recent appearing attacks are designed to cause threats in the real world, they are inevitably limited by certain physical constraints imposed by both the attacker's goal and the attack apparatus. As introduced in Section 3.3, there exist physical upper bounds on the number of added points and the permissible distance between a fake object and the LiDAR for recent appearing attacks. Behind these two limitations, we find that the capability of recent appearing attacks is inherently restricted by the _depth-density_ law [14]: with existing technology and methods, it is hard to imitate real-world objects' relation between the _depth_, i.e., the distance between the object and the LiDAR, and the _point density_, i.e., the ratio of the number of input points inside the object's bounding box to the volume of the bounding box. \(\bullet\)**A Pilot Study.** For the same reason as in Section 4.1, we mainly validate the above observation on cars here. We first randomly sample \(1,000\) real cars from the training set of KITTI and \(1,000\) forged cars crafted by the mainstream appearing attacks described in Section 5. Then, we calculate the depth and point density for these objects based on their bounding boxes and the related points. As shown in Fig.4, the point density of real cars is approximately inversely proportional to their depth. In contrast, the point density of the forged cars seems to be independent of the depth: they can have small depth and small point density simultaneously, while this seldom happens for real cars. Though differences exist between real and forged cars in terms of the depth-density relation, it is still hard to directly distinguish them with heuristic algorithms. Due to the complexity of real-world environments, there exists a confounding region in the depth-density distribution (highlighted in Fig.4), which is mainly caused by real cars occluded by others, which have smaller depth and smaller point density at the same time. Besides, the complexity is further increased by errors such as the noise in LiDAR perception and the deficiency of the attack equipment. In other words, it can be improper to explicitly filter out any detected object based on hand-crafted rules.
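The two quantities in this insight can be computed directly from a detection, as in the following sketch; we approximate an object's depth by the distance of its bounding-box center from the LiDAR origin and, for brevity, assume an axis-aligned box.

```python
import numpy as np

def depth_and_density(points, box):
    """Depth and point density of one detected obstacle.

    points: (n, >=3) array holding the full input PC.
    box:    axis-aligned bounding box (x1, y1, z1, x2, y2, z2); a rotated box
            would first be transformed into its local frame.
    """
    x1, y1, z1, x2, y2, z2 = box
    inside = ((points[:, 0] >= x1) & (points[:, 0] <= x2) &
              (points[:, 1] >= y1) & (points[:, 1] <= y2) &
              (points[:, 2] >= z1) & (points[:, 2] <= z2))
    center = np.array([(x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2])
    depth = np.linalg.norm(center)           # LiDAR is assumed at the origin
    volume = (x2 - x1) * (y2 - y1) * (z2 - z1)
    density = inside.sum() / volume          # points per cubic meter
    return depth, density
```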
As a data-driven approach, we alternatively encourage the LOP to actively learn to model the complicated depth-density relation of real objects, by further incorporating the depth information explicitly into the input feature of each pillar derived in the first stage. #### 4.2.2 Technical Designs At this stage, we augment the input features in our prepared training dataset \(D_{\text{obj}}\) with the depth information. Specifically, for each generated training sample \((pc_{j},obj_{j})\) in \(D_{\text{obj}}\), we expand the feature of each point in \(pc_{j}\) from the original 4-dim vector \(x_{i}=(x,y,z,int)\) into a 7-dim one \(x^{\prime}_{i}=(dx,dy,x,y,z,int,dep)\), where \((dx,dy)\) are the point's 2D relative coordinates to the center of its corresponding pillar in the x-y plane, and \(dep=\sqrt{x^{2}+y^{2}+z^{2}}\) is the point's depth. In our preliminary experiments, we also tried an alternative design with no explicit depth information in the input feature; it resulted in a LOP that is much less effective in distinguishing forged objects from real ones than our current solution. To adaptively learn the depth-density relation for distinguishing real and forged cars or other obstacles, we implement the LOP \(O\) with the architecture of an off-the-shelf backbone PC classifier (e.g., PointNet [34] or DGCNN [45]), considering their validated performance on many downstream 3D tasks. Note that the _negative samples_ in \(D_{\text{obj}}\), i.e., the generated samples with \(obj_{j}=0\), are much more numerous than the _positive samples_, i.e., those with \(obj_{j}=1\). Thus, we randomly delete a portion of the negative samples to keep the data balanced, ensuring that the ratio of positive to negative samples does not exceed \(1:1.5\). To further alleviate the data imbalance problem, we also adopt the idea of focal loss [30] in the learning objective of the LOP: \[FL(p,y)=-\alpha_{fl}(1-p_{y})^{\gamma_{fl}}\log(p_{y}) \tag{5}\] where the positive constants \(\alpha_{fl},\gamma_{fl}\) (\(\gamma_{fl}>1\)) are the hyper-parameters of the focal loss, which are set by following the best practices in [30]. Besides, \(p_{y}\) is the probability of the \(y\)-th class returned by the predictor.

Figure 4: The distribution chart of the depth-density relation. The blue points represent normal cars, the orange crosses represent forged cars, and the red rectangle shows the confounding region of the two.

### Fake Object Elimination Finally, we leverage the LOP to calculate the objectness score for each pillar intersected with predicted objects, and determine whether these objects are real by majority voting among the pillars. Specifically, we first divide the detection space into equal-sized pillars, translate the input PC into a series of point subsets inside these pillars and then augment their features, similarly to the former two stages. Then we use the LOP to calculate a 0/1 objectness score for each pillar. For each object in the prediction of the 3D object detector, we search for those pillars whose 2D IoU on the x-y plane with the predicted object's bounding box is greater than a specified threshold \(\beta\), and calculate the sum of their objectness scores as well as the ratio of this sum over the total number of related pillars. Finally, we recognize those objects with a ratio less than or equal to a boundary value \(B\) as fake objects, and eliminate them from the prediction.
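A minimal sketch of this elimination stage is shown below, reusing the pillar grid of §4.1.2 and the hyper-parameters \(\beta=10^{-3}\) and \(B\) from §5.1; the LOP itself is abstracted as a precomputed dict of 0/1 pillar scores.

```python
import numpy as np

def eliminate_fake_objects(pred_boxes_xy, pillar_scores, pillar_size=1.0,
                           beta=1e-3, B=0.5):
    """Keep a predicted object only if enough of its pillars look real.

    pred_boxes_xy: list of predicted footprints (x1, y1, x2, y2).
    pillar_scores: dict {(i, j): 0 or 1} of LOP objectness per occupied pillar.
    """
    kept = []
    for box in pred_boxes_xy:
        x1, y1, x2, y2 = box
        votes = []
        for (i, j), score in pillar_scores.items():
            px1, py1 = i * pillar_size, j * pillar_size
            ix = max(0.0, min(px1 + pillar_size, x2) - max(px1, x1))
            iy = max(0.0, min(py1 + pillar_size, y2) - max(py1, y1))
            inter = ix * iy
            union = pillar_size ** 2 + (x2 - x1) * (y2 - y1) - inter
            if inter / union > beta:          # pillar intersects this object
                votes.append(score)
        # Majority voting: the object survives only if the ratio of positive
        # pillars exceeds the boundary value B.
        if votes and np.mean(votes) > B:
            kept.append(box)
    return kept
```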
## 5 Evaluation and Analysis ### Overview of Evaluation **(1) Victim Models.** We choose three mainstream LiDAR-based object detectors, **PointPillars** [27], **PointRCNN** [38] and **PV-RCNN** [37], as the victim models. Specifically, we adopt the implementations of these three object detectors available in the open-source project OpenPCDet [43], each of which is normally trained on the KITTI dataset [19] to achieve near state-of-the-art performance. **(2) Attack Methods.** We implement three popular appearing attacks, which can be roughly categorized into _white-box_ and _black-box_ attacks. In the former case, the attacker has full access to the victim 3D detector, including the architecture and the parameters, while in the latter the attacker accesses the detector only as a black-box prediction API. Specifically, the attacks are \(\bullet\)_A Variant of Adv-LiDAR_ [11] (_abbrev._ **Baseline**, _white-box_): Because Adv-LiDAR is specially designed for attacking Baidu's Apollo [1], it can hardly be directly transferred to attack other 3D object detectors [41]. Therefore, following its main idea, we implement a variant of Adv-LiDAR by randomly injecting a certain number of points into a specified zone, and using FGSM [20] to increase the confidence scores of the forged cars related to these points. \(\bullet\)_Yang's Work_ [48] (_abbrev._ **Roadside**, _white-box_): This attack forges cars by 3D-printing a small, specifically-designed object, increasing its confidence score and its category score for the label car, and enlarging its bounding box with gradient descent. Since this attack first generates the adversarial points and then turns them into a physical object, we only deploy the first part in our experiments. In the original work, this attack mainly aims at breaking PointRCNN with white-box access; we follow the same setting in our experiments. \(\bullet\)_Sun's Work_ [41] (_abbrev._ **Physical**, _black-box_): This attack forges cars by duplicating real cars which contain a limited number of points due to either inter-occlusion or intra-occlusion. The PC of the fake car is then translated to a near-front position of the victim ADS. **(3) Baseline Defenses.** We implement **SRS**, **SOR**, **CARLO** and **Shadow-Catcher**, which we introduced in Section 3.3, as the baseline defenses. We do not consider SVF because it relies on retraining the whole 3D object detector itself, and incurs much more time and computation cost than the other baseline defenses as well as ours (Section 5.2.3). **(4) Metrics.** We choose three different metrics to evaluate the performance of our proposed defense and the other defenses: \(\bullet\)**Precision** measures the proportion of real objects in the prediction results. In the context of defense, the decrease in precision reflects whether a defense harms the original performance of the victim model. Following [27, 37, 38], we first choose a certain threshold \(C_{\text{conf}}\) for each 3D object detector, and remove those predicted objects with confidence scores less than \(C_{\text{conf}}\). Then we calculate the 2D IoU on the x-y plane between the bounding boxes of each remaining predicted object and the ground-truth objects. A predicted object is a true positive if its maximal IoU value surpasses another threshold \(C_{\text{IoU}}\) and the predicted category is identical to the ground truth; otherwise, the predicted object is a false positive. For PointPillars and PointRCNN, we set \(C_{\text{conf}}=0.5,\ C_{\text{IoU}}=0.5\); for PV-RCNN, we set \(C_{\text{conf}}=0.7,\ C_{\text{IoU}}=0.5\).

Figure 5: The relation graph of defense effect (1-ASR) and precision on cars of PointPillars under attacks.

Figure 6: The relation graph of defense effect and precision on cars of (a) PointRCNN and (b) PV-RCNN under attacks. In both figures, "PointNet" and "DGCNN" refer to the LOP's structure, with a boundary value \(B\) used to distinguish real and fake objects as described in Section 4.3; "LPD" and "FSD" are the two strategies for CARLO to calculate the anomalous ratio; and \(M,\ k,\ R\), Threshold are the hyper-parameters of the other defenses, all described in Section 3.3.

\(\bullet\)**Average Precision (AP)** is a comprehensive metric over the precision and the recall of the detection results. Specifically, AP is the average of the precision values at recalls larger than a set of specified values, which can be represented as \[AP=\frac{1}{11}\sum_{r\in\{0,0.1,\dots,1.0\}}\max_{r^{\prime}\geq r}\text{Precision}\text{@}(\text{Recall}=r^{\prime}) \tag{6}\] \(\bullet\)**Attack Success Rate (ASR)** measures the ratio of the number of forged cars detected by the victim 3D object detector to the total number of attack attempts, which directly reflects the performance of a defense. A more effective defense should result in a lower ASR. **(5) Implementation of LOP.** We choose two off-the-shelf point-wise PC classification architectures, PointNet [34] and DGCNN [45], to instantiate the LOP. For the hyper-parameters of the LOP described in Section 4, we set \(M_{pc}=1024,\ T_{\text{IoU}}=1\times 10^{-6},\ \alpha_{fl}=1,\ \gamma_{fl}=2,\ \beta=1\times 10^{-3}\). ### Comparison with Baselines #### 5.2.1 Attack Scenarios First, we evaluate the performance of our defense against recent appearing attacks. We implement three recent appearing attacks to generate adversarial examples against the three mainstream 3D object detectors based on KITTI's validation set. We evaluate the ASR of these appearing attacks along with the AP and the precision of the detectors under attack. Besides the forged cars crafted by appearing attacks, some normal objects also remain in the adversarial examples, and these are considered in the AP and precision metrics. Table 1 and Table 2 show the AP of the detectors on cars when equipped with different defenses, and Fig.5 and Fig.6 plot the defense effectiveness (y-axis, in terms of \(1-\text{ASR}\)) and the precision on cars (x-axis) of the different defenses. **Results & Analysis.** As we can see from Table 1, Table 2, Fig.5 and Fig.6, compared with SRS and CARLO, our defense simultaneously achieves higher defense effectiveness, and the victim models under its guard retain higher AP and precision on cars. For example, under recent appearing attacks, PointRCNN equipped with the LOP keeps its AP on cars over 70% and its precision on cars over 72%, while the AP on cars is always less than 70% and the precision on cars always less than 63% when SRS or CARLO is deployed on PointRCNN. Compared with SOR, although in some cases our defense has slightly lower defense effectiveness (the margin is less than 5%), it always results in higher AP and precision on cars under attacks.
Compared with Shadow-Catcher, although in some cases our defense has slightly lower AP on cars, it always results in higher precision on cars and better defense effectiveness under attacks. From a different perspective, we observe that the other defenses only perform well when protecting certain models against specific attack techniques. For example, SOR performs better when protecting PointPillars, while CARLO performs better when defending against _Physical_. In contrast, the LOP performs well independently of the structure of the 3D object detector and the undergoing appearing attack, which implies that our proposed defense is more general than the other defenses. #### 5.2.2 Benign Scenarios Then, we evaluate the performance of the victim models under guard on clean samples to measure the performance overhead brought by different defenses. Table 3 and Table 4 present their AP and precision under normal circumstances. **Results & Analysis.** As Table 3 and Table 4 show, the performance of the detectors degrades less in normal cases when they are equipped with the LOP than with the other defenses. For example, the AP and precision of detectors equipped with other defenses both decrease in most cases, while for the 3D object detectors equipped with the LOP, the AP on cars even increases by \(0.33\sim 1.71\%\), and the precision on cars increases by \(2.78\sim 7.12\%\). Although Shadow-Catcher yields slightly higher AP on cars than the LOP, considering the defensive advantages of the LOP under different appearing attacks, our proposed defense may be more suitable for practical ADS due to the better performance-robustness balance a detector achieves when equipped with the LOP. We further analyze why our proposed defense can even increase the performance of the victim models on cars in normal cases, while the existing defenses cannot: (i) The LOP mainly learns the semantic and spatial features of real objects, while other defenses focus on recognizing fake objects. (ii) The bounding boxes of cars are much larger than those of other objects, which means there are enough samples corresponding to components of cars for the LOP to learn from. In summary, our proposed defense incurs almost no damage to the normal performance of the victim models and may sometimes even improve it due to its finer-granularity modeling of the obstacles. In Appendix A, we further experiment with the hyper-parameters of the LOP, which validates that the choice of model structure does not affect the performance of the LOP. #### 5.2.3 Overhead Analysis Next, we evaluate the overhead in the preparation stage. Except for our LOP and SVF, the other defense methods do not introduce additional learning modules, and therefore require no training in the preparation stage. Table 5 compares the time overhead of the LOP and SVF during the preparation phase. Table 6 reports the time and the space overhead of the inference phase of each defense. We conduct the experiments with 5 repeated tests for each case, and report the mean and the standard deviation as the final results. \(\bullet\)**Results & Analysis.** As Table 5 shows, the time overhead of SVF in the preparation phase is much higher than that of the LOP. This is mainly because SVF requires retraining the whole 3D object detector from scratch, while the training task of the LOP only involves a PC-based binary classifier, a much easier learning task than that of SVF.
More importantly, once the LOP is trained, it can be combined with different defense targets to provide the defense, while SVF has to be retrained for each target. Meanwhile, as Table 6 shows, the LOP incurs slightly more time and space overhead than most of the statistical defenses, which can be further reduced by some optimization techniques. For example, to simplify the implementation in this experiment, we split the whole input space into pillars and use the LOP to predict an objectness score for every pillar during the split. However, it is not necessary to check all pillars in a real deployment: we can identify the pillars which not only intersect with the predicted bounding boxes but also contain points, and predict objectness scores only for them, reducing the total number of calculations. Furthermore, we can combine parts of these pillars into a batch and run the LOP in parallel for further acceleration. In Section 5.4, we follow the optimization mentioned above to deploy the LOP in the end-to-end self-driving system and reduce the time overhead caused by the LOP to less than 10 ms per detection, which has almost no influence on the real-time requirement of the self-driving system. ### Adaptive Attacks In this part, we evaluate whether our defense is robust against an adaptive attacker who knows of the existence of the LOP and, in the worst case, has access to the structure and the parameters of our LOP. Under this almost worst-case threat model, it is possible for the adversary to attempt to bypass our defense during the generation of forged objects. As the _Physical_ attack in [41] requires no training stage in its generation, we choose to modify the _Baseline_ attack, i.e., the attack in [11], into an adaptive attack against our defense. Specifically, we propose to generate the adversarial point cloud by simultaneously optimizing the original appearing attack objective and maximizing the score of the crafted object under the LOP. To enhance the performance of the Baseline attack, we further replace the FGSM algorithm with PGD. Table 7 reports the ASR of the adaptive attacks on the three 3D object detectors when the LOP is deployed or not, along with the AP and the precision of the detectors on cars under the adaptive attack. \(\bullet\)**Results & Analysis.** As we can see from Table 7, our LOP performs well when defending against the adaptive attacks above. Both the PointNet-based and the DGCNN-based LOP can reduce the ASR of the adaptive attacks by a large margin, while only a slight loss of performance on clean samples is observed. For example, when defending PointPillars, the ASR is reduced from 45% to 12% with the DGCNN-based LOP, while the decrease in AP is less than 4%. From our perspective, this result may be due to the orthogonality between the original attack target and the intention of bypassing the LOP, which makes it challenging to optimize the two different loss functions at the same time. In summary, the LOP retains a certain degree of robustness against even the worst-case adaptive attack, where the attacker has full white-box access to the defense module.
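For illustration, the adaptive objective can be sketched as a PGD loop over the coordinates of the injected points. Here `detector_loss` and `lop_score` stand for differentiable surrogates of the appearing-attack objective and the LOP's objectness of the forged object, and the L-infinity budget on point positions is a hypothetical constraint, not the paper's exact formulation.

```python
import torch

def adaptive_attack(x0, detector_loss, lop_score, steps=10, step=0.05, budget=0.2):
    """PGD over the injected points' coordinates (x0: (m, 3) tensor, m <= 200
    per Assumption 2), staying within `budget` meters of the initial spoofed
    positions x0 -- an assumed constraint for this sketch."""
    x = x0.clone()
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        # Minimize: fool the detector AND raise the LOP objectness jointly.
        loss = detector_loss(x) - lop_score(x)
        loss.backward()
        with torch.no_grad():
            x = x - step * x.grad.sign()               # signed gradient step
            x = x0 + (x - x0).clamp(-budget, budget)   # project onto L_inf ball
    return x.detach()
```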
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multirow{2}{*}{None} & \multicolumn{3}{c}{SOR} & \multicolumn{3}{c}{CARLO} \\ \cline{3-10} & & k=2 & k=10 & FSD, r=0.6 & FSD, r=0.7 & FSD, r=0.8 & LPD, r=0.6 & LPD, r=0.7 & LPD, r=0.8 \\ \hline Physical & 70.06\% & 70.43\% & 70.01\% & 65.94\% & 69.60\% & 69.64\% & 67.79\% & 69.57\% & 69.82\% \\ Baseline & 68.57\% & 68.84\% & 68.06\% & 63.49\% & 67.96\% & 67.96\% & 64.37\% & 67.99\% & 68.49\% \\ \hline \hline \multirow{3}{*}{} & \multirow{3}{*}{None} & SRS & \multicolumn{3}{c}{Shadow-Catcher} & \multicolumn{3}{c}{Ours} \\ \cline{3-10} & & M=500 & Threshold=0.2 & Threshold=0.4 & Threshold=0.6 & PointNet, B=0.5 & PointNet, B=0.6 & DGCNN, B=0.5 & DGCNN, B=0.6 \\ \hline Physical & 70.06\% & 70.12\% & 47.72\% & 75.46\% & 77.05\% & 70.92\% & 70.97\% & 71.11\% & 70.39\% \\ Baseline & 68.57\% & 68.55\% & 57.81\% & 75.03\% & 75.95\% & 69.50\% & 69.63\% & 69.72\% & 68.61\% \\ \hline \hline \end{tabular} \end{table} Table 1: The AP on cars of PointPillars with and without LOP or other defense methods under attacks. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{PointRCNN} & \multicolumn{3}{c}{PV-RCNN} \\ \cline{2-7} & Physical & Baseline & Roadside & Physical & Baseline \\ \hline w/o. defense & 67.92\% & 65.95\% & 61.59\% & 70.11\% & 66.39\% \\ SRS (M=500) & 69.42\% & 65.44\% & 62.48\% & 70.14\% & 66.27\% \\ \hline SOR (k=2) & 72.56\% & 65.45\% & 61.84\% & 71.43\% & 64.80\% \\ SOR (k=10) & 72.63\% & 65.26\% & 65.05\% & 71.28\% & 64.09\% \\ \hline CARLO(FSD, R=0.6) & 67.16\% & 63.53\% & 60.16\% & 67.99\% & 63.27\% \\ CARLO(FSD, R=0.7) & 68.41\% & 65.29\% & 60.71\% & 70.10\% & 66.07\% \\ CARLO(FSD, R=0.8) & 68.15\% & 65.22\% & 60.69\% & 70.15\% & 65.84\% \\ CARLO(LPD, R=0.6) & 68.98\% & 65.27\% & 60.53\% & 69.23\% & 64.46\% \\ CARLO(LPD, R=0.7) & 69.22\% & 65.97\% & 61.41\% & 70.26\% & 65.71\% \\ CARLO(LPD, R=0.8) & 69.15\% & 66.04\% & 61.65\% & 70.35\% & 66.11\% \\ \hline Ours(PointNet, B=0.5) & 73.32\% & 71.87\% & 70.82\% & 71.47\% & 69.25\% \\ Ours(PointNet, B=0.6) & **73.77\%** & 72.30\% & 71.41\% & 71.86\% & 69.19\% \\ Ours(DGCNN, B=0.5) & 73.07\% & 72.29\% & 71.99\% & **71.87\%** & **69.88\%** \\ Ours(DGCNN, B=0.6) & 73.74\% & **73.42\%** & **72.84\%** & 71.51\% & 68.30\% \\ \hline \hline \end{tabular} \end{table} Table 2: The AP on cars of PointRCNN and PV-RCNN with and without LOP or other defense methods under attacks. ### System Integration To evaluate the system-level usefulness of our proposed defense, we implement the PointNet-based LOP in Baidu's Apollo 6.0.0 system in the optimized way described in Section 5.2.3, and conduct both modular and closed-loop control evaluations in two simulated scenarios, under normal driving conditions and against the _Physical_ attack. We release the implementation details in [3]. \(\bullet\)**Experimental Settings.** In the experiments, we construct two different scenarios (i.e., Single Lane Road and Borregas Ave) with random traffic in the LGSVL simulator to evaluate the LOP's performance in the end-to-end system. Table 8 reports the ASR of the Physical attack on Apollo 6.0.0, together with the precision and the time cost of the 3D object detectors in Apollo's perception module when the LOP is deployed or not. Fig.7 illustrates the detection results in an end-to-end driving test when the system is deployed with and without the LOP, together with a snapshot of the attack scenario in the experiments.
\(\bullet\)**Results & Analysis.** As Table 8 shows, the LOP effectively defends against appearing attacks in the end-to-end Apollo 6.0.0, with a slight time overhead (less than 10 ms). As Fig.7(b) shows, the _Physical_ attack can successfully fool Apollo's perception module, and the forged obstacle remains present in Dreamview even after the processing of MOT. This confirms our argument that appearing attacks are easier to mount in practical scenarios than disappearing attacks. As the Dreamview view in Fig.7(c) shows, with the help of our LOP, the forged object is eliminated from Apollo's perception during the evaluation (with ASR \(=0\%\)), while the real obstacles remain intact in the perception of the ADS. Therefore, the driving trajectory of the ADS with the LOP remains normal and safe during the full driving test. Besides, the LOP only incurs a \(9.12ms\) overhead on the running time of the 3D detection pipeline on average and slightly brings down the FPS from 29.97 to 23.54, which still satisfies the real-time requirement of a physical self-driving system [21]. Moreover, we use the previously forged objects, which can successfully fool the perception module of Apollo for at least one frame, to further test whether they would lead to potential harsh braking in different traces. Specifically, we measure whether the self-driving vehicle suddenly brakes, i.e., decelerates to 0 km/h in less than 1 second, to calculate the _harsh braking rate_, i.e., the ratio of the test cases where the self-driving vehicle suddenly brakes when there is no real obstacle in front of it. We observe that the harsh braking rate of Apollo without the LOP is \(13.33\%\) (\(2/15\)), while, with the LOP, the harsh braking rate is reduced to \(0.00\%\) (\(0/15\)). We provide the Dreamview snapshot and the details of these experiments in Appendix D. Therefore, combined with the comprehensive evaluation results on the KITTI benchmark, our end-to-end experiments further validate the system-level usefulness of our proposed defense in terms of the improved system robustness and the acceptable overhead on running time and normal driving performance.

\begin{table} \begin{tabular}{c c c c} \hline \hline & Time per sample (s) & GPU Mem (MB) & CPU Mem (MB) \\ \hline None & 0.060\(\pm\)0.005 & 1477 & 2551 \\ \hline SRS & 0.069\(\pm\)0.007 & 1473 & 2549 \\ SOR & 0.114\(\pm\)0.005 & 5827 & 2516 \\ \hline CARLO (LPD) & 0.503\(\pm\)0.003 & 1477 & 2552 \\ CARLO (FSD) & 2.463\(\pm\)0.005 & 1477 & 2506 \\ Shadow-Catcher & 0.089\(\pm\)0.002 & 1477 & 2551 \\ \hline Ours (PointNet) & 1.341\(\pm\)0.011 & 2283 & 2518 \\ Ours (DGCNN) & 1.589\(\pm\)0.013 & 3747 & 2506 \\ \hline \hline \end{tabular} \end{table} Table 6: The time and space overhead of LOP and other defenses during the inference phase.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{SOR} & \multicolumn{3}{c}{CARLO} \\ \cline{3-10} & & k=2 & k=10 & FSD, r=0.6 & FSD, r=0.7 & FSD, r=0.8 & LPD, r=0.6 & LPD, r=0.7 & LPD, r=0.8 \\ \hline AP & 72.34\(\%\) & 71.20\(\%\) & 70.58\(\%\) & 67.92\(\%\) & 72.03\(\%\) & 71.95\(\%\) & 69.80\(\%\) & 71.58\(\%\) & 72.02\(\%\) \\ Precision & 78.99\(\%\) & 78.91\(\%\) & 78.86\(\%\) & 75.06\(\%\) & 77.81\(\%\) & 78.00\(\%\) & 75.77\(\%\) & 77.56\(\%\) & 78.31\(\%\) \\ \hline \hline & None & SRS & & Shadow-Catcher & & & & Ours & \\ \cline{3-10} & & M=500 & Threshold=0.2 & Threshold=0.4 & Threshold=0.6 & PointNet, B=0.5 & PointNet, B=0.6 & DGCNN, B=0.5 & DGCNN, B=0.6 \\ \hline AP & 72.34\(\%\) & 72.33\(\%\) & 50.58\(\%\) & 77.41\(\%\) & 79.47\(\%\) & 72.86\(\%\) & 72.88\(\%\) & 73.63\(\%\) & 72.73\(\%\) \\ Precision & 78.99\(\%\) & 79.14\(\%\) & 70.25\(\%\) & 77.31\(\%\) & 76.91\(\%\) & 81.77\(\%\) & 82.38\(\%\) & 83.04\(\%\) & 83.90\(\%\) \\ \hline \hline \end{tabular} \end{table} Table 3: The AP and precision of PointPillars on cars with different defenses on clean samples.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{PointRCNN} & \multicolumn{2}{c}{PV-RCNN} \\ \cline{2-5} & AP & Precision & AP & Precision \\ \hline w/o. defense & 75.13\(\%\) & 75.04\(\%\) & 73.32\(\%\) & 73.12\(\%\) \\ SRS (M=500) & 75.52\(\%\) & 74.75\(\%\) & 73.48\(\%\) & 73.23\(\%\) \\ \hline SOR (k=2) & 74.46\(\%\) & 74.49\(\%\) & 72.52\(\%\) & 72.92\(\%\) \\ SOR (k=10) & 74.04\(\%\) & 73.88\(\%\) & 72.22\(\%\) & 72.94\(\%\) \\ \hline CARLO(LPD, R=0.6) & 73.49\(\%\) & 73.10\(\%\) & 71.43\(\%\) & 70.08\(\%\) \\ CARLO(LPD, R=0.7) & 74.63\(\%\) & 74.35\(\%\) & 73.21\(\%\) & 72.22\(\%\) \\ CARLO(LPD, R=0.8) & 74.53\(\%\) & 74.32\(\%\) & 73.20\(\%\) & 72.41\(\%\) \\ CARLO(FSD, R=0.6) & 74.17\(\%\) & 73.07\(\%\) & 72.00\(\%\) & 69.93\(\%\) \\ CARLO(FSD, R=0.7) & 74.79\(\%\) & 74.38\(\%\) & 72.94\(\%\) & 71.66\(\%\) \\ CARLO(FSD, R=0.8) & 74.89\(\%\) & 74.87\(\%\) & 73.15\(\%\) & 72.37\(\%\) \\ \hline Ours(PointNet, B=0.5) & 76.49\(\%\) & 79.29\(\%\) & 73.65\(\%\) & 77.85\(\%\) \\ Ours(PointNet, B=0.6) & 76.37\(\%\) & 80.03\(\%\) & 73.80\(\%\) & 78.53\(\%\) \\ Ours(DGCNN, B=0.5) & 76.77\(\%\) & 80.75\(\%\) & **74.50\(\%\)** & 79.61\(\%\) \\ Ours(DGCNN, B=0.6) & **76.84\(\%\)** & **81.52\(\%\)** & 73.86\(\%\) & **80.34\(\%\)** \\ \hline \hline \end{tabular} \end{table} Table 4: The AP and precision of PointRCNN and PV-RCNN on cars with different defenses on clean samples.

\begin{table} \begin{tabular}{c c c c} \hline \hline Defense & Total Time (h) & Time Per Epoch (s) & Time Per Iter (s) \\ \hline SVF (PointPillar) & \(1.2^{*}\) & \(54.0^{*}\) & \(8.21^{*}\) \\ SVF (PointRCNN) & \(3.0^{*}\) & \(135.0^{*}\) & \(20.51^{*}\) \\ SVF (PV-RCNN) & \(5.0^{*}\) & \(225.0^{*}\) & \(34.19^{*}\) \\ Ours (PointNet) & \(0.41\) & \(7.30\) & \(0.07\) \\ Ours (DGCNN) & \(0.77\) & \(13.88\) & \(0.14\) \\ \hline \hline \end{tabular} \end{table} Table 5: The time overhead of LOP and SVF during the preparation phase: “\(\star\)” means that the results are from OpenPCDet, an open-source platform of 3D object detectors; we use the training time of the specific 3D object detector to approximate the re-training time of SVF on the same detector.

## 6 Discussion **Appearing Attacks vs. Disappearing Attacks.** Our current defense mainly focuses on appearing attacks, which form a popular attack class on LiDAR-based object detectors in ADS. In contrast to appearing attacks, a disappearing attack aims at hiding existing objects from the prediction results of the victim 3D object detector [16, 40, 50]. To accomplish this purpose, the adversary typically optimizes a 3D-printed object such that the target detector does not recognize it or its neighbouring object, and puts the object on the road or near some objects to mount the attack. In the previous literature, Cao et al.
propose one of the earliest disappearing attacks on ADS, and successfully hide the printed objects from the LiDAR-based detection system of Baidu's Apollo by modeling its preprocessing and postprocessing phases as differentiable functions [12]. Later, Tu et al. present a more general disappearing attack which breaks state-of-the-art 3D object detectors including PointPillars and PointRCNN, and hides the car on which the printed object is positioned from the model's prediction results [44]. Recently, Cao et al. further devise a more powerful disappearing attack, MSF-ADV, which fools image-based 2D object detectors and LiDAR-based 3D object detectors at the same time, and causes the fusion-based detection system of Baidu's Apollo to ignore the existence of the printed objects [10]. Compared with appearing attacks, we argue that a disappearing attack is less practical because it is untargeted and _single-shot_, i.e., the attacker has to put a printed object on the road or near some objects in preparation. This indicates that he/she can hardly choose the victim ADS during the attack. Moreover, the printed object can only take effect once, because it might be destroyed or recognized by the people nearby after the first accident happens. In contrast, in an appearing attack the attacker can choose the victim at which to fire the laser and forge non-existent cars as he/she wishes, making it difficult for others to notice the attack because almost no evidence is left at the accident scene. Nevertheless, considering the severe consequences should one happen, how to mitigate disappearing attacks remains a meaningful direction to pursue.

**Extension to Other Attack Classes.** We further discuss the applicability of our defense for mitigating mis-categorization

\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{PointPillars} & \multicolumn{3}{c}{PointRCNN} & \multicolumn{3}{c}{PV-RCNN} \\
\cline{2-10}
 & ASR & AP & Precision & ASR & AP & Precision & ASR & AP & Precision \\
\hline
No Attack & / & 72.34\% & 78.99\% & / & 75.13\% & 75.04\% & / & 73.32\% & 73.12\% \\
\hline
w/o. defense (PointNet) & 45.61\% & 68.52\% & 66.54\% & 8.78\% & 66.14\% & 55.63\% & 12.58\% & 66.37\% & 28.12\% \\
w/o. defense (DGCNN) & 44.78\% & 68.53\% & 66.65\% & 8.56\% & 65.87\% & 55.13\% & 12.53\% & 66.39\% & 28.18\% \\
\hline
Ours (PointNet, B=0.5) & 22.17\% & 69.36\% & 74.11\% & 5.14\% & 71.95\% & 74.31\% & 6.42\% & 69.27\% & 60.29\% \\
Ours (PointNet, B=0.6) & 16.17\% & 69.55\% & 75.87\% & 4.42\% & 72.33\% & 76.06\% & **5.58\%** & 66.39\% & 61.20\% \\
Ours (DGCNN, B=0.5) & 17.11\% & **69.73\%** & 76.30\% & 3.86\% & 72.31\% & 76.41\% & 6.58\% & **69.89\%** & 63.62\% \\
Ours (DGCNN, B=0.6) & **12.53\%** & 68.56\% & **78.02\%** & **3.25\%** & **73.33\%** & **77.85\%** & 5.83\% & 68.37\% & **64.48\%** \\
\hline \hline
\end{tabular}
\end{table}
Table 7: The performance of LOP against adaptive attacks. The names after "w/o. defense" denote the LOP variant targeted by the attack.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
 & Precision & ASR & time cost (ms) & FPS \\
\hline
Apollo 6.0.0 (w/o. LOP) & 8.33\% & 53.66\% & 33.36 & 29.97 \\
Apollo 6.0.0 (w/. LOP) & 100.00\% & 0.00\% & 42.48 & 23.54 \\
\hline \hline
\end{tabular}
\end{table}
Table 8: The performance of the perception module in an end-to-end Apollo 6.0.0 system when the 3D object detector is deployed with or without LOP.

Figure 7: The simulated scenario and the Dreamview of Apollo 6.0.0 without and with our LOP under the _Physical_ attack.
attacks, which aim at changing the predicted class of the target objects in the victim's detection results. In this sense, a mis-categorization attack can be seen as the combination of a disappearing attack and an appearing attack. In such an attack, we observe that the crafted object is also left with an abnormal depth-density characteristic which does not belong to the target class. Specifically, in Appendix B, we modify the appearing attacks covered in our experiments into mis-categorization attacks, which select objects from the _bicycle_ or _pedestrian_ classes and inject a limited number of points around them to fool the victim 3D detector into mis-categorizing them as vehicles, and we evaluate the performance of our LOP when deployed alongside the 3D detector. The experimental results in Appendix B show that our proposed defense is also effective against mis-categorization attacks due to the depth-density anomaly they introduce.

**Fusion Models as Defense Targets.** We first clarify the relation between our proposed LOP and fusion models. According to [21], the detection frequency of existing fusion models (including FPN, FCN and AVOD) is usually lower than 15 FPS, which may be unsuitable for real-time self-driving systems due to the efficiency bottleneck. Besides, we suggest our defense is orthogonal to the fusion strategy: LOP provides a different view for the detectors to confirm their detections, while the fusion strategy incorporates a new input modality to enhance robustness. Therefore, instead of viewing fusion models as a comparison group for our defense, we prefer to view fusion models, which are in essence detectors, as our defense targets. In Appendix C, we provide a preliminary study which validates that our LOP substantially improves the robustness of fusion models against appearing attacks. For example, the PointNet-based LOP reduces the ASR of the _Physical_ attack on EPNet [25] to 0%. In other words, we prefer not to view LOP as a competitor to fusion models. Instead, LOP empirically improves the robustness of fusion models and, since no modification is made to the image input branch, does not hurt the benefits of fusion models in self-driving systems. For future work, it would be meaningful to systematically evaluate our proposed defense on more representative fusion and 3D object detection models.

**Limitations and Future Directions.** Finally, we discuss the potential limitations of our proposed defense. According to the case study on the false positives from our defense, we find that our LOP may not recognize forged obstacles well in some cases due to its uncertainty on distant vehicles. However, due to the existence of the MOT module, the self-driving system keeps refreshing the driving plan and corrects the mis-prediction of distant objects when the obstacle comes nearby. Moreover, the self-driving system would ignore a distant object only if LOP misses it in several consecutive frames, the possibility of which is less than 0.1% according to our calculation. Therefore, the negative influence of LOP on the normal performance of the detector would hardly influence the normal driving behaviors of the defense target. Similar results are also provided in our end-to-end experiments in Section 5.4.
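To summarize how such a plug-and-play module sits in the detection pipeline, the following is a minimal PyTorch-style sketch of the LOP filtering step, where the confidence threshold \(B\) governs the robustness/false-positive trade-off discussed above. The tensor layout, the `box.contains` helper, and all other names are our illustrative assumptions, not the released code:

```python
import torch

def lop_filter(points, boxes, scores, lop_model, B=0.5):
    """Drop predicted boxes whose interior points look forged.

    points:    (N, 4) tensor of LiDAR points (x, y, z, intensity)
    boxes:     predicted 3D boxes from an arbitrary detector; `box.contains`
               is a hypothetical helper returning a boolean mask over points
    lop_model: a point-wise classifier (e.g., PointNet/DGCNN) emitting one
               logit for "this object is real"
    B:         confidence threshold; a larger B rejects more aggressively
    """
    kept = []
    for box, score in zip(boxes, scores):
        pts = points[box.contains(points[:, :3])]       # points inside the box
        depth = pts[:, :3].norm(dim=1, keepdim=True)    # expand features with depth
        feat = torch.cat([pts, depth], dim=1)           # (n, 5) expanded points
        p_real = torch.sigmoid(lop_model(feat.unsqueeze(0))).item()
        if p_real >= B:                                 # keep only verified objects
            kept.append((box, score))
    return kept
```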
Besides, due to our limited computing resources, we mainly demonstrate the advantages of LOP over SVF in terms of computational overhead, while we admit that the additional overhead of SVF may trade for better defense effectiveness and would not be a problem for most autonomous driving companies. Nevertheless, SVF, as a retraining-based approach, lies in a different defense category from our proposed plug-and-play defense module: a 3D object detector enhanced by SVF can be further combined with our LOP for better defense effectiveness. As SVF still has room for improvement in defense effectiveness [41], it would be meaningful for future work to explore their combination.

## 7 Conclusion

In this paper, we systematically analyze the working mechanisms of recent appearing attacks and summarize their common weaknesses: violating the depth-density law and failing to imitate the local parts of real objects. Based on these defensive insights, we propose a novel plug-and-play defense method which adopts a LOP module working alongside an arbitrary LiDAR-based object detector to detect and eliminate forged obstacles from its prediction results. To handle the complexity of the depth-density law and the local object features, we build the LOP with an off-the-shelf point-wise PC classifier and explicitly expand the input point features with the derived depth information. We present extensive experiments spanning three state-of-the-art 3D object detectors and three known appearing attacks on the standard KITTI benchmark, which validate the effectiveness and flexibility of our proposed defense. Furthermore, we deploy and evaluate the LOP in an end-to-end self-driving system, which validates the system-level usefulness of our proposed defense.

## Acknowledgments

We would like to thank the anonymous reviewers and the shepherd for their insightful comments that helped improve the quality of the paper. This work was supported in part by the National Key Research and Development Program (2021YFB3101200), National Natural Science Foundation of China (61972099, U1736208, U1836210, U1836213, 62172104, 62172105, 61902374, 62102093, 62102091). Min Yang is a faculty of Shanghai Institute of Intelligent Electronics & Systems, Shanghai Institute for Advanced Communication and Data Science, and Engineering Research Center of Cyber Security Auditing and Monitoring, Ministry of Education, China. Mi Zhang and Min Yang are the corresponding authors.
2310.11046
Fast Graph Condensation with Structure-based Neural Tangent Kernel
The rapid development of Internet technology has given rise to a vast amount of graph-structured data. Graph Neural Networks (GNNs), as an effective method for various graph mining tasks, incur substantial computational resource costs when dealing with large-scale graph data. A data-centric solution is to condense the large graph dataset into a smaller one without sacrificing the predictive performance of GNNs. However, existing efforts, which condense graph-structured data through a computationally intensive bi-level optimization architecture, also suffer from massive computation costs. In this paper, we propose reforming the graph condensation problem as a Kernel Ridge Regression (KRR) task instead of iteratively training GNNs in the inner loop of bi-level optimization. More specifically, we propose a novel dataset condensation framework (GC-SNTK) for graph-structured data, where a Structure-based Neural Tangent Kernel (SNTK) is developed to capture the topology of graphs and serves as the kernel function in the KRR paradigm. Comprehensive experiments demonstrate the effectiveness of our proposed model in accelerating graph condensation while maintaining high prediction performance. The source code is available at https://github.com/WANGLin0126/GCSNTK.
Lin Wang, Wenqi Fan, Jiatong Li, Yao Ma, Qing Li
2023-10-17T07:25:59Z
http://arxiv.org/abs/2310.11046v2
# Fast Graph Condensation with Structure-based Neural Tangent Kernel

###### Abstract

The rapid development of Internet technology has given rise to a vast amount of graph-structured data. Graph Neural Networks (GNNs), as an effective method for various graph mining tasks, incur substantial computational resource costs when dealing with large-scale graph data. A data-centric solution is to condense the large graph dataset into a smaller one without sacrificing the predictive performance of GNNs. However, existing efforts, which condense graph-structured data through a computationally intensive bi-level optimization architecture, also suffer from massive computation costs. In this paper, we propose reforming the graph condensation problem as a Kernel Ridge Regression (KRR) task instead of iteratively training GNNs in the inner loop of bi-level optimization. More specifically, we propose a novel dataset condensation framework (GC-SNTK) for graph-structured data, where a Structure-based Neural Tangent Kernel (SNTK) is developed to capture the topology of graphs and serves as the kernel function in the KRR paradigm. Comprehensive experiments demonstrate the effectiveness of our proposed model in accelerating graph condensation while maintaining high prediction performance.

## 1 Introduction

Graph-structured data is widely used in our lives. Various real-world data, such as social networks [1; 2; 3; 4; 5], transportation networks [6; 7], and molecules [8; 9; 10], can be naturally represented as graphs consisting of nodes and edges. Due to the extensive application of graph-structured data, Graph Neural Networks (GNNs) [11; 12; 13; 14], as one of the advanced classes of Deep Neural Networks (DNNs), have gained significant attention for their performance on various graph mining tasks during the past few years. The main idea of GNNs is to learn node representations via the message-passing paradigm, which transforms and aggregates information between direct neighbors or beyond [15; 16]. Notably, the remarkable achievements of most existing GNNs rely largely on large-scale datasets. Despite being effective, training GNNs on such large-scale datasets still presents difficulties, as it usually requires enormous computational resources (e.g., power, memory storage, GPU runtime, etc.) caused by thousands of training iterations, hyper-parameter optimization, and neural architecture search [17; 18]. A practical approach to tackling these challenges in a data-centric manner is to condense the original large-scale datasets into smaller yet information-rich synthetic versions [19]. As one of the most advanced paradigms, Dataset Condensation (DC), also known as dataset distillation [20], has achieved remarkable results in obtaining a small-scale synthetic version of a full dataset while preserving models' prediction performance in the image domain. For instance, 10 images distilled from the whole MNIST dataset [21] of 60,000 images are sufficient to achieve 94% accuracy, reaching 95% of the full-dataset performance [19]. Recently, early efforts have explored the potential of dataset condensation on graph-structured data [18; 22]. As shown in Figure 1, similarly to the image domain, the vanilla graph condensation methods [18; 22] can be formulated as a bi-level optimization problem that condenses the entire graph into a small graph of a few nodes with corresponding features via various matching objectives between two GNN models.
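To make the bi-level structure concrete, consider the following toy, runnable PyTorch sketch, in which a linear model stands in for the GNN and a simple gradient-matching objective [20] plays the role of the matching loss; all names, sizes, and hyper-parameters are illustrative, and this is not the GCond implementation:

```python
import torch

torch.manual_seed(0)
X_T, Y_T = torch.randn(200, 16), torch.randn(200, 1)  # large "target" dataset
X_S = torch.randn(20, 16, requires_grad=True)         # small synthetic dataset
Y_S = torch.randn(20, 1)                              # fixed synthetic labels
opt_S = torch.optim.Adam([X_S], lr=1e-2)

for r in range(3):                                    # R parameter initializations
    W = torch.randn(16, 1, requires_grad=True)        # "GNN" parameters (a linear map)
    opt_W = torch.optim.SGD([W], lr=1e-2)
    for t in range(20):                               # outer loop: optimize X_S
        for s in range(15):                           # inner loop: train the model on X_S
            inner = ((X_S.detach() @ W - Y_S) ** 2).mean()
            opt_W.zero_grad(); inner.backward(); opt_W.step()
        # outer matching loss: align model gradients on synthetic vs. target data
        g_S = torch.autograd.grad(((X_S @ W - Y_S) ** 2).mean(), W,
                                  create_graph=True)[0]
        g_T = torch.autograd.grad(((X_T @ W - Y_T) ** 2).mean(), W)[0]
        opt_S.zero_grad()
        ((g_S - g_T) ** 2).sum().backward()
        opt_S.step()
```

Even in this toy form, the three nested loops (initializations, outer updates, inner training) that drive the cost of the bi-level scheme are visible.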
More specifically, as illustrated in Figure 2(a), the outer synthetic graph optimization (i.e., the outer loop) heavily relies on the inner GNN model trained on the synthetic graph (i.e., the inner loop) [18; 22]. Despite the aforementioned success, most existing bi-level optimization-based graph condensation methods suffer from intrinsic limitations such as unstable training [23; 24] and massive computation costs [25], leading to inferior performance in synthetic data generation. Worse still, ensuring the condensed data's generalization across various GNN architectures requires multiple parameter initializations [22; 18], leading to three complex nested loops in the optimization algorithm and substantial time consumption. To tackle the aforementioned problems, in this paper, we propose to reformulate graph condensation as a Kernel Ridge Regression (KRR) task with a closed-form solution [26; 27], instead of training GNNs (in the inner loop) over multiple iterations to learn the synthetic graph data via bi-level optimization. However, leveraging KRR for graph condensation faces tremendous challenges. The KRR paradigm, which can be treated as a classification task, relies largely on the design of kernel functions to calculate the similarity matrix among instances. Vanilla kernel functions, such as the dot product and polynomial kernels, might lack the expressiveness to model complex relationships among nodes. Therefore, the first challenge is how to design an appropriate kernel function for optimizing graph condensation in the KRR paradigm. Moreover, unlike images or text data, graph-structured data lies in non-Euclidean space with complex intrinsic dependencies among nodes. In other words, it is imperative to capture graph topological signals in a non-linear kernel space. Thus, the second challenge is how to effectively take advantage of the topological structure of graphs to enhance the design of the kernel function in KRR for graph condensation. In this paper, we introduce a novel dataset condensation framework for graph-structured data in the node classification task, named **G**raph **C**ondensation with **S**tructure-based **N**eural **T**angent **K**ernel (**GC-SNTK**). More specifically, a novel graph condensation framework is developed by harnessing the power of the estimated neural kernel in the KRR paradigm, which can be considered as training infinite-width neural networks through infinite steps of SGD. What's more, to capture the topological structure of graphs, a **S**tructure-based **N**eural **T**angent **K**ernel (**SNTK**) is introduced that performs neighborhood aggregation of nodes for generating a high-quality condensed graph. Our contributions can be summarized as follows: Figure 1: Graph condensation aims to condense graph data to a smaller but informative version. In general, two GNN classifiers (i.e., GNNs trained on the original graph and on the condensed graph, respectively) are expected to achieve comparable prediction performance.
Moreover, our proposed method offers powerful generalization capability across various GNN architectures. ## 2 Related Work **Dataset Condensation (DC)**. Dataset condensation (DC) [19; 20] aims to condense a large-scale dataset into a small synthetic one, while the model trained on the small synthetic dataset should reach a comparable performance to that trained on the original dataset. [28] introduces the concept of serving the pixels of the training images as hyper-parameters and optimizing them based on gradients. This idea lays the foundation for dataset condensation [19], contributing to the establishment of the basic bi-level optimization framework for DC. Building upon the framework, various criteria for evaluating the performance of models trained on condensed data are proposed, including gradient matching [20; 29; 30], features aligning [31], training trajectory matching [32] and distribution matching [33]. Additionally, Deng et al. introduce the learnable address matrix in condensation [34], which considers the address matrix as part of the condensed dataset. Furthermore, leveraging the support of infinite-width neural network [35; 36; 37; 38], [26] propose a meta-learning approach for image dataset condensation. **Graph Condensation**. Graph condensation comprises node-level condensation [18] and graph-level condensation [22]. The former focuses on condensing a large graph into a synthetic one with a few nodes, while the latter condenses numerous graphs into a synthetic set containing only a small number of graphs. Among these works, Jin et al. first extend DC to the graph domain and introduce a node-level graph condensation approach [18]. Subsequently, graph condensation is further extended to the graph-level datasets [22], where the discrete graph structure of a synthetic graph is modeled as a probabilistic model. On the other hand, a structure-free graph condensation method [39] is demonstrated as effective. This method condenses a graph into node embedding based on the training trajectory matching. On top of that, Liu et al. propose a node-level graph condensation method built upon the receptive field distribution matching for graph data [40]. Figure 2: Bi-level graph condensation (a) and the proposed GC-SNTK (b). \(G^{\mathcal{T}}\) and \(G^{\mathcal{S}}\) denote the target and condensed graph data. \(\mathrm{GNN}_{\theta}\) is the graph neural network model with parameter \(\theta\). \(\mathcal{L}\) and \(\ell\) are the loss of the outer and inner loop, respectively. opt-alg is the optimization algorithm. The bi-level model entails a inner GNN training loop, a outer \(G^{\mathcal{S}}\) optmization loop, and \(R\)-time initialization. On the contrary, the proposed GC-SNTK only have a single \(G^{\mathcal{S}}\) optimization loop. Methodology In this section, we start by introducing the bi-level graph condensation model. Next, we provide comprehensive details on the proposed fast graph condensation framework (as shown in Figure 2(b)), and illustrate a structure-based kernel method specifically designed for graph data. Finally, a theoretical analysis of the computational complexity is presented, proving that the proposed GCN-SNTK is more efficient compared to previous SOTA method. ### Notations and Definitions In general, a graph can be represented as \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},...,v_{|\mathcal{V}|}\}\) is the set of \(|\mathcal{V}|\) nodes and \(\mathcal{E}\) is the edge set. 
We use \(X\in\mathbb{R}^{|\mathcal{V}|\times d}\) to denote the node feature matrix, where \(d\) is the dimension of the node features. The structural information of the graph can be represented as an adjacency matrix \(A\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\), where \(A_{ij}=1\) indicates that there is a connection between nodes \(v_{i}\) and \(v_{j}\), and 0 otherwise. Given a target graph dataset \(G^{\mathcal{T}}=\{X^{\mathcal{T}},A^{\mathcal{T}},Y^{\mathcal{T}}\}\) with \(N\) nodes, where \(Y^{\mathcal{T}}\) denotes the node labels, the goal of graph condensation is to condense \(G^{\mathcal{T}}\) into a graph \(G^{\mathcal{S}}=\{X^{\mathcal{S}},A^{\mathcal{S}},Y^{\mathcal{S}}\}\) with a significantly smaller number of nodes \(M\) (\(M\ll N\)), while the model trained on \(G^{\mathcal{S}}\) achieves performance comparable to that trained on the much larger target graph dataset \(G^{\mathcal{T}}\) (as shown in Figure 1).

### Bi-level Graph Condensation

As shown in Figure 2(a), existing solutions regard graph condensation as a bi-level optimization problem [22], which can be formulated as follows: \[\min_{G^{\mathcal{S}}}\mathbb{E}_{\theta_{0}\sim P_{\theta_{0}}} \left[\sum_{t=0}^{T}\mathcal{L}(\mathrm{GNN}_{\theta_{t+1}},G^{\mathcal{S}},G ^{\mathcal{T}})\right] \tag{1}\] \[s.t.\quad\mathrm{GNN}_{\theta_{t+1}}=\texttt{opt-alg}_{\theta_{ t}}\left[\ell(\mathrm{GNN}_{\theta_{t}},G^{\mathcal{S}})\right].\] In Equation (1), the outer loop is responsible for optimizing the condensed graph data \(G^{\mathcal{S}}\) by minimizing the matching loss \(\mathcal{L}\). At the same time, the inner loop trains a model \(\mathrm{GNN}_{\theta}\) on the condensed graph data \(G^{\mathcal{S}}\) by minimizing the training loss \(\ell\). Besides, to ensure the robust generalization of the condensed graph data \(G^{\mathcal{S}}\) over the parameter initialization distribution \(P_{\theta_{0}}\), multiple initializations of the model parameters are required. As a result, the solving algorithm of bi-level graph condensation actually has three nested loops, leading to huge computational costs in terms of time and GPU resources.

### Fast Graph Condensation via Kernel Ridge Regression

To tackle the substantial computational issues, we propose to replace \(\mathrm{GNN}_{\theta}\) in Equation (1) with the KRR paradigm. More specifically, KRR [26; 27] entails a convex optimization process with a closed-form solution, bypassing the resource-intensive training of \(\mathrm{GNN}_{\theta}\). Consequently, the bi-level optimization in the general graph condensation framework can be simplified into a more computationally efficient single-level paradigm. Moreover, KRR leverages the condensed graph data \(G^{\mathcal{S}}\) as the model parameters, eliminating the need for multiple initializations to ensure robust generalization of the condensed graph data. These distinctive characteristics contribute to a significant improvement in the overall efficiency of graph condensation. Denote by \(f_{G^{\mathcal{S}}}\) the KRR model constructed from the condensed data \(G^{\mathcal{S}}\). Mathematically, the KRR model can be written as: \[f_{G^{\mathcal{S}}}(G^{\mathcal{T}})=\mathcal{K}_{\mathcal{TS}}(\mathcal{K}_{ \mathcal{SS}}+\lambda I)^{-1}Y^{\mathcal{S}}, \tag{2}\] where \(\mathcal{K}_{\mathcal{TS}}:G^{\mathcal{T}}\times G^{\mathcal{S}}\rightarrow \mathbb{R}^{N\times M}\) is a node-level kernel function. \(\lambda>0\) is the regularization hyperparameter preventing over-fitting to mislabeled data [41].
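As an aside, the closed-form predictor of Eq. (2) is straightforward to realize; the following is a minimal NumPy sketch (our illustration, assuming the kernel matrices have already been computed; it is not the released implementation):

```python
import numpy as np

def krr_predict(K_TS, K_SS, Y_S, lam=1e-3):
    """Closed-form KRR prediction of Eq. (2): K_TS (K_SS + lam I)^{-1} Y_S.

    K_TS: (N, M) kernel matrix between target and condensed nodes
    K_SS: (M, M) kernel matrix among condensed nodes
    Y_S:  (M, C) one-hot label matrix of the condensed nodes
    """
    M = K_SS.shape[0]
    # Solve the linear system instead of forming the inverse explicitly.
    alpha = np.linalg.solve(K_SS + lam * np.eye(M), Y_S)
    return K_TS @ alpha  # (N, C) predictions for the target nodes
```

In the full framework, gradients of the condensation loss introduced below are back-propagated through this solve into the condensed data.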
Equation (2) leverages the condensed graph data \(G^{\mathcal{S}}\) to construct the KRR model and gives predictions for the target graph data \(G^{\mathcal{T}}\). To evaluate the prediction performance, the Mean Square Error (MSE) loss is adopted: \[\mathcal{L}(f_{G^{\mathcal{S}}},G^{\mathcal{T}})=\frac{1}{2}||Y^{\mathcal{T}}- f_{G^{\mathcal{S}}}(G^{\mathcal{T}})||_{F}^{2}. \tag{3}\] Then \(G^{\mathcal{S}}\) is updated based on the optimization algorithm \(\texttt{opt-alg}\) and the condensation loss \(\mathcal{L}\): \[G^{\mathcal{S}}=\texttt{opt-alg}_{G^{\mathcal{S}}}[\mathcal{L}(f_{G^{\mathcal{ S}}},G^{\mathcal{T}})]. \tag{4}\] Iterating Equations (2)-(4) until convergence yields the well-condensed graph data \(G^{\mathcal{S}}\). Finally, the proposed graph condensation framework with KRR can be modeled as the following minimization problem: \[\begin{split}&\min_{G^{\mathcal{S}}}\mathcal{L}(f_{G^{\mathcal{S}}},G ^{\mathcal{T}})\\ & s.t.\ f_{G^{\mathcal{S}}}=\mathcal{K}_{\cdot\mathcal{S}}( \mathcal{K}_{\mathcal{SS}}+\lambda I)^{-1}Y^{\mathcal{S}}.\end{split} \tag{5}\] The proposed graph condensation framework is illustrated in Figure 2(b), where incorporating KRR in graph condensation allows the bi-level optimization architecture to be streamlined into a single-level one. Compared with \(\mathrm{GNN}_{\theta}\) in Equation (1), on one hand, the convex nature of KRR allows the optimal model to be obtained without iterative training. On the other hand, KRR does not require multiple initializations of model parameters. These two aspects contribute to a noteworthy improvement in graph condensation efficiency.

### Structure-based Neural Tangent Kernel (SNTK)

The role of the kernel function \(\mathcal{K}\) in KRR is to map the data to a higher-dimensional space in order to capture more intricate nonlinear structures present in the data. Different kernel functions are used to capture patterns in different types of data. Therefore, the choice of the kernel function directly affects the performance of KRR predictions and, consequently, the quality of the condensed graph data. In recent years, the Neural Tangent Kernel (NTK), which serves as a bridge between deep neural networks and kernel methods, has attracted extensive attention, along with theoretical studies, across numerous successful applications [36; 42; 43]. Specifically, NTK approximates the behavior of infinite-width neural networks, demonstrating tremendous capability in modeling highly complex relationships among instances [42]. More importantly, harnessing the power of the estimated NTK in KRR can be considered as training infinitely wide deep neural networks with infinitely many SGD training steps [37; 36], leading to efficient training without sacrificing the model's expressivity. Given these advantages, the NTK provides a great opportunity to improve graph condensation, which is typically based on the bi-level optimization method. A straightforward approach to using NTK [36] on graph data is to compute the kernel matrix between nodes solely based on _their features_ \(X\). For two nodes \(v_{i}\) and \(v_{j}\) with features \(x_{i}\) and \(x_{j}\), respectively, denote \(\boldsymbol{\Theta}_{ij}^{(0)}=\boldsymbol{\Sigma}_{ij}^{(0)}=x_{i}\cdot x_{j}\) as the initialization.
The recursive NTK workflow is given by: \[\boldsymbol{\Theta}_{ij}^{(L)}=\boldsymbol{\Theta}_{ij}^{(L-1)}\dot{\boldsymbol {\Sigma}}_{ij}^{(L)}+\boldsymbol{\Sigma}_{ij}^{(L)}, \tag{6}\] in which \[\boldsymbol{\Sigma}_{ij}^{(L)} =\alpha\mathbb{E}_{(a,b)\sim\mathcal{N}(0,\boldsymbol{\Lambda}_{ij}^{(L)})}[ \sigma(a)\sigma(b)], \tag{7}\] \[\dot{\boldsymbol{\Sigma}}_{ij}^{(L)} =\alpha\mathbb{E}_{(a,b)\sim\mathcal{N}(0,\boldsymbol{\Lambda}_{ij}^{(L)})}[ \dot{\sigma}(a)\dot{\sigma}(b)],\] (8) \[\boldsymbol{\Lambda}_{ij}^{(L)} =\begin{bmatrix}\boldsymbol{\Sigma}_{ii}^{(L-1)}&\boldsymbol{ \Sigma}_{ij}^{(L-1)}\\ \boldsymbol{\Sigma}_{ji}^{(L-1)}&\boldsymbol{\Sigma}_{jj}^{(L-1)}\end{bmatrix}, \tag{9}\] where \(\dot{\sigma}(a)\) is the derivative of the activation \(\sigma(a)\), \(\alpha\) is a coefficient related to the activation \(\sigma\), and \(\boldsymbol{\Theta}_{ij}^{(L)}\) is the kernel value after \(L\) iterations. However, calculating the kernel value based only on node features can easily lead to low-quality condensed graphs, because the structural information in graphs provides crucial insights into the relationships, dependencies, and interactions among nodes. By disregarding structural information, important contextual information and patterns in the graphs may be overlooked. Hence, incorporating the structural information into NTK for graph condensation is essential. To tackle this issue, we introduce a novel kernel method, namely the Structure-based Neural Tangent Kernel (SNTK), to capture the structure of graphs by connecting local neighborhood aggregation and the NTK method. Based on the homophily assumption [44], SNTK integrates information from neighboring nodes to enhance node representation learning, aiming to capture nodes' context and relationships. More formally, the node neighborhood aggregation of SNTK can be defined as follows: \[h_{i}=c_{i}\sum_{p\in\mathcal{N}\{i\}\cup\{i\}}x_{p}, \tag{10}\] where \(\mathcal{N}\{i\}\cup\{i\}\) indicates the set consisting of node \(v_{i}\) and its neighbors \(\mathcal{N}\{i\}\). To avoid imbalanced information propagation, the aggregation coefficient is set to \(c_{i}=(\|\sum_{p\in\mathcal{N}\{i\}\cup\{i\}}x_{p}\|_{2})^{-1}\). With local neighborhood aggregation, the SNTK kernel can be formulated as: \[\hat{\boldsymbol{\Sigma}}_{ij}^{(0)} =h_{i}\cdot h_{j}=c_{i}c_{j}\sum_{p\in\mathcal{N}\{i\}\cup\{i\}} \sum_{q\in\mathcal{N}\{j\}\cup\{j\}}\boldsymbol{\Sigma}_{pq}^{(0)}, \tag{11}\] \[\hat{\boldsymbol{\Theta}}_{ij}^{(0)} =h_{i}\cdot h_{j}=c_{i}c_{j}\sum_{p\in\mathcal{N}\{i\}\cup\{i\}} \sum_{q\in\mathcal{N}\{j\}\cup\{j\}}\boldsymbol{\Theta}_{pq}^{(0)},\] (12) \[\hat{\boldsymbol{\Theta}}_{ij}^{(L)} =\hat{\boldsymbol{\Theta}}_{ij}^{(L-1)}\dot{\hat{\boldsymbol{\Sigma}}}_{ij}^{(L)}+\hat{ \boldsymbol{\Sigma}}_{ij}^{(L)}. \tag{13}\] Real-world scenarios often necessitate \(K\) (\(K>1\)) rounds of neighborhood aggregation to capture information from nodes' \(K\)-hop neighbors. Therefore, the aggregation, each round followed by \(L\) iterations of the kernel matrix, is recursively repeated \(K\) times.

### Matrix Form of Structure-based Neural Tangent Kernel (SNTK)

To enable efficient accelerated computation on GPUs, we formulate the SNTK computational process in matrix form. Given two different graphs \(G=\{X,A,Y\}\) and \(G^{\prime}=\{X^{\prime},A^{\prime},Y^{\prime}\}\), the initialization of SNTK is set as \(\boldsymbol{\Theta}^{(0)}=\boldsymbol{\Sigma}^{(0)}=X(X^{\prime})^{\top}\).
Denoting the matrix-form aggregation as \(\hat{\boldsymbol{\Sigma}}=\mathrm{Aggr}(\boldsymbol{\Sigma})\), the matrix forms of Equations (11) and (12) are given by: \[\hat{\boldsymbol{\Sigma}}^{(0)} =\hat{A}(C\odot\boldsymbol{\Sigma}^{(0)})(\hat{A}^{\prime})^{\top}, \tag{14}\] \[\hat{\boldsymbol{\Theta}}^{(0)} =\hat{A}(C\odot\boldsymbol{\Theta}^{(0)})(\hat{A}^{\prime})^{\top}, \tag{15}\] where \(\hat{A}\) and \(\hat{A}^{\prime}\) are the self-looped adjacency matrices of the graphs \(G\) and \(G^{\prime}\), \(C\) is the aggregation coefficient matrix with \(C_{ij}=c_{i}c_{j}\), \((\cdot)^{\top}\) indicates the transpose operation, and \(\odot\) is the element-wise product. Then, according to [37], the matrix form of the recursive iteration in Equation (13) is: \[\dot{\hat{\boldsymbol{\Sigma}}}^{(L)} =\frac{1}{2\pi}\left[\pi-\arccos(\hat{\boldsymbol{\Sigma}}^{(L-1)})\right], \tag{16}\] \[\hat{\boldsymbol{\Sigma}}^{(L)} =\frac{1}{2\pi}\left[\hat{\boldsymbol{\Sigma}}^{(L-1)}\odot\left(\pi-\arccos( \hat{\boldsymbol{\Sigma}}^{(L-1)})\right)+\sqrt{1-(\hat{\boldsymbol{\Sigma}}^{(L-1)})^{2}}\right],\] (17) \[\hat{\boldsymbol{\Theta}}^{(L)} =\hat{\boldsymbol{\Theta}}^{(L-1)}\odot\dot{\hat{\boldsymbol{\Sigma}}}^{(L)}+\hat{\boldsymbol {\Sigma}}^{(L)}, \tag{18}\] where \(\arccos(\hat{\boldsymbol{\Sigma}}^{(L-1)})\), \(\sqrt{\cdot}\) and \((\hat{\boldsymbol{\Sigma}}^{(L-1)})^{2}\) indicate element-wise arc-cosine, square-root and squaring operations. To sum up, for \(K\) rounds of neighborhood aggregation, each followed by \(L\) iterations of Equation (18), the workflow of SNTK is illustrated in **Algorithm 1**. Finally, the algorithm for the proposed GC-SNTK, which incorporates the KRR paradigm and SNTK, is summarized in **Algorithm 2**.

### Computational Complexity Analysis

To demonstrate theoretically that the computational complexity of our proposed GC-SNTK method is lower than that of the bi-level method, i.e., GCond, we conduct a comprehensive computational complexity analysis in this part. The notations used here are as follows: \(N\) and \(M\) represent the numbers of nodes in the original dataset and the condensed data, \(d\) stands for the node feature dimensionality, \(w\) signifies the GCN hidden layer width, and \(k\) represents the average number of neighbors per node (in the GCond method, \(k=M\), since the adjacency relationship between condensed nodes is constructed as a fully connected weighted graph). Additionally, \(t_{in}\) and \(t_{out}\) denote the numbers of iterations in the inner loop and the outer loop, respectively, and \(R\) corresponds to the number of training epochs, which is the number of model parameter initializations in GCond. The computational complexity of the GCond method during the inner loop of GCN training is \(O(t_{in}(M^{2}d+Mdw))\). The outer loop of GCond consists of two parts: 1) optimizing the node features \(X^{S}\) or optimizing the MLP model used to update \(A^{S}\); 2) updating \(A^{S}\) according to the updated node features \(X^{S}\). Let the number of iterations for optimizing the node features \(X^{S}\) be \(t_{X}\) and the number of iterations for optimizing the MLP be \(t_{A}\) (\(t_{X}+t_{A}=t_{out}\)). Then the computational complexity of the outer loop is \(O(t_{X}(Nkd+M^{2}dw)+t_{A}(M^{2}dw))\). Therefore, the total computational complexity of the **GCond** method is \(O(Rt_{out}t_{in}(M^{2}d+Mdw)+Rt_{X}(Nkd)+Rt_{out}(M^{2}dw))\). On the other hand, the computational complexity of the proposed GC-SNTK mainly consists of two parts: kernel matrix calculation and KRR model construction. For the former, in practical experiments, the parameters \(K\) and \(L\) are usually set to be relatively small (e.g., in the Ogbn-arxiv dataset, \(K=L=1\)).
These parameters thus affect the computational complexity only linearly, which is \(O(MNk^{2}+MN)\). For the latter, it is \(O(NM^{2})\). Therefore, the total computational complexity of the **GC-SNTK** method is \(O(RMNk^{2}+RNM^{2})\). Although the time complexities of both GCond and GC-SNTK increase linearly with the number of target graph nodes \(N\), GC-SNTK proves advantageous due to its single loop and rapid convergence, leading to faster execution than GCond.

## 4 Experiment

In this section, a performance comparison is conducted for node classification models trained on condensed data to evaluate the expressive ability of the condensed graph data. Subsequently, we conduct experiments with extremely small condensation sizes to measure the effectiveness of the condensation methods. Additionally, the efficiency of various condensation methods is assessed; the experimental results serve to validate the conclusions drawn in the computational analysis in Section 3.6. Moreover, an examination of the generalization capabilities of the condensed data is presented, along with an ablation study of the kernel method. Lastly, a sensitivity analysis of the parameters is conducted. All experiments are conducted on an NVIDIA RTX 3090 GPU.

### Experimental Settings

**Datasets**. Four benchmark graph datasets of different scales are adopted, including Cora, Pubmed [45], Ogbn-arxiv [46], and Flickr [47]. These datasets vary in size, ranging from thousands of nodes to hundreds of thousands of nodes. Additionally, these datasets fall into two different settings: transductive (Cora, Pubmed, and Ogbn-arxiv) and inductive (Flickr). The details of the datasets are shown in Table 1. To ensure the transductive setting during experiments, we apply graph convolution on the entire graph data of the Cora, Pubmed, and Ogbn-arxiv datasets to facilitate information propagation between different nodes. Graph convolution can be modeled as \(\tilde{X}=\tilde{A}X\), where \(X\) is the node feature matrix, \(\tilde{A}=\tilde{D}^{-\frac{1}{2}}\hat{A}\tilde{D}^{-\frac{1}{2}}\), \(\hat{A}=A+I\), \(A\) is the adjacency matrix, and \(\tilde{D}=diag(\sum_{j}\hat{A}_{1j},\sum_{j}\hat{A}_{2j},...,\sum_{j}\hat{A}_{nj})\).

**Baselines**. To evaluate the effectiveness of GC-SNTK, we compare it against various graph condensation methods serving as baselines, including three core-set methods, Random, Herding [48], and K-Center [49], as well as a simplified GCond method named One-Step [22]. Additionally, we consider two different versions of the previous state-of-the-art (SOTA) graph condensation method, denoted as GCond (X) and GCond (X, A) [18]. The former condenses the graph data into a version including only node features, while the latter includes both node features and structural information. The GCond framework offers flexibility in utilizing different GNNs for the condensation and testing stages; hence, various combinations are available for this approach. For the sake of generality, in the experimental section, unless otherwise specified, we default to the best-performing combination, SGC-GCN, for all GCond methods. Specifically, SGC [50] is employed for condensation, whereas GCN [51] is utilized for testing.

**Parameter Settings**. For SNTK, we tune \(K\) and \(L\) in the range \(\{1,2,3,4,5\}\). The hyper-parameter \(\lambda\) of KRR is tuned within the range from \(10^{-6}\) to \(10^{6}\).
The candidate learning rates are \(\{0.1,0.05,0.01,0.005,0.001,0.0005,0.0001\}\). The experimental parameter settings for GC-SNTK are given in Table 2. Regarding the GCond and One-Step methods, we follow the settings described in the original papers [18; 22].

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
Dataset & \#Nodes & \#Edges & \#Classes & \#Features & \multicolumn{3}{c}{Split} \\
\cline{6-8}
 & & & & & Train & Val. & Test \\
\hline
Cora & 2,708 & 5,429 & 7 & 1,433 & 140 & 500 & 1,000 \\
Pubmed & 19,717 & 44,338 & 3 & 500 & 60 & 500 & 1,000 \\
Ogbn-arxiv & 169,343 & 1,166,243 & 40 & 128 & 90,941 & 29,799 & 48,603 \\
Flickr & 89,250 & 899,756 & 7 & 500 & 44,625 & 22,312 & 22,313 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Details of the datasets.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Dataset & Learning Rate & Ridge (\(\lambda\)) & \(K\) & \(L\) \\
\hline
Cora & 0.01 & 1 & 2 & 2 \\
Pubmed & 0.01 & 0.001 & 2 & 2 \\
Ogbn-arxiv & 0.001 & 0.00001 & 1 & 1 \\
Flickr & 0.001 & 0.00001 & 1 & 1 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Details of the parameter settings in GC-SNTK.

### Performance Comparison of Condensed Graph Data

This part aims to assess the representation capability of the condensed graph data at three different condensation scales. We train node classification models on the condensed data to predict the labels of the test set. The classification accuracy serves as the measure of the representation capability of the condensed graph data. For each dataset, we opt for three distinct condensation ratios. In the transductive scenario, \(Ratio=\frac{M}{N}\); in the inductive scenario, \(Ratio=\frac{M}{|training\ set|}\). As shown in Table 3, the proposed GC-SNTK method demonstrates remarkable graph condensation capability. Even at condensation ratios of 0.1% (Flickr) and 0.05% (Ogbn-arxiv), GC-SNTK achieves **99%** and **89%** of the performance of the GCN model trained on the full training set, respectively. Moreover, GC-SNTK even outperforms the GCN model on the Cora and Pubmed datasets. Specifically, with condensation ratios of 1.3% (Cora) and 0.08% (Pubmed), GC-SNTK achieves performance levels of **101%** and **102%** relative to the GCN model trained on the full training data. The results on the Cora and Pubmed datasets show that GC-SNTK can improve efficiency, eliminate redundancies, and retain the most representative information in the original data. In other words, the proposed method can reduce dataset scale and enhance the entities' representation capability without sacrificing predictive performance.

### Performance with Extremely Small Condensation Size

This part explores the variation in the representational performance of condensed graph data as the condensation scale decreases. We continuously reduce the number of nodes in the condensed data and observe how the performance changes. The experimental results are presented in Figure 3. Due to a limitation of the GCond algorithm, it is unable to condense graph data into node counts smaller than the number of node categories. Therefore, when the number of nodes in the condensed data is smaller than the number of classes in the target dataset (Figure 3(a) - 3(d)), only GC-SNTK is involved in the experiments. The proposed GC-SNTK method maintains strong performance even when the scale of the synthetic graphs is extremely small.
Predictive performance of the model trained on condensed data drops significantly only when the number of nodes in the condensed data is smaller than the number of node categories. Additionally, on the Ogbn-arxiv dataset, our method achieves 63% accuracy (Figure 3(c)) even when the condensed data contains only 20 nodes (fewer than the 40 categories). The experimental results in Figures 3(e) - 3(h) show the performance comparison of GC-SNTK with the GCond and One-Step methods at smaller condensation scales. Overall, GC-SNTK exhibits outstanding performance across the majority of condensation scales on all four datasets. Particularly on the Ogbn-arxiv dataset, GC-SNTK is significantly better than the other two methods, achieving up to **3%** higher accuracy. In contrast, GCond performs slightly worse than GC-SNTK, while the simplified optimization process of the One-Step method leads to the poorest performance.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
Dataset & Ratio (Size) & Random & Herding & K-Center & One-Step & \multicolumn{3}{c}{GCond} & \multicolumn{3}{c}{**GC-SNTK (Ours)**} \\
\cline{5-10}
 & 1.30\% (35) & 63.6\(\pm\)3.7 & 67.0\(\pm\)1.3 & 64.0\(\pm\)2.3 & 80.2\(\pm\)0.73 & 75.9\(\pm\)1.2 & 81.2\(\pm\)0.7 & **82.2\(\pm\)0.3** & 81.7\(\pm\)0.7 & \\
Cora & 2.60\% (70) & 72.8\(\pm\)1.1 & 73.4\(\pm\)1.0 & 73.2\(\pm\)1.2 & 80.4\(\pm\)1.77 & 77.5\(\pm\)0.9 & 81.0\(\pm\)0.6 & **82.4\(\pm\)0.5** & 81.5\(\pm\)0.7 & 81.1\(\pm\)0.5 \\
 & 5.20\% (140) & 76.8\(\pm\)0.1 & 76.8\(\pm\)0.1 & 76.7\(\pm\)0.1 & 79.8\(\pm\)0.64 & 76.0\(\pm\)0.9 & 81.1\(\pm\)0.5 & **82.1\(\pm\)0.1** & 81.3\(\pm\)0.2 & \\
\hline
\multirow{3}{*}{Pubmed} & 0.08\% (15) & 69.5\(\pm\)0.5 & 73.0\(\pm\)0.7 & 69.0\(\pm\)0.6 & 77.7\(\pm\)0.12 & 59.4\(\pm\)0.7 & 77.8\(\pm\)0.7 & **78.9\(\pm\)0.7** & 71.8\(\pm\)6.8 & \\
\cline{2-10}
 & 0.15\% (30) & 73.8\(\pm\)0.8 & 75.4\(\pm\)0.7 & 73.7\(\pm\)0.8 & 77.8\(\pm\)0.7 & 77.8\(\pm\)0.7 & **79.3\(\pm\)0.3** & 74.0\(\pm\)4.9 & 77.1\(\pm\)0.3 \\
 & 0.30\% (60) & 77.9\(\pm\)0.4 & 77.7\(\pm\)0.8 & 77.5\(\pm\)0.7 & 77.1\(\pm\)0.4 & 60.8\(\pm\)1.7 & 78.4\(\pm\)0.3 & **79.4\(\pm\)0.3** & 76.4\(\pm\)2.8 & \\
\hline
\multirow{3}{*}{Ogbn-arxiv} & 0.05\% (40) & 77.1\(\pm\)3.9 & 52.4\(\pm\)1.8 & 77.2\(\pm\)0.3 & 59.2\(\pm\)0.0 & 61.3\(\pm\)0.5 & 59.2\(\pm\)1.6 & 15.9\(\pm\)0.3 & **64.4\(\pm\)0.2** & \\
\cline{2-10}
 & 0.25\% (454) & 57.3\(\pm\)1.1 & 58.6\(\pm\)1.2 & 56.8\(\pm\)0.8 & 60.1\(\pm\)0.7 & 65.4\(\pm\)0.4 & 63.2\(\pm\)0.3 & 65.5\(\pm\)1.0 & 65.1\(\pm\)0.8 & 71.4\(\pm\)0.1 \\
\cline{2-10}
 & 0.50\% (909) & 60.0\(\pm\)0.9 & 60.4\(\pm\)0.8 & 60.3\(\pm\)0.4 & 60.0\(\pm\)0.12 & 63.1\(\pm\)0.5 & 64.0\(\pm\)0.4 & **65.7\(\pm\)0.4** & 65.4\(\pm\)0.5 & \\
\hline
\multirow{3}{*}{Flickr} & 0.10\% (44) & 41.8\(\pm\)2.0 & 42.5\(\pm\)1.8 & 42.0\(\pm\)0.7 & 45.8\(\pm\)0.47 & 45.9\(\pm\)0.4 & 16.5\(\pm\)0.4 & 46.6\(\pm\)0.3 & **46.7\(\pm\)0.1** & \\
\cline{2-10}
 & 0.05\% (223) & 44.0\(\pm\)0.4 & 43.9\(\pm\)0.9 & 43.2\(\pm\)0.4 & 14.6\(\pm\)0.1 & 13.4\(\pm\)0.5 & 0.2\(\pm\)0.2 & **47.1\(\pm\)0.1** & 46.7\(\pm\)0.1 & 46.8\(\pm\)0.1 & 47.2\(\pm\)0.1 \\
\cline{1-1} \cline{2-10}
 & 1.00\% (446) & 44.6\(\pm\)0.2 & 44.4\(\pm\)0.6 & 44.1\(\pm\)0.4 & 45.4\(\pm\)0.3 & 45.0\(\pm\)0.1 & **47.1\(\pm\)0.1** & 46.6\(\pm\)0.2 & 46.5\(\pm\)0.2 & \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Performance evaluation of the condensed data. The last column displays the classification accuracy obtained by the Graph Convolutional Network (GCN) model trained on the complete training set.
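For concreteness, the matrix-form SNTK recursion of Section 3.5 (Eqs. (14)-(18)) that underlies these results can be sketched in a few lines of NumPy. This is our simplified illustration, not the released implementation: it assumes row-normalized node features so that kernel entries stay in \([-1,1]\), and it folds the aggregation coefficients into a degree-normalized operator, which only approximates the coefficients \(c_{i}c_{j}\) of Eq. (10):

```python
import numpy as np

def sntk(X1, A1, X2, A2, K=2, L=2):
    """Simplified matrix-form SNTK between two node sets (Eqs. (14)-(18))."""
    def agg_op(A):
        # self-looped, symmetrically normalized adjacency; stands in for the
        # aggregation with coefficients c_i c_j in Eqs. (14)-(15)
        A_hat = A + np.eye(A.shape[0])
        d = A_hat.sum(axis=1)
        return A_hat / np.sqrt(np.outer(d, d))
    P1, P2 = agg_op(A1), agg_op(A2)
    X1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)  # keep entries in [-1, 1]
    X2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    sigma = X1 @ X2.T                                    # Sigma^(0) = X (X')^T
    theta = sigma.copy()                                 # Theta^(0) = Sigma^(0)
    for _ in range(K):                                   # K aggregation rounds
        sigma, theta = P1 @ sigma @ P2.T, P1 @ theta @ P2.T
        for _ in range(L):                               # L kernel iterations
            s = np.clip(sigma, -1.0, 1.0)
            sigma_dot = (np.pi - np.arccos(s)) / (2 * np.pi)       # Eq. (16)
            sigma = (s * (np.pi - np.arccos(s))
                     + np.sqrt(1.0 - s ** 2)) / (2 * np.pi)        # Eq. (17)
            theta = theta * sigma_dot + sigma                      # Eq. (18)
    return theta
```

Plugging the resulting \(\mathcal{K}_{\mathcal{TS}}\) and \(\mathcal{K}_{\mathcal{SS}}\) into the KRR predictor sketched earlier completes one evaluation of the condensation loss in Eq. (3).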
### Condensation Efficiency

This part evaluates the time efficiency of GC-SNTK and One-Step (which is faster than GCond) from an empirical perspective. As the optimization of the condensed data progresses, the performance of node classification models trained on the condensed data also changes over time. Therefore, we evaluate the time efficiency of GC-SNTK by observing the correlation between model performance and the time spent on optimizing the condensed data. This correlation is illustrated by the curves in Figure 4, where the \(y\)-axis represents model performance and the \(x\)-axis represents time consumption. Each curve stops when it reaches the corresponding performance indicated in Table 3. As shown in Figure 4, GC-SNTK exhibits remarkable time efficiency across all four datasets. For instance, GC-SNTK can condense the Cora and Pubmed datasets to 70 and 30 nodes in 4.01 and 3.37 seconds, respectively, which is significantly (**61.5** and **23.6** times) faster than the One-Step method (Cora: 246.7s, Pubmed: 79.6s), while achieving higher accuracy. Moreover, our method maintains its advantages on large datasets like Ogbn-arxiv and Flickr. By condensing these two datasets to 90 and 44 nodes, GC-SNTK (Ogbn-arxiv: 871.5s, Flickr: 163.0s) is **3.6** and **2.7** times faster than the One-Step method (Ogbn-arxiv: 3161.1s, Flickr: 441.3s), respectively. Compared to One-Step, a bi-level optimization approach simplified to improve condensation efficiency by performing each loop only once, GC-SNTK both preserves the quality of the condensed graph data and significantly speeds up the condensation process.

### Generalization of Condensed Data

This part investigates the generalization of the condensed data across different graph node classification models. Specifically, we use the condensed data to train various models, including GCN [51; 52; 53], SGC [50; 16], APPNP [15; 12], GraphSAGE (SAGE) [54], and KRR [55; 56; 57], to assess their generalization performance. For the GCond method, we use four different GNN models (GCN, SGC, APPNP, SAGE) in the condensation process and evaluate the condensed graph data on all five models. The experimental results are presented in Table 4. Figure 4: Condensation time cost on the four datasets (the number after the dataset name is the node count of the condensed data). Figure 3: The comparison of node classification accuracy. (a)-(d) illustrate the performance variation of GC-SNTK as the condensation size decreases to a single node. (e)-(h) represent the performance comparison of the GC-SNTK, GCond, and One-Step methods at extremely small condensation sizes. Even though GC-SNTK is not a neural network model, the data condensed by it still performs well when training neural network models. For instance, the prediction models trained on the data condensed by GC-SNTK from the Cora and Pubmed datasets achieve average accuracies of 78.0% and 75.7%, respectively. The GCond method achieves better condensation quality and generalization performance when using SGC as the condensation model. In contrast, the performance of its condensed data on the KRR model is poor. As a result, the overall performance of GCond is inferior to that of the GC-SNTK method.

### Ablation Study

In this section, we choose the widely used dot product kernel function, as well as NTK, for comparison, to test the impact of different kernel functions on the quality of the condensed graph data.
Experimental results demonstrate that our proposed structure-based neural tangent kernel better captures the information of graph data and improves the quality of the condensed graph data. As shown in Table 5, the utilization of SNTK has a significant impact on the results across all datasets. SNTK surpasses the performance of the other two kernel functions, providing evidence of its influential role in measuring the similarities among graph nodes and improving the quality of the condensed graph data.

### Parameter Sensitivity Analysis

In GC-SNTK, three parameters influence the model performance: the ridge \(\lambda\) in KRR, and the number of aggregation rounds \(K\) and the iteration count \(L\) in SNTK. Therefore, this part experimentally studies the impact of these three parameters on graph condensation. The results indicate that our method is not sensitive to these parameters: as long as they are within a reasonable range, the performance differences exhibited by the model are not significant. As depicted in Fig. 5, we investigate the impact of different values of \(K\) and \(L\). Varying them yields 25 distinct kernel functions, which are subsequently applied in the proposed graph condensation framework. The classification accuracy on the test set is visualized using a color scale, where darker colors indicate higher accuracy. Notably, even among these 25 kernel functions, the accuracies differ by at most 3.1% on the Cora dataset and 2.3% on the Pubmed dataset. Hence, it is evident that GC-SNTK is not sensitive to the parameters of SNTK. For the ridge \(\lambda\) of KRR, we conducted experiments on the Cora and Pubmed datasets, exploring 13 different magnitudes ranging from \(1\times 10^{-6}\) to \(10^{6}\). The experimental results, as shown in Fig. 6,

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline
Dataset & C/T & GCN & SGC & APPNP & SAGE & KRR & Avg. \\
\hline
\multirow{4}{*}{Cora (70)} & GCN & 70.6 & 68.7 & 69.8 & 60.2 & 38.9 & 61.6 \\
 & SGC & 80.1 & 79.3 & 78.5 & 78.2 & 73.0 & 77.8 \\
 & APPNP & 73.5 & 73.1 & 72.1 & 72.3 & 63.7 & 70.9 \\
 & SAGE & 77.0 & 77.7 & 77.1 & 76.1 & 23.7 & 66.3 \\
 & **GC-SNTK** & 78.0 & 76.9 & 75.7 & 77.0 & 82.4 & **78.0** \\
\hline
\multirow{4}{*}{Pubmed (30)} & GCN & 50.4 & 50.5 & 54.8 & 51.3 & 31.2 & 47.6 \\
 & SGC & 77.1 & 76.0 & 77.1 & 77.1 & 45.2 & 70.5 \\
 & APPNP & 68.0 & 60.7 & 77.5 & 73.7 & 65.7 & 69.1 \\
 & SAGE & 51.9 & 58.0 & 69.9 & 65.3 & 41.3 & 57.3 \\
 & **GC-SNTK** & 76.1 & 73.2 & 75.6 & 74.1 & 79.3 & **75.7** \\
\hline
\end{tabular}
\end{table}
Table 4: The generalization capacity of the condensed data. We utilize various models to condense (C) the graph data and test (T) the performance of models trained on the condensed data.

\begin{table}
\begin{tabular}{c c c c}
\hline
\multicolumn{2}{c}{Dataset} & \multicolumn{3}{c}{Kernels} \\
\cline{2-4}
 & Dot Product & NTK & SNTK \\
\hline
Cora (70) & 78.9\(\pm\)0.3 & 80.9\(\pm\)1.4 & **82.4\(\pm\)0.5** \\
Pubmed (30) & 77.4\(\pm\)1.3 & 78.8\(\pm\)0.5 & **79.3\(\pm\)0.3** \\
Ogbn-arxiv (90) & 63.8\(\pm\)0.1 & 63.9\(\pm\)0.1 & **64.4\(\pm\)0.2** \\
Flickr (44) & 41.9\(\pm\)0.1 & 43.1\(\pm\)1.8 & **46.6\(\pm\)0.3** \\
\hline
\end{tabular}
\end{table}
Table 5: Ablation study of the kernels on different datasets.
Therefore, we can conclude that the proposed method GC-SNTK is not sensitive to these parameters. ## 5 Conclusion This paper proposes a novel dataset condensation framework (GC-SNTK) for condensing graph-structured data. Specifically, GC-SNTK transforms the bi-level condensation problem into a single-level optimization task as the KRR paradigm. In addition, a structured-based kernel function (SNTK) is introduced to enhance the quality of the condensed data in KRR. Based on NTK and neighborhood aggregation, SNTK can simultaneously leverage the node features and structural information of graphs to capture the complex dependencies among nodes. The experimental results demonstrate the superiority of our proposed GC-SNTK method in efficiency and efficacy. Furthermore, the proposed GC-SNTK performs promising generalization capability across various GNN architectures on graph-structured data.
2307.15837
Existence of global solutions for the nonlocal derivative nonlinear Schrödinger equation by the inverse scattering transform method
We address the existence of global solutions to the initial value problem for the integrable nonlocal derivative nonlinear Schr\"{o}dinger equation in the weighted Sobolev space $H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})$. The key to proving this result is to establish a bijection between the potential and the reflection coefficient by using the inverse scattering transform method in the form of the Riemann-Hilbert problem.
Yuan Li, Xinhan Liu, Engui Fan
2023-07-28T23:24:04Z
http://arxiv.org/abs/2307.15837v1
Existence of global solutions for the nonlocal derivative nonlinear Schrödinger equation by the inverse scattering transform method

###### Abstract

We address the existence of global solutions to the initial value problem for the integrable nonlocal derivative nonlinear Schrödinger equation in the weighted Sobolev space \(H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\). The key to proving this result is to establish a bijection between the potential and the reflection coefficient by using the inverse scattering transform method in the form of the Riemann-Hilbert problem. keywords: Nonlocal derivative nonlinear Schrödinger equation, Global solutions, Cauchy operator, Inverse scattering transform. _Mathematics Subject Classification:_ 35P25; 35Q51; 35Q15; 35A01; 35G25.

###### Contents

* 1 Introduction
* 2 Direct scattering transform
* 2.1 Notations
* 2.2 The Jost functions in the \(z\)-variable
* 2.3 Regularity of the scattering data
* 3 Construction of the RH problems
* 3.1 The relationship between RH problems
* 3.2 Reflection coefficients and Lipschitz continuity
* 4 Inverse scattering transform
* 4.1 Cauchy operator and the solvability of the RH problem
* 4.2 Reconstruction formulas for the potential
* 4.3 Estimates of the potential
* 5 Time evolution and global solutions

## 1 Introduction

It is well known that the derivative nonlinear Schrödinger (DNLS) equation [1] \[u_{t}(x,t)=iu_{xx}(x,t)+\sigma(u^{2}(x,t)\overline{u}(x,t))_{x},\ \sigma=\pm 1 \tag{1.1}\] is one of the most important integrable systems. In particular, the problem of global well-posedness for the DNLS equation (1.1) has been extensively studied in the last two decades. For example, Hayashi, Ozawa [2] and Wu, Guo [3; 4; 5] proved that global solutions to the DNLS equation exist in Sobolev space if the initial data satisfy a smallness condition. The application of the inverse scattering transform (IST) method to the global well-posedness of the DNLS equation has advanced considerably in recent years. Jenkins and Liu et al. [6; 7; 8; 9] have used these techniques to prove the global well-posedness of the DNLS equation for any initial data in a weighted Sobolev space. Pelinovsky and Shimabukuro [10; 11] constructed a unique global solution of the DNLS equation in a different weighted Sobolev space by using the IST method and the Bäcklund transformation. Bahouri and Perelman [12; 13] proved the global well-posedness of the DNLS equation in a very weak space with the help of IST techniques. Their work has almost completely settled the problem of the global well-posedness of the DNLS equation. In order to extend equation (1.1) to a nonlocal case, consider the linear system [14] \[\Phi_{x}=-ik^{2}\sigma_{3}\Phi+kQ(u)\Phi, \tag{1.2}\] \[\Phi_{t}=-2ik^{4}\sigma_{3}\Phi+\tilde{Q}(u)\Phi, \tag{1.3}\] where \[\sigma_{3}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix},\ \ \ \ Q(u)=\begin{bmatrix}0&u(x,t)\\ v(x,t)&0\end{bmatrix},\] \[\tilde{Q}(u)=\begin{bmatrix}k^{2}uv&-i(2ik^{3}u-ku_{x}+iku^{2}v)\\ -i(2ik^{3}v+kv_{x}+ikuv^{2})&-k^{2}uv\end{bmatrix}.\] Ablowitz et al.
recently found that under the symmetry reduction \(v(x,t)=\sigma\overline{u}(x,t),\ \ \sigma=\pm 1\), the compatibility condition of system (1.2) and (1.3) leads to the DNLS equation (1.1); under the symmetry reduction \[v(x,t)=i\sigma\overline{u}(-x,t),\ \ \sigma=\pm 1, \tag{1.4}\] the compatibility condition of system (1.2) and (1.3) yields the new nonlocal derivative nonlinear Schrödinger (nDNLS) equation \[u_{t}(x,t)=iu_{xx}(x,t)+i\sigma(u^{2}(x,t)\overline{u}(-x,t))_{x},\ \ x\in\mathbb{R}, \tag{1.5}\] where \(\sigma=\pm 1\), \(u(x,t)\) is a complex-valued function and \(\overline{u}\) denotes the complex conjugate of \(u\). In contrast to the well-known rich results on the DNLS equation, there is still little work related to the nDNLS equation (1.5). Ablowitz et al. constructed the inverse scattering transform for the equation (1.5) [14]. All possible nonlocal versions of the DNLS equations were derived by nonlocal reductions from the Chen-Lee-Liu equation, the Kaup-Newell equation and the Gerdjikov-Ivanov equation [15]. Zhou introduced a nonlocal version of the conventional DNLS equation and derived explicit expressions of solutions by Darboux transformations [16]. Chen et al. showed the global existence and uniqueness of the mild solution in super-critical function spaces for a Chen-Lee-Liu version of the nonlocal DNLS equation [17]. However, the existence of global classical solutions to the nDNLS equation (1.5) is still unknown. The essential difficulty in proving this result by analytical methods is that, compared to the DNLS equation (1.1), the conservation laws and integrable structures of the nDNLS equation (1.5) do not provide a useful norm structure. In this paper we consider the Cauchy problem for the nDNLS equation (1.5) with weighted Sobolev initial data \[u(x,0)=u_{0}(x)\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R}). \tag{1.6}\] We prove the existence of global solutions to the Cauchy problem (1.5)-(1.6) under an appropriate smallness condition on the initial data (see (3.21) below). The proof is achieved by using the IST method in the form of the Riemann-Hilbert (RH) problem. By using the IST method, we obtain a strong solution of the nDNLS equation (1.5), in contrast to the mild weak solution in integral form obtained in [17] for a Chen-Lee-Liu version of the nonlocal DNLS equation. Our main result is as follows. **Theorem 1**.: _For any initial datum \(u_{0}\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) that satisfies the small norm restriction (3.21), there exists a unique global solution \(u\in C(\mathbb{R},H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R}))\) to the Cauchy problem (1.5)-(1.6). Moreover, the map \(u_{0}\mapsto u\) is Lipschitz continuous from \(H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) to \(C(\mathbb{R},H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R}))\)._ **Remark 1**.: _Here we remark that, compared to the DNLS equation [10], the Jost functions and scattering data for the nDNLS equation (1.5) lack symmetry properties. This distinctive feature not only affects the solvability of the associated RH problem for the Cauchy problem (1.5)-(1.6), but also makes the analysis more difficult. To ensure the existence of the solution to the RH problem, a small norm assumption (3.21) on the initial data is necessary._ The structure of the paper is as follows. In Section 2, we first establish the Jost functions in the \(z\)-plane and give their existence, asymptotics and regularity.
Then we return to the \(k\)-plane, construct the scattering data and establish the corresponding regularity. Section 3 constructs several RH problems and defines the corresponding reflection coefficients. In order to ensure the solvability of the RH problems, a suitable small norm condition is found. Further, the regularity of the reflection coefficients and their Lipschitz continuity with respect to the potential are also given. Section 4 proves the solvability of the RH problem with the help of properties of the Cauchy operator and gives an estimate of its solution. The reconstruction formulas for the potential and the corresponding estimates are also obtained. Finally, we give the time evolution of the linear equation (1.3) and the proof of Theorem 1 in Section 5. ## 2 Direct scattering transform In this section, we state some main results on the direct scattering transform associated with the Cauchy problem (1.5)-(1.6). ### Notations We introduce some notations used in this paper. The Sobolev space \[H^{m}(\mathbb{R})=\{f\in L^{2}(\mathbb{R}):\partial_{x}^{j}f\in L^{2}(\mathbb{R}),\ j=1,2,\cdots,m\},\] equipped with the norm \[\|f\|_{H^{m}(\mathbb{R})}=\sum_{j=0}^{m}\|\partial_{x}^{j}f\|_{L^{2}(\mathbb{R})}.\] A weighted space \(L^{p,s}(\mathbb{R})\) is defined by \[L^{p,s}(\mathbb{R})=\{f\in L^{p}(\mathbb{R}):\langle\cdot\rangle^{s}f\in L^{p}(\mathbb{R})\},\ \ s\geq 0,\] equipped with the norm \[\|f\|_{L^{p,s}(\mathbb{R})}=\|\langle\cdot\rangle^{s}f\|_{L^{p}(\mathbb{R})},\] where \(\langle\cdot\rangle=\sqrt{1+|\cdot|^{2}}\). The weighted Sobolev space \(H^{m,s}(\mathbb{R})\) is defined by \[H^{m,s}(\mathbb{R})=\{f\in L^{2}(\mathbb{R}):\langle\cdot\rangle^{s}\partial_{x}^{j}f\in L^{2}(\mathbb{R}),\ j=0,1,\cdots,m\},\ \ s\geq 0,\] equipped with the norm \[\|f\|_{H^{m,s}(\mathbb{R})}=\sum_{j=0}^{m}\|\langle\cdot\rangle^{s}\partial_{x}^{j}f\|_{L^{2}(\mathbb{R})},\ s\geq 0.\] ### The Jost functions in the \(z\)-variable We consider the case in which the compatibility condition of system (1.2) and (1.3) under the symmetry reduction (1.4) leads to the nDNLS equation (1.5). In fact, due to the invariance under \(x\to-x\), the sign of \(\sigma=\pm 1\) does not matter in DNLS-type equations, so it is sufficient to analyse only the case \(\sigma=1\). According to inverse scattering theory, the time dependence is suppressed for the moment and the time variable \(t\) is regarded as fixed. Note that, unlike the nonlocal nonlinear Schrödinger equation, the standard fixed point argument for the Volterra integral equations associated with the linear equation (1.2) is not uniform in \(k\) as \(|k|\to\infty\) if \(Q\in L^{1}(\mathbb{R})\). Therefore, we solve this problem below by transforming the linear equation (1.2). For any \(u(x)\in L^{\infty}(\mathbb{R})\), \(k\in\mathbb{C}\), we define the following two transformations \[\Phi_{1}(x,k)=T_{1}(x,k)\Phi(x,k),\ \ \Phi_{2}(x,k)=T_{2}(x,k)\Phi(x,k), \tag{2.1}\] where \[T_{1}(x,k)=\begin{bmatrix}1&0\\ v(x)&2ik\end{bmatrix},\ \ T_{2}(x,k)=\begin{bmatrix}2ik&-u(x)\\ 0&1\end{bmatrix}.
\tag{2.2}\] The transformation (2.1) changes the spectral problem (1.2) into the following two Zakharov-Shabat type spectral problems: \[\partial_{x}\Phi_{1}=-ik^{2}\Phi_{1}+Q_{1}(u)\Phi_{1},\ \ Q_{1}(u)=\frac{1}{2i}\begin{bmatrix}-u(x)v(x)&u(x)\\ 2iv_{x}(x)-u(x)v^{2}(x)&u(x)v(x)\end{bmatrix}, \tag{2.3}\] and \[\partial_{x}\Phi_{2}=-ik^{2}\Phi_{2}+Q_{2}(u)\Phi_{2},\ \ Q_{2}(u)=\frac{1}{2i}\begin{bmatrix}-u(x)v(x)&-2iu_{x}(x)-u^{2}(x)v(x)\\ v(x)&u(x)v(x)\end{bmatrix}. \tag{2.4}\] Let \(\Phi_{1}^{\pm}(x,k),\Phi_{2}^{\pm}(x,k)\) be the vector Jost solutions of the spectral problems (2.3) and (2.4), respectively, with the following asymptotic properties \[\Phi_{1}^{\pm}(x,k)\to e^{-ik^{2}x},\ \ \Phi_{2}^{\pm}(x,k)\to e^{ik^{2}x},\ \ x\to\pm\infty.\] It is more convenient to define a new complex variable \(z=k^{2}\) and work with the normalized Jost solutions \[\mu_{\pm}(x,z)=\Phi_{1}^{\pm}(x,k)e^{izx},\ \ \nu_{\pm}(x,z)=\Phi_{2}^{\pm}(x,k)e^{-izx}, \tag{2.5}\] with the asymptotic behavior \[\mu_{\pm}(x,z)\to e_{1},\ \ \nu_{\pm}(x,z)\to e_{2},\ \ x\to\pm\infty. \tag{2.6}\] Here \(e_{1}=[1,0]^{T}\) and \(e_{2}=[0,1]^{T}\). The normalized Jost solutions have the following integral expression \[\mu_{\pm}(x,z)=e_{1}+\int_{\pm\infty}^{x}\begin{bmatrix}1&0\\ 0&e^{2iz(x-y)}\end{bmatrix}Q_{1}(u(y))\mu_{\pm}(y,z)dy, \tag{2.7}\] \[\nu_{\pm}(x,z)=e_{2}+\int_{\pm\infty}^{x}\begin{bmatrix}e^{2iz(x-y)}&0\\ 0&1\end{bmatrix}Q_{2}(u(y))\nu_{\pm}(y,z)dy. \tag{2.8}\] The following lemmas establish the existence, uniqueness, analyticity, asymptotics, smoothness, and Lipschitz continuity of the Jost functions \(\mu_{\pm}(x,z)\) and \(\nu_{\pm}(x,z)\) satisfying the equations (2.7) and (2.8), respectively. **Lemma 1**.: _If \(u\in H^{1,1}(\mathbb{R})\), then for every \(z\in\mathbb{R}\), the integral equations (2.7) and (2.8) admit unique solutions \(\mu_{\pm}(x,z)\in L^{\infty}_{x}(\mathbb{R})\) and \(\nu_{\pm}(x,z)\in L^{\infty}_{x}(\mathbb{R})\), respectively, and_ \[\|\mu_{\mp}(x,z)\|_{L^{\infty}_{x}}+\|\nu_{\pm}(x,z)\|_{L^{\infty}_{x}}\leq C,\ \ z\in\mathbb{C}^{\pm}, \tag{2.9}\] _where \(C\) is a constant independent of \(z\). Moreover, for every \(x\in\mathbb{R}\), \(\mu_{-}(x,z),\nu_{+}(x,z)\) can be analytically continued to \(z\in\mathbb{C}^{+}\), while \(\mu_{+}(x,z),\nu_{-}(x,z)\) can be analytically continued to \(z\in\mathbb{C}^{-}\)._ Proof.: We provide a detailed proof using \(\mu_{-}(x,z)\) as an example; the proofs for the other Jost functions are similar. We denote the \(L^{1}\) matrix norm of the \(2\times 2\) matrix function \(Q\) as \[\|Q\|_{L^{1}(\mathbb{R})}:=\sum_{i,j=1}^{2}\|Q_{i,j}\|_{L^{1}}.\] If \(u\in H^{1,1}(\mathbb{R})\), then \(Q_{1}\in L^{1}(\mathbb{R})\). Define the Neumann series \[\omega_{0}(x,z)=e_{1},\ \ \omega_{n+1}(x,z)=\int_{-\infty}^{x}F(x,y,z)\omega_{n}(y)dy,\] where \(F(x,y,z)=\mathrm{diag}[1,e^{2iz(x-y)}]Q_{1}(y)\). For every \(\mathrm{Im}z\geq 0\) and for every \(x\in\mathbb{R}\), we have \[|\omega_{1}(x,z)|\leq\int_{-\infty}^{x}|F(x,y,z)\omega_{0}(y)|dy\leq\int_{-\infty}^{x}|Q_{1}(y)|dy\triangleq\rho(x),\] and \(\rho(x)\leq\|Q_{1}\|_{L^{1}}\), \(\rho_{x}(x)=|Q_{1}(x)|\). Further we have \[|\omega_{2}(x,z)|\leq\int_{-\infty}^{x}|Q_{1}(y)\omega_{1}(y)|dy\leq\int_{-\infty}^{x}\rho_{y}(y)\rho(y)dy\leq\frac{\|Q_{1}\|_{L^{1}}^{2}}{2}.\] Using mathematical induction we get \[|\omega_{n}(x,z)|\leq\frac{\|Q_{1}\|_{L^{1}}^{n}}{n!},\] therefore \[|\sum_{n=0}^{\infty}\omega_{n}(x,z)|\leq\sum_{n=0}^{\infty}\frac{\|Q_{1}\|_{L^{1}}^{n}}{n!}=e^{\|Q_{1}\|_{L^{1}}}.
\tag{2.10}\] We know by direct verification that \(\sum_{n=0}^{\infty}\omega_{n}(x,z)\triangleq\mu_{-}(x,z)\) satisfies (2.7) and that \(\mu_{-}(x,z)\) is the unique solution of (2.7) by Gronwall's inequality. In addition, the uniform boundedness (2.9), as well as the analyticity, is easily obtained from (2.10). **Lemma 2**.: _If \(u\in H^{1,1}(\mathbb{R})\), then for every \(x\in\mathbb{R}\), we have_ \[\lim_{|z|\to\infty}\mu_{\pm}(x,z)=\mu_{\pm}^{\infty}(x)e_{1},\ \ \mu_{\pm}^{\infty}(x):=e^{-\frac{1}{2i}\int_{\pm\infty}^{x}u(y)v(y)dy}, \tag{2.11}\] \[\lim_{|z|\to\infty}\nu_{\pm}(x,z)=\nu_{\pm}^{\infty}(x)e_{2},\ \ \nu_{\pm}^{\infty}(x):=e^{\frac{1}{2i}\int_{\pm\infty}^{x}u(y)v(y)dy}. \tag{2.12}\] _Moreover, if \(u\in C^{1}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\), then for every \(x\in\mathbb{R}\), we have_ \[\lim_{|z|\to\infty}z[\mu_{\pm}(x,z)-\mu_{\pm}^{\infty}(x)e_{1}]=\alpha_{\pm}(x), \tag{2.13}\] \[\lim_{|z|\to\infty}z[\nu_{\pm}(x,z)-\nu_{\pm}^{\infty}(x)e_{2}]=\beta_{\pm}(x), \tag{2.14}\] _where \(\alpha_{\pm}(x):=[\alpha_{\pm}^{(1)}(x),\alpha_{\pm}^{(2)}(x)]^{T},\ \ \beta_{\pm}(x):=[\beta_{\pm}^{(1)}(x),\beta_{\pm}^{(2)}(x)]^{T}\) and_ \[\alpha_{\pm}^{(1)}(x) =\frac{1}{4}e^{-\frac{1}{2i}\int_{\pm\infty}^{x}u(y)v(y)dy}\int_{\pm\infty}^{x}[u(y)v_{y}(y)-\frac{1}{2i}u^{2}(y)v^{2}(y)]dy,\] \[\alpha_{\pm}^{(2)}(x) =-\frac{1}{2i}\partial_{x}[v(x)e^{-\frac{1}{2i}\int_{\pm\infty}^{x}u(y)v(y)dy}],\] \[\beta_{\pm}^{(1)}(x) :=-\frac{1}{2i}\partial_{x}[u(x)e^{\frac{1}{2i}\int_{\pm\infty}^{x}u(y)v(y)dy}],\] \[\beta_{\pm}^{(2)}(x) :=-\frac{1}{4}e^{\frac{1}{2i}\int_{\pm\infty}^{x}u(y)v(y)dy}\int_{\pm\infty}^{x}[u_{y}(y)v(y)+\frac{1}{2i}u^{2}(y)v^{2}(y)]dy.\] Proof.: We again give only the proof for \(\mu_{-}(x,z)\); the rest are similar. Note that the limit here is taken along a contour lying in the domain of analyticity of the Jost functions, with \(|\text{Im}(z)|\to\infty\). Let \(\mu_{-}(x,z)=[\mu_{-}^{(1)}(x,z),\mu_{-}^{(2)}(x,z)]^{T}\); according to the integral equation (2.7) we get \[\mu_{-}^{(1)}(x,z) =1-\frac{1}{2i}\int_{-\infty}^{x}[u(y)v(y)\mu_{-}^{(1)}(y,z)-u(y)\mu_{-}^{(2)}(y,z)]dy, \tag{2.15}\] \[\mu_{-}^{(2)}(x,z) =\frac{1}{2i}\int_{-\infty}^{x}e^{2iz(x-y)}\varphi(y,z)dy, \tag{2.16}\] where \[\varphi(x,z)=[2i\partial_{x}v(x)-u(x)v^{2}(x)]\mu_{-}^{(1)}(x,z)+u(x)v(x)\mu_{-}^{(2)}(x,z).\] When \(\text{Im}(z)>0\), Lebesgue's dominated convergence theorem can be applied to (2.16), because \(u\in H^{1,1}(\mathbb{R})\) and the uniform boundedness (2.9) holds, yielding \[\lim_{|z|\to\infty}\mu_{-}^{(2)}(x,z)=0. \tag{2.17}\] Substituting (2.17) into (2.15) gives the limit (2.11). In order to obtain the formula (2.13), (2.16) is rewritten in the following form \[\mu_{-}^{(2)}(x,z)= \frac{1}{2i}\int_{-\infty}^{x-\delta}e^{2iz(x-y)}\varphi(y,z)dy+\frac{\varphi(x,z)}{2i}\int_{x-\delta}^{x}e^{2iz(x-y)}dy\] \[+\frac{1}{2i}\int_{x-\delta}^{x}e^{2iz(x-y)}[\varphi(y,z)-\varphi(x,z)]dy\] \[\triangleq h_{1}(x,z)+h_{2}(x,z)+h_{3}(x,z).\] Since \(u\in C^{1}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\), we have \(\varphi(\cdot,z)\in C(\mathbb{R})\cap L^{1}(\mathbb{R})\), \(\forall\ \mathrm{Im}(z)>0\).
If we let \(\delta=[\mathrm{Im}(z)]^{-1/2}\), then \[\lim_{|z|\to\infty}z\mu_{-}^{(2)}(x,z) =\lim_{|z|\to\infty}zh_{2}(x,z)=\frac{1}{4}\lim_{|z|\to\infty}\varphi(x,z)\] \[=\frac{1}{4}[2i\partial_{x}v(x)-v^{2}(x)u(x)]\mu_{-}^{\infty}(x)\] \[=\alpha_{-}^{(2)}(x).\] Taking the derivative of equation (2.15) with respect to the \(x\) variable and using \(\overline{\mu}_{-}^{\infty}(x)\) as the integrating factor, we get \[\mu_{-}^{(1)}(x,z)=\mu_{-}^{\infty}(x)+\frac{1}{2i}\mu_{-}^{\infty}(x)\int_{-\infty}^{x}u(y)\overline{\mu}_{-}^{\infty}(y)\mu_{-}^{(2)}(y,z)dy,\] and hence \[\lim_{|z|\to\infty}z[\mu_{-}^{(1)}(x,z)-\mu_{-}^{\infty}(x)] =\frac{1}{2i}\mu_{-}^{\infty}(x)\lim_{|z|\to\infty}\int_{-\infty}^{x}u(y)\overline{\mu}_{-}^{\infty}(y)z\mu_{-}^{(2)}(y,z)dy\] \[=\alpha_{-}^{(1)}(x).\] **Lemma 3**.: _Suppose that \(u\in H^{1,1}(\mathbb{R})\), then we have_ \[\mu_{\pm}(x,z)-\mu_{\pm}^{\infty}(x)e_{1},\ \nu_{\pm}(x,z)-\nu_{\pm}^{\infty}(x)e_{2}\in L_{x}^{\infty}(\mathbb{R}^{\pm},H_{z}^{1}(\mathbb{R})), \tag{2.18}\] _and the map_ \[u\mapsto[\mu_{\pm}(x,z)-\mu_{\pm}^{\infty}(x)e_{1},\nu_{\pm}(x,z)-\nu_{\pm}^{\infty}(x)e_{2}] \tag{2.19}\] _is Lipschitz continuous from \(H^{1,1}(\mathbb{R})\) to \(L_{x}^{\infty}(\mathbb{R}^{\pm},H_{z}^{1}(\mathbb{R}))\)._ _Moreover, if \(u\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\), then we have_ \[z(\mu_{\pm}(x,z)-\mu_{\pm}^{\infty}(x)e_{1})-\alpha_{\pm}(x)\in L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R})), \tag{2.20}\] \[z(\nu_{\pm}(x,z)-\nu_{\pm}^{\infty}(x)e_{2})-\beta_{\pm}(x)\in L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R})), \tag{2.21}\] _and the map_ \[u\mapsto[z(\mu_{\pm}(x,z)-\mu_{\pm}^{\infty}(x)e_{1})-\alpha_{\pm}(x),z(\nu_{\pm}(x,z)-\nu_{\pm}^{\infty}(x)e_{2})-\beta_{\pm}(x)] \tag{2.22}\] _is Lipschitz continuous from \(H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) to \(L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))\)._ Proof.: As usual, we give only the proof for \(\mu_{-}(x,z)\); the rest are similar. Before we start the formal proof, we define the operator \(K\) \[Kf(x,z):=\int_{-\infty}^{x}\begin{bmatrix}1&0\\ 0&e^{2iz(x-y)}\end{bmatrix}Q_{1}(y)f(y,z)dy, \tag{2.23}\] and give its corresponding properties. For every column vector \(f(x,z)\in L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))\), the following estimate can be obtained by mathematical induction \[\|(K^{n}f)(x,z)\|_{L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))}\leq\frac{\|Q_{1}\|_{L^{1}(\mathbb{R})}^{n}}{n!}\|f(x,z)\|_{L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))},\] which implies that the operator \(I-K\) is invertible in the space \(L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))\) and \[\big{\|}(I-K)^{-1}\big{\|}_{L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))\to L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))}\leq\sum_{n=0}^{\infty}\frac{\|Q_{1}\|_{L^{1}(\mathbb{R})}^{n}}{n!}=e^{\|Q_{1}\|_{L^{1}(\mathbb{R})}}. \tag{2.24}\] In fact, the operator \(I-K\) is also invertible in the space \(L_{x}^{\infty}(\mathbb{R}^{-},L_{z}^{2}(\mathbb{R}))\), again with \[\big{\|}(I-K)^{-1}\big{\|}_{L_{x}^{\infty}(\mathbb{R}^{-},L_{z}^{2}(\mathbb{R}))\to L_{x}^{\infty}(\mathbb{R}^{-},L_{z}^{2}(\mathbb{R}))}\leq\sum_{n=0}^{\infty}\frac{\|Q_{1}\|_{L^{1}(\mathbb{R})}^{n}}{n!}=e^{\|Q_{1}\|_{L^{1}(\mathbb{R})}}. \tag{2.25}\] Thanks to the definition of the operator \(K\), we can rewrite the integral equation (2.7) as \[(I-K)\mu_{-}(x,z)=e_{1}.
\tag{2.26}\] Subtracting \((I-K)\mu_{-}^{\infty}e_{1}\) from both sides of the above equation, we have \[(I-K)(\mu_{-}-\mu_{-}^{\infty}e_{1})=e_{1}-(I-K)\mu_{-}^{\infty}e_{1}\triangleq g(x,z)e_{2}, \tag{2.27}\] where \[g(x,z)=\int_{-\infty}^{x}e^{2iz(x-y)}w(y)dy,\ \ w(x)=\partial_{x}[v(x)e^{-\frac{1}{2i}\int_{-\infty}^{x}uvdy}].\] If \(u\in H^{1,1}(\mathbb{R})\), it is easy to obtain \(w(x)\in L^{2,1}(\mathbb{R})\). By Proposition 1 in reference [10] it follows that \(g(x,z)\in L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))\). Then, we can obtain \(\mu_{-}(x,z)-\mu_{-}^{\infty}(x)e_{1}\in L_{x}^{\infty}(\mathbb{R},L_{z}^{2}(\mathbb{R}))\) by using (2.24) and (2.27). We take the derivative of equation (2.26) with respect to \(z\) and obtain \[(I-K)\eta(x,z)=g_{1}(x,z)e_{1}+g_{2}(x,z)e_{2}+g_{3}(x,z)e_{2}, \tag{2.28}\] where \[\eta(x,z)=[\partial_{z}\mu_{-}^{(1)},\partial_{z}\mu_{-}^{(2)}-2ix\mu_{-}^{(2)}]^{T},\] \[g_{1}(x,z)=\int_{-\infty}^{x}yu(y)\mu_{-}^{(2)}(y,z)dy,\] \[g_{2}(x,z)=-\int_{-\infty}^{x}ye^{2iz(x-y)}[2iv_{y}(y)-v^{2}(y)u(y)][\mu_{-}^{(1)}(y,z)-\mu_{-}^{\infty}(y)]dy,\] \[g_{3}(x,z)=-\int_{-\infty}^{x}ye^{2iz(x-y)}[2iv_{y}(y)-v^{2}(y)u(y)]\mu_{-}^{\infty}(y)dy.\] Combining (2.25), (2.27) and Proposition 1 in [10], we can get the following estimates \[\sup_{x\in\mathbb{R}^{-}}\|g_{1}(x,z)e_{1}+g_{2}(x,z)e_{2}+g_{3}(x,z)e_{2}\|_{L_{z}^{2}(\mathbb{R})}\] \[\leq C\sup_{x\in\mathbb{R}^{-}}\|\langle x\rangle[\mu_{-}(x,z)-\mu_{-}^{\infty}(x)e_{1}]\|_{L_{z}^{2}(\mathbb{R})}+\tilde{C}\] \[\leq C\sup_{x\in\mathbb{R}^{-}}\|\langle x\rangle g(x,z)e_{2}\|_{L_{z}^{2}(\mathbb{R})}+\tilde{C}\] \[\leq C\|w\|_{L^{2,1}(\mathbb{R}^{-})}+\tilde{C},\] where \(C\) and \(\tilde{C}\) are two positive constants depending on \(\|u\|_{H^{1,1}(\mathbb{R})}\). Using the property (2.25) again gives \(\eta(x,z)\in L_{x}^{\infty}(\mathbb{R}^{-},L_{z}^{2}(\mathbb{R}))\), which implies \(\partial_{z}[\mu_{-}(x,z)-\mu_{-}^{\infty}(x)e_{1}]\in L_{x}^{\infty}(\mathbb{R}^{-},L_{z}^{2}(\mathbb{R}))\). This completes the proof of (2.18) for \(\mu_{-}\). Next, we prove that the map (2.19) is Lipschitz continuous. Suppose that \(u,\tilde{u}\in H^{1,1}(\mathbb{R})\) satisfy \(\|u\|_{H^{1,1}(\mathbb{R})},\|\tilde{u}\|_{H^{1,1}(\mathbb{R})}\leq\gamma\) for some \(\gamma>0\). Denote the corresponding Jost functions by \(\mu_{-}(x,z)\) and \(\tilde{\mu}_{-}(x,z)\) respectively.
We define \(\tilde{K},\tilde{g},\tilde{w}\) corresponding to \(\tilde{u}\) in analogy with the equations (2.23) and (2.27); then \[(\mu_{-}-\mu_{-}^{\infty}e_{1})-(\tilde{\mu}_{-}-\tilde{\mu}_{-}^{\infty}e_{1})\] \[= (I-K)^{-1}ge_{2}-(I-\tilde{K})^{-1}\tilde{g}e_{2}\] \[= (I-K)^{-1}(g-\tilde{g})e_{2}+(I-K)^{-1}(K-\tilde{K})(I-\tilde{K})^{-1}\tilde{g}e_{2}.\] According to (2.24) and Proposition 1 in [10], we have \[\sup_{x\in\mathbb{R}}\|(I-K)^{-1}(g-\tilde{g})e_{2}\|_{L^{2}_{z}}\] \[\leq c_{1}(\gamma)\sup_{x\in\mathbb{R}}\|g-\tilde{g}\|_{L^{2}_{z}}\] \[\leq c_{1}(\gamma)\|w-\tilde{w}\|_{L^{2}}\] \[\leq c_{1}(\gamma)\|\mu_{-}^{\infty}-\tilde{\mu}_{-}^{\infty}\|_{L^{\infty}}+c_{2}(\gamma)\|u-\tilde{u}\|_{H^{1,1}}\] \[\leq c_{1}(\gamma)\|\int_{-\infty}^{x}(|u(y)|^{2}-|\tilde{u}(y)|^{2})dy\|_{L^{\infty}}+c_{2}(\gamma)\|u-\tilde{u}\|_{H^{1,1}}\] \[\leq c_{3}(\gamma)\|u-\tilde{u}\|_{H^{1,1}},\] and \[\sup_{x\in\mathbb{R}}\|(I-K)^{-1}(K-\tilde{K})(I-\tilde{K})^{-1}\tilde{g}e_{2}\|_{L^{2}_{z}}\] \[\leq c_{4}(\gamma)\sup_{x\in\mathbb{R}}\|(K-\tilde{K})(I-\tilde{K})^{-1}\tilde{g}e_{2}\|_{L^{2}_{z}}\] \[\leq c_{4}(\gamma)\sup_{x\in\mathbb{R}}\|(I-\tilde{K})^{-1}\tilde{g}e_{2}\|_{L^{2}_{z}}\|u-\tilde{u}\|_{H^{1,1}}\] \[\leq c_{4}(\gamma)\sup_{x\in\mathbb{R}}\|\tilde{g}e_{2}\|_{L^{2}_{z}}\|u-\tilde{u}\|_{H^{1,1}}\] \[\leq c_{4}(\gamma)\|\tilde{w}\|_{L^{2}}\|u-\tilde{u}\|_{H^{1,1}}\] \[\leq c_{4}(\gamma)\|u-\tilde{u}\|_{H^{1,1}},\] where \(c_{3}(\gamma)\) and \(c_{4}(\gamma)\) are two positive \(\gamma\)-dependent constants. Thus, we have \[\sup_{x\in\mathbb{R}}\|(\mu_{-}-\mu_{-}^{\infty}e_{1})-(\tilde{\mu}_{-}-\tilde{\mu}_{-}^{\infty}e_{1})\|_{L^{2}_{z}}\leq c(\gamma)\|u-\tilde{u}\|_{H^{1,1}},\] where \(c(\gamma)\) is a positive \(\gamma\)-dependent constant. A similar analysis of (2.28) using (2.25) and Proposition 1 in [10] shows that there exists \(c(\gamma)\) such that \[\sup_{x\in\mathbb{R}^{-}}\|\partial_{z}(\mu_{-}-\mu_{-}^{\infty}e_{1})-\partial_{z}(\tilde{\mu}_{-}-\tilde{\mu}_{-}^{\infty}e_{1})\|_{L^{2}_{z}}\leq c(\gamma)\|u-\tilde{u}\|_{H^{1,1}}.\] This proves the Lipschitz continuity of (2.19). In order to prove (2.20), we can imitate (2.27) and define \(\hat{g}(x,z)\) as follows \[(I-K)[z(\mu_{-}-\mu_{-}^{\infty}e_{1})-\alpha_{-}]\] \[= zge_{2}-(I-K)\alpha_{-}\] \[\triangleq \hat{g}(x,z)e_{2}.\] Applying a similar analysis to the above equation, we can prove the result (2.20) and the Lipschitz continuity of the map (2.22). ### Regularity of the scattering data In order to build the scattering data, we need to consider the Jost solutions of the original spectral problem (1.2). Based on the transformation (2.1) and the definition of \(\mu(x,z),\nu(x,z)\) (2.5), we can define the normalized Jost functions for the spectral problem (1.2) as follows \[\varphi_{\pm}(x,k)=T_{1}^{-1}(x,k)\mu_{\pm}(x,z),\ \ \psi_{\pm}(x,k)=T_{2}^{-1}(x,k)\nu_{\pm}(x,z),\ \ k\in\mathbb{C}\setminus\{0\}, \tag{2.29}\] then the two Jost functions satisfy the following Volterra integral equations \[\varphi_{\pm}(x,k)=e_{1}+k\int_{\pm\infty}^{x}\begin{pmatrix}1&0\\ 0&e^{2ik^{2}(x-y)}\end{pmatrix}Q(y)\varphi_{\pm}(y,k)dy, \tag{2.30}\] \[\psi_{\pm}(x,k)=e_{2}+k\int_{\pm\infty}^{x}\begin{pmatrix}e^{2ik^{2}(x-y)}&0\\ 0&1\end{pmatrix}Q(y)\psi_{\pm}(y,k)dy. \tag{2.31}\] When \(k=0\), it is obvious that \(\varphi_{\pm}(x,0)=e_{1}\), \(\psi_{\pm}(x,0)=e_{2}\) and \(\mu_{\pm}(x,0)=[1,v(x)]^{T}\), \(\nu_{\pm}(x,0)=[-u(x),1]^{T}\).
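Before turning to the properties of these Jost functions, it may help to see the Volterra construction in action. The following Python sketch (an illustration added here, not part of the paper's argument; the potential, grid and marching scheme are sample choices of ours) integrates the equation (2.7) for \(\mu_{-}(x,z)\) and checks the Neumann-series bound \(\sup_{x}|\mu_{-}(x,z)|\leq e^{\|Q_{1}\|_{L^{1}}}\) from the proof of Lemma 1:

```python
import numpy as np

# March the Volterra equation (2.7) for mu_-(x, z) as the equivalent ODE system
#   (mu1)' = (Q1 mu)_1,   (mu2)' = 2 i z mu2 + (Q1 mu)_2,   mu -> e1 as x -> -inf,
# for a sample potential obeying the reduction (1.4), and check the Lemma 1 bound.
x = np.linspace(-20.0, 20.0, 8001)
h = x[1] - x[0]
u = 0.4 / np.cosh(x) * np.exp(0.3j * x)    # a sample datum in H^2 cap H^{1,1}
v = 1j * np.conj(u[::-1])                  # reduction (1.4): v(x) = i conj(u(-x)), sigma = 1
vx = np.gradient(v, h)

# Entries of Q1(u) from (2.3), evaluated on the grid.
a11, a12 = -u * v / 2j, u / 2j
a21, a22 = (2j * vx - u * v**2) / 2j, u * v / 2j
norm_Q1 = np.trapz(np.abs(a11) + np.abs(a12) + np.abs(a21) + np.abs(a22), x)

z = 1.5                                    # a sample point of the continuous spectrum
m1, m2 = 1.0 + 0j, 0.0 + 0j                # boundary value e1 at x = -inf
sup = 1.0
for j in range(len(x) - 1):
    d1 = a11[j] * m1 + a12[j] * m2
    d2 = a21[j] * m1 + a22[j] * m2
    # exponential Euler step: the oscillatory factor e^{2izh} is applied exactly
    m1, m2 = m1 + h * d1, np.exp(2j * z * h) * (m2 + h * d2)
    sup = max(sup, abs(m1), abs(m2))

print(sup, "<=", np.exp(norm_Q1))          # the Neumann-series bound holds here
```

For this sample potential the computed supremum stays well below \(e^{\|Q_{1}\|_{L^{1}}}\), as the factorial decay of the Neumann iterates suggests.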
We note that \(\varphi_{\pm}=[\varphi_{\pm}^{(1)},\varphi_{\pm}^{(2)}]^{T}\) and \(\psi_{\pm}=[\psi_{\pm}^{(1)},\psi_{\pm}^{(2)}]^{T}\). According to Lemma 1, (2.6) and the definition (2.29), it is easy to obtain the following corollary. **Corollary 1**.: _If \(u\in H^{1,1}(\mathbb{R})\), then \(\varphi_{\pm}(x,k)\) and \(\psi_{\pm}(x,k)\) admit the following properties_ 1. \(\forall\ k^{2}\in\mathbb{R}\backslash\{0\}\)_, the integral equations (_2.30_) and (_2.31_) admit unique solutions_ \(\varphi_{\pm}(x,k)\in L_{x}^{\infty}(\mathbb{R})\) _and_ \(\psi_{\pm}(x,k)\in L_{x}^{\infty}(\mathbb{R})\)_, respectively._ 2. \(\forall x\in\mathbb{R}\)_,_ \(\varphi_{-}(x,k)\) _and_ \(\psi_{+}(x,k)\) _are analytic in the first and third quadrants of the_ \(k\) _plane,_ \(\varphi_{+}(x,k)\) _and_ \(\psi_{-}(x,k)\) _are analytic in the second and fourth quadrants of the_ \(k\) _plane._ 3. \(\varphi_{\pm}(x,k)\to e_{1},\ \psi_{\pm}(x,k)\to e_{2},\ \ x\to\pm\infty\)_._ 4. \(\varphi_{\pm}^{(1)}(x,k),\ \psi_{\pm}^{(2)}(x,k)\) _are even in_ \(k\)_,_ \(\varphi_{\pm}^{(2)}(x,k),\ \psi_{\pm}^{(1)}(x,k)\) _are odd in_ \(k\)_._ 5. \(\varphi_{-}(x,k)=\begin{bmatrix}0&1\\ \pm 1&0\end{bmatrix}\overline{\psi_{+}(-x,\pm i\overline{k})},\ \ \psi_{-}(x,k)=\begin{bmatrix}0&\pm 1\\ 1&0\end{bmatrix}\overline{\varphi_{+}(-x,\pm i\overline{k})}.\) By the theory of ODEs, we can define the following scattering matrix associated with the spectral problem (1.2) \[\begin{split}&\left[\varphi_{-}(x,k)e^{-ik^{2}x}\quad\psi_{-}(x,k)e^{ik^{2}x}\right]\\ =&\left[\varphi_{+}(x,k)e^{-ik^{2}x}\quad\psi_{+}(x,k)e^{ik^{2}x}\right]\begin{bmatrix}a(k)&c(k)\\ b(k)&d(k)\end{bmatrix},\end{split} \tag{2.32}\] then the scattering data \(a(k),b(k),c(k),d(k)\) are related to the Wronskians of the system via the relations below \[a(k) =W(\varphi_{-}(x,k)e^{-ik^{2}x},\psi_{+}(x,k)e^{ik^{2}x}), \tag{2.33}\] \[b(k) =W(\varphi_{+}(x,k)e^{-ik^{2}x},\varphi_{-}(x,k)e^{-ik^{2}x}),\] (2.34) \[c(k) =W(\psi_{-}(x,k)e^{ik^{2}x},\psi_{+}(x,k)e^{ik^{2}x}),\] (2.35) \[d(k) =W(\varphi_{+}(x,k)e^{-ik^{2}x},\psi_{-}(x,k)e^{ik^{2}x}). \tag{2.36}\] The scattering data \(a(k),b(k)\) and \(d(k)\) can also be expressed in the following integral form \[a(k) =1+k\int_{-\infty}^{+\infty}u(y)\varphi_{-}^{(2)}(y,k)dy=1-k\int_{-\infty}^{+\infty}v(y)\psi_{+}^{(1)}(y,k)dy, \tag{2.37}\] \[b(k) =k\int_{-\infty}^{+\infty}v(y)\varphi_{-}^{(1)}(y,k)e^{-2ik^{2}y}dy=k\int_{-\infty}^{+\infty}u(y)\varphi_{+}^{(1)}(y,k)e^{-2ik^{2}y}dy,\] (2.38) \[d(k) =1-k\int_{-\infty}^{+\infty}u(y)\varphi_{+}^{(2)}(y,k)dy=1+k\int_{-\infty}^{+\infty}v(y)\psi_{-}^{(1)}(y,k)dy. \tag{2.39}\] Further, we can obtain the following properties of the scattering data from Lemmas 2 and 3, Corollary 1 and the relation (2.29). **Corollary 2**.: _If \(u\in H^{1,1}(\mathbb{R})\), then the scattering data \(a(k),b(k),c(k),d(k)\) admit the following properties_ 1. \(a(k)\) _and_ \(d(k)\) _extend analytically to the first and third quadrants and to the second and fourth quadrants of the_ \(k\) _plane, respectively._ 2. \(a(k)\to e^{-\frac{1}{2i}\int_{-\infty}^{+\infty}uvdy}\triangleq a_{\infty},\;\;d(k)\to e^{\frac{1}{2i}\int_{-\infty}^{+\infty}uvdy}\triangleq d_{\infty},\;\;|k|\to\infty.\)__ 3. \(a(k),d(k)\) _are even in_ \(k\)_, and_ \(b(k),c(k)\) _are odd in_ \(k\)_._ 4.
\(a(k)=\overline{a(\pm i\overline{k})},\;d(k)=\overline{d(\pm i\overline{k})},\;c(k)=\mp\overline{b(\pm i\overline{k})},\;\;k\in\mathbb{R}\cup i\mathbb{R}.\)__ _ * \(a(k)d(k)-b(k)c(k)=1,\ \ k\in\mathbb{R}\cup i\mathbb{R}\)_._ Note that, by property (iv) in Corollary 2 [14], the scattering data \(a(k)\) and \(d(k)\) are not related to each other, which is a distinctive feature of the nDNLS equation compared to its conventional counterpart. This feature makes our subsequent analysis more difficult, and we will give specific ways to overcome this difficulty below. **Lemma 4**.: _If \(u\in H^{1,1}(\mathbb{R})\), then we have_ \[a(k)-a_{\infty},\ d(k)-d_{\infty},\ kb(k),\ k^{-1}b(k),\ kc(k),\ k^{-1}c(k)\in H^{1}_{z}(\mathbb{R}), \tag{2.40}\] _and the map_ \[u\mapsto[a(k)-a_{\infty},d(k)-d_{\infty},kb(k),k^{-1}b(k),kc(k),k^{-1}c(k)] \tag{2.41}\] _is Lipschitz continuous from \(H^{1,1}(\mathbb{R})\) to \(H^{1}_{z}(\mathbb{R})\)._ _Moreover, if \(u\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\), then we have_ \[kb(k),\ k^{-1}b(k),\ kc(k),\ k^{-1}c(k)\in L^{2,1}_{z}(\mathbb{R}), \tag{2.42}\] _and the map_ \[u\mapsto[kb(k),k^{-1}b(k),kc(k),k^{-1}c(k)] \tag{2.43}\] _is Lipschitz continuous from \(H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) to \(L^{2,1}_{z}(\mathbb{R})\)._ Proof.: The proof mainly relies on Lemma 3 and the relation (2.29), where (2.29) implies that \[\begin{cases}\varphi_{\pm}^{(1)}(x,k)=\mu_{\pm}^{(1)}(x,z),\\ \varphi_{\pm}^{(2)}(x,k)=-\dfrac{1}{2ik}v(x)\mu_{\pm}^{(1)}(x,z)+\dfrac{1}{2ik}\mu_{\pm}^{(2)}(x,z),\end{cases} \tag{2.44}\] and \[\begin{cases}\psi_{\pm}^{(1)}(x,k)=\dfrac{1}{2ik}\nu_{\pm}^{(1)}(x,z)+\dfrac{1}{2ik}u(x)\nu_{\pm}^{(2)}(x,z),\\ \psi_{\pm}^{(2)}(x,k)=\nu_{\pm}^{(2)}(x,z).\end{cases} \tag{2.45}\] By the Wronskian determinant (2.33), property (ii) in Corollary 2 and (2.44), we have \[\begin{split} a(k)-a_{\infty}=&\det\begin{bmatrix}\mu_{-}^{(1)}(x,z)&\psi_{+}^{(1)}(x,k)\\ \varphi_{-}^{(2)}(x,k)&\nu_{+}^{(2)}(x,z)\end{bmatrix}-\det\begin{bmatrix}\mu_{-}^{\infty}(x)&0\\ 0&\nu_{+}^{\infty}(x)\end{bmatrix}\\ =&[\mu_{-}^{(1)}(0,z)-\mu_{-}^{\infty}(0)][\nu_{+}^{(2)}(0,z)-\nu_{+}^{\infty}(0)]-\varphi_{-}^{(2)}(0,k)\psi_{+}^{(1)}(0,k)\\ &+\mu_{-}^{\infty}(0)[\nu_{+}^{(2)}(0,z)-\nu_{+}^{\infty}(0)]+\nu_{+}^{\infty}(0)[\mu_{-}^{(1)}(0,z)-\mu_{-}^{\infty}(0)].\end{split} \tag{2.46}\] We rewrite the term \(\varphi_{-}^{(2)}(0,k)\psi_{+}^{(1)}(0,k)\) in (2.46) as \[\begin{split}\varphi_{-}^{(2)}(0,k)\psi_{+}^{(1)}(0,k)=&\frac{1}{2i}k^{-1}\varphi_{-}^{(2)}(0,k)u(0)\nu_{+}^{\infty}(0)\\ &+\frac{1}{2i}k^{-1}\varphi_{-}^{(2)}(0,k)[2ik\psi_{+}^{(1)}(0,k)-u(0)\nu_{+}^{\infty}(0)],\end{split} \tag{2.47}\] where \[2ik\psi_{+}^{(1)}(0,k)-u(0)\nu_{+}^{\infty}(0)=\nu_{+}^{(1)}(0,z)+u(0)[\nu_{+}^{(2)}(0,z)-\nu_{+}^{\infty}(0)]\in H_{z}^{1}(\mathbb{R}),\] according to (2.45) and (2.18). Below we prove that \(k^{-1}\varphi_{\pm}^{(2)}(0,k)\in H_{z}^{1}(\mathbb{R})\).
From the integral expression (2.30) we can directly obtain \[k^{-1}\varphi_{\pm}^{(2)}(0,k)=\int_{\pm\infty}^{0}v(y)e^{-2izy}\mu_{\pm}^{(1)}(y,z)dy,\] therefore, \[\begin{split}&\|k^{-1}\varphi_{\pm}^{(2)}(0,k)\|_{L_{z}^{2}}\\ \leq&\|\int_{\pm\infty}^{0}v(y)e^{-2izy}[\mu_{\pm}^{(1)}(y,z)-\mu_{\pm}^{\infty}(y)]dy\|_{L_{z}^{2}}+\|\int_{\pm\infty}^{0}v(y)e^{-2izy}\mu_{\pm}^{\infty}(y)dy\|_{L_{z}^{2}}\\ \leq&\int_{\pm\infty}^{0}|v(y)|\|\mu_{\pm}^{(1)}(y,z)-\mu_{\pm}^{\infty}(y)\|_{L_{z}^{2}}dy+\|v\|_{L^{2}}\\ \leq&\sup_{x\in\mathbb{R}}\|\mu_{\pm}^{(1)}(x,z)-\mu_{\pm}^{\infty}(x)\|_{L_{z}^{2}}\|v\|_{L^{1}}+\|v\|_{L^{2}}.\end{split}\] Similarly one can prove that \(\partial_{z}(k^{-1}\varphi_{\pm}^{(2)}(0,k))\in L_{z}^{2}(\mathbb{R})\); therefore \(k^{-1}\varphi_{\pm}^{(2)}(0,k)\in H_{z}^{1}(\mathbb{R})\). By (2.47) and the Banach algebra property of \(H_{z}^{1}(\mathbb{R})\), we have \[\varphi_{-}^{(2)}(0,k)\psi_{+}^{(1)}(0,k)\in H_{z}^{1}(\mathbb{R}).\] Further, combining (2.46) and (2.18) we can get \(a(k)-a_{\infty}\in H_{z}^{1}(\mathbb{R})\). By the same process as above, we can obtain \(d(k)-d_{\infty}\in H_{z}^{1}(\mathbb{R})\). From the Wronskian determinant representation (2.34) and (2.44) we have \[\begin{split} 2ikb(k)=&\det\begin{bmatrix}\varphi_{+}^{(1)}(0,k)&\varphi_{-}^{(1)}(0,k)\\ 2ik\varphi_{+}^{(2)}(0,k)&2ik\varphi_{-}^{(2)}(0,k)\end{bmatrix}\\ =&\det\begin{bmatrix}\mu_{+}^{(1)}(0,z)&\mu_{-}^{(1)}(0,z)\\ \mu_{+}^{(2)}(0,z)&\mu_{-}^{(2)}(0,z)\end{bmatrix},\end{split} \tag{2.48}\] so \(kb(k)\in H^{1}_{z}(\mathbb{R})\) is easily obtained according to (2.18). The same is true for \(kc(k)\). Again, thanks to the Wronskian determinant representation (2.34) of the scattering data, we get the following equation \[k^{-1}b(k)=\det\begin{bmatrix}\mu_{+}^{(1)}(0,z)&\mu_{-}^{(1)}(0,z)\\ k^{-1}\varphi_{+}^{(2)}(0,k)&k^{-1}\varphi_{-}^{(2)}(0,k)\end{bmatrix},\] thus, \(k^{-1}b(k)\in H^{1}_{z}(\mathbb{R})\). The regularity \(k^{-1}\psi_{\pm}^{(1)}(0,k)\in H^{1}_{z}(\mathbb{R})\) can be obtained in the same way as for \(k^{-1}\varphi_{\pm}^{(2)}(0,k)\in H^{1}_{z}(\mathbb{R})\), and therefore \(k^{-1}c(k)\in H^{1}_{z}(\mathbb{R})\). This completes the proof of (2.40). The conclusion (2.42) can be obtained from (2.20), (2.21) and the Wronskian determinants (2.34), (2.35). The Lipschitz continuity statements (2.41) and (2.43) follow from the Lipschitz continuity of (2.19) and (2.22). ## 3 Construction of the RH problems ### The relationship between RH problems We define the reflection coefficients by \[r_{1}(k):=\frac{b(k)}{a(k)},\ \ r_{2}(k):=\frac{c(k)}{d(k)},\ \ k\in\mathbb{R}\cup i\mathbb{R}. \tag{3.1}\] From the relation (2.32) and Corollaries 1 and 2, we can define a sectionally analytic matrix \[\Psi(x,k):=\begin{cases}[\frac{\varphi_{-}(x,k)}{a(k)},\psi_{+}(x,k)],&\text{Im}k^{2}>0,\\ &\\ [\varphi_{+}(x,k),\frac{\psi_{-}(x,k)}{d(k)}],&\text{Im}k^{2}<0,\end{cases} \tag{3.2}\] then \(\Psi(x,k)\) solves the following RH problem.
**RH Problem 1**.: _Find a matrix-valued function \(\Psi(x,k)\) that satisfies the following conditions_ * _Analyticity:_ \(\Psi(x,k)\) _is analytic in_ \(\mathbb{C}\setminus\{\mathbb{R}\cup i\mathbb{R}\}\)_._ * _Jump condition:_ \(\Psi(x,k)\) _has continuous boundary values_ \(\Psi_{\pm}(x,k)\) _on_ \(\mathbb{R}\cup i\mathbb{R}\) _and_ \[\Psi_{+}(x,k)-\Psi_{-}(x,k)=\Psi_{-}(x,k)S(x,k),\ \ k\in\mathbb{R}\cup i\mathbb{R},\] (3.3) _where_ \[S(x,k)=\begin{bmatrix}-r_{1}(k)r_{2}(k)&-r_{2}(k)e^{-2ik^{2}x}\\ r_{1}(k)e^{2ik^{2}x}&0\end{bmatrix}.\] (3.4) * _Asymptotic condition:_ \[\Psi(x,k)\to[e^{\frac{1}{2i}\int_{x}^{+\infty}uvdy}e_{1},e^{-\frac{1}{2i}\int_{x}^{+\infty}uvdy}e_{2}]\triangleq\Psi_{\infty}(x),\ \ |k|\to\infty.\] (3.5) Notice that the reconstruction formulas (2.13) and (2.14) are built in the \(z\)-plane, so we next need to consider the RH problem in the \(z\)-plane. First, the reflection coefficients in the \(z\)-plane are established as follows \[r_{-}(z):=2ikr_{1}(k)=\frac{2ikb(k)}{a(k)},\ \ r_{+}(z):=-\frac{r_{2}(k)}{2ik}=-\frac{c(k)}{2ikd(k)},\ \ z\in\mathbb{R}. \tag{3.6}\] Starting from (2.32) and using the relation (2.29), a sectionally analytic matrix function can be defined by \[\Gamma(x,z):=\begin{cases}[\frac{\mu_{-}(x,z)}{a(z)},\gamma_{+}(x,z)],&\text{Im}z>0,\\ &\\ [\mu_{+}(x,z),\frac{\gamma_{-}(x,z)}{d(z)}],&\text{Im}z<0,\end{cases} \tag{3.7}\] where \[\gamma_{\pm}(x,z)= \frac{1}{2ik}T_{1}(x,k)T_{2}^{-1}(x,k)\nu_{\pm}(x,z) \tag{3.8}\] \[= -\frac{1}{4z}\begin{bmatrix}1&u(x)\\ v(x)&u(x)v(x)-4z\end{bmatrix}\nu_{\pm}(x,z),\] and \(a(z)=a(k),d(z)=d(k)\); this notation is reasonable because both \(a(k)\) and \(d(k)\) are even functions with respect to \(k\). \(\Gamma(x,z)\) satisfies the following RH problem in the \(z\)-plane. **RH Problem 2**.: _Find a matrix-valued function \(\Gamma(x,z)\) that satisfies the following conditions_ * _Analyticity:_ \(\Gamma(x,z)\) _is analytic in_ \(\mathbb{C}\setminus\mathbb{R}\)_._ * _Jump condition:_ \(\Gamma(x,z)\) _has continuous boundary values_ \(\Gamma_{\pm}(x,z)\) _on_ \(\mathbb{R}\) _and_ \[\Gamma_{+}(x,z)-\Gamma_{-}(x,z)=\Gamma_{-}(x,z)R(x,z),\ \ z\in\mathbb{R},\] (3.9) _where_ \[R(x,z)=\begin{bmatrix}r_{+}(z)r_{-}(z)&r_{+}(z)e^{-2izx}\\ r_{-}(z)e^{2izx}&0\end{bmatrix}.\] (3.10) * _Asymptotic condition:_ \[\Gamma(x,z)\to\Psi_{\infty}(x),\ \ |z|\to\infty.\] (3.11) We can further normalize the above RH problem 2 to a RH problem with the identity matrix as the boundary condition by defining the following matrix \[M(x,z):=[\Psi_{\infty}(x)]^{-1}\Gamma(x,z), \tag{3.12}\] which then satisfies the following RH problem. **RH Problem 3**.: _Find a matrix-valued function \(M(x,z)\) that satisfies the following conditions_ * _Analyticity:_ \(M(x,z)\) _is analytic in_ \(\mathbb{C}\setminus\mathbb{R}\)_._ * _Jump condition:_ \(M(x,z)\) _has continuous boundary values_ \(M_{\pm}(x,z)\) _on_ \(\mathbb{R}\) _and_ \[M_{+}(x,z)-M_{-}(x,z)=M_{-}(x,z)R(x,z),\ \ z\in\mathbb{R}.\] (3.13) * _Asymptotic condition:_ \[M(x,z)\to I,\ \ |z|\to\infty.\] (3.14) Lemma 2 gives the connection formulas between the potential \(u(x)\) and the Jost functions in the \(z\)-plane, and this connection can be further used to obtain properties of the potential \(u(x)\) by studying the RH problem for \(M(x,z)\) in the \(z\)-plane. Therefore, the solvability of the RH problem 3 plays an important role in the subsequent analysis.
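The jump conditions above are derived by purely algebraic manipulations of the scattering relation (2.32), so they can be checked mechanically. The following SymPy sketch (an illustration added here, not part of the paper's argument; the symbol `E` stands in for \(e^{2ik^{2}x}\)) verifies that the jump matrix \(S(x,k)\) of (3.4) follows from (2.32) and the definitions (3.1)-(3.2):

```python
import sympy as sp

# Check symbolically that the jump matrix S(x,k) of (3.4) follows from the
# scattering relation (2.32) and the definitions (3.1)-(3.2). Here E stands
# for exp(2*i*k^2*x); the vectors phi_+, psi_+ carry arbitrary entries.
a, b, c, d, E = sp.symbols('a b c d E')
phi_p = sp.Matrix(sp.symbols('p1 p2'))   # phi_+ as a column vector
psi_p = sp.Matrix(sp.symbols('q1 q2'))   # psi_+ as a column vector

# Scattering relation (2.32), rewritten column-wise:
phi_m = a * phi_p + b * E * psi_p        # phi_- = a phi_+ + b e^{2ik^2 x} psi_+
psi_m = c * phi_p / E + d * psi_p        # psi_- = c e^{-2ik^2 x} phi_+ + d psi_+

Psi_plus  = sp.Matrix.hstack(phi_m / a, psi_p)   # boundary value from Im k^2 > 0
Psi_minus = sp.Matrix.hstack(phi_p, psi_m / d)   # boundary value from Im k^2 < 0

r1, r2 = b / a, c / d                            # reflection coefficients (3.1)
S = sp.Matrix([[-r1 * r2, -r2 / E], [r1 * E, 0]])  # candidate jump matrix (3.4)

# Jump condition (3.3): Psi_+ - Psi_- = Psi_- S
assert sp.simplify(Psi_plus - Psi_minus - Psi_minus * S) == sp.zeros(2, 2)
print("jump relation (3.3) verified symbolically")
```

The same computation with the \(z\)-plane quantities of (3.6)-(3.8) reproduces the jump matrix \(R(x,z)\) of (3.10).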
At the beginning, we considered starting directly from RH Problem 3 by adding a small norm restriction on the reflection coefficients \(r_{\pm}(z)\) in order to be able to use the Zhou vanishing lemma [23] for the jump matrix \(R(x,z)\). However, we found that the reflection coefficients \(r_{\pm}(z)\) in the \(z\)-plane did not have good enough properties to allow us to draw further conclusions to prove Theorem 1. Therefore, in the following we will further transform RH Problem 3 by creating two new RH problems related to the matrix \(S(x,k)\) in (3.4) and using the vanishing lemma for \(S(x,k)\). Observing the expressions (3.4) and (3.10) for the matrices \(S(x,k)\) and \(R(x,z)\) respectively, we find that \(S(x,k)\) and \(R(x,z)\) are related as follows \[R(x,z)\rho_{j}(k)=\rho_{j}(k)S(x,k),\ \ z\in\mathbb{R},\ \ k\in\mathbb{R}\cup i\mathbb{R},\ \ j=1,2, \tag{3.15}\] where \[\rho_{1}(k)=\begin{bmatrix}1&0\\ 0&2ik\end{bmatrix},\ \ \rho_{2}(k)=\begin{bmatrix}\frac{1}{2ik}&0\\ 0&1\end{bmatrix},\ \ k\in\mathbb{C}\setminus\{0\}. \tag{3.16}\] We establish two new matrices \[N_{j}(x,k):=M(x,z)\rho_{j}(k)-\rho_{j}(k),\ \ j=1,2. \tag{3.17}\] It is easy to derive that \(N_{j}(x,k)\) satisfies the new RH problems: **RH Problem 4**.: _Find the matrix-valued function \(N_{j}(x,k),\ j=1,2\) that satisfies the following conditions_ * _Analyticity:_ \(N_{j}(x,k)\) _is analytic in_ \(\mathbb{C}\setminus\{\mathbb{R}\cup i\mathbb{R}\}\)_._ * _Jump condition:_ \(N_{j}(x,k)\) _has continuous boundary values_ \(N_{j,\pm}(x,k)\) _on_ \(\mathbb{R}\cup i\mathbb{R}\) _and_ \[N_{j,+}(x,k)-N_{j,-}(x,k)=N_{j,-}(x,k)S(x,k)+F_{j}(x,k),\ \ k\in\mathbb{R}\cup i\mathbb{R},\] (3.18) _where_ \(F_{j}(x,k):=\rho_{j}(k)S(x,k),\ j=1,2\)_._ * _Asymptotic condition:_ \[N_{j,\pm}(x,k)\to 0,\ \ |k|\to\infty.\] (3.19) ### Reflection coefficients and Lipschitz continuity We define the Hermitian part of \(I+S(x,k)\) by \[I+S_{H}(x,k):= I+\frac{1}{2}[S(x,k)+S^{H}(x,k)]\] \[= \begin{bmatrix}1-\operatorname{Re}[r_{1}(k)r_{2}(k)]&\frac{1}{2}[\bar{r}_{1}(k)-r_{2}(k)]e^{-2ik^{2}x}\\ \frac{1}{2}[r_{1}(k)-\bar{r}_{2}(k)]e^{2ik^{2}x}&1\end{bmatrix},\] where the superscript \(H\) denotes the Hermitian conjugate. In order to ensure the solvability of the RH problems, it is necessary to require that \(I+S_{H}(x,k)\) be a positive definite matrix, i.e. that the following restrictions on the reflection coefficients \(r_{1}(k)\) and \(r_{2}(k)\) hold \[1-\mathrm{Re}[r_{1}(k)r_{2}(k)]>0,\ \ 1-\frac{1}{4}|r_{1}(k)+\bar{r}_{2}(k)|^{2}>0. \tag{3.20}\] Next, we present the following proposition, which gives a sufficient condition for (3.20). **Proposition 1**.: _If \(u\in H^{1,1}(\mathbb{R})\) satisfies the small norm restriction_ \[\|Q_{1}(u)\|_{L^{1}(\mathbb{R})}\leq 0.295, \tag{3.21}\] _then \(|r_{j}(k)|<1,j=1,2\), for every \(k\in\mathbb{R}\cup i\mathbb{R}\)._ Proof.: Let us define the operator \(K_{1}\) by \[K_{1}f(x,z):=e_{1}+\int_{-\infty}^{x}\begin{pmatrix}1&0\\ 0&e^{2iz(x-y)}\end{pmatrix}Q_{1}(u(y))f(y,z)dy.
\tag{3.22}\] For every \(\mathrm{Im}z\geq 0\), we assume that \(f(x,z)=[f_{1}(x,z),f_{2}(x,z)]^{T}\), \(g(x,z)=[g_{1}(x,z),g_{2}(x,z)]^{T}\in L_{x}^{\infty}(\mathbb{R})\), then \[K_{1}f-K_{1}g=\begin{bmatrix}\frac{1}{2i}\int_{-\infty}^{x}-uv(f_{1}-g_{1})+u(f_{2}-g_{2})dy\\ \frac{1}{2i}\int_{-\infty}^{x}[(2iv_{y}-v^{2}u)(f_{1}-g_{1})+uv(f_{2}-g_{2})]e^{2iz(x-y)}dy\end{bmatrix},\] thus, for every \(\mathrm{Im}z\geq 0\), \[\|K_{1}f-K_{1}g\|_{L_{x}^{\infty}}\leq \frac{1}{2}(2\|\partial_{x}v\|_{L^{1}}+\|uv^{2}\|_{L^{1}}+2\|uv\|_{L^{1}}+\|u\|_{L^{1}})\|f-g\|_{L_{x}^{\infty}}\] \[= \|Q_{1}(u)\|_{L^{1}}\|f-g\|_{L_{x}^{\infty}}.\] We assume that \[\|Q_{1}(u)\|_{L^{1}}<c, \tag{3.23}\] where \(c\) is a positive constant. According to the definition of the operator \(K_{1}\) in (3.22) and the integral expression (2.7), it is easy to obtain \(\mu_{-}(x,z)=K_{1}\mu_{-}(x,z)\). We set \(f(x,z)=\mu_{-}(x,z),\ g(x,z)=[0,0]^{T}\), then for every \(\mathrm{Im}z\geq 0\), we have \[\|\mu_{-}(x,z)-e_{1}\|_{L_{x}^{\infty}}=\|K_{1}\mu_{-}(x,z)-e_{1}\|_{L_{x}^{\infty}}<c\|\mu_{-}(x,z)\|_{L_{x}^{\infty}}.\] Moreover, \[\|\mu_{-}(x,z)\|_{L^{\infty}_{x}}<\frac{1}{1-c},\ \ \|\mu_{-}(x,z)-e_{1}\|_{L^{\infty}_{x}}<\frac{c}{1-c}, \tag{3.24}\] where \(0<c<1\). The estimate (3.24) also holds for \(\mu_{+}(x,z)\). Let us first consider the case \(0<c\leq\frac{1}{2}\), which implies that \(\frac{c}{1-c}\leq 1\). The integral expressions (2.37), (2.38) for the scattering data \(a(k),b(k)\) and the relation (2.29) show that \[a(k)=1-\frac{1}{2i}\int_{\mathbb{R}}[u(x)v(x)\mu_{-}^{(1)}(x,z)-u(x)\mu_{-}^{(2)}(x,z)]dx,\] \[k^{-1}b(k)=\int_{\mathbb{R}}v(x)\mu_{-}^{(1)}(x,z)e^{-2ik^{2}x}dx.\] Hence, for every \(\mathrm{Im}z\geq 0\) \[|a(k)|\geq 1-|\frac{1}{2i}\int_{\mathbb{R}}[u(x)v(x)\mu_{-}^{(1)}(x,z)-u(x)\mu_{-}^{(2)}(x,z)]dx|\] \[\geq 1-\frac{1}{2}(\|uv\|_{L^{1}}\|\mu_{-}^{(1)}(x,z)-1\|_{L^{\infty}_{x}}+\|uv\|_{L^{1}}+\|u\|_{L^{1}}\|\mu_{-}^{(2)}(x,z)\|_{L^{\infty}_{x}})\] \[\geq 1-\|Q_{1}(u)\|_{L^{1}}\] \[> 1-c, \tag{3.25}\] and \[|k^{-1}b(k)|\leq \int_{\mathbb{R}}|v(x)\mu_{-}^{(1)}(x,z)|dx\leq\|\mu_{-}^{(1)}(x,z)\|_{L^{\infty}_{x}}\|v\|_{L^{1}}\] \[\leq \frac{2\|Q_{1}(u)\|_{L^{1}}}{1-c}<\frac{2c}{1-c}.\] From (2.48), we have \[|kb(k)|= \frac{1}{2}|\mu_{+}^{(1)}(x,z)\mu_{-}^{(2)}(x,z)-\mu_{+}^{(2)}(x,z)\mu_{-}^{(1)}(x,z)|\] \[\leq \frac{1}{2}[\|\mu_{+}^{(1)}(x,z)-1\|_{L^{\infty}_{x}}\|\mu_{-}^{(2)}(x,z)\|_{L^{\infty}_{x}}+\|\mu_{-}^{(2)}(x,z)\|_{L^{\infty}_{x}}\] \[+\|\mu_{+}^{(2)}(x,z)\|_{L^{\infty}_{x}}\|\mu_{-}^{(1)}(x,z)-1\|_{L^{\infty}_{x}}+\|\mu_{+}^{(2)}(x,z)\|_{L^{\infty}_{x}}]\] \[< \frac{c}{(1-c)^{2}}.\] The above estimates give \[|b(k)|^{2}\leq|k^{-1}b(k)||kb(k)|<\frac{2c^{2}}{(1-c)^{3}}. \tag{3.26}\] In order to make \(|r_{1}(k)|<1\), i.e. \(|b(k)|<|a(k)|\), we just need to choose an appropriate constant \(c\) such that \(\frac{2c^{2}}{(1-c)^{3}}<(1-c)^{2}\) and \(0<c\leq\frac{1}{2}\). The conclusion \(|r_{1}(k)|<1\) therefore holds whenever \(0<c\leq 0.295\), while the sufficient inequality fails for \(0.296\leq c\leq\frac{1}{2}\). The second case \(\frac{1}{2}<c<1\) can be analysed in the same way, but unfortunately no \(c\) in this range yields \(|r_{1}(k)|<1\). Thus when \(u\in H^{1,1}(\mathbb{R})\) and the constraint (3.21) is satisfied, we have \(\|Q_{1}(u)\|_{L^{1}}\leq 0.295\) and hence \(|r_{1}(k)|<1\), for every \(k\in\mathbb{R}\cup i\mathbb{R}\). It is easy to get \(\|Q_{1}(u)\|_{L^{1}}=\|Q_{2}(u)\|_{L^{1}}\).
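The numerical threshold used above can be reproduced directly: the sufficient inequality \(\frac{2c^{2}}{(1-c)^{3}}<(1-c)^{2}\) is equivalent to \(2c^{2}<(1-c)^{5}\), and its unique root in \((0,\frac{1}{2})\) lies between \(0.295\) and \(0.296\). A minimal Python sketch (an illustration added here, not part of the proof):

```python
# Locate the root of 2c^2 = (1-c)^5 on (0, 1/2) by bisection; the sufficient
# condition |r_1(k)| < 1 of Proposition 1 holds for c below this root.
def gap(c: float) -> float:
    """Positive iff the sufficient condition 2c^2 < (1-c)^5 holds."""
    return (1 - c) ** 5 - 2 * c ** 2

lo, hi = 0.0, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gap(mid) > 0:
        lo = mid
    else:
        hi = mid

print(f"condition holds for c < {lo:.4f}")  # ~0.2955, consistent with (3.21)
```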
The same bound can be obtained similarly for \(r_{2}(k)\) by using the integral expression (2.39) for the scattering data \(d(k)\) and the relationship (iv) between \(b(k)\) and \(c(k)\) in Corollary 2. **Corollary 3**.: _If \(u\in H^{1,1}(\mathbb{R})\) satisfies the small norm restriction (3.21), then_ \[\frac{1}{a(k)},\ \frac{1}{d(k)},\ b(k),\ c(k)\in L^{\infty}_{z}(\mathbb{R}). \tag{3.27}\] _Furthermore, there exists a constant \(c_{0}\) such that_ \[|r_{j}(k)|\leq c_{0}<1,\ \ \forall\ k\in\mathbb{R}\cup i\mathbb{R},\ \ j=1,2. \tag{3.28}\] Proof.: Estimates (3.25) and (3.26) in the proof of Proposition 1 show that when \(c=0.295\), \[|a(k)|>0.705,\ \ |b(k)|<0.7048.\] The same is true for \(d(k)\) and \(c(k)\). Therefore, there exists a constant \(c_{0}\) such that (3.28) holds. Before we continue, let us give two propositions to illustrate the nature of the reflection coefficients \(r_{j}(k),j=1,2\) and \(r_{\pm}(z)\), by using Lemma 4 and Corollary 3. **Proposition 2**.: _If \(u\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) satisfies the condition (3.21), then \(r_{1}(k),r_{2}(k)\in L^{2,1}_{z}(\mathbb{R})\)._ Proof.: By the definition (3.1) of \(r_{1}(k)\) and with the help of Hölder's inequality, we derive the following estimate \[\|r_{1}(k)\|^{2}_{L^{2,1}_{z}}=\|\langle z\rangle\frac{b(k)}{a(k)}\|^{2}_{L^{2}_{z}}\leq \|\frac{1}{a(k)}\|^{2}_{L^{\infty}_{z}}\|\langle z\rangle kb(k)\langle z\rangle k^{-1}b(k)\|_{L^{1}_{z}}\] \[\leq \|\frac{1}{a(k)}\|^{2}_{L^{\infty}_{z}}\|kb(k)\|_{L^{2,1}_{z}}\|k^{-1}b(k)\|_{L^{2,1}_{z}}.\] From (2.42) and (3.27), we have \(r_{1}(k)\in L^{2,1}_{z}(\mathbb{R})\). The same is true for \(r_{2}(k)\). **Proposition 3**.: _If \(u\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) satisfies the condition (3.21), then \(r_{\pm}(z)\in H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\) and \(kr_{\pm}(z),\ 2ikzr_{+}(z)\in L^{\infty}_{z}(\mathbb{R})\). Moreover, the map_ \[u\mapsto[r_{-}(z),r_{+}(z)] \tag{3.29}\] _is Lipschitz continuous from \(H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) to \(H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\)._ Proof.: This proof is based on the results of Lemma 4 and Corollary 3. By the definition (3.6) of \(r_{-}(z)\) we have \[\|r_{-}(z)\|_{L^{2}_{z}}=\|\frac{2ikb(k)}{a(k)}\|_{L^{2}_{z}}\leq 2\|\frac{1}{a(k)}\|_{L^{\infty}_{z}}\|kb(k)\|_{L^{2}_{z}},\] thus, \(r_{-}(z)\in L^{2}_{z}(\mathbb{R})\). According to the Sobolev imbedding theorem, the Sobolev space \(H^{1}(\mathbb{R})\) is imbedded in the space \(C^{0,\frac{1}{2}}(\overline{\mathbb{R}})\). Hence, \[a(k)-a_{\infty},\ kb(k)\in C^{0,\frac{1}{2}}(\overline{\mathbb{R}}),\] by (2.40). It can be further verified that when \(\frac{1}{a(k)}\in L^{\infty}_{z}(\mathbb{R})\), \(\frac{1}{a(k)}\) also belongs to \(C^{0,\frac{1}{2}}(\overline{\mathbb{R}})\), which in turn gives \[r_{-}(z)=\frac{2ikb(k)}{a(k)}\in C^{0,\frac{1}{2}}(\overline{\mathbb{R}}).\] Based on the above results we can differentiate \(r_{-}(z)\) in \(z\); then \[\|\partial_{z}r_{-}(z)\|_{L^{2}_{z}}= 2\|\frac{\partial_{z}[kb(k)]a(k)-kb(k)\partial_{z}a(k)}{a^{2}(k)}\|_{L^{2}_{z}}\] \[\leq 2\|\frac{1}{a(k)}\|_{L^{\infty}_{z}}\|\partial_{z}[kb(k)]\|_{L^{2}_{z}}+2\|\frac{1}{a(k)}\|^{2}_{L^{\infty}_{z}}\|kb(k)\|_{L^{\infty}_{z}}\|\partial_{z}a(k)\|_{L^{2}_{z}}.\] From (2.40) and the property that \(H^{1}(\mathbb{R})\) is imbedded in \(L^{\infty}(\mathbb{R})\), it follows that \(r_{-}(z)\in H^{1}_{z}(\mathbb{R})\).
The result \(r_{-}(z)\in L^{2,1}_{z}(\mathbb{R})\) is obtained immediately from (2.42), that is \[\|r_{-}(z)\|_{L^{2,1}_{z}}=2\|\frac{kb(k)}{a(k)}\|_{L^{2,1}_{z}}\leq 2\|\frac{1}{a(k)}\|_{L^{\infty}_{z}}\|kb(k)\|_{L^{2,1}_{z}}.\] This completes the proof that \(r_{-}(z)\in H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\). The same is true for \(r_{+}(z)\). Using the above results, it is possible to obtain \[|kr_{\pm}(z)|^{2}=|zr_{\pm}^{2}(z)|= |\int_{0}^{z}r_{\pm}^{2}(s)+2sr_{\pm}(s)r_{\pm}^{\prime}(s)ds|\] \[\leq \|r_{\pm}\|_{L_{z}^{2}}^{2}+\|r_{\pm}^{\prime}\|_{L_{z}^{2}}^{2}+\|r_{\pm}\|_{L_{z}^{2,1}}^{2}\] \[\leq \|r_{\pm}(z)\|_{H_{z}^{1}\cap L_{z}^{2,1}}^{2},\] thus \(kr_{\pm}(z)\in L_{z}^{\infty}(\mathbb{R})\). We can define \(\hat{r}_{-}(z):=\frac{2ikc(k)}{d(k)}\); since \(2ikzr_{+}(z)\) is a constant multiple of \(k\hat{r}_{-}(z)\), a similar proof as above gives \(2ikzr_{+}(z)\in L_{z}^{\infty}(\mathbb{R})\). We define the scattering data \(\tilde{a},\tilde{b}\) and reflection coefficient \(\tilde{r}_{-}\) corresponding to a potential \(\tilde{u}\); then \[r_{-}-\tilde{r}_{-}=\frac{2ik(b-\tilde{b})}{a}+\frac{2ik\tilde{b}}{a\tilde{a}}[(\tilde{a}-\tilde{a}_{\infty})-(a-a_{\infty})]+\frac{2ik\tilde{b}}{a\tilde{a}}(\tilde{a}_{\infty}-a_{\infty}).\] The Lipschitz continuity of the map (3.29) can be deduced from the above expression and Lemma 4. The corresponding assertion for \(r_{+}(z)\) can be proved similarly. ## 4 Inverse scattering transform ### Cauchy operator and the solvability of the RH problem In fact, for any column vector \(f\in\mathbb{C}^{2}\), we have \[\text{Re}[f^{H}(I+S(x,k))f]= \frac{1}{2}[f^{H}(I+S(x,k))f+f^{H}(I+S^{H}(x,k))f]\] \[= f^{H}(I+S_{H}(x,k))f.\] Therefore, the following conclusion, which is very important for investigating the solvability of the RH problem, can be drawn. **Proposition 4**.: _If the reflection coefficients \(r_{1}(k),r_{2}(k)\) satisfy the condition (3.28), then there exist positive constants \(\eta_{-}\) and \(\eta_{+}\) such that for any \(x\in\mathbb{R}\) and any column vector \(f\in\mathbb{C}^{2}\)_ \[\text{Re}[f^{H}(I+S(x,k))f]\geq\eta_{-}f^{H}f,\ \ k\in\mathbb{R}\cup i\mathbb{R}, \tag{4.1}\] \[\|(I+S(x,k))f\|\leq\eta_{+}\|f\|,\ \ k\in\mathbb{R}\cup i\mathbb{R}, \tag{4.2}\] _where \(\|f\|\) denotes the Euclidean norm of vectors in \(\mathbb{C}^{2}\)._ Proof.: When the reflection coefficients \(r_{1}(k),r_{2}(k)\) satisfy (3.28), it follows from (3.20) that \(I+S_{H}(x,k)\) is a positive definite matrix.
Thus there exists a unitary matrix \(U\) such that \[U^{H}(I+S_{H}(x,k))U=\text{diag}[\lambda_{-}(k),\lambda_{+}(k)],\] where \(\lambda_{-}(k),\lambda_{+}(k)\) are the eigenvalues of the matrix \(I+S_{H}(x,k)\) and have the following expressions \[\lambda_{\pm}(k)=\frac{2-\text{Re}[r_{1}(k)r_{2}(k)]\pm\sqrt{\text{Re}^{2}[r_{1}(k)r_{2}(k)]+|r_{1}(k)-\overline{r}_{2}(k)|^{2}}}{2}.\] Since there exists \(c_{0}\) such that \(|r_{j}(k)|\leq c_{0}<1\), there exists a constant \(\eta_{-}>0\) such that \[\lambda_{-}(k)=\frac{4-|r_{1}+\overline{r}_{2}|^{2}}{4-2\text{Re}[r_{1}r_{2}]+2\sqrt{\text{Re}^{2}[r_{1}r_{2}]+|r_{1}-\overline{r}_{2}|^{2}}}\geq\eta_{-}.\] We set \(g=U^{H}f=[g_{1},g_{2}]^{T}\); then \[\text{Re}[f^{H}(I+S(x,k))f]= f^{H}(I+S_{H}(x,k))f=\lambda_{-}(k)|g_{1}|^{2}+\lambda_{+}(k)|g_{2}|^{2}\] \[\geq \eta_{-}g^{H}g=\eta_{-}f^{H}f.\] Let \(f=[f_{1},f_{2}]^{T}\), then \[(I+S(x,k))f=\begin{bmatrix}(1-r_{1}(k)r_{2}(k))f_{1}-r_{2}(k)e^{-2ik^{2}x}f_{2}\\ r_{1}(k)e^{2ik^{2}x}f_{1}+f_{2}\end{bmatrix}.\] It is clear from the calculation that when \(r_{1}(k),r_{2}(k)\) satisfy (3.28), there exists \(\eta_{+}\) such that \[\|(I+S(x,k))f\|^{2}\leq\eta_{+}^{2}(|f_{1}|^{2}+|f_{2}|^{2})=\eta_{+}^{2}\|f\|^{2}.\] For a given function \(f\in L^{p}(\mathbb{R})\) with \(1\leq p<\infty\), the Cauchy operator \(\mathcal{C}\) is defined by \[\mathcal{C}(f)(z):=\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{f(s)}{s-z}ds,\ \ z\in\mathbb{C}\setminus\mathbb{R}. \tag{4.3}\] As \(z\pm\varepsilon i\) (\(z\in\mathbb{R},\varepsilon>0\)) approaches a point \(z\in\mathbb{R}\) on the real axis transversely from the upper and the lower half planes respectively, the Cauchy operator \(\mathcal{C}\) reduces to the Cauchy projection operators \(\mathcal{C}^{\pm}\) defined respectively by \[\mathcal{C}^{\pm}(f)(z):=\lim_{\varepsilon\to 0}\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{f(s)}{s-(z\pm\varepsilon i)}ds,\ \ z\in\mathbb{R}. \tag{4.4}\] For a function \(f\in L^{p}(\mathbb{R}),\ 1\leq p<\infty\), the Hilbert transform \(\mathcal{H}\) is defined by \[\mathcal{H}(f)(z):=\frac{1}{\pi}\lim_{\varepsilon\to 0}(\int_{-\infty}^{z-\varepsilon}+\int_{z+\varepsilon}^{\infty})\frac{f(s)}{s-z}ds,\ \ z\in\mathbb{R}. \tag{4.5}\] For every \(1<p<\infty\), \(\mathcal{H}\) is a bounded operator from \(L^{p}(\mathbb{R})\) to \(L^{p}(\mathbb{R})\). Some important properties of the Cauchy operator \(\mathcal{C}\) and the Cauchy projection operators \(\mathcal{C}^{\pm}\) are collected from references [18, 19] in the following proposition. **Proposition 5**.: _For every \(f\in L^{p}(\mathbb{R}),\ 1\leq p<\infty\), the Cauchy operator \(\mathcal{C}\) admits the following properties:_ 1. \(\mathcal{C}(f)(z)\) _is analytic in_ \(z\in\mathbb{C}\setminus\mathbb{R}\)_._ 2. \(\mathcal{C}(f)(z)\to 0,\ \ |z|\to\infty\)_._ 3. _In particular, if_ \(f\in L^{1}(\mathbb{R})\)_, then_ \[\lim_{|z|\to\infty}z\mathcal{C}(f)(z)=-\frac{1}{2\pi i}\int_{\mathbb{R}}f(s)ds,\ \ z\in\mathbb{C}\setminus\mathbb{R}.\] (4.6) _For every \(f\in L^{p}(\mathbb{R}),\ 1<p<\infty\), the Cauchy projection operators \(\mathcal{C}^{\pm}\) admit the following properties:_ 1. _There exists a positive constant_ \(C_{p}\) _(with_ \(C_{2}=1\)_) such that_ \[\|\mathcal{C}^{\pm}(f)\|_{L^{p}}\leq C_{p}\|f\|_{L^{p}}.\] (4.7) 2. \(\mathcal{C}^{\pm}(f)(z)=\pm\frac{1}{2}f(z)-\frac{i}{2}\mathcal{H}(f)(z),\ \ z\in\mathbb{R}.\)__ 3.
\(\mathcal{C}^{+}-\mathcal{C}^{-}=I,\ \ \mathcal{C}^{+}+\mathcal{C}^{-}=-i\mathcal{H}\)_._ In the following we consider the solvability of the RH Problem 4 in the space \(L^{2}_{z}(\mathbb{R})\). It follows from Proposition 5 that if \(N_{j,-}(x,k)\in L^{2}_{z}(\mathbb{R})\) is a solution of the Fredholm integral equation \[N_{j,-}(x,k)=\mathcal{C}^{-}(N_{j,-}(x,k)S(x,k)+F_{j}(x,k))(z),\ \ j=1,2,\ z\in\mathbb{R}, \tag{4.8}\] then \(N_{j,+}(x,k)\in L^{2}_{z}(\mathbb{R})\) can be obtained from the projection equation \[N_{j,+}(x,k)=\mathcal{C}^{+}(N_{j,-}(x,k)S(x,k)+F_{j}(x,k))(z),\ \ j=1,2,\ z\in\mathbb{R}. \tag{4.9}\] A further application of Proposition 5 yields that for any \(x\in\mathbb{R}\), the RH Problem 4 has a solution given by the Cauchy operator \[N_{j}(x,k)=\mathcal{C}(N_{j,-}(x,k)S(x,k)+F_{j}(x,k))(z),\ \ j=1,2,\ z\in\mathbb{C}\setminus\mathbb{R}. \tag{4.10}\] The integral equation (4.8) can be equivalently rewritten as \[(I-\mathcal{C}^{-}_{S})N_{j,-}(x,k)=\mathcal{C}^{-}(F_{j})(x,k),\ \ j=1,2,\ k\in\mathbb{R}\cup i\mathbb{R}, \tag{4.11}\] where \(\mathcal{C}^{-}_{S}N_{j,-}:=\mathcal{C}^{-}(N_{j,-}S)\). We illustrate the solvability of the integral equation (4.8) in Proposition 6 below by considering its equivalent form (4.11). It should be noted that the subscript \(j\) takes either \(1\) or \(2\), and we will not emphasize this point in the following analysis. **Proposition 6**.: _If the reflection coefficients \(r_{1}(k),r_{2}(k)\in L^{2}_{z}(\mathbb{R})\) satisfy the condition (3.28) and \(r_{\pm}(z)\in L^{2}_{z}(\mathbb{R})\), then \(\forall x\in\mathbb{R}\), there exists a unique solution \(N_{j,-}(x,k)\in L^{2}_{z}(\mathbb{R})\) of the linear inhomogeneous equation (4.11). Moreover, the inverse operator \((I-\mathcal{C}^{-}_{S})^{-1}\) exists and is a bounded operator from \(L^{2}_{z}(\mathbb{R})\) to \(L^{2}_{z}(\mathbb{R})\)._ Proof.: By the definition of \(S(x,k)\) (3.4) and the definition of \(F_{j}(x,k)\) (3.18), it is immediate that \(\forall x\in\mathbb{R}\), \(S(x,k)\in L^{2}_{z}(\mathbb{R})\cap L^{\infty}_{z}(\mathbb{R})\) and \(F_{j}(x,k)\in L^{2}_{z}(\mathbb{R})\). According to references [21, 22, 23], it is known that the operator \(I-\mathcal{C}^{-}_{S}\) is a Fredholm operator of index zero. So we only need to prove that the equation \[(I-\mathcal{C}^{-}_{S})f=0 \tag{4.12}\] admits only the zero solution in \(L^{2}_{z}(\mathbb{R})\). Suppose that there exists a nonzero row vector solution \(f\in L^{2}_{z}(\mathbb{R})\) of (4.12); then we define \[f_{1}(z):=\mathcal{C}(fS)(z),\ \ f_{2}(z):=[\mathcal{C}(fS)(\overline{z})]^{H}.\] By property (i) of the Cauchy operator in Proposition 5, \(f_{1}(z),f_{2}(z)\) are analytic functions in \(\mathbb{C}\setminus\mathbb{R}\). Making a semicircle \(C_{R}\) in the upper half plane \(\mathbb{C}^{+}\) with the origin as centre and \(R\) as radius, we have \[\int_{-R}^{R}f_{1}(s)f_{2}(s)ds=\int_{C_{R}}f_{1}(s)f_{2}(s)ds\] by the Cauchy-Goursat theorem. Letting \(R\to\infty\) and using the limit (4.6) and property (iii) of the Cauchy projection operators in Proposition 5, we get \[0= \int_{\mathbb{R}}f_{1}(s)f_{2}(s)ds\] \[= \int_{\mathbb{R}}\mathcal{C}^{+}(fS)(s)[\mathcal{C}^{-}(fS)(\overline{s})]^{H}ds\] \[= \int_{\mathbb{R}}[\mathcal{C}^{-}(fS)(s)+fS(s)][\mathcal{C}^{-}(fS)(s)]^{H}ds\] \[= \int_{\mathbb{R}}f(s)(I+S)f(s)^{H}ds.\] The above equality contradicts the result (4.1), so the linear equation (4.11) has a unique solution \(N_{j,-}(k)\) in \(L^{2}_{z}(\mathbb{R})\).
In the following we prove that the operator \(I-\mathcal{C}^{-}_{S}\) is invertible in the space \(L^{2}_{z}(\mathbb{R})\). For every row vector \(g\in L^{2}_{z}(\mathbb{R})\), we write \[(I-\mathcal{C}^{-}_{S})g=G. \tag{4.13}\] From (4.7) it is easy to see that \(G\in L^{2}_{z}(\mathbb{R})\). Suppose \(g\) can be decomposed as \(g=g_{+}-g_{-}\); from property (iii) of the Cauchy projection operators in Proposition 5 we know that \(g_{+},g_{-}\) satisfy the following two equations respectively \[g_{-}-\mathcal{C}^{-}(g_{-}S)=\mathcal{C}^{-}(G),\ \ g_{+}-\mathcal{C}^{-}(g_{+}S)=\mathcal{C}^{+}(G). \tag{4.14}\] From the above proof and the fact that \(\mathcal{C}^{-}(G),\mathcal{C}^{+}(G)\in L^{2}_{z}(\mathbb{R})\), it follows that the decomposition \(g=g_{+}-g_{-}\) exists and is unique. We first consider \(g_{-}\). Two analytic functions in \(\mathbb{C}\setminus\mathbb{R}\) are defined by \[h_{1}(z):=\mathcal{C}(g_{-}S)(z),\ h_{2}(z):=[\mathcal{C}(g_{-}S+G)(\overline{z})]^{H}.\] Similarly, we make a semicircle in \(\mathbb{C}^{+}\) with the origin as its centre and \(R\) as its radius; then letting \(R\to\infty\) gives \[0= \int_{\mathbb{R}}h_{1}(z)h_{2}(z)dz\] \[= \int_{\mathbb{R}}\mathcal{C}^{+}(g_{-}S)[\mathcal{C}^{-}(g_{-}S+G)]^{H}dz\] \[= \int_{\mathbb{R}}[\mathcal{C}^{-}(g_{-}S)+g_{-}S][\mathcal{C}^{-}(g_{-}S+G)]^{H}dz\] \[= \int_{\mathbb{R}}[g_{-}-\mathcal{C}^{-}(G)+g_{-}S]g_{-}^{H}dz,\] which implies that \[\int_{\mathbb{R}}g_{-}(I+S)g_{-}^{H}dz=\int_{\mathbb{R}}\mathcal{C}^{-}(G)g_{-}^{H}dz.\] According to the estimates (4.1) and (4.7), there is a positive constant \(\eta_{-}\) such that \[\eta_{-}\|g_{-}\|_{L^{2}_{z}}^{2}\leq\mathrm{Re}\int_{\mathbb{R}}g_{-}(I+S)g_{-}^{H}dz=\mathrm{Re}\int_{\mathbb{R}}\mathcal{C}^{-}(G)g_{-}^{H}dz\leq\|G\|_{L^{2}_{z}}\|g_{-}\|_{L^{2}_{z}}.\] Therefore we have \[\|(I-\mathcal{C}^{-}_{S})^{-1}\mathcal{C}^{-}(G)\|_{L^{2}_{z}}=\|g_{-}\|_{L^{2}_{z}}\leq\eta_{-}^{-1}\|G\|_{L^{2}_{z}}. \tag{4.15}\] Next we consider \(g_{+}\) and use the property \(\mathcal{C}^{+}-\mathcal{C}^{-}=I\) of the Cauchy projection operators to rewrite the second equation of (4.14) in the following form \[g_{+}(I+S)-\mathcal{C}^{+}(g_{+}S)=\mathcal{C}^{+}(G).\] This time, we make a semicircle in the lower half plane \(\mathbb{C}^{-}\), and using similar means as above we can obtain \[\int_{\mathbb{R}}g_{+}(I+S)^{H}g_{+}^{H}dz=\int_{\mathbb{R}}\mathcal{C}^{+}(G)(I+S)^{H}g_{+}^{H}dz.\] According to the estimates (4.1) and (4.2), there are positive constants \(\eta_{-},\eta_{+}\) such that \[\eta_{-}\|g_{+}\|_{L^{2}_{z}}^{2}\leq \mathrm{Re}\int_{\mathbb{R}}g_{+}(I+S)^{H}g_{+}^{H}dz=\mathrm{Re}\int_{\mathbb{R}}\mathcal{C}^{+}(G)(I+S)^{H}g_{+}^{H}dz\] \[\leq \eta_{+}\|G\|_{L^{2}_{z}}\|g_{+}\|_{L^{2}_{z}}.\] So we have \[\|(I-\mathcal{C}^{-}_{S})^{-1}\mathcal{C}^{+}(G)\|_{L^{2}_{z}}=\|g_{+}\|_{L^{2}_{z}}\leq\eta_{-}^{-1}\eta_{+}\|G\|_{L^{2}_{z}}. \tag{4.16}\] Combining (4.15) with (4.16) it follows that there exists a positive constant \(\eta\) such that \[\|(I-\mathcal{C}^{-}_{S})^{-1}G\|_{L^{2}_{z}}\leq\eta\|G\|_{L^{2}_{z}}. \tag{4.17}\] From Proposition 6 above it can be seen that (4.10) is a solution to the RH problem 4. Therefore RH problem 3 is also solvable according to the relationship (3.17) between RH problems 3 and 4. By the Beals-Coifman theorem in [21] it follows that the solution to the RH problem 3 is unique, and hence the solution to the RH problem 4 is also unique. We consider mainly the RH problem 3 in the \(z\)-plane.
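The projections \(\mathcal{C}^{\pm}\) used throughout this section are also easy to realize numerically: with the Fourier convention \(\hat{f}(\xi)=\int_{\mathbb{R}}f(z)e^{-i\xi z}dz\), \(\mathcal{C}^{+}\) restricts to positive frequencies and \(\mathcal{C}^{-}\) is minus the restriction to negative frequencies, consistent with properties (ii)-(iii) of Proposition 5. A minimal FFT-based sketch (an illustration added here, not from the paper; the test function and grid are our sample choices):

```python
import numpy as np

# Realize the Cauchy projections C^{+/-} of (4.4) as Fourier multipliers and test
# them on f(z) = 1/(z + i), which is analytic and decaying in the upper half plane,
# so that C^+ f = f and C^- f = 0.
L, n = 200.0, 2**14
z = np.linspace(-L, L, n, endpoint=False)
xi = np.fft.fftfreq(n, d=2 * L / n)          # frequency grid (e^{-i xi z} convention)

def cauchy_pm(f, sign):
    proj = (xi > 0) + 0.5 * (xi == 0) if sign > 0 else -((xi < 0) + 0.5 * (xi == 0))
    return np.fft.ifft(np.fft.fft(f) * proj)

f = 1.0 / (z + 1j)
print(np.max(np.abs(cauchy_pm(f, +1) - f)))      # small: C^+ f = f, up to truncation
print(np.max(np.abs(cauchy_pm(f, -1))))          # small: C^- f = 0, up to truncation
print(np.max(np.abs(cauchy_pm(f, +1) - cauchy_pm(f, -1) - f)))  # C^+ - C^- = I exactly
```

The residuals in the first two lines are limited only by the truncation of \(\mathbb{R}\) to \([-L,L)\); the last identity holds to machine precision because the discrete multipliers sum to one at every frequency.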
We note that \[M_{\pm}(x,z):=[m_{\pm}(x,z),n_{\pm}(x,z)], \tag{4.18}\] and the superscripts (1), (2) used later represent the first and second columns of the square matrix, respectively. However, for column vectors the superscripts (1) and (2) denote the first and second rows of the vector, respectively. Recalling the relation (3.17) as well as (3.16), then we have \[N_{1,\pm}(x,k)= M_{\pm}(x,z)\rho_{1}(k)-\rho_{1}(k)\] \[= [m_{\pm}(x,z)-e_{1},2ik(n_{\pm}(x,z)-e_{2})].\] Moreover, equations (4.8), (3.15) and \(F_{1}(x,k)=\rho_{1}(k)S(x,k)\) imply that \[N_{1,\pm}(x,k)= \mathcal{C}^{\pm}(N_{1,-}(x,k)S(x,k)+F_{1}(x,k))(z)\] \[= \mathcal{C}^{\pm}((M_{-}(x,z)-I)\rho_{1}(k)S(x,k)+\rho_{1}(k)S(x, k))(z)\] \[= \mathcal{C}^{\pm}(M_{-}(x,z)R(x,z)\rho_{1}(k))(z)\] \[= \mathcal{C}^{\pm}[(M_{-}(x,z)R(x,z))^{(1)},2ik(M_{-}(x,z)R(x,z))^ {(2)}](z).\] Combining the above two expressions for \(N_{1,\pm}(x,k)\), we can obtain \[m_{\pm}(x,z)-e_{1} =\mathcal{C}^{\pm}((M_{-}(x,z)R(x,z))^{(1)})(z), \tag{4.19}\] \[2ik(n_{\pm}(x,z)-e_{2}) =\mathcal{C}^{\pm}(2ik(M_{-}(x,z)R(x,z))^{(2)})(z). \tag{4.20}\] Using the same means for \(N_{2,\pm}(x,k)\), we have \[N_{2,\pm}(x,k)= M_{\pm}(x,z)\rho_{2}(k)-\rho_{2}(k)\] \[= [\frac{1}{2ik}(m_{\pm}(x,z)-e_{1}),n_{\pm}(x,z)-e_{2}],\] and \[N_{2,\pm}(x,k)= \mathcal{C}^{\pm}(N_{2,-}(x,k)S(x,k)+F_{2}(x,k))(z)\] \[= \mathcal{C}^{\pm}(M_{-}(x,z)R(x,z)\rho_{2}(k))(z)\] \[= \mathcal{C}^{\pm}[\frac{1}{2ik}(M_{-}(x,z)R(x,z))^{(1)},(M_{-}(x, z)R(x,z))^{(2)}](z).\] These results indicate that \[\frac{1}{2ik}(m_{\pm}(x,z)-e_{1}) =\mathcal{C}^{\pm}(\frac{1}{2ik}(M_{-}(x,z)R(x,z))^{(1)})(z), \tag{4.21}\] \[n_{\pm}(x,z)-e_{2} =\mathcal{C}^{\pm}((M_{-}(x,z)R(x,z))^{(2)})(z). \tag{4.22}\] Notice that we can combine equations (4.19), (4.22) and write them together in the form \[M_{\pm}(x,z)=I+\mathcal{C}^{\pm}(M_{-}(x,z)R(x,z))(z),\ \ z\in\mathbb{R}. \tag{4.23}\] Thus the unique solution to RH problem 3 can be expressed as \[M(x,z)=I+\mathcal{C}(M_{-}(x,z)R(x,z))(z),\ \ z\in\mathbb{C}\setminus\mathbb{R}, \tag{4.24}\] where \(M_{-}(x,z)R(x,z)\) can be rewritten as \[M_{-}(x,z)R(x,z)=[n_{+}(x,z)r_{-}(z)e^{2izx},m_{-}(x,z)r_{+}(z)e^{-2izx}] \tag{4.25}\] by utilizing the equations (3.10), (3.13) and (4.18). Further, (4.20) and (4.21) can also be rewritten as \[2ik\mathcal{C}^{\pm}(m_{-}(x,z)r_{+}(z)e^{-2izx})(z) =\mathcal{C}^{\pm}(2ikm_{-}(x,z)r_{+}(z)e^{-2izx})(z), \tag{4.26}\] \[\frac{1}{2ik}\mathcal{C}^{\pm}(n_{+}(x,z)r_{-}(z)e^{2izx})(z) =\mathcal{C}^{\pm}(\frac{1}{2ik}n_{+}(x,z)r_{-}(z)e^{2izx})(z). \tag{4.27}\] By the relation (3.17) and Proposition 6, the following corollary can easily be obtained. 
**Corollary 4**.: _If the reflection coefficients \(r_{1}(k),r_{2}(k)\in L^{2}_{z}(\mathbb{R})\) satisfy the condition (3.28) and \(r_{\pm}(z)\in L^{2}_{z}(\mathbb{R})\), then the unique solution \(M(x,z)\) to RH problem 3 satisfies \(M_{\pm}(x,z)-I\in L^{2}_{z}(\mathbb{R})\)._ ### Reconstruction formulas for the potential From the solution (3.7) of the RH problem 2 and the relation (3.12), it follows that \[\Gamma_{+}(x,z) =[\frac{\mu_{-}(x,z)}{a(z)},\gamma_{+}(x,z)]=\Psi_{\infty}(x)M_{+}(x,z),\] \[\Gamma_{-}(x,z) =[\mu_{+}(x,z),\frac{\gamma_{-}(x,z)}{d(z)}]=\Psi_{\infty}(x)M_{-}(x,z).\] By using the limits \[\lim_{|z|\to\infty}4z\gamma_{+}^{(1)}(x,z)=-u(x)\nu_{+}^{\infty}(x),\ \ \text{and}\ \ \lim_{|z|\to\infty}z\mu_{+}^{(2)}(x,z)=\alpha_{+}^{(2)}(x)\] in Lemma 2, we have \[\lim_{|z|\to\infty}zn_{+}^{(1)}(x,z)= -\frac{1}{4}u(x)e^{i\int_{x}^{+\infty}u(y)v(y)dy}, \tag{4.28}\] \[\lim_{|z|\to\infty}zm_{-}^{(2)}(x,z)= -\frac{1}{2i}e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}\partial_{x}[v(x)e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}]. \tag{4.29}\] Because \(r_{\pm}(z)\in H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\), it is easy to see that \(\forall x\in\mathbb{R},R(x,z)\in L^{1}_{z}(\mathbb{R})\cap L^{2}_{z}(\mathbb{R})\), and furthermore from \(M_{\pm}(x,z)-I\in L^{2}_{z}(\mathbb{R})\) we know that \(M_{\pm}(x,z)R(x,z)\in L^{1}_{z}(\mathbb{R})\). Thus for any \(x\in\mathbb{R}\), it follows that \[\lim_{|z|\to\infty}z\mathcal{C}(M_{-}R)(z)=-\frac{1}{2\pi i}\int_{\mathbb{R}}M_{-}(x,z)R(x,z)dz\] by (4.6). With the help of (4.25), we can rewrite (4.24) as \[\begin{bmatrix}m_{\pm}^{(1)}-1&n_{\pm}^{(1)}\\ m_{\pm}^{(2)}&n_{\pm}^{(2)}-1\end{bmatrix}=\mathcal{C}\begin{bmatrix}n_{+}^{(1)}r_{-}e^{2izx}&m_{-}^{(1)}r_{+}e^{-2izx}\\ n_{+}^{(2)}r_{-}e^{2izx}&m_{-}^{(2)}r_{+}e^{-2izx}\end{bmatrix},\ \ z\in\mathbb{C}\setminus\mathbb{R}, \tag{4.30}\] so that the reconstruction formulas (4.28) and (4.29) can be rewritten as \[u(x)e^{i\int_{x}^{+\infty}u(y)v(y)dy}=\frac{2}{\pi i}\int_{\mathbb{R}}m_{-}^{(1)}(x,z)r_{+}(z)e^{-2izx}dz, \tag{4.31}\] \[e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}\partial_{x}[v(x)e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}]=\frac{1}{\pi}\int_{\mathbb{R}}n_{+}^{(2)}(x,z)r_{-}(z)e^{2izx}dz. \tag{4.32}\] In fact, we cannot get any more information from the RH problems 2 and 3 in the \(z\)-plane, and the reconstruction formulas (4.31) and (4.32) are not sufficient to recover the potential \(u(x)\) on the whole line \(\mathbb{R}\). We therefore introduce next a scalar RH problem for \(\delta(z)\), which will be used to transform the jump matrix \(R(x,z)\) of the RH problem in the \(z\)-plane and thereby obtain more information. **RH Problem 5**.: _Find a scalar function \(\delta(z)\) that satisfies the following conditions_ * _Analyticity:_ \(\delta(z)\) _is analytic in_ \(\mathbb{C}\setminus\mathbb{R}\)_._ * _Jump condition:_ \(\delta(z)\) _has continuous boundary values_ \(\delta_{\pm}(z)\) _on_ \(\mathbb{R}\) _and_ \[\delta_{+}(z)-\delta_{-}(z)=\delta_{-}(z)r_{+}(z)r_{-}(z),\ \ z\in\mathbb{R}.\] (4.33) * _Asymptotic condition:_ \[\delta(z)\to 1,\ \ |z|\to\infty.\] (4.34) It follows from the properties of the Cauchy projection operator that \[\delta_{\pm}(z)=e^{\mathcal{C}^{\pm}(\log(1+r_{+}r_{-}))(z)},\ \ z\in\mathbb{R}, \tag{4.35}\] satisfy the jump condition (4.33), where \(\log(1+r_{+}(z)r_{-}(z))\in L^{2}_{z}(\mathbb{R})\), which is proved in Proposition 7. 
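The explicit formula (4.35) can be tested in the same discrete setting as above: since \(\mathcal{C}^{+}-\mathcal{C}^{-}=I\), the boundary values satisfy \(\delta_{+}/\delta_{-}=e^{\log(1+r_{+}r_{-})}=1+r_{+}r_{-}\), which is exactly the jump condition (4.33). In the sketch below, the function \(w(z)\) is a hypothetical stand-in for the product \(r_{+}(z)r_{-}(z)\), chosen smooth, decaying, and with \(|w|\leq 1/2\) (in the spirit of the smallness condition (3.28)) so that the principal branch of \(\log(1+w)\) is well defined.

```python
import numpy as np

# Numerical check of the jump relation (4.33) for delta_± given by (4.35).
# w(z) is a hypothetical sample for r_+(z) r_-(z), with |w| <= 1/2.
n, L = 2**12, 100.0
z = (np.arange(n) - n // 2) * (L / n)
w = 0.5 * np.exp(-z**2) * (1 + 0.2j * z)

lam = 2 * np.pi * np.fft.fftfreq(n, d=z[1] - z[0])

def proj(g, s):  # discrete Cauchy projections C^± (Hardy multipliers)
    mult = (lam >= 0).astype(float) if s > 0 else -(lam < 0).astype(float)
    return np.fft.ifft(np.fft.fft(g) * mult)

g = np.log(1 + w)                        # principal branch, Re(1 + w) >= 1/2
dp, dm = np.exp(proj(g, +1)), np.exp(proj(g, -1))
print(np.max(np.abs(dp - dm - dm * w)))  # delta_+ - delta_- = delta_- r_+ r_-,  ~ 0
```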
The analytic continuation of the functions \(\delta_{\pm}(z)\) in \(\mathbb{C}\setminus\mathbb{R}\) can be expressed by the Cauchy operator \[\delta(z)=e^{\mathcal{C}(\log(1+r_{+}r_{-}))(z)},\ \ z\in\mathbb{C}\setminus\mathbb{R}, \tag{4.36}\] which is the unique solution to RH problem 5. We set \[M_{\delta}(x,z):=M(x,z)\delta^{-\sigma_{3}}(z)=M(x,z)\begin{bmatrix}\delta^{-1}(z)&0\\ 0&\delta(z)\end{bmatrix}, \tag{4.37}\] and \[r_{\delta,+}(z):=\delta_{+}(z)\delta_{-}(z)r_{+}(z),\ \ r_{\delta,-}(z):=\delta_{+}^{-1}(z)\delta_{-}^{-1}(z)r_{-}(z),\ \ z\in\mathbb{R}. \tag{4.38}\] Then, \(M_{\delta}(x,z)\) satisfies the new RH problem: **RH Problem 6**.: _Find a matrix-valued function \(M_{\delta}(x,z)\) that satisfies the following conditions_ * _Analyticity:_ \(M_{\delta}(x,z)\) _is analytic in_ \(\mathbb{C}\setminus\mathbb{R}\)_._ * _Jump condition:_ \(M_{\delta}(x,z)\) _has continuous boundary values_ \(M_{\delta,\pm}(x,z)\) _on_ \(\mathbb{R}\) _and_ \[M_{\delta,+}(x,z)-M_{\delta,-}(x,z)=M_{\delta,-}(x,z)R_{\delta}(x,z),\ \ z\in\mathbb{R},\] (4.39) _where_ \[R_{\delta}(x,z)=\begin{bmatrix}0&r_{\delta,+}(z)e^{-2izx}\\ r_{\delta,-}(z)e^{2izx}&r_{\delta,+}(z)r_{\delta,-}(z)\end{bmatrix}.\] (4.40) * _Asymptotic condition:_ \[M_{\delta}(x,z)\to I,\ \ |z|\to\infty.\] (4.41) In the following analysis, we define \[\widehat{f}(\lambda)=\frac{1}{2\pi}\int_{\mathbb{R}}f(\xi)e^{-i\lambda\xi}d\xi\] as the Fourier transform of the function \(f\); then \[\|f\|_{L^{2}}^{2}=2\pi\|\widehat{f}\|_{L^{2}}^{2},\ \ \|f\|_{H^{1}}=\sqrt{2\pi}\|\widehat{f}\|_{L^{2,1}}.\] **Proposition 7**.: _If \(r_{\pm}(z)\in H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\) and \(r_{j}(k),\ j=1,2\) satisfy the condition (3.28), then \(r_{\delta,\pm}(z)\in H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\)._ Proof.: We first prove that \(\log(1+r_{+}(z)r_{-}(z))\in H^{1}_{z}(\mathbb{R})\). Let \(\omega=r_{+}(z)r_{-}(z)\); then \(\omega\in\mathbb{C}\) and \(|\omega|=|r_{1}r_{2}|\leq c_{0}^{2}<1\). Since \(\mathrm{Log}\,\omega\) admits a single-valued branch satisfying \(\log 1=0\) in \(\omega\in\mathbb{C}\setminus(-\infty,0]\), the function \(f(\omega)=\log(1+\omega)\) is analytic in \(\omega\in\mathbb{C}\setminus(-\infty,-1]\) and \(f^{\prime}(\omega)\) is bounded on the disc \(B(0,c_{0}^{2})\). It follows from the complex mean value theorem that \(\forall z_{1},z_{2}\in\mathbb{R}\), \(\omega_{1}=r_{+}(z_{1})r_{-}(z_{1})\), \(\omega_{2}=r_{+}(z_{2})r_{-}(z_{2})\), there exist two points \(\xi_{1},\xi_{2}\) on the line segment joining \(\omega_{1}\) and \(\omega_{2}\) such that \[|f(\omega_{1})-f(\omega_{2})|\leq 2\sup_{\omega\in B(0,c_{0}^{2})}|f^{\prime}(\omega)||\omega_{1}-\omega_{2}|.\] Because \(r_{+}(0)r_{-}(0)=-r_{1}(0)r_{2}(0)=0\), it follows that \[|\log(1+r_{+}(z)r_{-}(z))|\leq C|r_{+}(z)r_{-}(z)|,\] where \(C\) is a constant depending on \(c_{0}^{2}\). Therefore, we have \(\log(1+r_{+}r_{-})\in L^{2}_{z}(\mathbb{R})\). It is easy to prove that \(\partial_{z}\log(1+r_{+}r_{-})\in L^{2}_{z}(\mathbb{R})\), so \(\log(1+r_{+}r_{-})\in H^{1}_{z}(\mathbb{R})\). Next we prove that \(\delta_{\pm}(z)\in L^{\infty}_{z}(\mathbb{R})\). 
Based on Fourier theory, we rewrite the following integral: \[\mathcal{C}^{+}(\log(1+r_{+}r_{-}))(z)= \lim_{\varepsilon\to 0}\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{\log(1+r_{+}(s)r_{-}(s))}{s-(z+\varepsilon i)}ds\] \[= \lim_{\varepsilon\to 0}\frac{1}{2\pi i}\int_{\mathbb{R}}\int_{\mathbb{R}}\widehat{\log(1+r_{+}r_{-})}(\lambda)e^{i\lambda s}d\lambda\frac{1}{s-(z+\varepsilon i)}ds\] \[= \lim_{\varepsilon\to 0}\int_{\mathbb{R}}\widehat{\log(1+r_{+}r_{-})}(\lambda)\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{e^{i\lambda s}}{s-(z+\varepsilon i)}dsd\lambda.\] Notice that the integrand of the integral \[\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{e^{i\lambda s}}{s-(z+\varepsilon i)}ds\] has a first-order pole at \(z+i\varepsilon\in\mathbb{C}^{+}\). When \(\lambda>0\), we take a sufficiently large semicircle \(C_{R}\) of radius \(R\) centered at zero in \(\mathbb{C}^{+}\) such that \(z+i\varepsilon\) is included in the semicircular disk \(D_{R}\). By the residue theorem, we have \[\frac{1}{2\pi i}\int_{-R}^{R}\frac{e^{i\lambda s}}{s-(z+\varepsilon i)}ds+\frac{1}{2\pi i}\int_{C_{R}}\frac{e^{i\lambda s}}{s-(z+\varepsilon i)}ds=e^{i\lambda(z+i\varepsilon)},\ \ \lambda>0.\] Letting \(R\to\infty\), we obtain \[\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{e^{i\lambda s}}{s-(z+\varepsilon i)}ds=e^{i\lambda(z+i\varepsilon)},\ \ \lambda>0\] by Jordan's lemma. Similarly, when \(\lambda<0\), we have \[\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{e^{i\lambda s}}{s-(z+\varepsilon i)}ds=0,\ \ \lambda<0.\] Therefore, we have \[\|\mathcal{C}^{+}(\log(1+r_{+}r_{-}))(z)\|_{L^{\infty}_{z}}=\|\int_{0}^{+\infty}\widehat{\log(1+r_{+}r_{-})}(\lambda)e^{i\lambda z}d\lambda\|_{L^{\infty}_{z}}\] \[\leq \|\widehat{\log(1+r_{+}r_{-})}\|_{L^{1}}\leq\sqrt{\pi}\|\widehat{\log(1+r_{+}r_{-})}\|_{L^{2,1}}=\frac{1}{\sqrt{2}}\|\log(1+r_{+}r_{-})\|_{H^{1}_{z}},\] which means that \[\delta_{+}(z)=e^{\mathcal{C}^{+}(\log(1+r_{+}r_{-}))(z)}\in L^{\infty}_{z}(\mathbb{R}).\] A similar proof can be given for \(\delta_{-}(z)\in L^{\infty}_{z}(\mathbb{R})\). Finally, we prove that \(r_{\delta,\pm}(z)\in H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\). We prove the statement for \(r_{\delta,+}(z)\) only; the proof for \(r_{\delta,-}(z)\) is analogous. From the definition (4.38) of \(r_{\delta,+}(z)\) and the conclusions \(\delta_{\pm}(z)\in L^{\infty}_{z}(\mathbb{R})\) and \(r_{+}(z)\in L^{2,1}_{z}(\mathbb{R})\), it immediately follows that \(r_{\delta,+}(z)\in L^{2,1}_{z}(\mathbb{R})\). By property (iii) of the Cauchy projection operators in Proposition 5 we get \[\delta_{+}(z)\delta_{-}(z)=e^{-i\mathcal{H}(\log(1+r_{+}r_{-}))(z)}.\] By the properties of the Hilbert transform \(\mathcal{H}\), we have \[\|\partial_{z}\mathcal{H}(\log(1+r_{+}r_{-}))\|_{L^{2}_{z}}=\|\mathcal{H}(\partial_{z}\log(1+r_{+}r_{-}))\|_{L^{2}_{z}}\leq\|\partial_{z}\log(1+r_{+}r_{-})\|_{L^{2}_{z}},\] and therefore \(r_{\delta,+}(z)\in H^{1}_{z}(\mathbb{R})\). From Proposition 7 it follows that the unique solution to RH problem 6 can be represented by the Cauchy operator \[M_{\delta}(x,z)=I+\mathcal{C}(M_{\delta,-}(x,\cdot)R_{\delta}(x,\cdot))(z),\ \ z\in\mathbb{C}\setminus\mathbb{R}, \tag{4.42}\] whose projections on \(\mathbb{R}\) are \[M_{\delta,\pm}(x,z)=I+\mathcal{C}^{\pm}(M_{\delta,-}(x,\cdot)R_{\delta}(x,\cdot))(z),\ \ z\in\mathbb{R}. 
\tag{4.43}\] Similarly, we denote \[M_{\delta,\pm}(x,z):=[m_{\delta,\pm}(x,z),n_{\delta,\pm}(x,z)],\] so that \(M_{\delta,-}(x,z)R_{\delta}(x,z)\) can be rewritten as \[M_{\delta,-}(x,z)R_{\delta}(x,z)=[n_{\delta,-}(x,z)r_{\delta,-}(z)e^{2izx},m_{\delta,+}(x,z)r_{\delta,+}(z)e^{-2izx}]. \tag{4.44}\] We have \[\lim_{|z|\to\infty}zn_{\delta,+}^{(1)}(x,z)= -\frac{1}{4}u(x)e^{i\int_{x}^{+\infty}u(y)v(y)dy}, \tag{4.45}\] \[\lim_{|z|\to\infty}zm_{\delta,-}^{(2)}(x,z)= -\frac{1}{2i}e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}\partial_{x}[v(x)e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}], \tag{4.46}\] from (4.28) and (4.29), where the asymptotic condition (4.34) is used. Since \(\delta_{\pm}(z)\in L_{z}^{\infty}(\mathbb{R})\) and \(r_{\delta,\pm}(z)\in H_{z}^{1}(\mathbb{R})\cap L_{z}^{2,1}(\mathbb{R})\), we know that \(\forall x\in\mathbb{R},R_{\delta}(x,z)\in L_{z}^{1}(\mathbb{R})\cap L_{z}^{2}(\mathbb{R})\) and \(M_{\delta,-}(x,z)R_{\delta}(x,z)\in L_{z}^{1}(\mathbb{R})\). Then the formulas (4.45) and (4.46) can be rewritten as \[u(x)e^{i\int_{x}^{+\infty}u(y)v(y)dy} =\frac{2}{\pi i}\int_{\mathbb{R}}m_{\delta,+}^{(1)}(x,z)r_{\delta,+}(z)e^{-2izx}dz, \tag{4.47}\] \[e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}\partial_{x}[v(x)e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}] =\frac{1}{\pi}\int_{\mathbb{R}}n_{\delta,-}^{(2)}(x,z)r_{\delta,-}(z)e^{2izx}dz, \tag{4.48}\] according to the property (4.6) of the Cauchy operator \(\mathcal{C}\) in Proposition 5. ### Estimates of the potential We shall estimate the potential \(u(x)\) with the help of the two sets of reconstruction formulas (4.31), (4.32) and (4.47), (4.48). This is preceded by some lemmas and corollaries. **Lemma 5**.: _If \(r_{\pm}(z)\in H^{1}_{z}(\mathbb{R})\), then_ \[\sup_{x\in\mathbb{R}}\|\mathcal{C}^{\pm}(r_{+}(z)e^{-2izx})\|_{L^{\infty}_{z}}\leq\frac{1}{\sqrt{2}}\|r_{+}\|_{H^{1}_{z}}, \tag{4.49}\] \[\sup_{x\in\mathbb{R}}\|\mathcal{C}^{\pm}(r_{-}(z)e^{2izx})\|_{L^{\infty}_{z}}\leq\frac{1}{\sqrt{2}}\|r_{-}\|_{H^{1}_{z}}, \tag{4.50}\] _and_ \[\sup_{x\in\mathbb{R}^{+}}\|\langle x\rangle\mathcal{C}^{+}(r_{+}(z)e^{-2izx})\|_{L^{2}_{z}} \leq\|r_{+}\|_{H^{1}_{z}}, \tag{4.51}\] \[\sup_{x\in\mathbb{R}^{+}}\|\langle x\rangle\mathcal{C}^{-}(r_{-}(z)e^{2izx})\|_{L^{2}_{z}} \leq\|r_{-}\|_{H^{1}_{z}}. \tag{4.52}\] Proof.: We only give a detailed proof for \(\mathcal{C}^{+}(r_{+}(z)e^{-2izx})\); the proofs for the remaining cases are similar. Using the same method as for Proposition 7, we can rewrite \(\mathcal{C}^{+}(r_{+}(z)e^{-2izx})\) as \[\mathcal{C}^{+}\left(r_{+}(z)e^{-2izx}\right)(z) =\lim_{\varepsilon\to 0}\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{r_{+}(s)e^{-2isx}}{s-(z+i\varepsilon)}ds\] \[=\lim_{\varepsilon\to 0}\frac{1}{2\pi i}\int_{\mathbb{R}}\widehat{r}_{+}(\lambda)\int_{\mathbb{R}}\frac{e^{i(\lambda-2x)s}}{s-(z+i\varepsilon)}dsd\lambda\] \[=\int_{2x}^{\infty}\widehat{r}_{+}(\lambda)e^{i(\lambda-2x)z}d\lambda\] according to the residue theorem and Jordan's lemma. 
Hence, we have \[\sup_{x\in\mathbb{R}}\left\|\mathcal{C}^{+}\left(r_{+}(z)\mathrm{e}^{-2izx} \right)\right\|_{L^{\infty}_{z}}\leq\|\widehat{r}_{+}\|_{L^{1}}\leq\sqrt{\pi} \,\|\widehat{r}_{+}\|_{L^{2,1}}=\frac{1}{\sqrt{2}}\,\|r_{+}\|_{H^{1}_{z}}\,.\] Since \(\widehat{r}_{+}\in L^{2,1}(\mathbb{R})\), we have \[\sup_{x\in\mathbb{R}^{+}}\|\langle x\rangle\mathcal{C}^{+}(r_{+}( z)e^{-2izx})\|_{L^{2}_{z}}= \sup_{x\in\mathbb{R}^{+}}\left\|\langle x\rangle\int_{2x}^{\infty} \widehat{r}_{+}(\lambda)\mathrm{e}^{i(\lambda-2x)z}d\lambda\right\|_{L^{2}_{z }}\] \[\leq \sqrt{2\pi}\,\|\widehat{r}_{+}\|_{L^{2,1}}=\|r_{+}\|_{H^{1}}\] by Proposition 1 in reference [10]. **Corollary 5**.: _If \(r_{\delta,\pm}(z)\in H^{1}_{z}(\mathbb{R})\), then_ \[\sup_{x\in\mathbb{R}}\|\mathcal{C}^{\pm}(r_{\delta,+}(z)e^{-2izx} )\|_{L^{\infty}_{z}}\leq\frac{1}{\sqrt{2}}\|r_{\delta,+}\|_{H^{1}_{z}}, \tag{4.53}\] \[\sup_{x\in\mathbb{R}}\|\mathcal{C}^{\pm}(r_{\delta,-}(z)e^{2izx} )\|_{L^{\infty}_{z}}\leq\frac{1}{\sqrt{2}}\|r_{\delta,-}\|_{H^{1}_{z}}, \tag{4.54}\] _and_ \[\sup_{x\in\mathbb{R}^{-}}\|\langle x\rangle\mathcal{C}^{-}(r_{\delta, +}(z)e^{-2izx})\|_{L^{2}_{z}}\leq\|r_{\delta,+}\|_{H^{1}_{z}}, \tag{4.55}\] \[\sup_{x\in\mathbb{R}^{-}}\|\langle x\rangle\mathcal{C}^{+}(r_{ \delta,-}(z)e^{2izx})\|_{L^{2}_{z}}\leq\|r_{\delta,-}\|_{H^{1}_{z}}. \tag{4.56}\] **Lemma 6**.: _If \(r_{\pm}(z)\in H^{1}_{z}(\mathbb{R})\) and \(r_{j}(k),j=1,2\) satisfy the condition (3.28), then there exists a positive constant \(C\) such that_ \[\sup_{x\in\mathbb{R}^{+}}\|\langle x\rangle m^{(2)}_{-}(x,z)\|_{L ^{2}_{z}}\leq C\|r_{-}\|_{H^{1}_{z}}, \tag{4.57}\] \[\sup_{x\in\mathbb{R}^{+}}\|\langle x\rangle n^{(1)}_{+}(x,z)\|_{ L^{2}_{z}}\leq C\|r_{+}\|_{H^{1}_{z}}. \tag{4.58}\] _If in addition, \(r_{\pm}(z)\in L^{2,1}_{z}(\mathbb{R})\), then there exists another positive constant \(C\) such that_ \[\sup_{x\in\mathbb{R}}\|\partial_{x}m^{(2)}_{-}(x,z)\|_{L^{2}_{z} }\leq C(\|r_{+}\|_{H^{1}_{z}\cap L^{2,1}_{z}}+\|r_{-}\|_{H^{1}_{z}\cap L^{2,1}_ {z}}), \tag{4.59}\] \[\sup_{x\in\mathbb{R}}\|\partial_{x}n^{(1)}_{+}(x,z)\|_{L^{2}_{z}} \leq C(\|r_{+}\|_{H^{1}_{z}\cap L^{2,1}_{z}}+\|r_{-}\|_{H^{1}_{z}\cap L^{2,1}_ {z}}), \tag{4.60}\] _where the constant \(C\) depends on \(\|r_{\pm}\|_{H^{1}_{z}\cap L^{2,1}_{z}}\)._ Proof.: We define matrix \(\tilde{M}(x,z):=[m_{-}(x,z)-e_{1},n_{+}(x,z)-e_{2}]\) according to the results (4.51) and (4.52), and we know that \[[m_{-}-e_{1},n_{+}-e_{2}]=[\mathcal{C}^{-}(n_{+}r_{-}e^{2izx}), \mathcal{C}^{+}(m_{-}r_{+}e^{-2izx})] \tag{4.61}\] by (4.23) and (4.25). Also, starting from the expression (3.10) for \(R(x,z)\), we define the matrices \[R_{-}(x,z):=\begin{bmatrix}0&0\\ r_{-}(z)e^{2izx}&0\end{bmatrix},\ \ R_{+}(x,z):=\begin{bmatrix}0&r_{+}(z)e^{-2 izx}\\ 0&0\end{bmatrix}, \tag{4.62}\] then \[R_{-}(x,z)+R_{+}(x,z)=(I-R_{+}(x,z))R(x,z). \tag{4.63}\] By calculation we can obtain \[\tilde{M}-\mathcal{C}^{+}(\tilde{M}R_{+})-\mathcal{C}^{-}(\tilde{M}R_{-})=F, \tag{4.64}\] where \[F(x,z)=\begin{bmatrix}0&\mathcal{C}^{+}(r_{+}(z)e^{-2izx})(z)\\ \mathcal{C}^{-}(r_{-}(z)e^{2izx})(z)&0\end{bmatrix}.\] The left-hand side of the equation (4.64) can be rewritten as \[\tilde{M}-\mathcal{C}^{+}(\tilde{M}R_{+})-\mathcal{C}^{-}(\tilde{M}R _{-})= \tilde{M}-\tilde{M}R_{+}-\mathcal{C}^{-}(\tilde{M}R_{-}+\tilde{M}R_ {+})\] \[= \tilde{M}(I-R_{+})-\mathcal{C}^{-}(\tilde{M}(I-R_{+})R)\] according to \(\mathcal{C}^{+}-\mathcal{C}^{-}=I\) and (4.63). 
We set \(J(x,z):=\tilde{M}(x,z)(I-R_{+}(x,z))\); then (4.64) can be rewritten as \[J-\mathcal{C}^{-}(JR)=F, \tag{4.65}\] where \[J(x,z)=\begin{bmatrix}m_{-}^{(1)}-1&n_{+}^{(1)}-(m_{-}^{(1)}-1)r_{+}e^{-2izx}\\ m_{-}^{(2)}&n_{+}^{(2)}-1-m_{-}^{(2)}r_{+}e^{-2izx}\end{bmatrix}.\] We let the superscripts \((R_{1}),(R_{2})\) denote the first and second rows of a square matrix, respectively. Multiplying both sides of the equation (4.65) on the right by the matrix \(\rho_{1}(k)\) and considering the second rows of the matrices gives \[(J\rho_{1})^{(R_{2})}-(\mathcal{C}^{-}(JR)\rho_{1})^{(R_{2})}=(F\rho_{1})^{(R_{2})}. \tag{4.66}\] The relation (3.15) \(R\rho_{1}=\rho_{1}S\) and the equation (4.26) yield \[(\mathcal{C}^{-}(JR)\rho_{1})^{(R_{2})}=\mathcal{C}^{-}((J\rho_{1})^{(R_{2})}S),\] so (4.66) can be rewritten as \[(I-\mathcal{C}^{-}_{S})(J\rho_{1})^{(R_{2})}=(F\rho_{1})^{(R_{2})}. \tag{4.67}\] By (4.17), we know that for every \(x\in\mathbb{R}\), there exists a positive constant \(\eta\) such that \[\|(J\rho_{1})^{(R_{2})}\|_{L_{z}^{2}}\leq\eta\|(F\rho_{1})^{(R_{2})}\|_{L_{z}^{2}},\] which implies \[\|m_{-}^{(2)}\|_{L_{z}^{2}}+\|2ik(n_{+}^{(2)}-1-m_{-}^{(2)}r_{+}e^{-2izx})\|_{L_{z}^{2}}\leq\eta\|\mathcal{C}^{-}(r_{-}e^{2izx})\|_{L_{z}^{2}}. \tag{4.68}\] From (4.52) we immediately get (4.57), and in addition, from (3.28) we have \[\|2ik(n_{+}^{(2)}-1)\|_{L_{z}^{2}}\leq(\eta+1)\|\mathcal{C}^{-}(r_{-}e^{2izx})\|_{L_{z}^{2}}. \tag{4.69}\] Similarly, by multiplying both sides of the equation (4.65) on the right by the matrix \(\rho_{2}(k)\) and considering the first rows of the matrices, we obtain that for every \(x\in\mathbb{R}\), there exists a positive constant \(\eta\) such that \[\|\frac{1}{2ik}(m_{-}^{(1)}(x,z)-1)\|_{L_{z}^{2}}+\|n_{+}^{(1)}-(m_{-}^{(1)}-1)r_{+}e^{-2izx}\|_{L_{z}^{2}}\leq\eta\|\mathcal{C}^{+}(r_{+}e^{-2izx})\|_{L_{z}^{2}}. \tag{4.70}\] Further, from (3.28) we have \[\|n_{+}^{(1)}\|_{L_{z}^{2}} \leq\|\frac{1}{2ik}(m_{-}^{(1)}-1)2ikr_{+}e^{-2izx}\|_{L_{z}^{2}}+\eta\|\mathcal{C}^{+}(r_{+}e^{-2izx})\|_{L_{z}^{2}}\] \[\leq 2\eta\|\mathcal{C}^{+}(r_{+}e^{-2izx})\|_{L_{z}^{2}},\] and using (4.51) we get (4.58). We take the derivative of both sides of the equation (4.64) with respect to the variable \(x\) to obtain \[\partial_{x}\tilde{M}-\mathcal{C}^{+}((\partial_{x}\tilde{M})R_{+})-\mathcal{C}^{-}((\partial_{x}\tilde{M})R_{-})=\hat{F}, \tag{4.71}\] where \[\hat{F}(x,z)= \partial_{x}F+\mathcal{C}^{+}(\tilde{M}\partial_{x}R_{+})+\mathcal{C}^{-}(\tilde{M}\partial_{x}R_{-})\] \[= 2i\begin{bmatrix}0&-\mathcal{C}^{+}(zr_{+}e^{-2izx})(z)\\ \mathcal{C}^{-}(zr_{-}e^{2izx})(z)&0\end{bmatrix}\] \[+2i\begin{bmatrix}\mathcal{C}^{-}(n_{+}^{(1)}zr_{-}e^{2izx})(z)&-\mathcal{C}^{+}((m_{-}^{(1)}-1)zr_{+}e^{-2izx})(z)\\ \mathcal{C}^{-}((n_{+}^{(2)}-1)zr_{-}e^{2izx})(z)&-\mathcal{C}^{+}(m_{-}^{(2)}zr_{+}e^{-2izx})(z)\end{bmatrix}.\] Similarly, we can rewrite (4.71) as \[\hat{J}-\mathcal{C}^{-}(\hat{J}R)=\hat{F}, \tag{4.72}\] where \[\hat{J}(x,z) :=\partial_{x}\tilde{M}(x,z)(I-R_{+}(x,z))\] \[=\begin{bmatrix}\partial_{x}m_{-}^{(1)}&-r_{+}e^{-2izx}\partial_{x}m_{-}^{(1)}+\partial_{x}n_{+}^{(1)}\\ \partial_{x}m_{-}^{(2)}&-r_{+}e^{-2izx}\partial_{x}m_{-}^{(2)}+\partial_{x}n_{+}^{(2)}\end{bmatrix}.\] Repeating the same procedure as in the above proof and using (4.68), (4.70) and Proposition 3 yields the results (4.59) and (4.60). 
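The weighted estimate (4.51) lends itself to a direct numerical check, via the tail-integral representation \(\mathcal{C}^{+}(r_{+}(z)e^{-2izx})(z)=\int_{2x}^{\infty}\widehat{r}_{+}(\lambda)e^{i(\lambda-2x)z}d\lambda\) obtained in the proof of Lemma 5 and the Plancherel identity \(\|f\|_{L^{2}}^{2}=2\pi\|\widehat{f}\|_{L^{2}}^{2}\). The sample \(r_{+}\) below is hypothetical, and the verification holds up to discretization error.

```python
import numpy as np

# Check of (4.51): sup_{x>0} <x> ||C^+(r_+ e^{-2izx})||_{L^2_z} <= ||r_+||_{H^1_z},
# using ||C^+(r e^{-2izx})||^2_{L^2_z} = 2*pi * int_{2x}^inf |rhat|^2 dlam (Plancherel)
# and the paper's convention rhat(lam) = (1/2pi) * int r(z) e^{-i lam z} dz.
n, L = 2**14, 200.0
zz = (np.arange(n) - n // 2) * (L / n)
dz = zz[1] - zz[0]
r = np.exp(-zz**2 / 2) / (1 - 1j * zz)           # hypothetical r_+ in H^1(R)

lam = np.fft.fftshift(np.fft.fftfreq(n, d=dz)) * 2 * np.pi
rhat = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(r))) * dz / (2 * np.pi)
dlam = lam[1] - lam[0]

H1 = np.sqrt(2 * np.pi * np.sum((1 + lam**2) * np.abs(rhat)**2) * dlam)
lhs = max(np.sqrt(1 + x**2)
          * np.sqrt(2 * np.pi * np.sum(np.abs(rhat[lam > 2 * x])**2) * dlam)
          for x in np.linspace(0.0, 30.0, 301))
print(lhs <= H1)                                 # True (up to discretization error)
```

The inequality holds with room to spare here, reflecting the elementary bound \(\langle x\rangle\leq\langle\lambda\rangle\), valid for \(\lambda\geq 2x\geq 0\), which is one way to see why the weight \(\langle x\rangle\) is admissible.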
**Corollary 6**.: _If \(r_{\delta,\pm}(z)\in H^{1}_{z}(\mathbb{R})\) and \(r_{j}(k),j=1,2\) satisfy the condition (3.28), then there exists a positive constant \(C\) such that_ \[\sup_{x\in\mathbb{R}^{-}}\|\langle x\rangle m^{(2)}_{\delta,+}(x,z)\|_{L^{2}_{z}}\leq C\|r_{\delta,-}\|_{H^{1}}, \tag{4.73}\] \[\sup_{x\in\mathbb{R}^{-}}\|\langle x\rangle n^{(1)}_{\delta,-}(x,z)\|_{L^{2}_{z}}\leq C\|r_{\delta,+}\|_{H^{1}}. \tag{4.74}\] _If in addition, \(r_{\delta,\pm}(z)\in L^{2,1}_{z}(\mathbb{R})\), then there exists another positive constant \(C\) such that_ \[\sup_{x\in\mathbb{R}}\|\partial_{x}m^{(2)}_{\delta,+}(x,z)\|_{L^{2}_{z}}\leq C(\|r_{\delta,+}\|_{H^{1}_{z}\cap L^{2,1}_{z}}+\|r_{\delta,-}\|_{H^{1}_{z}\cap L^{2,1}_{z}}), \tag{4.75}\] \[\sup_{x\in\mathbb{R}}\|\partial_{x}n^{(1)}_{\delta,-}(x,z)\|_{L^{2}_{z}}\leq C(\|r_{\delta,+}\|_{H^{1}_{z}\cap L^{2,1}_{z}}+\|r_{\delta,-}\|_{H^{1}_{z}\cap L^{2,1}_{z}}), \tag{4.76}\] _where the constant \(C\) depends on \(\|r_{\delta,\pm}\|_{H^{1}_{z}\cap L^{2,1}_{z}}\)._ **Proposition 8**.: _If \(r_{\pm}(z)\in H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\) and \(r_{j}(k),\ j=1,2\) satisfy the condition (3.28), then \(u\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) satisfies_ \[\|u\|_{H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})}\leq C(\|r_{+}\|_{H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})}+\|r_{-}\|_{H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})}), \tag{4.77}\] _where \(C\) is a constant depending on \(\|r_{\pm}\|_{H^{1}_{z}\cap L^{2,1}_{z}}\). Moreover, the map_ \[[r_{-}(z),r_{+}(z)]\mapsto u \tag{4.78}\] _is Lipschitz continuous from \(H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})\) to \(H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\)._ Proof.: We first prove that \(u\in L^{2,1}(\mathbb{R})\). The reconstruction formula (4.31) can be rewritten as \[u(x)e^{i\int_{x}^{+\infty}u(y)v(y)dy}=\frac{2}{\pi i}\int_{\mathbb{R}}r_{+}(z)e^{-2izx}dz+h_{1}(x),\] where \[h_{1}(x):=\frac{2}{\pi i}\int_{\mathbb{R}}[m^{(1)}_{-}(x,z)-1]r_{+}(z)e^{-2izx}dz.\] From (4.23) and (4.25), we have \[h_{1}(x)= \frac{2}{\pi i}\int_{\mathbb{R}}\mathcal{C}^{-}(n^{(1)}_{+}(x,z)r_{-}(z)e^{2izx})(z)r_{+}(z)e^{-2izx}dz\] \[= -\frac{2}{\pi i}\int_{\mathbb{R}}n^{(1)}_{+}(x,z)r_{-}(z)e^{2izx}\mathcal{C}^{+}(r_{+}e^{-2izx})(z)dz,\] and therefore \[\sup_{x\in\mathbb{R}^{+}}|\langle x\rangle^{2}h_{1}(x)| \leq\frac{2}{\pi}\|r_{-}\|_{L^{\infty}_{z}}\sup_{x\in\mathbb{R}^{+}}\|\langle x\rangle n_{+}^{(1)}\|_{L^{2}_{z}}\sup_{x\in\mathbb{R}^{+}}\|\langle x\rangle\mathcal{C}^{+}(r_{+}e^{-2izx})\|_{L^{2}_{z}}\] \[\lesssim_{\|r_{\pm}\|_{H^{1}}}\|r_{+}\|_{H^{1}}\] from (4.51) and (4.58). 
Furthermore, \[\|u\|_{L^{2,1}(\mathbb{R}^{+})} \leq\frac{2}{\pi}\|\langle x\rangle\int_{\mathbb{R}}r_{+}(z)e^{-2izx}dz\|_{L^{2}(\mathbb{R}^{+})}+\|\langle x\rangle h_{1}(x)\|_{L^{2}(\mathbb{R}^{+})}\] \[\lesssim\|\langle x\rangle\widehat{r}_{+}\|_{L^{2}}+\sup_{x\in\mathbb{R}^{+}}|\langle x\rangle^{2}h_{1}(x)|\|\langle x\rangle^{-1}\|_{L^{2}}\] \[\lesssim_{\|r_{\pm}\|_{H^{1}}}\|r_{+}\|_{H^{1}}.\] Similarly, we can rewrite (4.47) as \[u(x)e^{i\int_{x}^{+\infty}u(y)v(y)dy}=\frac{2}{\pi i}\int_{\mathbb{R}}r_{\delta,+}(z)e^{-2izx}dz+h_{2}(x),\] where \[h_{2}(x):=\frac{2}{\pi i}\int_{\mathbb{R}}[m_{\delta,+}^{(1)}(x,z)-1]r_{\delta,+}(z)e^{-2izx}dz.\] Similarly to the above proof, we have \[\|u\|_{L^{2,1}(\mathbb{R}^{-})}\lesssim_{\|r_{\delta,\pm}\|_{H^{1}}}\|r_{\delta,+}\|_{H^{1}}\lesssim_{\|r_{\pm}\|_{H^{1}}}\|r_{+}\|_{H^{1}}+\|r_{-}\|_{H^{1}}\] from (4.23), (4.25), (4.55), (4.74) and Proposition 7. Thus we have completed the proof of \(u\in L^{2,1}(\mathbb{R})\). Next we prove that \(u\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\). By the same analytical technique we can obtain \[\|\partial_{x}[v(x)e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}]\|_{L^{2,1}(\mathbb{R}^{+})}\lesssim_{\|r_{\pm}\|_{H^{1}}}\|r_{-}\|_{H^{1}},\] \[\|\partial_{x}[v(x)e^{\frac{1}{2i}\int_{x}^{+\infty}u(y)v(y)dy}]\|_{L^{2,1}(\mathbb{R}^{-})}\lesssim_{\|r_{\pm}\|_{H^{1}}}\|r_{+}\|_{H^{1}}+\|r_{-}\|_{H^{1}}\] from the reconstruction formulas (4.32) and (4.48). Because \(H^{1}(\mathbb{R})\) is embedded into \(L^{\infty}(\mathbb{R})\), we have \[\|u\|_{H^{1,1}(\mathbb{R})}\lesssim_{\|r_{\pm}\|_{H^{1}}}\|r_{+}\|_{H^{1}}+\|r_{-}\|_{H^{1}}.\] Taking the derivative of (4.32) and (4.48) with respect to the variable \(x\) and using the same analysis, we get \[\|u\|_{H^{2}(\mathbb{R})}\lesssim_{\|r_{\pm}\|_{H^{1}\cap L^{2,1}}}\|r_{+}\|_{H^{1}\cap L^{2,1}}+\|r_{-}\|_{H^{1}\cap L^{2,1}}\] by Lemmas 5 and 6 and Corollaries 5 and 6. Finally, we prove that the map (4.78) is Lipschitz continuous. Suppose that \(r_{\pm},\tilde{r}_{\pm}\in H^{1}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\) satisfy \(\|r_{\pm}\|_{H^{1}\cap L^{2,1}},\|\tilde{r}_{\pm}\|_{H^{1}\cap L^{2,1}}\leq\gamma\) for some \(\gamma>0\), and denote the corresponding potentials by \(u\) and \(\tilde{u}\) respectively. 
Using the reconstruction formulas (4.31), (4.32), (4.47), (4.48) and repeating the above proof, we obtain that there exists a \(\gamma\)-dependent constant \(C(\gamma)\) such that \[\|u-\tilde{u}\|_{H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})}\leq C(\gamma)(\|r_{+}-\tilde{r}_{+}\|_{H^{1}\cap L^{2,1}}+\|r_{-}-\tilde{r}_{-}\|_{H^{1}\cap L^{2,1}}).\] ## 5 Time evolution and global solutions Suppose that the fundamental vector solutions of the Lax pair (1.2) and (1.3) associated with the potential \(u(x,t)\) have the following form \[\varphi_{\pm}(x,t,k)e^{-ik^{2}x-2ik^{4}t},\ \ \psi_{\pm}(x,t,k)e^{ik^{2}x+2ik^{4}t}.\] For any fixed \(t\), \(\varphi_{\pm}(x,t,k)\) and \(\psi_{\pm}(x,t,k)\) satisfy the same conclusions as in Corollary 1, as well as the boundary conditions \[\varphi_{\pm}(x,t,k)\to e_{1},\ \ \psi_{\pm}(x,t,k)\to e_{2},\ \ \text{as}\ x\to\pm\infty.\] We define the matrices \[J_{-}(x,t,k) :=[\varphi_{-}(x,t,k),\psi_{-}(x,t,k)]e^{-ik^{2}\sigma_{3}x},\] \[J_{+}(x,t,k) :=[\varphi_{+}(x,t,k),\psi_{+}(x,t,k)]e^{-ik^{2}\sigma_{3}x},\] which satisfy equation (1.2); then, by the theory of ODEs, it follows that there exists \(\Lambda(t,k)=\begin{bmatrix}a(t,k)&c(t,k)\\ b(t,k)&d(t,k)\end{bmatrix}\) such that \(J_{-}(x,t,k)=J_{+}(x,t,k)\Lambda(t,k)\), i.e. \[\begin{split}&\left[\varphi_{-}(x,t,k)e^{-ik^{2}x}\ \ \ \psi_{-}(x,t,k)e^{ik^{2}x}\right]\\ =&\left[\varphi_{+}(x,t,k)e^{-ik^{2}x}\ \ \ \psi_{+}(x,t,k)e^{ik^{2}x}\right]\begin{bmatrix}a(t,k)&c(t,k)\\ b(t,k)&d(t,k)\end{bmatrix}.\end{split} \tag{5.1}\] Thus a version of the scattering relation (2.32) with time \(t\) is established, and we can define the time-dependent reflection coefficients \[r_{1}(t,k):=\frac{b(t,k)}{a(t,k)},\ \ r_{2}(t,k):=\frac{c(t,k)}{d(t,k)},\ \ k\in\mathbb{R}\cup i\mathbb{R}, \tag{5.2}\] \[r_{-}(t,z):=2ikr_{1}(t,k),\ \ r_{+}(t,z):=-\frac{r_{2}(t,k)}{2ik},\ \ z\in\mathbb{R}. \tag{5.3}\] Some properties of the reflection coefficients are given in the following lemma. **Lemma 7**.: _If \(u_{0}\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) satisfies the condition (3.21), then for every \(t\in\mathbb{R}\)_ \[\|r_{j}(t,k)\|_{L^{2,1}_{z}\cap L^{\infty}_{z}}=\|r_{j}(0,k)\|_{L^{2,1}_{z}\cap L^{\infty}_{z}},\ \ j=1,2, \tag{5.4}\] _and \(\forall t\in[0,T]\)_ \[\|r_{\pm}(t,z)\|_{H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})}\leq C(T)\|r_{\pm}(0,z)\|_{H^{1}_{z}(\mathbb{R})\cap L^{2,1}_{z}(\mathbb{R})}, \tag{5.5}\] _where \(C(T)\) is a constant that depends on \(T\) and is linear with respect to \(T\)._ Proof.: We define the matrices \[\tilde{J}_{-}(x,t,k):=J_{-}(x,t,k)e^{-2ik^{4}\sigma_{3}t},\ \ \tilde{J}_{+}(x,t,k):=J_{+}(x,t,k)e^{-2ik^{4}\sigma_{3}t},\] which satisfy the linear systems (1.2) and (1.3). 
It follows from relation (5.1) that there exists \(\tilde{\Lambda}(t,k)\) such that \[\tilde{J}_{-}(x,t,k)=\tilde{J}_{+}(x,t,k)\tilde{\Lambda}(t,k),\] where \[\tilde{\Lambda}(t,k)=e^{2ik^{4}\sigma_{3}t}\Lambda(t,k)e^{-2ik^{4}\sigma_{3}t}=\begin{bmatrix}a(t,k)&c(t,k)e^{4ik^{4}t}\\ b(t,k)e^{-4ik^{4}t}&d(t,k)\end{bmatrix}.\] Substituting \(\tilde{J}_{+}(x,t,k)\tilde{\Lambda}(t,k)\) into (1.3) and using the zero trace property of (1.2), it follows that \(\partial_{t}\tilde{\Lambda}(t,k)=0\), which means that \[a(t,k)=a(0,k),\ b(t,k)=b(0,k)e^{4ik^{4}t},\] \[d(t,k)=d(0,k),\ c(t,k)=c(0,k)e^{-4ik^{4}t}.\] So we have \[r_{1}(t,k)=r_{1}(0,k)e^{4iz^{2}t},\ r_{2}(t,k)=r_{2}(0,k)e^{-4iz^{2}t}, \tag{5.6}\] \[r_{-}(t,z)=r_{-}(0,z)e^{4iz^{2}t},\ r_{+}(t,z)=r_{+}(0,z)e^{-4iz^{2}t}, \tag{5.7}\] which imply that \(\forall t\in\mathbb{R}\), \(|r_{j}(t,k)|=|r_{j}(0,k)|,\ j=1,2\) and \(|r_{\pm}(t,z)|=|r_{\pm}(0,z)|,z\in\mathbb{R}\). Since \[\|\partial_{z}r_{\pm}(t,z)\|_{L^{2}_{z}}= \|\partial_{z}r_{\pm}(0,z)\mp 8iztr_{\pm}(0,z)\|_{L^{2}_{z}}\] \[\leq \|\partial_{z}r_{\pm}(0,z)\|_{L^{2}_{z}}+8t\|r_{\pm}(0,z)\|_{L^{2,1}_{z}},\] the inequality (5.5) holds. We denote by \(M(x,t,z)\) the solution of the RH problem 3 associated with the time-dependent reflection coefficients \(r_{\pm}(t,z)\). In view of the properties (5.4) and (5.5) of the time-dependent reflection coefficients, it follows from Proposition 4 that \(M(x,t,z)\) exists and is unique. We denote by \(\Psi(x,t,z)\) the unique solution of the RH problem 1 associated with the time-dependent reflection coefficients \(r_{j}(t,k),\ j=1,2\). By inverse scattering theory [20], [6] it can be verified that \(\Psi(x,t,z)e^{-ik^{2}\sigma_{3}x-2ik^{4}\sigma_{3}t}\) satisfies the Lax pair (1.2) and (1.3). Therefore the function \(u(x,t)\) recovered from the reconstruction formulas is the solution of (1.5). The following proposition shows that \(u(x,t)\) can be controlled by \(u_{0}\) when \(t\in[0,T]\). **Proposition 9**.: _If \(u_{0}\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) satisfies the condition (3.21), then for every \(t\in[0,T]\) we have_ \[\left\|u(\cdot,t)\right\|_{H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})}\leq C(T)\left\|u_{0}\right\|_{H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})}, \tag{5.8}\] _and the mapping_ \[H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\ni u_{0}\mapsto u(x,t)\in C([0,T],H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})) \tag{5.9}\] _is Lipschitz continuous._ Proof.: It follows from Proposition 8, Proposition 3 and Lemma 7 that \[\left\|u(\cdot,t)\right\|_{H^{2}\cap H^{1,1}} \leq C_{1}(\left\|r_{+}(t,z)\right\|_{H^{1}_{z}\cap L^{2,1}_{z}}+\left\|r_{-}(t,z)\right\|_{H^{1}_{z}\cap L^{2,1}_{z}})\] \[\leq C_{2}(T)(\left\|r_{+}(0,z)\right\|_{H^{1}_{z}\cap L^{2,1}_{z}}+\left\|r_{-}(0,z)\right\|_{H^{1}_{z}\cap L^{2,1}_{z}})\] \[\leq C(T)\left\|u_{0}\right\|_{H^{2}\cap H^{1,1}},\] which completes the proof of (5.8). Before proving the Lipschitz continuity of the mapping (5.9), we show that \(u(x,t)\) is continuous with respect to time \(t\) in the sense of the norm \(H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\). 
In fact, for any \(t_{1},t_{2}\in[0,T]\), \[\left\|r_{-}(t_{1},\cdot)-r_{-}(t_{2},\cdot)\right\|_{H^{1}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}\lesssim\left\|r_{-}(0,\cdot)\right\|_{H^{1}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}.\] For any \(\varepsilon>0\), there exists \(N>0\) such that \[\left\|r_{-}(0,\cdot)\right\|_{H^{1}(|z|>N)\cap L^{2,1}(|z|>N)}<\varepsilon.\] When \(|z|\leq N\), we have \[|e^{4iz^{2}(t_{1}-t_{2})}-1| \leq 4N^{2}|t_{1}-t_{2}|,\] \[|t_{1}e^{4iz^{2}t_{1}}-t_{2}e^{4iz^{2}t_{2}}| \leq(4N^{2}T+1)|t_{1}-t_{2}|,\] and \[\left\|r_{-}(t_{1},\cdot)-r_{-}(t_{2},\cdot)\right\|_{H^{1}(|z|\leq N)\cap L^{2,1}(|z|\leq N)}\] \[\leq C(N,T)|t_{1}-t_{2}|\left\|r_{-}(0,\cdot)\right\|_{H^{1}(|z|\leq N)\cap L^{2,1}(|z|\leq N)}.\] Thus, for \(|t_{1}-t_{2}|\) sufficiently small, \[\left\|r_{-}(t_{1},\cdot)-r_{-}(t_{2},\cdot)\right\|_{H^{1}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}\lesssim\varepsilon.\] We can get a similar result for \(r_{+}(t,z)\), and Proposition 8 then gives \[\left\|u(\cdot,t_{1})-u(\cdot,t_{2})\right\|_{H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})}\lesssim\varepsilon.\] Let \(u_{0},\tilde{u}_{0}\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) satisfy \(\left\|u_{0}\right\|_{H^{2}\cap H^{1,1}},\left\|\tilde{u}_{0}\right\|_{H^{2}\cap H^{1,1}}\leq U\) for some \(U>0\). Denote the corresponding scattering data by \(r_{\pm},\tilde{r}_{\pm}\) respectively. The estimate (5.5) and the Lipschitz continuity in Proposition 3 and Proposition 8 show that \[\left\|u-\tilde{u}\right\|_{C([0,T],H^{2}\cap H^{1,1})}=\left\|u(\cdot,t^{*})-\tilde{u}(\cdot,t^{*})\right\|_{H^{2}\cap H^{1,1}}\] \[\leq C_{1}(U,T)(\left\|r_{+}(t^{*},\cdot)-\tilde{r}_{+}(t^{*},\cdot)\right\|_{H^{1}\cap L^{2,1}}+\left\|r_{-}(t^{*},\cdot)-\tilde{r}_{-}(t^{*},\cdot)\right\|_{H^{1}\cap L^{2,1}})\] \[\leq C_{2}(U,T)(\left\|r_{+}(0,\cdot)-\tilde{r}_{+}(0,\cdot)\right\|_{H^{1}\cap L^{2,1}}+\left\|r_{-}(0,\cdot)-\tilde{r}_{-}(0,\cdot)\right\|_{H^{1}\cap L^{2,1}})\] \[\leq C(U,T)\|u_{0}-\tilde{u}_{0}\|_{H^{2}\cap H^{1,1}},\] where \(t^{*}\in[0,T]\), and \(C(U,T)\) is a polynomial function with respect to \(T\). Finally, we give the proof of Theorem 1. **Proof of Theorem 1:** Let \(T_{max}>0\) be the maximal existence time of the local solution \(u(x,t)\). Suppose that \(T_{max}<\infty\). Then, by the estimate (5.8), there exist a constant \(M\) and a sequence of times \(t_{n}\to T_{max}\) such that \[\|u(\cdot,t_{n})\|_{H^{2}\cap H^{1,1}}\leq M.\] By local well-posedness, there then exist \(\delta>0\) (depending only on \(M\)) and a sufficiently large \(n\) such that \(t_{n}+\delta>T_{max}\) and the solution \(u(x,t)\) exists on \([0,t_{n}+\delta]\), which contradicts the fact that \(T_{max}\) is the maximal existence time. Let \(u_{0},\tilde{u}_{0}\in H^{2}(\mathbb{R})\cap H^{1,1}(\mathbb{R})\) satisfy \(\left\|u_{0}\right\|_{H^{2}\cap H^{1,1}},\left\|\tilde{u}_{0}\right\|_{H^{2}\cap H^{1,1}}\leq U\) for some \(U>0\). Define \[d(u(x,t),\tilde{u}(x,t)):=\sum_{n=1}^{\infty}\frac{\left\|u-\tilde{u}\right\|_{n}}{2^{n}(1+\left\|u-\tilde{u}\right\|_{n})},\] where \[\left\|u(x,t)\right\|_{n}=\left\|u(x,t)\right\|_{C([0,n],H^{2}\cap H^{1,1})},\ \ n\in\mathbb{N}.\] The Lipschitz continuity (5.9) shows that \[d(u(x,t),\tilde{u}(x,t)) \leq\sum_{n=1}^{\infty}\frac{C(U,n)\left\|u_{0}-\tilde{u}_{0}\right\|_{H^{2}\cap H^{1,1}}}{2^{n}(1+C(U,n)\left\|u_{0}-\tilde{u}_{0}\right\|_{H^{2}\cap H^{1,1}})}\] \[\leq\sum_{n=1}^{\infty}\frac{C(U,n)}{2^{n}}\left\|u_{0}-\tilde{u}_{0}\right\|_{H^{2}\cap H^{1,1}}\] \[\leq C(U)\left\|u_{0}-\tilde{u}_{0}\right\|_{H^{2}\cap H^{1,1}},\] where the last series converges since \(C(U,n)\) grows only polynomially in \(n\). This completes the proof of Theorem 1. 
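As a closing illustration, the explicit evolution (5.6)–(5.7) underlying Lemma 7 is easy to probe numerically: the modulus of the reflection coefficient is conserved, and the \(H^{1}\)-seminorm grows at most linearly in \(t\), exactly as in the estimate used to prove (5.5). The sketch below uses a hypothetical sample for \(r_{-}(0,z)\) and spectral differentiation; it illustrates the inequalities and does not compute actual scattering data.

```python
import numpy as np

# Illustration of Lemma 7: under (5.7), |r_-(t,z)| = |r_-(0,z)|, and
# ||d/dz r_-(t)||_{L^2} <= ||d/dz r_-(0)||_{L^2} + 8 t ||z r_-(0)||_{L^2}.
n, L = 2**14, 40.0
z = (np.arange(n) - n // 2) * (L / n)
dz = z[1] - z[0]
lam = 2 * np.pi * np.fft.fftfreq(n, d=dz)

r0 = np.exp(-z**2) / (1 + 1j * z)       # hypothetical r_-(0, z)
t = 0.5
rt = r0 * np.exp(4j * z**2 * t)         # r_-(t, z) = r_-(0, z) e^{4iz^2 t}

l2 = lambda g: np.sqrt(np.sum(np.abs(g)**2) * dz)
ddz = lambda g: np.fft.ifft(1j * lam * np.fft.fft(g))    # spectral d/dz

print(np.allclose(np.abs(rt), np.abs(r0)))               # True: modulus conserved
print(l2(ddz(rt)) <= l2(ddz(r0)) + 8 * t * l2(z * r0))   # True: linear growth bound
```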
**Acknowledgements** This work is supported by the National Natural Science Foundation of China (Grant Nos. 11671095 and 51879045). **Data Availability Statements** The data that support the findings of this study are available within the article. **Conflict of Interest** The authors have no conflicts to disclose.
2310.16729
On the Kashaev signature conjecture
In 2018, Kashaev introduced a square matrix indexed by the regions of a link diagram, and conjectured that it provides a novel way of computing the Levine-Tristram signature and Alexander polynomial of the corresponding oriented link. In this article, we show that for the classical signature (i.e. the Levine-Tristram signature at -1), this conjecture follows from the seminal work of Gordon-Litherland. We also relate Kashaev's matrix to Kauffman's "Formal Knot Theory" model of the Alexander polynomial. As a consequence, we establish the Alexander polynomial and classical signature parts of the conjecture for arbitrary links, as well as the full conjecture for definite knots.
David Cimasoni, Livio Ferretti
2023-10-25T15:57:58Z
http://arxiv.org/abs/2310.16729v2
# On the Kashaev signature conjecture ###### Abstract. In 2018, Kashaev introduced a square matrix indexed by the regions of a link diagram, and conjectured that it provides a novel way of computing the Levine-Tristram signature and Alexander polynomial of the corresponding oriented link. In this article, we show that for the classical signature (i.e. the Levine-Tristram signature at \(-1\)), this conjecture follows from the seminal work of Gordon-Litherland. We also relate Kashaev's matrix to Kauffman's "Formal Knot Theory" model of the Alexander polynomial. As a consequence, we establish the Alexander polynomial and classical signature parts of the conjecture for arbitrary links, as well as the full conjecture for definite knots. Key words and phrases:Link diagrams, Levine-Tristram signature, Alexander polynomial 2020 Mathematics Subject Classification: 57K10 ## 1. Introduction The Levine-Tristram signature \(\sigma_{L}\) and Alexander polynomial \(\Delta_{L}\) of an oriented link \(L\) in the \(3\)-sphere \(S^{3}\) are among the most studied and best understood link invariants. They can be defined as follows. Let \(F\) be a Seifert surface for \(L\), i.e. an oriented connected compact surface \(F\) smoothly embedded in \(S^{3}\) with oriented boundary \(\partial F=L\). Let \[\alpha\colon H_{1}(F;\mathbb{Z})\times H_{1}(F;\mathbb{Z})\longrightarrow \mathbb{Z}\] be the associated _Seifert form_, i.e. the bilinear map defined by \(\alpha([x],[y])=\operatorname{lk}(x^{-},y)\), where \(\operatorname{lk}\) stands for the linking number, and \(x^{-}\subset S^{3}\setminus F\) denotes the cycle \(x\subset F\) pushed in the negative normal direction off \(F\). Writing \(A\) for an associated matrix and fixing \(\omega\in S^{1}\setminus\{1\}\), the matrix \[H(\omega):=(1-\omega)A+(1-\overline{\omega})A^{T}\] is Hermitian and therefore has a well-defined signature, namely the number of positive eigenvalues minus the number of negative ones. The _Levine-Tristram signature_ of \(L\)[13, 17] is the map \[\sigma_{L}\colon S^{1}\setminus\{1\}\longrightarrow\mathbb{Z}\] given by \(\sigma_{L}(\omega)=\operatorname{sign}H(\omega)\). This map is a well-defined link invariant, i.e. depends neither on the choice of the Seifert surface \(F\), nor on the choice of a basis of its homology (see e.g. [14]). The same methods can be used to show that the _Alexander polynomial_ \[\Delta_{L}(t)=\det\left(t^{1/2}A-t^{-1/2}A^{T}\right)\in\mathbb{Z}[t^{\pm 1 /2}]\] is also a well-defined invariant of the oriented link \(L\). (This normalized version is often referred to as the _Alexander-Conway polynomial_ of \(L\)[1, 4, 11]). These invariants enjoy numerous alternative definitions, most of them dating back several decades (see [3] for a survey of the Levine-Tristram signature). It therefore came as a surprise when, in a recent attempt to understand the metaplectic invariants of Goldschmidt-Jones [7], Kashaev conjectured a novel way to compute the Levine-Tristram signature and Alexander polynomial of an oriented link [9]. We now recall his construction. To any oriented link diagram \(D\), Kashaev associates a matrix \(\tau_{D}\) indexed by the regions of \(D\) with coefficients in the polynomial ring \(\mathbb{Z}[x]\). 
It is defined as the sum over all crossings \(c\) of \(D\) of the matrix \[\tau_{c}=\operatorname{sgn}(c)\begin{pmatrix}2x^{2}-1&x&1&x\\ x&1&x&1\\ 1&x&2x^{2}-1&x\\ x&1&x&1\end{pmatrix}, \tag{1}\] whose rows and columns are indexed by the four regions \(i,j,k,\ell\) around \(c\) (in this order), where \(\operatorname{sgn}(c)=\pm 1\) denotes the sign of \(c\), and the regions \(i,j,k,\ell\) are labeled as illustrated in Figure 1.

Figure 1. Labeling of the regions in Equation (1)

Equation (1) should be understood as describing the non-vanishing coefficients of a matrix indexed by the regions of \(D\); also, if the diagram is not _reduced_, i.e. if the regions \(i,j,k,\ell\) around a crossing \(c\) are not all distinct, then one should add the corresponding rows and columns of \(\tau_{c}\). One easily checks that the coefficients of \(\tau_{D}\) actually belong to the ring \(\mathbb{Z}[2x]\). **Example 1.1**.: Consider the positive trefoil knot, with diagram \(D\) and regions numbered as follows. The associated Kashaev matrix is given by \[\tau_{D}=\begin{pmatrix}3&3&2x&2x&2x\\ 3&3&2x&2x&2x\\ 2x&2x&4x^{2}-2&1&1\\ 2x&2x&1&4x^{2}-2&1\\ 2x&2x&1&1&4x^{2}-2\end{pmatrix}.\] Kashaev then studies the effect of Reidemeister moves on \(\tau_{D}\), obtaining the following result: given any oriented link diagram \(D\) and any \(x\in\mathbb{R}\), the integer \[\operatorname{sign}(\tau_{D}[x])-w(D)\] is a link invariant, where \(\operatorname{sign}(\tau_{D}[x])\) is the signature of the symmetric matrix \(\tau_{D}\) evaluated at \(x\in\mathbb{R}\), and \(w(D)\) is the _writhe_ of \(D\), namely the number of positive crossings minus the number of negative ones. Even though not stated formally in [9], it also follows from this investigation that the torsion of the \(\mathbb{Z}[2x]\)-module presented by \(\tau_{D}\) is an invariant (as well as the full module provided \(D\) is connected). In particular, its determinantal ideals yield invariant polynomials (up to multiplication by a unit of \(\mathbb{Z}[2x]\), i.e. up to sign). Kashaev conjectures that these are well-known invariants. **Conjecture 1** (Kashaev [9]).: Let \(D\) be an oriented diagram for an oriented link \(L\). 1. If \(\widetilde{\tau}_{D}\) denotes the matrix \(\tau_{D}\) evaluated at \(2x=t^{1/2}+t^{-1/2}\) with two rows and columns corresponding to two adjacent regions removed, then we have \[\det(\widetilde{\tau}_{D})=\pm\Delta_{L}(t)^{2}\in\mathbb{Z}[t^{\pm 1/2}]\,.\] 2. For any \(\omega\in S^{1}\setminus\{1\}\), we have \[\operatorname{sign}(\tau_{D}[x])-w(D)=2\sigma_{L}(\omega)\,,\] where \(2x=\omega^{1/2}+\omega^{-1/2}\in\mathbb{R}\). Note that only the second point of this conjecture is explicitly stated in [9]. Nevertheless, the first point is a rather obvious guess from the examples computed in [9], and was discussed by the authors with Rinat Kashaev, hence the attribution. In the present article, we prove the first point of this conjecture in full generality, the second point for the _classical signature_ \(\sigma(L)=\sigma_{L}(-1)\) [18], and the full conjecture for _definite knots_, namely knots admitting a Seifert matrix \(A\) such that \(A+A^{T}\) is (positive or negative) definite. In other words, we have the following result. **Theorem 1**.: _Let \(D\) be an oriented diagram for an oriented link \(L\)._ 1. _The matrix_ \(\widetilde{\tau}_{D}\) _satisfies_ \(\det(\widetilde{\tau}_{D})=\pm\Delta_{L}(t)^{2}\)_._ 2. _We have the equality_ \(\operatorname{sign}(\tau_{D}[0])-w(D)=2\sigma(L)\)_._ 3. 
_If_ \(L\) _is a definite knot, then the equality_ \(\operatorname{sign}(\tau_{D}[x])-w(D)=2\sigma_{L}(\omega)\) _holds for all_ \(\omega\in S^{1}\setminus\{1\}\)_, with_ \(2x=\omega^{1/2}+\omega^{-1/2}\)_._ Our proof of the first point relies on a relation between the Kashaev matrix \(\tau_{D}\) and Kauffman's "Formal Knot Theory" model of the Alexander polynomial [10, 12]. As for the second point, it follows from relating \(\tau_{D}[0]\) with two copies of the Goeritz matrix and harnessing the seminal result of Gordon-Litherland for the classical signature [8]. These two results then imply the third one in a rather straightforward way (and actually yield the conjecture for a wider class of knots, see Remark 3.3). Therefore, the present article not only provides a proof of parts of the Kashaev conjecture. Its purpose is also to show that this conjecture can be understood as a reformulation of Kauffman's model for \(\Delta_{L}\) together with a rather surprising extension of the Gordon-Litherland theorem from the classical signature \(\sigma(L)\) to the full Levine-Tristram signature \(\sigma_{L}\) (see Remarks 3.1 and 3.2). This paper is organised as follows. In Section 2, we recall the necessary background on the aforementioned Kauffman model for \(\Delta_{L}\) (Section 2.1) and the Gordon-Litherland formula for \(\sigma(L)\) (Section 2.2). Section 3 contains the proof of Theorem 1, each of the three points being dealt with in an individual subsection. **Acknowledgments.** The authors would like to thank Sebastian Baader, Pierre Bagnoud, Anthony Conway, Livio Liechti and Rinat Kashaev for useful discussions. Support from the Swiss NSF grant 200021-212085 is thankfully acknowledged. ## 2. Preliminaries This section deals with the preliminaries to the proof of Theorem 1. We start in Section 2.1 by recalling Kauffman's model for \(\Delta_{L}\), while Section 2.2 contains a brief presentation of the Gordon-Litherland formula for \(\sigma(L)\). ### The Kauffman model for the Alexander polynomial One of the main points of Kauffman's _Formal Knot Theory_ treatise [10] (see also [12]) is the construction of an original combinatorial model for the normalized Alexander-Conway polynomial. It can be summarized as follows. Given an oriented diagram \(D\) of a link \(L\), Kauffman defines a matrix \(K(D)\) whose rows are indexed by the crossings of \(D\) and whose columns are indexed by the regions of \(D\). For any region \(i\) and crossing \(c\) of \(D\), the coefficient \(K(D)_{ci}\) is the label of the corner corresponding to the region \(i\) at the crossing \(c\), as in Figure 2. (If a region abuts a corner from two sides, then the corresponding labels should be added.) Writing \(\widetilde{K}(D)\) for the matrix obtained from \(K(D)\) by deleting two columns corresponding to adjacent regions, Kauffman proves that \(\det\widetilde{K}(D)\) is an invariant of \(L\) up to sign, and satisfies \[\det\widetilde{K}(D)=\pm\Delta_{L}(t)\in\mathbb{Z}[t^{\pm 1/2}]\,. \tag{2}\] _Remark 2.1_.: As one easily checks, the matrix \(K(D)\) can be transformed to the matrix \(\big(\widetilde{K}(D)\ \ 0\ \ 0\big)\) by adding to the two last columns (corresponding to adjacent regions) linear combinations of the others. Figure 2. Kauffman's labels ### The Gordon-Litherland formula for the classical signature In their celebrated article [8], Gordon and Litherland define a quadratic form associated to any (not necessarily orientable) spanning surface of a link \(L\). 
Using 4-dimensional techniques, they show how this form relates the classical signature \(\sigma(L)\) to a Goeritz matrix of \(L\) [6], thus providing a simple, diagrammatic way of computing the signature. We now briefly outline this result. Consider an oriented link \(L\) and an oriented diagram \(D\) for \(L\). Colour the regions of \(\mathbb{R}^{2}\setminus D\) with two colours, say black and white (denoted by \(\mathsf{b}\) and \(\mathsf{w}\)), in a checkerboard manner. Then, for any choice \(\mathsf{v}\in\{\mathsf{b},\mathsf{w}\}\) among these two colours, one can associate to a crossing \(c\) of \(D\) two signs \(\eta_{\mathsf{v}}(c)\) and \(t_{\mathsf{v}}(c)\) as described in Figure 3. Note that \(\eta_{\mathsf{v}}(c)\) depends on the under/over crossing information but not on the orientation, while \(t_{\mathsf{v}}(c)\) depends on the orientation, but not on the under/over crossing. The _Goeritz matrix_ associated to the colour \(\mathsf{v}\) is the matrix \(G_{\mathsf{v}}(D)=(g_{ij})_{i,j}\) indexed by the regions of colour \(\mathsf{v}\) and defined by \[g_{ij}=\sum_{c\sim i,\,c\sim j}\eta_{\mathsf{v}}(c)\] for \(i\neq j\), where the sum is over all crossings \(c\) incident to both regions \(i\) and \(j\), and by \[g_{ii}=-\sum_{k\neq i}g_{ik}\,.\] A Goeritz matrix is always symmetric, so its signature is well defined, but it is not invariant under Reidemeister moves. In order to produce an invariant, one needs to consider the correction term \[\mu_{\mathsf{v}}(D)=\sum_{c\,:\,t_{\mathsf{v}}(c)=-1}-\eta_{\mathsf{v}}(c)\,,\] the sum being over all the crossings with \(t_{\mathsf{v}}(c)=-1\). Gordon and Litherland prove that for any checkerboard colour \(\mathsf{v}\), we have \[\operatorname{sign}G_{\mathsf{v}}(D)-\mu_{\mathsf{v}}(D)=\sigma(L)\,. \tag{3}\] _Remark 2.2_.: The sum of all the columns in a Goeritz matrix \(G\) is equal to zero, so \(G\) is congruent to \((0)\oplus\widetilde{G}\), where \(\widetilde{G}\) is obtained from \(G\) by deleting the first row and first column. The matrix \(\widetilde{G}\) is often referred to as the Goeritz matrix, while \(G\) is sometimes called _pre-Goeritz_. However, this change is irrelevant in the computation of the signature, and it will be more practical for us to consider the entire matrix. ## 3. Proof of Theorem 1 This section contains the proof of Theorem 1: Section 3.1 deals with the first point, Section 3.2 with the second, and Section 3.3 with the third. ### The Alexander polynomial and signature jumps The aim of this section is to prove the first point of Theorem 1, and to derive a lemma about the behaviour of the Kashaev signature. Let us fix an oriented diagram \(D\) for an oriented link \(L\), and first focus on the relation between \(\tau_{D}=\tau_{D}[x]\) and \(\Delta_{L}\). To do so, we fix an ordering of the regions of \(D\) and of its crossings, denoted by \(c_{1},\cdots,c_{n}\). Recall the associated Kauffman matrix \(K(D)\) defined in Section 2.1. **Proposition 3.1**.: _If \(D\) is a diagram for an oriented link \(L\), then we have the equality_ \[\tau_{D}[x]=K(D)^{T}S(D)K(D),\] _where \(2x=t^{1/2}+t^{-1/2}\) and \(S(D)\) stands for the diagonal matrix \(\operatorname{diag}(\operatorname{sgn}(c_{1}),\cdots,\operatorname{sgn}(c_{n}))\)._ Proof.: The proof consists in expanding the matrix on the right-hand side of the equality (that we write \(K^{T}SK\) for simplicity), and comparing it to \(\tau_{D}\) evaluated at \(2x=t^{1/2}+t^{-1/2}\). 
Given any regions \(i,j\) of \(D\), we have \[\left(K^{T}SK\right)_{ij}=\sum_{c\sim i,\,c\sim j}\operatorname{sgn}(c)K_{ci} K_{cj}\,,\] where the sum is over all the crossings of \(D\) incident to the region \(i\) and to the region \(j\), and the coefficients of \(K\) are given by the labels of Figure 2. If \(i\) and \(j\) are different regions of the same colour, then each crossing \(c\) incident to both \(i\) and \(j\) contributes \(\operatorname{sgn}(c)\) to the coefficient \(\left(K^{T}SK\right)_{ij}\). This is precisely the contribution of \(c\) to \(\left(\tau_{D}\right)_{ij}\) (recall Equation (1)), so this case is checked. Let us now assume that \(i\) and \(j\) are two regions of different colours. Then, every edge of \(D\) adjacent to \(i\) and \(j\) gives two contributions to \(\left(K^{T}SK\right)_{ij}\), one for each crossing adjacent to the edge: these contributions sum up to \(t^{1/2}+t^{-1/2}=2x\) (resp. \(-t^{1/2}-t^{-1/2}=-2x\)) if both crossings are positive (resp. negative), and vanish if the crossings have different signs. This coincides with the contributions of these two crossings to \(\left(\tau_{D}\right)_{ij}\). Finally, let us assume that \(i=j\). If the orientation of \(D\) near a crossing \(c\) incident to \(i\) yields a coherent orientation of the region \(i\), then the crossing \(c\) contributes \(\operatorname{sgn}(c)\) to the \(i^{\text{th}}\) diagonal coefficient of both matrices \(\tau_{D}\) and \(K^{T}SK\). If this is not the case, then the crossing \(c\) yields different polynomial coefficients to both these matrices. Note that the number of contributions of crossings incident to the fixed region \(i\) giving such different coefficients is always even. (If the diagram is reduced, the number of such crossings is even, but in general, one must consider crossing contributions.) Hence, we can start from one such contribution, move along the boundary of the region \(i\) in a fixed direction, and group them by pairs of consecutive such contributions. The sum of two consecutive contributions is \(t+t^{-1}=2(2x^{2}-1)\) (resp. \(-t-t^{-1}=-2(2x^{2}-1)\)) if both crossings are positive (resp. negative) and \(0\) if the crossings have different signs. Once again, this coincides with the sum of these two contributions to \(\left(\tau_{D}\right)_{ii}\). Proof of Theorem 1 (i).: Let \(D\) be a diagram for an oriented link \(L\), and let \(\widetilde{K}(D)\) denote the Kauffman matrix \(K(D)\) with two columns corresponding to adjacent regions removed. By Proposition 3.1, we have \[\widetilde{\tau}_{D}=\widetilde{K}(D)^{T}S(D)\widetilde{K}(D)\,. \tag{4}\] The equality \(\det\widetilde{\tau}_{D}=\pm\Delta_{L}(t)^{2}\) now follows from Kauffman's model in the form of Equation (2). _Remark 3.1_.: Note that the relation (4) between the Kauffman and Kashaev matrices not only allows us to prove the Alexander polynomial part of Kashaev's conjecture. It also shows that this latter statement is actually equivalent to Kauffman's result. Therefore, an independent proof of the first point of Conjecture 1 would automatically provide a new proof of Kauffman's model for the Alexander polynomial. Applying Remark 2.1, we immediately get the following corollary. **Corollary 3.1**.: _The matrix \(\tau_{D}\) is congruent to \(\widetilde{\tau}_{D}\oplus(0)^{\oplus 2}\), with \(\widetilde{\tau}_{D}=\widetilde{K}(D)^{T}S(D)\widetilde{K}(D)\). 
_ Using this relation to the Alexander polynomial, we now study the jumps of Kashaev's signature, defined as \[J^{\pm}(x):=\pm(\lim_{y\to x^{\pm}}\operatorname{sign}(\tau_{D}[y])- \operatorname{sign}(\tau_{D}[x]))\,.\] To do so, we denote by \(\operatorname{mult}_{\omega}(\Delta_{L})\) the multiplicity of \(\omega\in S^{1}\) as a root of the polynomial \(\Delta_{L}\). **Lemma 3.1**.: _Let \(L\) be a link with non-vanishing Alexander polynomial \(\Delta_{L}\), and let \(D\) be a diagram of \(L\). After the change of variable \(2x=\omega^{1/2}+\omega^{-1/2}\) with \(\omega\in S^{1}\), the signature \(\operatorname{sign}(\tau_{D}[x])\) becomes a step function on \(S^{1}\) which can have discontinuities only at roots of \(\Delta_{L}(t)\), and whose jumps satisfy \(|J^{\pm}(\omega)|\leq 2\operatorname{mult}_{\omega}(\Delta_{L})\)._ Proof.: By Corollary 3.1, it is enough to study the jumps of the signature of \(\widetilde{\tau}_{D}=\widetilde{K}(D)^{T}S(D)\widetilde{K}(D)\). Moreover, since \(\det\widetilde{\tau}_{D}=\pm\Delta_{L}^{2}\neq 0\), it follows that the signature of \(\widetilde{\tau}(D)\) can jump only at the roots of \(\Delta_{L}\), and that the jumps \(|J^{\pm}(\omega)|\) are bounded by the nullity of \(\widetilde{\tau}_{D}\) at \(t=\omega\). Let us consider \(\widetilde{\tau}_{D}\) as a matrix with coefficients in \(\mathbb{R}[t^{\pm 1/2}]\). Since this ring is a principal ideal domain, there exist matrices \(P,Q\in\operatorname{GL}(\mathbb{R}[t^{\pm 1/2}])\) such that \[P\,\widetilde{\tau}_{D}\,Q=\begin{pmatrix}d_{1}&&\\ &\ddots&\\ &&d_{n}\end{pmatrix}\] with \(d_{i}\in\mathbb{R}[t^{\pm 1/2}]\). In particular, we have \(d_{1}\cdots d_{n}=\Delta_{L}^{2}\) up to multiplication by units of \(\mathbb{R}[t^{\pm 1/2}]\). But the nullity of \(\widetilde{\tau}_{D}\) at \(t=\omega\) is the number of \(d_{i}\) such that \(d_{i}(\omega)=0\), which is bounded by \(\operatorname{mult}_{\omega}(\Delta_{L}^{2})\). Therefore, we get the inequality \(|J^{\pm}(\omega)|\leq\operatorname{mult}_{\omega}(\Delta_{L}^{2})=2 \operatorname{mult}_{\omega}(\Delta_{L})\). ### The classical signature We now study the relation of Kashaev's matrix with the classical signature \(\sigma(L)=\sigma_{L}(-1)\). Under the usual change of variables \(2x=t^{1/2}+t^{-1/2}\), this corresponds to studying \(\tau_{D}\) at \(x=0\). Consider an oriented link diagram \(D\) whose regions are coloured in checkerboard manner with two colours \(\mathsf{b}\) and \(\mathsf{w}\). For any colour \(\mathsf{v}\in\{\mathsf{b},\mathsf{w}\}\) and any crossing \(c\), recall the signs \(\eta_{\mathsf{v}}(c)\) and \(t_{\mathsf{v}}(c)\) defined in Figure 3. The proof of the following result is immediate. **Lemma 3.2**.: _For any colour \(\mathsf{v}\) and crossing \(c\), we have the equality \(\eta_{\mathsf{v}}(c)t_{\mathsf{v}}(c)=\operatorname{sgn}(c)\). _ Recall the definition of the correction term \(\mu_{\mathsf{v}}(D)=-\sum_{c\,:\,t_{\mathsf{v}}(c)=-1}\eta_{\mathsf{v}}(c)\). 
**Lemma 3.3.** _For any diagram \(D\), we have \(\mu_{\mathsf{w}}(D)+\mu_{\mathsf{b}}(D)=w(D)\)._ Proof. The definition of \(\mu_{\mathsf{v}}\) together with Lemma 3.2 yield \[\mu_{\mathsf{w}}(D)+\mu_{\mathsf{b}}(D) =\sum_{c\,:\,t_{\mathsf{w}}(c)=-1}-\eta_{\mathsf{w}}(c)+\sum_{c\,:\,t_{\mathsf{b}}(c)=-1}-\eta_{\mathsf{b}}(c)\] \[=\sum_{c\,:\,t_{\mathsf{w}}(c)=-1}-\eta_{\mathsf{w}}(c)+\sum_{c\,:\,t_{\mathsf{w}}(c)=1}\eta_{\mathsf{w}}(c)\] \[=\sum_{c\,:\,t_{\mathsf{w}}(c)=-1}t_{\mathsf{w}}(c)\eta_{\mathsf{w}}(c)+\sum_{c\,:\,t_{\mathsf{w}}(c)=1}t_{\mathsf{w}}(c)\eta_{\mathsf{w}}(c)\] \[=\sum_{c\,:\,t_{\mathsf{w}}(c)=-1}\operatorname{sgn}(c)+\sum_{c\,:\,t_{\mathsf{w}}(c)=1}\operatorname{sgn}(c)=w(D)\,. \qed\] Let us now consider the matrix \(\tau_{D}[x]\) evaluated at \(x=0\). Note that if two regions \(i\) and \(j\) have different colours, then the corresponding coefficient \(\tau_{D}[0]_{ij}\) vanishes. Hence, we see that \(\tau_{D}[0]\) splits as the direct sum \[\tau_{D}[0]=\tau_{\mathsf{w}}(D)\oplus\tau_{\mathsf{b}}(D)\,,\] where the matrix \(\tau_{\mathsf{w}}(D)\) (resp. \(\tau_{\mathsf{b}}(D)\)) is indexed by the white (resp. black) regions of \(D\). For any colour \(\mathsf{v}\in\{\mathsf{w},\mathsf{b}\}\), the definition of \(\tau_{D}\) and Lemma 3.2 yield \[\tau_{\mathsf{v}}(D)_{ij}=\begin{cases}\sum\limits_{c\sim i,\,c\sim j}\eta_{\mathsf{v}}(c)t_{\mathsf{v}}(c),&\text{ if }i\neq j\\ \sum\limits_{c\sim i}-\eta_{\mathsf{v}}(c),&\text{ if }i=j,\end{cases}\] where the first (resp. second) sum is over all crossings incident to both regions \(i\) and \(j\) (resp. incident to the region \(i\)). Moreover, given two regions \(i\) and \(j\) of the same colour \(\mathsf{v}\), for any crossing \(c\) incident to \(i\) and \(j\), the sign \(t_{\mathsf{v}}(c)\) depends only on \(i\) and \(j\), and can therefore be denoted by \(t_{ij}\). We get \[\tau_{\mathsf{v}}(D)_{ij}=\begin{cases}t_{ij}\sum\limits_{c\sim i,c\sim j}\eta_{\mathsf{v}}(c),&\text{ if }i\neq j\\ \sum\limits_{c\sim i}-\eta_{\mathsf{v}}(c),&\text{ if }i=j\,.\end{cases}\] Comparing this with the definition of the Goeritz matrices in Section 2.2, we see that \(\tau_{D}[0]\) is very close to the direct sum \(G_{\mathsf{w}}(D)\oplus G_{\mathsf{b}}(D)\): the only differences are the signs \(t_{ij}\) that appear in \(\tau_{\mathsf{v}}(D)\) but not in \(G_{\mathsf{v}}(D)\). Everything is now set up for the proof of the conjecture for the classical signature. Proof of Theorem 1 (ii). Let \(D\) be a diagram for an oriented link \(L\). Applying twice the Gordon-Litherland formula (3) together with Lemma 3.3, we have \[2\sigma(L)=\operatorname{sign}G_{\mathsf{w}}(D)-\mu_{\mathsf{w}}(D)+\operatorname{sign}G_{\mathsf{b}}(D)-\mu_{\mathsf{b}}(D)=\operatorname{sign}G_{\mathsf{w}}(D)+\operatorname{sign}G_{\mathsf{b}}(D)-w(D)\,.\] Therefore, we only need to find a congruence between \(\tau_{D}[0]=\tau_{\mathsf{w}}(D)\oplus\tau_{\mathsf{b}}(D)\) and \(G_{\mathsf{w}}(D)\oplus G_{\mathsf{b}}(D)\). To simplify the arguments, we now use the well-known fact that every link admits a _special diagram_ (see e.g. [2, Proposition 13.14]). This can be understood as a diagram such that for every colour \(\mathsf{v}\), the sign \(t_{\mathsf{v}}(c)\) does not depend on \(c\). 
Taking \(D\) special and choosing the checkerboard colouring such that \(t_{\mathsf{w}}(c)=1\) for all \(c\), we get \(\tau_{\mathsf{w}}(D)=G_{\mathsf{w}}(D)\) and \[\tau_{\mathsf{b}}(D)_{ij}=\begin{cases}\sum\limits_{c\sim i,c\sim j}-\eta_{\mathsf{b}}(c),&\text{ if }i\neq j\\ \sum\limits_{c\sim i}-\eta_{\mathsf{b}}(c),&\text{ if }i=j\,.\end{cases}\] We are now left with proving that \(\tau_{\mathsf{b}}(D)\) is congruent to \(G_{\mathsf{b}}(D)\). Since we have \(t_{\mathsf{w}}(c)=1\) for all \(c\), any white region abuts an even number of crossings. It follows that the adjacency graph associated to the black regions is a bipartite graph. This implies the equality \(\tau_{\mathsf{b}}(D)=P^{T}G_{\mathsf{b}}(D)P\), with \(P\) the diagonal matrix with diagonal coefficients equal to \(1\) or \(-1\) according to whether the corresponding black region belongs to one set of the bipartition or the other. _Remark 3.2._ Note that the relationship between \(\tau_{D}[0]\) and the direct sum of two Goeritz matrices not only allows us to prove the classical signature part of Kashaev's conjecture. It also shows that this latter statement is equivalent to the Gordon-Litherland formula, provided one knows that \(\operatorname{sign}G_{\mathsf{v}}(D)-\mu_{\mathsf{v}}(D)\) is an invariant (an easy fact already established by Goeritz [6]). Therefore, the second point of Conjecture 1 should be understood as an extension of the Gordon-Litherland formula from the classical signature to the full Levine-Tristram signature. ### The signature conjecture for definite knots In this final section, we conclude the proof of Theorem 1 by establishing the signature conjecture for definite knots. To do so, let us start by focusing on the behaviour of \(\tau_{D}[x]\) at \(x=1\). **Lemma 3.4.** _If \(D\) is a diagram for an oriented knot \(K\), then \(\operatorname{sign}(\tau_{D}[1])=w(D)\)._ Proof. By Corollary 3.1, the matrix \(\tau_{D}\) is congruent to \(\widetilde{\tau}_{D}\oplus(0)^{\oplus 2}\) with \[\widetilde{\tau}_{D}=\widetilde{K}(D)^{T}S(D)\widetilde{K}(D)\,,\] so \(\operatorname{sign}(\tau_{D}[x])=\operatorname{sign}(\widetilde{\tau}_{D}[x])\) for all \(x\in\mathbb{R}\); here and in what follows, \(x\in\mathbb{R}\) and \(\omega\in S^{1}\) are related by \(2x=\omega^{1/2}+\omega^{-1/2}\). Evaluating Equation (2) at \(x=1\), which corresponds to \(\omega=1\), we get that \(\widetilde{K}(D)[1]\) is an integer-valued matrix whose determinant is equal to \(\pm\Delta_{K}(1)\). Since \(K\) is a knot, this value is non-zero. Therefore, by Sylvester's law of inertia, we now have \[\operatorname{sign}(\tau_{D}[1])=\operatorname{sign}(\widetilde{\tau}_{D}[1])=\operatorname{sign}\big((\widetilde{K}(D)^{T}S(D)\widetilde{K}(D))[1]\big)=\operatorname{sign}(S(D))=w(D)\,.\qed\] Now, everything is ready to conclude the proof of Theorem 1. Proof of Theorem 1 (iii). The classical signature of an arbitrary knot \(K\) satisfies \(|\sigma(K)|\leq 2g(K)\), with \(g(K)\) the genus of \(K\). If \(K\) is definite, then the opposite inequality holds by definition, yielding the equality \(|\sigma(K)|=2g(K)\). On the other hand, it is well-known that the span of the Alexander polynomial \(\Delta_{K}\) is bounded above by \(2g(K)\) (see e.g. [14, Proposition 6.13]), so \(\Delta_{K}\) has at most \(2g(K)\) roots counted with multiplicity. Moreover, the Levine-Tristram signature of a knot vanishes close to \(\omega=1\) (see e.g. [5]), it can only have discontinuities at roots of \(\Delta_{K}\), and the jumps are bounded by the multiplicity of the corresponding roots ([16, Theorem 2]; see also [5]). 
It follows that, since \(K\) is a definite knot, all the roots of the Alexander polynomial lie on the unit circle and all the jumps of the signature at discontinuities are maximal. The Levine-Tristram signature is therefore uniquely determined by the zeros of the Alexander polynomial (in \(S^{1}\)). Now, let \(D\) be a diagram for \(K\). By Theorem 1 (ii), we have \[\operatorname{sign}(\tau_{D}[0])-w(D)=2\sigma(K)\] and by Lemma 3.4, we have \[\operatorname{sign}(\tau_{D}[1])-w(D)=0\,.\] As above, by the bounds in Lemma 3.1, this forces all the jumps of \(\operatorname{sign}(\tau_{D}[x])\) to be maximal. Since the jumps of \(\operatorname{sign}(\tau_{D}[x])\) are exactly twice the jumps of \(\sigma_{K}(\omega)\), we therefore get the equality \[\operatorname{sign}(\tau_{D}[x])-w(D)=2\sigma_{K}(\omega)\] for all \(\omega\in S^{1}\), concluding the proof. _Remark 3.3._ In fact, our proof of Kashaev's signature conjecture applies to a wider class of knots, namely any knot \(K\) such that \(|\sigma(K)|\) is equal to the number of roots of \(\Delta_{K}\) on \(S^{1}\). This is for instance the case if \(K\) is the boundary of a Murasugi sum of two Seifert surfaces with symmetric, positive definite Seifert form, as proved by Liechti [15, Proposition 5.6]. This includes, in particular, all positive arborescent Hopf plumbings.
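To make the step-function behaviour used throughout this argument concrete, here is a small numerical sketch. The symmetric matrix family below is hypothetical, chosen only for illustration (it is not the Kashaev matrix of any particular diagram); it simply exhibits the content of Lemma 3.1: the signature is piecewise constant in the parameter and can only jump where the determinant vanishes.

```python
import numpy as np

def signature(M, tol=1e-9):
    """Number of positive minus number of negative eigenvalues."""
    eigs = np.linalg.eigvalsh(M)
    return int(np.sum(eigs > tol) - np.sum(eigs < -tol))

def M(x):
    # Hypothetical symmetric family with det M(x) = (x - 1/2)(x + 1/2):
    # the signature can only jump at the roots x = -1/2 and x = +1/2.
    return np.array([[x - 0.5, 0.0],
                     [0.0, x + 0.5]])

for x in np.linspace(-1.0, 1.0, 9):
    print(f"x = {x:+.2f}  det = {np.linalg.det(M(x)):+.3f}  "
          f"signature = {signature(M(x)):+d}")
```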
2308.14374
Online Continual Learning on Hierarchical Label Expansion
Continual learning (CL) enables models to adapt to new tasks and environments without forgetting previously learned knowledge. While current CL setups have ignored the relationship between labels in the past task and the new task with or without small task overlaps, real-world scenarios often involve hierarchical relationships between old and new tasks, posing another challenge for traditional CL approaches. To address this challenge, we propose a novel multi-level hierarchical class incremental task configuration with an online learning constraint, called hierarchical label expansion (HLE). Our configuration allows a network to first learn coarse-grained classes, with data labels continually expanding to more fine-grained classes in various hierarchy depths. To tackle this new setup, we propose a rehearsal-based method that utilizes hierarchy-aware pseudo-labeling to incorporate hierarchical class information. Additionally, we propose a simple yet effective memory management and sampling strategy that selectively adopts samples of newly encountered classes. Our experiments demonstrate that our proposed method can effectively use hierarchy on our HLE setup to improve classification accuracy across all levels of hierarchies, regardless of depth and class imbalance ratio, outperforming prior state-of-the-art works by significant margins while also outperforming them on the conventional disjoint, blurry and i-Blurry CL setups.
Byung Hyun Lee, Okchul Jung, Jonghyun Choi, Se Young Chun
2023-08-28T07:42:26Z
http://arxiv.org/abs/2308.14374v1
# Online Continual Learning on Hierarchical Label Expansion ###### Abstract Continual learning (CL) enables models to adapt to new tasks and environments without forgetting previously learned knowledge. While current CL setups have ignored the relationship between labels in the past task and the new task with or without small task overlaps, real-world scenarios often involve hierarchical relationships between old and new tasks, posing another challenge for traditional CL approaches. To address this challenge, we propose a novel multi-level hierarchical class incremental task configuration with an online learning constraint, called hierarchical label expansion (HLE). Our configuration allows a network to first learn coarse-grained classes, with data labels continually expanding to more fine-grained classes in various hierarchy depths. To tackle this new setup, we propose a rehearsal-based method that utilizes hierarchy-aware pseudo-labeling to incorporate hierarchical class information. Additionally, we propose a simple yet effective memory management and sampling strategy that selectively adopts samples of newly encountered classes. Our experiments demonstrate that our proposed method can effectively use hierarchy on our HLE setup to improve classification accuracy across all levels of hierarchies, regardless of depth and class imbalance ratio, outperforming prior state-of-the-art works by significant margins while also outperforming them on the conventional disjoint, blurry and i-Blurry CL setups. + Footnote †: \(*\) Equal contribution, \(\dagger\) Corresponding authors. ## 1 Introduction In real-world continual learning scenarios, new knowledge often augments existing understanding, typically following a hierarchical path from general to specific classes. This hierarchical structure is not an anomaly, but rather an inherent part of many disciplines. The schema theory [10, 43] in cognitive psychology and the conceptual clustering theory [29] in machine learning both emphasize hierarchical organization of knowledge. The COBWEB algorithm [21], a prominent machine learning method, uses hierarchical clustering for grouping related instances into meaningful categories. Hierarchical organization is also observed in biology's taxonomy theory [9], classifying organisms based on shared traits, and in chemistry [28], where elements are arranged hierarchically according to their atomic properties. However, despite the prevalence of hierarchical relationships in these areas, many previous continual learning works [4, 6, 7, 32] do not fully incorporate these relationships. This may be an area that needs more attention, as hierarchical relationships could play a role in knowledge evolution in incremental learning. Here we introduce a novel CL setup called Hierarchical Label Expansion (HLE), designed to account for hierarchical class relationships in task-free online CL. In HLE, class learning is incremental, with fine-grained classes derived from prior coarse-grained ones, effectively mirroring real-world knowledge accumulation. As our proposed approach is designed for online continual learning, where data is seen only once in the data stream, each task's data is disjoint. We assess our models' performance using any-time inference [32] and evaluate classification accuracy for all levels of hierarchy. This demonstrates the potential of our approach to complement existing CL methods and enhance their evaluation. 
HLE encompasses both single and multiple hierarchy depths, as well as balanced and imbalanced class data scenarios. To tackle the CL on HLE, we propose a new CL method that utilizes pseudo-labeling based memory management (PL) and flexible memory sampling (FMS). This method effectively exploits hierarchy information between class labels in the dataset, resembling how knowledge is accumulated in real-world scenarios. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods by substantial margins in HLE, while remaining superior in performance on existing CL setups including disjoint, blurry [6] and i-Blurry [32]. We summarize our contributions as follows: 1. We propose new online class-incremental, hierarchy-aware, task-free CL setups called HLE, designed to simulate how knowledge is accumulated in real-world scenarios. 2. We propose a new online CL method, PL-FMS, that consists of pseudo-labeling (PL) based memory management and flexible memory sampling (FMS) to better exploit hierarchy information and address the HLE setup. 3. We evaluate our approach on CIFAR100, Stanford-Cars, iNaturalist-19, and a novel dataset named ImageNet-Hier100, demonstrating that our method outperforms prior state-of-the-art works by significant margins on HLE while still outperforming them on the existing disjoint, blurry and i-Blurry CL setups. ## 2 Related Work **Continual learning setups.** Continual Learning (CL) setups can be classified into three categories: task-incremental, class-incremental, and domain-incremental learning setups [16, 46]. Our work focuses on the class-incremental learning setting proposed by [40], where task identity is not given during inference, and the model is required to solve each task seen so far and infer which task it is presented with. CL setups can be classified as either online [5, 20, 25, 27] or offline [2, 13, 39, 40, 44]. Our work focuses on the more challenging online CL setup where streamed samples are only used once, compared to the offline CL setup where data from each task can be used multiple times to train the model. CL setups can also be categorized as task-free [1, 4, 35] or task-based [19, 36, 44, 45]. Our work focuses on the former, where the model continuously learns and adapts to incoming data without explicit task information, unlike the latter where the model is informed about the tasks it must learn and adapt to. Despite the considerable attention given to enhancing CL methods, their evaluation has been limited to rather restricted CL settings. To address this, novel CL setups with blurry task boundaries and corrupted labels in data stream [6, 7, 32] have been proposed. A CL setup where classes are shared across tasks and presented sequentially as a stream with limited access to previous data was proposed by [6], while [7] suggested an online blurry CL setup with noisy labels. Recently, a new setup called 'i-Blurry' [32] has been proposed, which combines the advantages of both blurry and disjoint setups by allowing continuous encounters of overlapping classes without suffering from restrictions of blurry and disjoint. However, earlier works all assumed independent class labels, which is often not the case in reality. Our work proposes a complementary CL setup that models hierarchically correlating relationships between labels for online learning depicted in Figure 1. 
**Hierarchical classification.** Various studies have utilized data's hierarchical structure to enhance tasks like image classification [8, 11, 30], multi-label classification [49], object recognition [41], and semi-supervised approaches [23, 48]. The hierarchical taxonomy is typically employed through label-embedding, hierarchical architecture-based, and hierarchical loss-based methods. The label-embedding method maps class labels to vectors to represent semantic relationships and optimizes a loss on these embedded soft vectors. DeViSE [22] maximizes the cosine similarity between image and label embeddings. It maps target classes to a unit hypersphere and penalizes the output that is more similar to false label embeddings using a ranking loss. Liu _et al._[37] use hyperbolic geometry to learn hierarchical representations and minimize the Poincaré distance between Poincaré label embeddings and image feature embeddings, similar to DeViSE. Figure 1: Comparison sketch between conventional, blurry, and our HLE setups. (a) Conventional task-free online CL setup gradually introduces new classes and classifies data without task identification. (b) Blurry task-free online CL setup, where classes are divided into major and minor categories at each task with varying proportions, leads to unclear task boundaries. (c) Proposed HLE CL setup features class label expansion, where child class labels are added to parent class labels throughout the learning process. Hierarchical architecture-based methods incorporate class hierarchy into the classifier architecture. Wu _et al._[50] jointly optimize a multi-task loss function with cross-entropy loss applied at each hierarchy level. Redmon _et al._[41] propose a probabilistic model, YOLOv2, for object detection and classification, with softmax applied at every coarse-category level to address the mutual exclusion of all classes in the conventional softmax classifier. Chang _et al._[11] propose a multi-granularity classification architecture that uses level-specific classifiers to optimize fine-grained and coarse-grained recognition separately and improve fine-grained classification performance. Hierarchical loss-based methods incorporate hierarchical class relationships into the loss function and penalize incorrect predictions while encouraging those that follow the hierarchy. Deng _et al._[17] directly minimized the expected WordNet LCA height using kNN- and SVM-based classifiers, while Zhao _et al._[53] modified multi-class logistic regression and added an 'overlapping-group lasso penalty' to encourage the use of similar features for closely related classes. Bertinetto _et al._[8] proposed the hierarchical cross-entropy approach, where the loss function is based on conditional probabilities given parent-class probabilities. ## 3 Hierarchical Label Expansion In this section, we introduce our proposed HLE setup and present its configurations. Section 3.1 details the setup formulation, where the model is provided with samples only for the classes belonging to a single hierarchy level for each task. Section 3.2 describes the construction of single and multiple hierarchy depth scenarios in HLE to observe knowledge expansion at different levels. ### Hierarchical CL Configurations Our HLE setup involves task-free online learning, where the model incrementally learns classes from various hierarchies both vertically and horizontally, agnostic to the task boundaries. The model is presumed to first learn coarse-grained classes, followed by fine-grained classes. 
Figure 1(c) provides an overview of the HLE setup. Formally, we consider that the model encounters a stream of data points denoted by \(\mathcal{T}=((x_{1},y_{1}),(x_{2},y_{2}),\cdots)\), where \((x_{j},y_{j})\) is sampled from a data distribution \(\mathcal{D}_{\mathbb{X}\times\mathbb{Y}}\), \(x_{j}\in\mathbb{X}\) is the \(j\)th input (image) for the model, and \(y_{j}\in\mathbb{Y}\) is the class label of \(x_{j}\). Often, the sequential tasks with the index \(k\) can divide the data stream \(\mathcal{T}\) into disjoint sub-sequences \(\mathcal{T}_{1},\mathcal{T}_{2},\cdots\), where \(\mathcal{T}_{k}=((x_{j},y_{j}))_{j=t(k)}^{t(k+1)-1}\) and \(t(k)\) is the start sample index for the \(k\)-th task. We define the class subset for the \(k\)-th task as \(\mathbb{Y}_{k}\subseteq\mathbb{Y}\), which represents the set of classes that the model encounters during the \(k\)-th task. Conventional CL usually assumes that the sampling distribution varies over time and that the sampling distributions for tasks are mutually exclusive, _i.e._, \(\mathbb{Y}_{k}\cap\mathbb{Y}_{l}=\emptyset\) for \(k\neq l\). However, more practical contexts sometimes need to be taken into account. For example, the i-Blurry CL setup [32] assumes that each task has both a shared subset of classes \(\mathbb{Y}^{s}\), trained throughout the learning process, and a disjoint subset of classes \(\mathbb{Y}_{k}^{d}\), trained only at a specific task. For this case, the class subset \(\mathbb{Y}_{k}\) is defined as \(\mathbb{Y}_{k}=\mathbb{Y}^{s}\cup\mathbb{Y}_{k}^{d}\), which implies that \(\mathbb{Y}_{k}\cap\mathbb{Y}_{l}=\mathbb{Y}^{s}\neq\emptyset\). In a different direction to complement existing CL setups, our HLE allows more structure on \(\mathbb{Y}\) by constructing a label relation between classes in \(\mathbb{Y}\). Specifically, we consider that \(\mathbb{Y}\) consists of classes from \(H\) levels, so \(\mathbb{Y}=\bigcup_{h=1}^{H}\mathbb{Y}^{h}\) and \(\mathbb{Y}^{h}\cap\mathbb{Y}^{h^{\prime}}=\emptyset\) for \(h\neq h^{\prime}\), where \(\mathbb{Y}^{h}\) is the label subset whose hierarchy level is \(h\); a smaller value of \(h\) corresponds to more coarse-grained classes. In the HLE setup, each task conducts the label expansion for a subset of classes in level \(h\) to their fine-grained classes in level \((h+1)\). That is, the labels are expanded by one level during a task. Let \(\mathbb{Y}_{k}^{h}\subseteq\mathbb{Y}^{h}\) be the label subset for level \(h\) that has been trained by the model until the \(k\)-th task. For the \((k+1)\)-th task, a subset \(\mathbb{Y}_{k+1}^{h}\) of \(\mathbb{Y}_{k}^{h}\) is selected to be newly expanded to a set of their fine-grained classes \(\mathbb{Y}_{k+1,\text{new}}^{h+1}\), resulting in \(\mathbb{Y}_{k+1}=\mathbb{Y}_{k+1,\text{new}}^{h+1}\). To handle multiple hierarchy levels, our model consists of an encoder \(f\) for feature embedding and multiple classifiers \(\{g^{h}\}_{h=1}^{H}\), each corresponding to a hierarchy level. Specifically, \(g^{h}(f(x))\) predicts the classes within level \(h\) encountered until the current iteration. Regardless of its hierarchical position, each input is assigned a single label during training, and the model remains unaware of the hierarchy among classes. The hierarchy level is instead given as a soft hint to the model. 
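As an illustration of this bookkeeping, the following minimal sketch (with hypothetical class names; this is not code from the paper) tracks the per-level label sets and performs one label-expansion step per task, expanding a chosen subset of already-seen level-\(h\) classes to their level-\((h+1)\) children.

```python
# Minimal sketch of HLE label expansion (hypothetical classes, not an
# implementation from the paper). children[parent] lists the
# fine-grained classes of a coarse-grained class.
children = {
    "animal": ["dog", "cat"],
    "vehicle": ["car", "truck"],
}

# seen[h] = set of classes of hierarchy level h encountered so far.
seen = {1: {"animal", "vehicle"}, 2: set()}

def expand(parents, level):
    """One task's label expansion: the new task's class subset is the
    set of children of the selected, already-seen level-`level` parents."""
    assert parents <= seen[level], "can only expand classes already seen"
    new_classes = {c for p in parents for c in children[p]}
    seen[level + 1] |= new_classes
    return new_classes  # the class subset Y_k for this task

print(expand({"animal"}, level=1))   # e.g. {'dog', 'cat'}
print(expand({"vehicle"}, level=1))  # e.g. {'car', 'truck'}
```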
### Hierarchical CL Depth Scenarios Our HLE setup includes two scenarios for hierarchical label expansion, single-depth and multiple-depth (existing setups are 0-depth), as depicted in Figure 2. In the single-depth scenario, incremental learning is observed horizontally within the same hierarchy level, while in the multiple-depth scenario, new classes are introduced with increasing levels of specificity vertically. Figure 2: An illustration of two HLE scenarios. (a) In single-depth scenario, fine-grained classes grow horizontally from coarse-grained ones within the same level. (b) In multiple-depth scenario, classes grow vertically from coarse to fine across different hierarchy levels. In the single-depth scenario, the model learns all parent classes at the first task and partially expands them to child classes through subsequent tasks. This scenario is further explored through dual-label (overlapping data) and single-label (disjoint data) setups, as detailed in Table 1. In the multiple-depth scenario, the model's ability to learn and expand hierarchical knowledge is tested while navigating complex vertical hierarchies by increasing the hierarchy level of classes to be learned for subsequent tasks, meaning that the model learns the classes of hierarchy level \(h\) at the \(h\)-th task. ## 4 Pseudo Labeling-based Flexible Memory Sampling (PL-FMS) In this section, we introduce our method, which employs a rehearsal-based incremental learning approach, where models are trained using previously seen data from a stream buffer. Our method incorporates pseudo-labeling to fully utilize the hierarchical class relationship and a memory sampling strategy to flexibly build the training batch from stored and incoming data. Further details on our method's two main components, Pseudo-Labeling (PL) based Memory Management and Flexible Memory Sampling (FMS), follow in Sections 4.1 and 4.2, respectively. ### Pseudo-Labeling based Memory Management We introduce a novel memory management strategy that uses the model's predictions to generate pseudo-labels for each hierarchy level in our HLE setup, as shown in Figure 3(a). This strategy is referred to as Pseudo-Labeling (PL) based memory management. Basically, it first finds the modal label, i.e., the label that is most frequent in memory, for class balance [6, 32, 39]. Let \(\mathcal{M}\) be the memory that stores samples from the data stream and \(\mathcal{M}_{y}=\{(x_{n},y_{n})\in\mathcal{M}|y_{n}=y\}\) be the subset of the memory whose samples belong to the class \(y\). For a rehearsal-based method, we need to remove a sample from the memory to accept a new sample once \(|\mathcal{M}|\) reaches the maximum memory size. To achieve this, we identify the class with the highest number of samples in the memory, which we denote as \(\bar{y}=\text{arg}\max_{y}|\mathcal{M}_{y}|\). Prior works [6, 32, 39] have typically removed samples only from \(\mathcal{M}_{\bar{y}}\). To further improve the efficiency, we propose to consider samples from other classes hierarchically related to \(\bar{y}\). To do so, we use the class probability predicted by the network, denoted as \(p^{h}(x)=\sigma(g^{h}(f(x)))\in\mathbb{R}^{|\mathbb{Y}^{h}|}\) for level \(h\), where \(\sigma(\cdot)\) is the soft-max function. We use the model to predict classes that are hierarchically related to \(\bar{y}\). We do this by accumulating the model's predictions for samples in \(\mathcal{M}_{\bar{y}}\) for all levels, except for the level of \(\bar{y}\). 
The classes with the most predictions for each level are then identified, defined as: \[\hat{y}^{h}(\mathcal{M}_{\bar{y}})=\text{arg}\max_{y\in\mathbb{Y}^{h}}\sum_{(x,\bar{y})\in\mathcal{M}_{\bar{y}}}\mathbf{1}_{y}(x), \tag{1}\] where \(\mathbf{1}_{y}(x)\) is an indicator function defined as: \[\mathbf{1}_{y}(x)=\left\{\begin{array}{ll}1,&y=\text{arg}\max_{i}\;\;p^{h}_{i}(x)\\ 0,&\text{otherwise}.\end{array}\right.\] In other words, the class at level \(h\) that has the most predictions in \(\mathcal{M}_{\bar{y}}\) is deemed as the class hierarchically related to \(\bar{y}\). Figure 3: Sketch of our proposed method, PL-FMS's two components: PL and FMS. (a) Pseudo-Labeling based memory management (PL) outlines the method of discarding a data sample, which will be replaced with incoming data, based on its effect on reducing loss, irrespective of its label's nature (true or pseudo). (b) Flexible Memory Sampling (FMS) shows formation of the training batch by filtering and compensating data samples. By using the predicted classes for the other levels, we construct an index set of candidate samples to be removed from the memory as: \[\mathcal{I}_{\bar{y}}=\{j\,|\,(x_{j},y_{j})\in\mathcal{M}_{\bar{y}}\cup\bigcup_{k=1,k\neq h}^{H}\mathcal{M}_{\hat{y}^{k}}\}, \tag{2}\] where \(h\) is the hierarchy level of \(\bar{y}\) and \(\hat{y}^{k}=\hat{y}^{k}(\mathcal{M}_{\bar{y}})\). To determine the index of a sample to remove, we adopt the sample-wise loss importance value, \(\mathcal{H}_{n}\), introduced by [32]. Specifically, \(\mathcal{H}_{n}\) is computed as: \[\mathcal{H}_{n}=L(\theta)-L(\theta_{n}),\] where \(L(\theta)=\sum_{(x,y)\in\mathcal{M}}l(x,y;\theta)\) is the loss summed over the memory and \(\theta_{n}=\theta-\nabla_{\theta}l(x_{n},y_{n};\theta)\). By using the loss importance value, we find the index \(\hat{j}\) of the sample to remove, whose measured importance is the least: \[\hat{j}=\text{arg}\min_{j\in\mathcal{I}_{\bar{y}}}\;\mathcal{H}_{j}. \tag{3}\] That is, we measure the decrease in loss for each sample during training and subsequently remove from the memory the sample whose loss decrease is the least. ### Flexible Memory Sampling (FMS) Prior rehearsal-based methods [14, 27, 3, 51] proposed directly including the stream buffer in training, leading to bias toward the data stream distribution and negatively impacting the model's performance. Using only memory samples for training was also suggested by [32], but it limited adaptability to new classes. To balance the usage of memory and data stream, we propose Flexible Memory Sampling (FMS), a simple yet effective sampling strategy that flexibly adjusts the number of stream samples in the training batch. The approach is depicted in Figure 3(b). To construct a training batch \(B_{t}\) at iteration \(t\), ER utilizes all samples in the stream buffer \(S_{t}\) and takes samples from the memory in an amount equal to \(|S_{t}|\), which results in \(|B_{t}|=2|S_{t}|\). Unlike ER, FMS randomly excludes samples from \(S_{t}\) in the training process. Let \(T_{c}\) be the iteration at which the class \(c\) has been encountered for the first time. Then, we selectively include stream samples of class \(c\) with increasing probability as \(t-T_{c}\) gets larger, gradually adopting new classes from the stream buffer. The probability to include a stream sample of class \(c\) is accordingly determined by a Bernoulli distribution for each class: \[\rho_{t}(c)\sim\text{Ber}\left(\text{min}\left(\frac{t-T_{c}}{T},1\right)\right), \tag{4}\] where \(T\) is a hyper-parameter that adjusts how fast the network adopts the stream samples for training. 
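A condensed sketch of Equations (1)-(4) is given below. It is a schematic re-implementation for illustration only, not the authors' code: `model_probs(x, h)` stands for the predicted distribution \(p^{h}(x)\) (as a dictionary mapping classes to probabilities), memory entries are assumed to be stored as (input, label, level) triples, and `loss_importance[j]` is the sample-wise value \(\mathcal{H}_{j}\) assumed to be tracked during training, as in [32].

```python
import random
from collections import Counter

def pl_evict_index(memory, model_probs, loss_importance, num_levels):
    """Return the memory index to evict once the memory is full."""
    # Modal class y_bar: the label with the most samples in memory.
    counts = Counter(y for (_, y, _) in memory)
    y_bar, _ = counts.most_common(1)[0]
    bar_idx = [j for j, (_, y, _) in enumerate(memory) if y == y_bar]
    h_bar = memory[bar_idx[0]][2]  # hierarchy level of y_bar

    # Eq. (1): for every other level, the class receiving the most
    # argmax predictions over the modal-class samples.
    related = set()
    for h in range(1, num_levels + 1):
        if h == h_bar:
            continue
        votes = Counter()
        for j in bar_idx:
            p = model_probs(memory[j][0], h)
            votes[max(p, key=p.get)] += 1
        related.add(votes.most_common(1)[0][0])

    # Eq. (2): candidates are the modal-class samples plus samples
    # whose label is one of the hierarchically related classes.
    candidates = [j for j, (_, y, _) in enumerate(memory)
                  if y == y_bar or y in related]
    # Eq. (3): evict the candidate whose loss decrease is the least.
    return min(candidates, key=lambda j: loss_importance[j])

def fms_keep_stream_sample(t, t_first_seen, T):
    """Eq. (4): include a stream sample of class c with probability
    min((t - T_c) / T, 1), so new classes enter the batch gradually."""
    return random.random() < min((t - t_first_seen) / T, 1.0)
```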
Therefore, it resembles the memory-only training of [32] immediately after encountering new classes, while it becomes more like the sampling approach of ER as \(t-T_{c}\) gets larger. By combining those two strategies, we call our proposed method Pseudo Labeling-based Flexible Memory Sampling (PL-FMS). A detailed description of the algorithm for PL-FMS can be found in the supplementary material. ## 5 Experiments ### Experimental Setups **Datasets.** We evaluate the Hierarchical Label Expansion (HLE) setup with a single-depth scenario on three datasets: **CIFAR100**[34], **Stanford Cars**[33], and a newly constructed dataset called **ImageNet-Hier100**. CIFAR100 and Stanford Cars datasets each have 2 levels of hierarchy, with a total of (20,100) classes and (9,196) classes, respectively. The hierarchical taxonomy provided in each dataset was followed for the experiments. Additionally, we artificially constructed the ImageNet-Hier100, which is a subset of ImageNet [18] based on the taxonomy of WordNet [38]. This dataset also has 2 levels of hierarchy with a total of (10,100) classes. Details on the curation of ImageNet data to construct ImageNet-Hier100 dataset are available in the supplementary material. We evaluate the HLE setup with a multiple-depth scenario on two datasets: **CIFAR100**[34] and **iNaturalist-19**[47]. For CIFAR100, we follow the hierarchical taxonomy as described in [24], where the dataset has 5 levels of hierarchy with (2, 4, 8, 20, 100) classes, excluding the root node. For iNaturalist-19, we use the taxonomy in [8], where the dataset has 7 levels of hierarchy with (3, 4, 9, 34, 57, 72, 1010) classes, excluding the root node. Notably, only the iNaturalist-19 dataset is class-imbalanced among the two datasets. Further details regarding the number of classes introduced at each task, dataset characteristics are available in the supplementary material. **Baselines.** To provide a baseline for our method, we compare it with a range of previous works. We compare our rehearsal-based methods with previous works that were conducted under conventional CL setup, including **ER**[42], **EWC++**[12], and **MIR**[3]. We also compare our rehearsal-based methods with works that were used in recently proposed CL setup, including **RM**[6] and **CLIB**[32]. For regularization-based methods, we compare our methods with **BiC**[51] and **GDumb**[39]. In the single-depth scenario, we evaluated all baseline methods, while in the multiple-depth scenario, we excluded MIR and GDumb as GDumb had the lowest performance and MIR had similar performance to ER, EWC++, and BiC. Further details about the experimental setup are available in the supplementary material. **Scenarios.** We conducted experiments in two scenarios: a single-depth hierarchy level and a multiple-depth hierarchy level, as detailed in Section 3.2 and illustrated in Figure 2. Our HLE setup assumes disjoint data between tasks and is primarily evaluated under the single-label scenario. However, as described in Section 3.2, we also conducted experiments under a dual-label scenario for the single-depth hierarchy level, where data had labels for both hierarchy levels. **Evaluation metrics.** We employ two primary evaluation metrics in our study: final classification accuracy for all hierarchy levels and any-time inference. Classification accuracy at the final task is a commonly used metric in evaluating continual learning methods, as demonstrated in previous works [12, 26, 46]. 
This metric measures the model's accuracy after all tasks have been learned as reported in the experimental tables. We also use any-time inference, as recommended in [32], to assess the model's performance at any given time, crucial for observing knowledge expansion in our task-free setup. We report final accuracy in tables and any-time inference in figures for clarity over time. More details on these metrics are in the supplementary material. **Implementation details.** We implemented prior work using the [32] codebase, and applied AutoAugment [15] and CutMix [52] as per their setup, but modified CutMix to mix samples only from the same hierarchy level to preserve the label distribution. We used ResNet34 as the base feature encoder across all methods, and adjusted batch sizes and update rates for each dataset: CIFAR100 (16, 3), ImageNet-Hier100 and iNaturalist-19 (64, 0.25), Stanford Cars (64, 0.5). Memory sizes were 1000, 2000, 5000, and 8000 for Stanford Cars, CIFAR100, ImageNet-Hier100, and iNaturalist-19, respectively. All methods except GDumb, CLIB, and PL-FMS used the Adam optimizer [31] with an initial learning rate of 0.0003 and an exponential learning rate scheduler. CLIB and our method used the same scheduler following the CLIB codebase. GDumb and CLIB adhered to their original optimization configurations. ### Single-Depth Scenario Analysis In the single-depth hierarchy scenario, knowledge expands horizontally within the same hierarchy level, as depicted in Figure 2(a). The proposed HLE setup was evaluated on three datasets: CIFAR100 and ImageNet-Hier100, both class-balanced, and Stanford Cars, a class-imbalanced dataset, as reported in Table 1 and Figure 4. Among baseline methods, GDumb showed the worst performance, while other methods showed varying performance depending on the dataset and hierarchy level. In CIFAR100, RM and BiC outperformed other baseline methods in hierarchy level 1 and 2, respectively. EWC++ and MIR demonstrated moderate performance in both hierarchy levels, while CLIB exhibited comparable performance to RM and BiC in hierarchy level 1. In ImageNet-Hier100, MIR showed the best performance in hierarchy level 1, while RM exhibited the best performance in hierarchy level 2. BiC showed moderate performance in hierarchy level 1, while EWC++ and ER demonstrated similar performance in hierarchy level 2. 
For Stanford Cars, MIR showed the best performance in hierarchy level 1, while CLIB performed well in hierarchy level 2. ER and BiC displayed similar performance in hierarchy level 1, while GDumb and RM exhibited the lowest and similar performance. In hierarchy level 2, all baseline methods showed similar performance, with overall accuracy between 3% and 5%. \begin{table} \begin{tabular}{c|c c c c c c|c c c c c c} \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c|}{Single-Label Scenario} & \multicolumn{6}{c}{Dual-Label Scenario} \\ & \multicolumn{2}{c}{CIFAR100} & \multicolumn{2}{c}{ImageNet-Hier100} & \multicolumn{2}{c|}{Stanford Cars} & \multicolumn{2}{c}{CIFAR100} & \multicolumn{2}{c}{ImageNet-Hier100} & \multicolumn{2}{c}{Stanford Cars} \\ & H=1 & H=2 & H=1 & H=2 & H=1 & H=2 & H=1 & H=2 & H=1 & H=2 & H=1 & H=2 \\ \hline ER & 37.8\(\pm\)2.06 & 31.3\(\pm\)0.78 & 73.4\(\pm\)1.91 & 55.7\(\pm\)1.87 & 28.4\(\pm\)0.73 & 4.01\(\pm\)0.06 & 42.0\(\pm\)0.57 & 25.5\(\pm\)0.33 & 78.8\(\pm\)0.82 & 57.2\(\pm\)1.89 & 37.8\(\pm\)0.72 & 5.3\(\pm\)0.44 \\ EWC++ & 34.3\(\pm\)0.68 & 27.1\(\pm\)0.80 & 73.4\(\pm\)0.99 & 54.0\(\pm\)1.34 & 27.9\(\pm\)0.74 & 3.42\(\pm\)0.33 & 39.9\(\pm\)2.26 & 23.3\(\pm\)1.93 & 76.3\(\pm\)1.20 & 53.0\(\pm\)3.32 & 38.3\(\pm\)0.47 & 3.17\(\pm\)0.36 \\ BiC & 38.8\(\pm\)0.41 & 33.4\(\pm\)1.41 & 72.5\(\pm\)0.09 & 58.7\(\pm\)0.78 & 27.1\(\pm\)1.08 & 3.05\(\pm\)0.29 & 42.1\(\pm\)1.06 & 28.0\(\pm\)1.01 & 77.7\(\pm\)1.24 & 60.4\(\pm\)0.30 & 36.5\(\pm\)1.04 & 3.26\(\pm\)0.34 \\ MIR & 35.0\(\pm\)1.47 & 28.6\(\pm\)0.18 & 74.5\(\pm\)0.90 & 57.3\(\pm\)1.93 & 28.6\(\pm\)1.09 & 4.50\(\pm\)0.44 & 42.4\(\pm\)0.95 & 26.2\(\pm\)1.79 & 78.5\(\pm\)0.57 & 56.0\(\pm\)2.25 & **43.1\(\pm\)1.18** & 5.02\(\pm\)0.74 \\ RM & 39.3\(\pm\)0.83 & 25.9\(\pm\)0.89 & 69.7\(\pm\)2.02 & 61.0\(\pm\)0.86 & 16.5\(\pm\)0.45 & 28.3\(\pm\)0.64 & 38.2\(\pm\)0.66 & 25.7\(\pm\)1.12 & 71.5\(\pm\)0.73 & 63.1\(\pm\)0.89 & 18.1\(\pm\)2.54 & 32.9\(\pm\)0.28 \\ GDumb & 26.2\(\pm\)0.87 & 18.6\(\pm\)0.09 & 53.4\(\pm\)1.18 & 37.2\(\pm\)0.33 & 16.6\(\pm\)2.31 & 4.50\(\pm\)0.12 & 25.7\(\pm\)0.83 & 8.5\(\pm\)1.11 & 59.2\(\pm\)0.54 & 42.3\(\pm\)0.54 & 15.0\(\pm\)1.04 & 4.06\(\pm\)0.33 \\ CLIB & 38.4\(\pm\)0.58 & 32.6\(\pm\)0.59 & 64.6\(\pm\)0.72 & 49.4\(\pm\)1.32 & 20.8\(\pm\)0.08 & 4.52\(\pm\)0.78 & 44.5\(\pm\)0.87 & 37.1\(\pm\)0.20 & 71.3\(\pm\)0.76 & 55.4\(\pm\)0.35 & 19.1\(\pm\)4.30 & 3.83\(\pm\)0.78 \\ \hline PL-FMS & **43.7\(\pm\)0.13** & **36.4\(\pm\)0.62** & **77.8\(\pm\)1.32** & **64.6\(\pm\)0.97** & **30.7\(\pm\)4.39** & **13.2\(\pm\)0.29** & **49.0\(\pm\)0.19** & **39.5\(\pm\)0.64** & **79.5\(\pm\)0.54** & **67.2\(\pm\)0.41** & 42.0\(\pm\)3.59 & **26.8\(\pm\)3.27** \\ \hline \end{tabular} \end{table} Table 1: Experimental results of baseline methods and our proposed method evaluated on HLE setup for single-depth hierarchy scenario in CIFAR100, ImageNet-Hier100, and Stanford Cars. Dual-label means overlapping data between tasks, and single-label means disjoint data between tasks. Classification accuracy on hierarchy level 1 and 2 at the final task (%) was measured for all datasets, and the results were averaged over three different random seeds. Figure 4: Any-time inference results on CIFAR100 and Stanford Cars datasets for single-depth hierarchy. H=1 is parent classes and H=2 child classes. Task index 1 receives parent class labeled data and subsequent indexes receive child class labeled data. Each data point shows average accuracy over three runs (\(\pm\) std. deviation). 
Our proposed method, PL-FMS, outperformed every baseline method in all single-label scenarios, with the largest improvement seen in the class-imbalanced dataset. It is worth noting that RM is a task-aware learning method that has demonstrated high performance under the HLE setup. This is achieved by a two-stage training approach, where the model is first trained on stream data samples and then fine-tuned using memory data samples, resulting in an upsurge in performance near task boundaries. BiC includes a bias correction layer that effectively reduces dataset bias, but it does not directly improve performance near task boundaries. Additionally, MIR has shown significant performance by selecting samples with high loss importance, which helps to address the problem of catastrophic forgetting. However, GDumb consistently exhibits performance decay due to its fixed regularization coefficient, which limits its ability to adapt to new tasks. ### Multiple-Depth Scenario Analysis Our proposed HLE setup was evaluated on two datasets: class-balanced CIFAR100 and class-imbalanced iNaturalist-19, with the results reported in Table 2 and Figure 5. The multiple-depth hierarchy scenario involves vertical knowledge expansion across all hierarchy levels, as shown in Figure 2 (b). All baseline methods were included except for GDumb and MIR. GDumb displayed consistently low performance across all datasets and hierarchy levels in single-depth hierarchy. MIR exhibited similar performance to that of ER and EWC++ in most cases, making it redundant to report separately. Our method, PL-FMS, outperforms all baseline methods in CIFAR100, with the performance gap increasing significantly from hierarchy level 4 onwards, as reported in Table 2. EWC++ had the lowest performance across all hierarchy levels, while ER performed similarly, but slightly better. RM and BiC had competing performances until hierarchy level 5. Throughout the hierarchy levels, CLIB's performance improved, ranking second among the baselines in the last hierarchy level. Note that most baseline methods suffer from catastrophic forgetting at all task indexes, but the most significant performance drop occurs at the task boundary between tasks 4 and 5, as shown in Figure 5. This is due to the fact that the sampling strategy used by baseline methods for training batches fails to consider the biased class distribution induced by sub-categorization. On the other hand, PL-FMS and CLIB exhibit only a mild performance drop by avoiding direct adoption of the stream buffer. PL-FMS outperformed all baseline methods in iNaturalist-19 except for level 1, with RM and CLIB showing the best performance among the baselines in deeper hierarchy levels. EWC++ performed best only at the coarsest level and rapidly deteriorated thereafter, while BiC exhibited the worst performance overall. ER, EWC++, and BiC exhibited performance decline with increasing hierarchy levels, whereas RM and CLIB demonstrated significant performance improvements in comparison. 
In Table 2, we observe a similar performance transition across the two datasets. However, at the hierarchy level 7, other baseline methods except for RM and CLIB show performance near 1%, while RM, CLIB, and our method perform much better in the highest hierarchy level with performance above 10%. We believe that ER, EWC++, and BiC exhibit significantly worse performance than RM, CLIB, and our method because they have not been tested under robust conditions, while RM and CLIB were proposed under more realistic conditions with blurry task boundaries and data streams. These methods are better equipped to deal with hierarchical knowledge formulation, which requires capturing common features throughout hierarchy trees. Overall, we observe that our method performs especially strongly under class imbalance situations, which is more similar to real-world scenarios. \begin{table} \begin{tabular}{c|c c c c c|c c c c c c c} \hline \multirow{2}{*}{Methods} & \multicolumn{5}{c|}{CIFAR100} & \multicolumn{7}{c}{iNaturalist-19} \\ & H=1 & H=2 & H=3 & H=4 & H=5 & H=1 & H=2 & H=3 & H=4 & H=5 & H=6 & H=7 \\ \hline ER & 71.5\(\pm\)4.44 & 58.4\(\pm\)4.58 & 36.6\(\pm\)4.78 & 18.1\(\pm\)4.28 & 7.47\(\pm\)1.61 & 84.9\(\pm\)6.03 & 84.9\(\pm\)0.68 & 59.8\(\pm\)1.55 & 29.3\(\pm\)3.28 & 17.8\(\pm\)3.95 & 10.3\(\pm\)3.95 & 1.50\(\pm\)0.77 \\ EWC++ & 70.9\(\pm\)2.83 & 56.6\(\pm\)4.26 & 35.8\(\pm\)5.93 & 15.8\(\pm\)3.94 & 6.43\(\pm\)1.28 & **87.4\(\pm\)2.38** & 80.7\(\pm\)1.19 & 66.1\(\pm\)9.80 & 29.4\(\pm\)4.48 & 18.1\(\pm\)6.53 & 15.1\(\pm\)5.73 & 1.88\(\pm\)1.15 \\ BiC & 71.6\(\pm\)1.01 & 63.5\(\pm\)2.48 & 54.7\(\pm\)0.61 & 33.8\(\pm\)0.41 & 19.8\(\pm\)0.78 & 79.5\(\pm\)1.44 & 76.3\(\pm\)12.1 & 54.0\(\pm\)27.4 & 22.9\(\pm\)10.3 & 14.8\(\pm\)9.88 & 11.2\(\pm\)7.78 & 1.34\(\pm\)1.41 \\ RM & 74.2\(\pm\)3.99 & 65.0\(\pm\)4.18 & 50.9\(\pm\)1.40 & 37.6\(\pm\)0.60 & 24.5\(\pm\)2.54 & 74.0\(\pm\)5.57 & 69.7\(\pm\)4.21 & 54.4\(\pm\)2.20 & 40.7\(\pm\)1.15 & 37.4\(\pm\)0.85 & 35.1\(\pm\)0.44 & 11.3\(\pm\)0.33 \\ CLIB & 70.6\(\pm\)4.05 & 59.5\(\pm\)1.22 & 47.6\(\pm\)5.06 & 32.6\(\pm\)1.76 & 22.5\(\pm\)2.08 & 87.2\(\pm\)2.26 & 81.3\(\pm\)4.78 & 62.4\(\pm\)4.10 & 41.5\(\pm\)0.97 & 35.3\(\pm\)0.70 & 33.2\(\pm\)1.19 & 8.07\(\pm\)0.94 \\ \hline PL-FMS & **74.5\(\pm\)4.63** & **65.6\(\pm\)3.34** & **56.0\(\pm\)3.36** & **42.7\(\pm\)1.79** & **30.8\(\pm\)1.54** & 86.1\(\pm\)3.15 & **88.4\(\pm\)3.79** & **70.6\(\pm\)3.17** & **49.6\(\pm\)2.42** & **43.9\(\pm\)1.86** & **41.3\(\pm\)2.57** & **13.6\(\pm\)0.28** \\ \hline \end{tabular} \end{table} Table 2: Experimental results reported for baseline methods and our proposed method evaluated on the HLE setup for the multiple-depth hierarchy scenario in CIFAR100 and iNaturalist-19. The classification accuracy on all hierarchy levels at the final task(%) was measured for all datasets, and the results were averaged over three different random seeds. Figure 5: Any-time inference results on CIFAR100 dataset for multiple-depth hierarchy. H=1 represents the coarsest level and H=5 represents the finest level of class hierarchy. The dotted line represents the point at which the model is fully given the task data for the corresponding task index. The reported data points represent the average accuracy over three runs (\(\pm\) std. deviation). ### Label Regime Analysis Table 1 presents the results of our experiment on a single-depth hierarchy, which we conducted under two scenarios: dual-label and single-label. 
Our dual-label scenario showed similar trends to the single-label scenario, with GDumb being the worst-performing method. Baseline methods that performed well in the single-label scenario had moderate performance in the dual-label scenario. Notably, incorporating the dual-label scenario resulted in an overall higher performance for the baseline methods in hierarchy level 1, although this was not consistent for hierarchy level 2 and varied among methods. Our proposed method, PL-FMS, consistently showed higher performance in the dual-label scenario across all datasets and hierarchy levels, suggesting that it is more adept at capturing hierarchy information in such scenarios, while still performing well in the single-label scenario against baseline methods. ### Prior CL Setups Analysis Table 3 reports the results of our proposed HLE setup and baseline methods evaluated on various CL setups. Figure 1 depicts the difference between HLE and conventional CL setups. We evaluated the methods on disjoint, blurry [6], and i-Blurry [32] setups to check for code reproducibility and to observe whether our method could perform well on different setups. As reported in [32], CLIB exhibited superior or competitive performance to the other baseline methods across all previous setups, especially with a large margin on the i-Blurry setup, since it was designed for that setup. Note that our FMS outperformed CLIB on all the prior setups, which indicates that our method is not limited to the suggested HLE setup. ### Ablation Study In Table 4, we conducted an ablation study to determine the contribution of each component in our proposed method for the multiple-depth, single-label, and dual-label scenarios. The two components, PL and FMS, were evaluated separately to observe the performance gain achieved by each component. Results indicate that PL contributes more to the overall performance gain compared to FMS. However, when used together, the two components benefit each other and show a higher performance gain for all scenarios. We also compared our method against an oracle result obtained via offline batch learning on all classes simultaneously and an approach leveraging true class hierarchy labels (PL-FMS-T). As seen in Table 5, our method gains from scenarios where true class hierarchy is available. Table 6 shows how the hyperparameter \(T\) in PL-FMS, controlling the network's adaptation speed during training, affects performance. Choosing a value of 5,000 for \(T\) yielded the highest accuracy, especially in fine-grained hierarchy classes across all scenarios. \begin{table} \begin{tabular}{c|c c c c c|c c|c c} \hline \multirow{2}{*}{\(T\)} & \multicolumn{5}{c|}{Multiple-Depth} & \multicolumn{2}{c|}{Single-Label} & \multicolumn{2}{c}{Dual-Label} \\ & H=1 & H=2 & H=3 & H=4 & H=5 & H=1 & H=2 & H=1 & H=2 \\ \hline T=500 & 79.0 & 62.4 & 52.1 & 37.2 & 27.4 & 40.0 & 32.8 & 47.6 & 36.7 \\ T=1,500 & 73.0 & 65.4 & 50.2 & 36.4 & 28.7 & 38.3 & 32.8 & 46.6 & 36.9 \\ T=5,000 & **74.5** & **65.6** & **56.0** & **42.7** & **30.8** & **43.7** & **36.4** & **49.0** & **39.5** \\ T=15,000 & 72.7 & 64.7 & 50.5 & 34.6 & 28.3 & 41.4 & 33.5 & 48.8 & 38.6 \\ T=50,000 & 77.9 & 67.0 & 52.9 & 39.0 & 28.8 & 39.0 & 31.7 & 46.3 & 34.4 \\ \hline \end{tabular} \end{table} Table 6: Effect of hyperparameter \(T\) in Eq. 4 (%) of PL-FMS on CIFAR100. 
\begin{table} \begin{tabular}{c|c c c} \hline Methods & Disjoint [5] & Blurry [6] & i-Blurry [32] \\ \hline ER & 36.6\(\pm\)1.35 & 24.5\(\pm\)1.79 & 38.7\(\pm\)0.51 \\ EWC++ & 36.7\(\pm\)1.04 & 24.3\(\pm\)1.20 & 38.7\(\pm\)1.06 \\ MIR & 34.5\(\pm\)0.97 & 24.0\(\pm\)0.34 & 38.1\(\pm\)0.69 \\ RM & 35.4\(\pm\)1.12 & 37.8\(\pm\)0.81 & 36.7\(\pm\)1.32 \\ GDumb & 26.3\(\pm\)0.43 & 25.9\(\pm\)0.08 & 32.1\(\pm\)0.63 \\ CLIB & 38.0\(\pm\)1.44 & 38.3\(\pm\)0.42 & 43.4\(\pm\)0.44 \\ \hline FMS & **39.2\(\pm\)0.34** & **41.3\(\pm\)1.98** & **45.3\(\pm\)1.02** \\ \hline \end{tabular} \end{table} Table 3: Experimental results of baseline and FMS evaluated on three CL setups: conventional (disjoint), blurry, and i-Blurry. Test accuracy at the final task (%) was measured for each setup and averaged over three runs with standard deviation reported. Table 5: CIFAR100 multiple-depth scenario results (%) across three runs. 'Oracle': All-classes-at-once (offline batch learning, assuming unlimited access to true class hierarchy labels during training). 'PL-FMS-T': PL-FMS with true class hierarchy labels. ## 6 Conclusion In this work, we propose hierarchical label expansion (HLE), novel hierarchical class incremental task configurations with an online learning constraint, which complement existing CL setups by mimicking knowledge expansion. Then, we propose Pseudo-Labeling (PL) based memory management and Flexible Memory Sampling (FMS) to tackle these newly proposed CL setups by fully exploiting the inherent data hierarchy. Our proposed method outperforms prior state-of-the-art works by significant margins on our HLE setups across all levels of hierarchies, regardless of depth and class imbalance, while also outperforming them on the previous disjoint, blurry and i-Blurry CL setups. ## Acknowledgments This work was partly supported by the National Research Foundation of Korea(NRF) grants funded by the Korea government (MSIT) (NRF-2022R1A4A1030579, NRF-2022M3C1A309202211, NRF-2022R1A2C4002300 5%), IITP grants (No.2020-0-01361, AI GS Program (Yonsei University) 5%, No.2021-0-02068, AI Innovation Hub 5%, 2022-0-00077 5%, 2022-0-00113 5%, 2022-0-00959 5%) funded by the Korea government (MSIT), and Creative-Pioneering Researchers Program through Seoul National University. Also, the authors acknowledge the financial support from the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University.
2310.01255
Physics-Dynamics-Chemistry Coupling Across Different Meshes in LFRic-Atmosphere: Formulation and Idealised Tests
The main components of an atmospheric model for numerical weather prediction are the dynamical core, which describes the resolved flow, and the physical parametrisations, which capture the effects of unresolved processes. Additionally, models used for air quality or climate applications may include a component that represents the evolution of chemicals and aerosols within the atmosphere. While traditionally all these components use the same mesh with the same resolution, we present a formulation for the different components to use a series of nested meshes, with different horizontal resolutions. This gives the model greater flexibility in the allocation of computational resources, so that resolution can be targeted to those parts which provide the greatest benefits in accuracy. The formulation presented here concerns the methods for mapping fields between meshes, and is designed for the compatible finite element discretisation used by LFRic-Atmosphere, the Met Office's next-generation atmosphere model. Key properties of the formulation include the consistent and conservative transport of tracers on a mesh that is coarser than the dynamical core, and the handling of moisture to ensure mass conservation without generation of unphysical negative values. Having presented the formulation, it is then demonstrated through a series of idealised test cases which show the feasibility of this approach.
Alex Brown, Thomas M. Bendall, Ian Boutle, Thomas Melvin, Ben Shipway
2023-10-02T14:45:12Z
http://arxiv.org/abs/2310.01255v1
Physics-Dynamics-Chemistry Coupling Across Different Meshes in LFRic-Atmosphere: Formulation and Idealised Tests ###### Abstract The main components of an atmospheric model for numerical weather prediction are the _dynamical core_, which describes the resolved flow, and the _physical parametrisations_, which capture the effects of unresolved processes. Additionally, models used for air quality or climate applications may include a component that represents the evolution of chemicals and aerosols within the atmosphere. While traditionally all these components use the same mesh with the same resolution, we present a formulation for the different components to use a series of nested meshes, with different horizontal resolutions. This gives the model greater flexibility in the allocation of computational resources, so that resolution can be targeted to those parts which provide the greatest benefits in accuracy. The formulation presented here concerns the methods for mapping fields between meshes, and is designed for the compatible finite element discretisation used by LFRic-Atmosphere, the Met Office's next-generation atmosphere model. Key properties of the formulation include the consistent and conservative transport of tracers on a mesh that is coarser than the dynamical core, and the handling of moisture to ensure mass conservation without generation of unphysical negative values. Having presented the formulation, it is then demonstrated through a series of idealised test cases which show the feasibility of this approach. ## 1 Introduction Due to the complexity of the equations that describe the evolution of the atmosphere, the numerical models typically used in simulating the weather and climate are broken down into different components, each describing different processes. The _dynamical core_ (or "dynamics") discretises the equations for resolved fluid motions. The _physical parametrisations_ (or "physics") capture the non-fluid processes and the non-resolved fluid processes. A final component, most often found in climate and air quality models, describes the transport of aerosols and chemicals, and the reactions between the chemicals (this component will be referred to collectively as "chemistry" throughout). As discussed by Gross et al. (2018), these components are generally written independently from one another, but coupled together in some way to form the whole atmospheric model. This structure has often evolved naturally, as the complexity of the equations governing the Earth system necessitates different terms being discretised and evaluated separately. Traditionally, in numerical weather prediction (NWP) and climate models the dynamical core, physical parametrisations and the chemistry component are computed on the same mesh, and often this choice has been made to simplify the coupling between the different components. Notable exceptions to this include those models which use spectral element or spectral transform methods in their dynamical core, such as ECMWF's IFS model (Roberts et al., 2018; Malardel et al., 2016); NCAR's CAM-SE spectral element model, in which Herrington et al. (2019) and Herrington et al. (2019) have recently explored the use of a coarser physics grid; and the Department of Energy's E3SM spectral element model, in which Hannah et al. (2021) and Bradley et al. (2022) have investigated the use of alternative physics and tracer transport grids. 
Another related endeavour is a climate configuration of the Met Office's Unified Model (UM), known as Junior-Senior, which is motivated by reducing the large computational cost of the chemistry component (Stringer et al., 2018).

This work explores removing the assumption that the different atmospheric components use the same grid, in the context of the Met Office's next-generation LFRic-Atmosphere model, which uses a compatible finite element discretisation in its dynamical core, GungHo. In this paper we present a formulation for coupling together the different-resolution dynamics, physics and chemistry components, inspired by the approach of Herrington et al. (2019a) and Herrington et al. (2019b). The formulation is then tested through a series of idealised examples. Future work will seek to investigate and understand the consequences of this new capability within full NWP and climate models.

### Background and Motivation

Whilst traditionally the components of atmospheric models use a mesh of the same resolution, the concept of using different meshes for different components is not novel. However, there are contrasting arguments for how the resolution of the physical parametrisations should be changed relative to that of the dynamical core; Gross et al. (2018) present both arguments.

On the one hand, it is argued that computing physics on a high-resolution mesh means sampling the fields from the dynamical core more finely, comparing this to the "subcolumns" approach that is used in some cloud schemes. The physical parametrisations generally describe non-linear processes, so computing these at a higher resolution may give a better representation of their effect on the resolved flow. One example of a model that uses a higher resolution for the physical parametrisations is ECMWF's spectral IFS model (Roberts et al., 2018). The grid on which the physical parametrisations are computed has more degrees of freedom than the number of wave modes used in the spectral part of the dynamical core, which Malardel et al. (2016) found to give reduced aliasing and better mass conservation. Whether the same benefits would apply to non-spectral models is not clear.

On the other hand, as argued by Lander and Hoskins (1997), the physics should perhaps only be passed well-resolved "believable" scales from the dynamics, as the numerical errors in the solutions may be amplified by the non-linear physical parametrisations. These numerical errors are likely to be largest at the smallest scales of motion, which are generally poorly resolved by the dynamical core. Therefore, by computing the physics at a coarser resolution than the dynamics, the physical parametrisations only act upon fields from which these poorly resolved scales have been filtered. This approach of Lander and Hoskins (1997) was also considered in the context of a spectral transform model.

A final factor relates to computational cost. If the different components of the atmospheric model can use different grids, then the computational resources can be targeted to the part of the model that provides the greatest benefit. Alternatively, there may be parts of the model whose resolution can be reduced without particularly degrading the solution quality, freeing up computational resources to be assigned elsewhere, possibly into increasing model complexity rather than resolution.

Some of these ideas have been explored in the spectral element CAM-SE model.
Herrington et al. (2019a) implemented an alternative quasi-equal-area finite volume physics grid, which had the same number of degrees of freedom as the dynamics grid, and on which tracer advection was also computed. This reduced grid imprinting and spurious vertical velocity noise over orography. Herrington et al. (2019b) extended this to use a coarser physics grid, with a 5/9 reduction in the number of columns in the physics grid with respect to the dynamics grid, while the tracer grid remained at the same effective resolution as the dynamics grid. In both Herrington et al. (2019a) and Herrington et al. (2019b), the model's prognostic variables were mapped from the dynamics grid to the other grids, while only increments were mapped back to the dynamics grid. Momentum components were interpolated by evaluating their basis functions at the physics degrees of freedom, while pressure and temperature variables were integrated over the coarse control volumes. Tracers were mapped to the physics grid by a high-order reconstruction that preserved tracer shape and linear correlations and conserved mass. Increments were mapped with an alternative algorithm which alters the mixing ratio increment in order to preserve shape and linear correlations and conserve mass, as well as maintaining consistency and positivity. Herrington et al. (2019b) demonstrated through aquaplanet simulations that the effective resolution was not degraded, allowing for future computational savings. It also reduced noise over steep orography at element boundaries, a common problem in spectral element models, shown through a Held-Suarez test with real orography.

Hannah et al. (2021) expanded on this approach in the spectral element E3SM model, investigating the use of a higher-resolution mesh for the physics parametrisations, but found no qualitative benefit. When the physics parametrisation mesh was lower-resolution, Hannah et al. (2021) showed no degradation in the solution for a simulated climate. The lower-resolution physics grid was further shown to reduce grid imprinting over orography and demonstrated significant computational savings. Bradley et al. (2022) extended this by implementing an alternative grid for tracer transport, using an interpolating semi-Lagrangian finite element transport scheme, in the E3SM model where physics and chemistry are on the lower-resolution physics grid.

The Met Office's hybrid-resolution version of the UKESM earth system model, known as Junior-Senior (Stringer et al., 2018), runs a high-resolution version of the Met Office's Unified Model (UM) without UKCA chemistry and aerosol (dynamics and physics) driving a low-resolution version of the UM with UKCA (dynamics, physics and chemistry). This is motivated by reducing the significant computational cost of the chemistry model, which contains a large number of chemical and aerosol species. The formulation presented in this paper could be used to address the same problem in the new LFRic-Atmosphere model.

The work presented here has taken significant inspiration from Herrington et al. (2019a) and Herrington et al. (2019b), but differs in some key elements. Whereas those works used a spectral element dynamical core, this work considers a dynamical core with the lowest-order compatible finite element discretisation of Melvin et al. (2019). This has implications for the staggering of the different prognostic variables and hence the operators used to map these fields.
In this work, the mesh used for the physical parametrisations has the same structure as that of the dynamical core (but of a different resolution, and with cells exactly nested within or exactly nesting those of the dynamical core), whereas the approach in CAM-SE used a finite volume grid overlaying the spectral element grid. The two approaches also preserve similar but subtly different properties.

### Scope

The formulation presented in this work is designed for a model with three constituent parts: a dynamical core, coupled to a chemistry and aerosol model, and a set of physical parametrisations. The dynamical core evolves a set of dynamical prognostic variables (including fields describing the moist composition of the atmosphere), while the chemistry and aerosol component evolves a different set of variables, describing the chemical and aerosol species in the atmosphere. The physical parametrisations provide updates to the dynamical prognostic variables and the chemical and aerosol species, but also depend on a set of prescribed auxiliary variables. The chemicals and aerosols do not feed directly back into the dynamical core, but they may appear as auxiliary fields to the physics schemes. Following the motivations laid out earlier in Section 1.1, the formulation is designed for three different types of interaction between these components:

1. physical parametrisations that are computed on a finer mesh than the dynamical core;
2. physical parametrisations that are computed on a coarser mesh than the dynamical core;
3. a chemistry and aerosol component (including tracer transport) computed on a coarser mesh than the dynamical core.

The interactions between components that use different meshes involve mapping fields from one mesh to another. To avoid complications relating to the averaging of vector-valued fields, only physics parametrisations providing updates to scalar-valued fields are computed on a different mesh to the dynamical core.

The choices of mesh for these components are constrained by some crucial simplifications. The three-dimensional meshes are extruded, so that they are the product of a two-dimensional horizontal mesh with a vertical one-dimensional mesh. The two-dimensional horizontal mesh consists of quadrilateral cells, resulting in hexahedral cells in the three-dimensional mesh. All the components use meshes with the same vertical structure, so that the resolution only differs in the horizontal part. Cells on a finer mesh are exactly nested within those of a coarser mesh, which offers two significant design advantages. Firstly, it is straightforward to calculate the size of the overlapping region between cells on two different meshes. Secondly, this facilitates an efficient parallel distribution of memory, so that data corresponding to fields on different meshes can be geographically distributed in the same way, minimising the amount of data communication required to map fields from one mesh to another. A minimal sketch of this nesting bookkeeping is given at the end of this section.

The purpose of this paper is to present the formulation used in LFRic-Atmosphere for coupling together these components when they use meshes of different resolutions. The approach is demonstrated through a series of idealised test cases, which illustrate various aspects of the formulation. In particular we focus on the transport of tracers on a coarser mesh, and at this stage do not demonstrate a dynamical core coupled to a full suite of physical parametrisations or a chemistry and aerosol model.
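The exact-nesting bookkeeping referred to above can be made concrete with a short sketch. The following is a minimal illustration, assuming a factor-2 horizontal refinement of a quadrilateral mesh; the function names (`fine_cells_in_coarse`, `coarse_cell_of_fine`) are purely illustrative and do not correspond to the LFRic infrastructure.

```python
# A minimal sketch of exact nesting between two quadrilateral horizontal
# meshes, assuming a factor-2 refinement; indices are (i, j) cell positions.

def fine_cells_in_coarse(ic, jc, ratio=2):
    """Return the horizontal indices of the fine cells nested in coarse cell (ic, jc)."""
    return [(ratio * ic + di, ratio * jc + dj)
            for di in range(ratio) for dj in range(ratio)]

def coarse_cell_of_fine(i_f, j_f, ratio=2):
    """Return the coarse cell that exactly contains fine cell (i_f, j_f)."""
    return (i_f // ratio, j_f // ratio)

# Exact nesting makes the overlap trivial: each fine cell lies wholly inside
# one coarse cell, so no geometric intersection has to be computed.
assert all(coarse_cell_of_fine(i, j) == (3, 5)
           for (i, j) in fine_cells_in_coarse(3, 5))
```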
Future work will extend this approach to full NWP and climate configurations, and will explore the consequences of different choices of mesh for individual physics schemes and the subsequent effects on the model's performance.

The remainder of the paper is organised as follows. Section 2 specifies the prognostic variables used by the model, and also sets out the notation used in this paper to describe the formulation for coupling components of different resolutions. Then, Section 3 discusses the properties of the formulation that we consider to be important. The formulation, including the specific operators for mapping fields between meshes, is presented in Section 4, which also shows that these operators satisfy the properties of Section 3. Section 5 demonstrates the formulation through idealised test cases.

## 2 Preliminaries

### Prognostic Variables

LFRic-Atmosphere's dynamical core, called GungHo, solves for the wind velocity \(\mathbf{u}\), the dry density \(\rho_{d}\), the Exner pressure \(\Pi\) and the (dry) potential temperature \(\theta\). There are \(N_{r}\) species of moisture which are described through mass mixing ratios, with the \(r\)-th species given by \(m_{r}:=\rho_{r}/\rho_{d}\), where \(\rho_{r}\) is a moisture density. Collectively these prognostic variables can be described as a single state vector \(\mathbf{X}\),

\[\mathbf{X}=\left(\mathbf{u},\rho_{d},\Pi,\theta,m_{1},\dots,m_{N_{r}}\right). \tag{1}\]

The mass mixing ratio \(a_{Y}:=\rho_{Y}/\rho_{d}\) is also used to represent the \(Y\)-th chemical/aerosol species, so that if the model evolves \(N_{Y}\) species, then the vector \(\mathbf{Y}\) can be used for the chemical and aerosol species:

\[\mathbf{Y}=\left(a_{1},\dots,a_{N_{Y}}\right). \tag{2}\]

The model solves the compressible Euler equations, with additional equations for the moisture, chemical and aerosol variables:

\[\frac{\partial\mathbf{u}}{\partial t}+\left(\mathbf{u}\mathbf{\cdot}\mathbf{\nabla}\right)\mathbf{u}+2\mathbf{\Omega}\times\mathbf{u}+\frac{c_{p}\theta(1+m_{v}R_{v}/R_{d})}{1+\sum_{r=1}^{N_{r}}m_{r}}\mathbf{\nabla}\Pi+\mathbf{g}=\mathbf{S}_{u}, \tag{3a}\]
\[\frac{\partial\rho_{d}}{\partial t}+\mathbf{\nabla}\cdot\left(\rho_{d}\mathbf{u}\right)=0,\] (3b)
\[\frac{\partial\theta}{\partial t}+\left(\mathbf{u}\mathbf{\cdot}\mathbf{\nabla}\right)\theta=S_{\theta},\] (3c)
\[\frac{\partial m_{r}}{\partial t}+\left(\mathbf{u}\mathbf{\cdot}\mathbf{\nabla}\right)m_{r}=S_{r},\quad r\in[1,N_{r}],\] (3d)
\[\frac{\partial a_{Y}}{\partial t}+\left(\mathbf{u}\mathbf{\cdot}\mathbf{\nabla}\right)a_{Y}=S_{Y},\quad Y\in[1,N_{Y}], \tag{3e}\]

where \(\mathbf{S}_{u}\), \(S_{\theta}\) and \(S_{r}\) represent the changes to the prognostic variables that are computed through the physical parametrisations. The \(S_{Y}\) variables describe the sources, sinks and reactive effects computed by the chemical and aerosol model. Equation (3) is supplemented by the equation of state for an ideal gas,

\[\Pi=\left(\frac{\rho_{d}R_{d}\theta(1+m_{v}R_{v}/R_{d})}{p_{0}}\right)^{\frac{R_{d}}{c_{p}-R_{d}}}, \tag{4}\]

with \(m_{v}\) as the mixing ratio of water vapour. The constants are: the specific gas constant for dry air \(R_{d}\), the specific gas constant for water vapour \(R_{v}\), the specific heat capacity of dry air at constant pressure \(c_{p}\), the reference pressure \(p_{0}\), the gravitational field vector \(\mathbf{g}\) and the Earth's rotation vector \(\mathbf{\Omega}\).
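To make (4) concrete, the following is a minimal numerical sketch of the equation of state, assuming representative values for the physical constants (GungHo's actual configuration defines its own) and the exponent \(R_{d}/(c_{p}-R_{d})\); `exner_pressure` is an illustrative name.

```python
import numpy as np

# A minimal sketch of the equation of state (4), with representative SI
# constants; the exponent R_d / (c_p - R_d) follows from p = p_0 Pi^(c_p/R_d)
# and the ideal gas law.
R_d, R_v, c_p, p_0 = 287.05, 461.5, 1005.0, 1.0e5

def exner_pressure(rho_d, theta, m_v):
    """Exner pressure from dry density, dry potential temperature and the
    water-vapour mixing ratio, following (4)."""
    theta_v = theta * (1.0 + m_v * R_v / R_d)   # virtual potential temperature
    return (rho_d * R_d * theta_v / p_0) ** (R_d / (c_p - R_d))

# e.g. near-surface values: rho_d ~ 1.2 kg m^-3, theta ~ 300 K, m_v ~ 0.01
print(exner_pressure(1.2, 300.0, 0.01))  # close to 1, i.e. p close to p_0
```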
### Overview of LFRic-Atmosphere

LFRic-Atmosphere is the Met Office's new weather forecasting and climate model, designed to exploit the next generation of supercomputers, as described in Adams et al. (2019). A major issue in adapting the Met Office's Unified Model (UM) (Wood et al., 2014; Walters et al., 2017) to these supercomputers is the latitude-longitude mesh used for global simulations by the UM's dynamical core, ENDGame. The latitude-longitude mesh has a convergence of spatial points at the poles, which leads to a bottleneck in data communication and a resolution gap between the poles and the equator. This presents an unsustainable constraint on the UM's scalability as horizontal resolution is increased. Key to LFRic's design is the use of a quasi-uniform cubed-sphere mesh, in both the physical parametrisations and the dynamical core, GungHo.

ENDGame used C-grid and Charney-Phillips staggerings to obtain good linear wave dispersion properties and to avoid computational modes. It was shown by Cotter and Shipton (2012), Cotter and Thuburn (2014) and Thuburn and Cotter (2015) that a compatible finite element discretisation can replicate these desirable properties, while also facilitating the move to a non-orthogonal mesh. In the compatible finite element discretisation used by GungHo, all of the prognostic variables are discretised as a sum of coefficients multiplying basis functions, with the basis functions localised to a single element or set of elements surrounding a cell edge or vertex. A _finite element_ is described by the choice of basis functions (usually polynomials) and their continuity between cells; the combination of a finite element with the model's mesh then defines the _function space_. In a _compatible_ finite element discretisation, variables lie in function spaces that form a de Rham complex, so that the vector calculus relationships between the discretised variables mimic those from the continuous equations. A formal discussion of these concepts can be found in Arnold et al. (2010) and Cotter (2023). GungHo uses the lowest-order finite elements of the Raviart-Thomas de Rham complex, which are extended to hexahedral cells through a tensor-product construction.

In this compatible finite element set-up, the prognostic variables are contained within three function spaces: \(\mathbb{V}_{u}\), \(\mathbb{V}_{\theta}\) and \(\mathbb{V}_{\rho}\) (with the subscript denoting the variables contained within those spaces). The degrees of freedom (DoFs) of \(\mathbb{V}_{\rho}\) lie at the centre of cells, which corresponds to basis functions that are constant within a cell (and discontinuous between cells). The Arakawa C-grid is replicated by staggering the DoFs of \(\mathbb{V}_{u}\) from those of \(\mathbb{V}_{\rho}\), so that the DoFs of \(\mathbb{V}_{u}\) are located at the faces of cells. The values of fields at the \(\mathbb{V}_{u}\) DoFs then represent the normal fluxes of that field through the faces of the element. The compatibility of \(\mathbb{V}_{u}\) and \(\mathbb{V}_{\rho}\) means that for any \(\mathbf{u}\in\mathbb{V}_{u}\), then \(\mathbf{\nabla}\mathbf{\cdot}\mathbf{u}\in\mathbb{V}_{\rho}\). The DoFs of \(\mathbb{V}_{\theta}\) are co-located with the vertical component of \(\mathbb{V}_{u}\), and so are located at the centre of the top or bottom surfaces of cells, which was shown by Melvin et al. (2018) to mimic the Charney-Phillips staggering. More description of these spaces is given by Melvin et al. (2019) and Bendall et al. (2020), while representations of them are displayed in Table 1.
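The staggering described above fixes how many DoFs each space has on a given mesh. The following sketch counts them for an extruded mesh of hexahedral cells, assuming doubly periodic horizontal boundaries for simplicity (a cubed-sphere panel would differ at its edges); `dof_counts` is an illustrative name.

```python
# A rough sketch of the DoF counts implied by the lowest-order staggering,
# for an nx x ny horizontal mesh with nz levels, assuming doubly periodic
# horizontal boundaries.

def dof_counts(nx, ny, nz):
    n_Vrho   = nx * ny * nz             # one DoF per cell centre
    n_Vtheta = nx * ny * (nz + 1)       # DoFs on top/bottom faces of cells
    n_Vu = (nx * ny * nz                # x-normal faces (periodic in x)
            + nx * ny * nz              # y-normal faces (periodic in y)
            + nx * ny * (nz + 1))       # z-normal faces, nz + 1 face levels
    return n_Vrho, n_Vtheta, n_Vu

print(dof_counts(4, 4, 3))  # (48, 64, 160)
```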
With these function spaces, (3) is discretised by taking \(\mathbf{u}\in\mathbb{V}_{u}\) and \(\rho_{d},\Pi\in\mathbb{V}_{\rho}\). In this work we consider chemical and aerosol variables with mixing ratios \(a_{Y}\in\mathbb{V}_{\rho}\), although the formulation in Section 4 can be extended to the case of \(a_{Y}\in\mathbb{V}_{\theta}\). The moisture variables are co-located with \(\theta\), so that \(\theta,m_{r}\in\mathbb{V}_{\theta}\), to give an accurate representation of the saturation curve and the latent heat exchanges associated with changes of phase.

\begin{table} \begin{tabular}{c|c|c|c|c} Space & \(\mathbb{V}_{u}\) & \(\mathbb{V}_{\theta}\) & \(\mathbb{V}_{\rho}\) & \(\widetilde{\mathbb{V}}_{\rho}\) \\ \hline Variables & \(\mathbf{u}\) & \(\theta\), \(m_{r}\) & \(\rho_{d}\), \(\Pi\), \(a_{Y}\) & \(\widetilde{\rho}_{r}\) \\ \hline \end{tabular} \end{table} Table 1: The finite elements used by GungHo in the discretisation of its prognostic variables. The spaces \(\mathbb{V}_{u}\) and \(\mathbb{V}_{\rho}\) form part of a de Rham complex, so that if \(\mathbf{u}\in\mathbb{V}_{u}\) then \(\mathbf{\nabla}\mathbf{\cdot}\mathbf{u}\in\mathbb{V}_{\rho}\). The degrees of freedom for \(\mathbb{V}_{u}\) correspond to the fluxes through each face of the hexahedron, while there is one degree of freedom per cell for \(\mathbb{V}_{\rho}\), representing the field's value at the cell's centre. The degrees of freedom for \(\mathbb{V}_{\theta}\) are in the centre of the top and bottom faces of cells. The density of the \(r\)-th moisture species, \(\widetilde{\rho}_{r}\), is described using the same elements as \(\mathbb{V}_{\rho}\) but on a vertically-shifted mesh.

### Moisture conservation

With \(m_{r}\in\mathbb{V}_{\theta}\), conservation of the mass of moisture requires more steps than if it were located in \(\mathbb{V}_{\rho}\). As described by Bendall et al. (2023), this is addressed in GungHo by the introduction of a vertically-shifted mesh, whose vertical levels are halfway between those of the primary mesh. The top and bottom surfaces of the primary mesh and the vertically-shifted mesh coincide. The density of a moisture species \(\widetilde{\rho}_{r}\) is defined on this vertically-shifted mesh, using the same elements as \(\mathbb{V}_{\rho}\) (with DoFs in cell centres), with this new space written as \(\widetilde{\mathbb{V}}_{\rho}\), where the tilde \(\widetilde{\cdot}\) denotes a quantity on the vertically-shifted mesh. The vertically-shifted mesh then has one more level than the primary mesh, so that \(\widetilde{\mathbb{V}}_{\rho}\) has the same number of DoFs as \(\mathbb{V}_{\theta}\). A similar mesh is used by Thuburn (2022) to obtain entropy conservation with a Charney-Phillips staggering.

The moisture density is calculated from \(m_{r}\) and \(\rho_{d}\) by converting the two fields to the \(\widetilde{\mathbb{V}}_{\rho}\) space. This uses two operators, \(\mathcal{M}:\mathbb{V}_{\theta}\to\widetilde{\mathbb{V}}_{\rho}\) and \(\mathcal{Q}:\mathbb{V}_{\rho}\to\widetilde{\mathbb{V}}_{\rho}\), so that

\[\widetilde{\rho}_{r}=\mathcal{M}[m_{r}]\times\mathcal{Q}[\rho_{d}], \tag{5}\]

with the values of \(\widetilde{\rho}_{r}\) given by the pointwise product of \(\mathcal{M}[m_{r}]\) and \(\mathcal{Q}[\rho_{d}]\). The details of these operators will be discussed in Section 4.5. The dynamical core then conserves the following definition of moist mass:

\[\int_{\Omega}\widetilde{\rho}_{r}\,\mathrm{d}V. \tag{6}\]
There is a vertically-shifted mesh corresponding to each mesh with different horizontal resolution, and so the shifting operators \(\mathcal{M}\) and \(\mathcal{Q}\) can also be defined on meshes with finer and coarser horizontal resolutions.

### Notation

It is convenient at this point to introduce the notation that is used in the rest of the paper. Let the dynamical prognostic variables \(\mathbf{X}\) evolved by the model be contained in some abstract space \(\mathbb{V}_{X}\) so that \(\mathbf{X}\in\mathbb{V}_{X}\), while the prognostic chemicals and aerosols \(\mathbf{Y}\) are contained in a space \(\mathbb{V}_{Y}\). These components may use meshes of different resolutions to one another. Entities on a mesh at a finer resolution than that of the dynamical core are denoted with a hat \(\widehat{\cdot}\). An overline \(\overline{\cdot}\) denotes entities on a coarser mesh than that of the dynamical core. As mentioned in the previous section, a tilde \(\widetilde{\cdot}\) is used to denote entities on a vertically-shifted mesh. Unadorned entities are on the same mesh as that used by the dynamical core. With this notation, the components of the model described in Section 1.2 that we will use in the remainder of the paper can be represented by the following operators:

1. the dynamical core, \(\mathcal{D}:\mathbb{V}_{X}\to\mathbb{V}_{X}\);
2. physics schemes that are computed on a finer mesh than the dynamical core, \(\widehat{\mathcal{P}}:\widehat{\mathbb{V}}_{X}\to\widehat{\mathbb{V}}_{X}\);
3. physics schemes that are computed on a coarser mesh than the dynamical core, \(\overline{\mathcal{P}}:\overline{\mathbb{V}}_{X}\to\overline{\mathbb{V}}_{X}\);
4. the chemistry and aerosol component on a coarser mesh than the dynamical core, \(\overline{\mathcal{C}}:(\overline{\mathbb{V}}_{Y},\overline{\mathbb{V}}_{X})\to\overline{\mathbb{V}}_{Y}\).

The interactions between components that use different meshes involve mapping fields from one mesh to another. These mappings can also be represented by the action of operators:

\[\mathcal{A}:\widehat{\mathbb{V}}_{X}\to\mathbb{V}_{X},\quad\mathcal{B}:\mathbb{V}_{X}\to\widehat{\mathbb{V}}_{X}, \tag{7}\]

so that \(\mathcal{A}\) maps fields to a coarser mesh, while \(\mathcal{B}\) maps fields to a finer mesh. Related operators can be defined for mapping fields between \(\mathbb{V}_{X}\) and \(\overline{\mathbb{V}}_{X}\), although for brevity these are also denoted by \(\mathcal{A}\) and \(\mathcal{B}\). Thus \(\mathcal{A}\) is akin to the _restriction_ operators used in the geometric multi-grid solver technique (see for instance Maynard et al. (2020)), whereas \(\mathcal{B}\) performs the role of a _prolongation_ operator. It is also helpful to introduce the identification and reconstruction operators for mapping fields to finer meshes:

\[\mathcal{I}:\mathbb{V}_{X}\to\widehat{\mathbb{V}}_{X},\quad\mathcal{R}:\mathbb{V}_{X}\to\widehat{\mathbb{V}}_{X}. \tag{8}\]

Figure 1: A vertical cross-section illustrating the vertically-shifted mesh used in GungHo to describe moisture density, with solid black lines showing the top/bottom surfaces of elements and dotted grey lines showing the vertical centres of the levels. The moisture mixing ratio \(m_{r}\) is co-located with \(\theta\) at the top and bottom surfaces of elements on the primary mesh, while the moisture density \(\widetilde{\rho}_{r}\) is described at cell centres on a vertically-shifted mesh. The top and bottom surfaces of elements on the vertically-shifted mesh coincide with the cell centres of elements on the primary mesh, so that the elements are shifted relative to those on the primary mesh. The vertically-shifted mesh has one more level than the primary mesh.
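As an illustration of the shifted-mesh construction of Section 2.3, the following single-column sketch builds the moisture density (5) and checks the moist mass (6). The particular choices made here, \(\mathcal{M}\) acting as the identity on DoF values (since \(\widetilde{\mathbb{V}}_{\rho}\) and \(\mathbb{V}_{\theta}\) have matching DoF counts in a column) and \(\mathcal{Q}\) splitting each primary cell's mass equally between the two overlapping shifted cells, are simplifying assumptions; the operators actually used are those discussed in Section 4.5.

```python
import numpy as np

# A single-column sketch of the vertically-shifted moisture density (5)-(6).
# Assumptions: uniform primary levels; M taken as the identity on DoF values;
# Q splits each primary cell's mass equally between the two shifted cells
# overlapping it, so that total mass is preserved.

nz = 4
dz = np.ones(nz)                                              # primary layers
dz_shifted = np.concatenate([[0.5], np.ones(nz - 1), [0.5]])  # nz + 1 layers

def M(m):
    return m.copy()                          # identity stand-in for M

def Q(rho):
    mass = rho * dz
    mass_shifted = np.zeros(nz + 1)
    mass_shifted[:-1] += 0.5 * mass          # share to lower shifted cell
    mass_shifted[1:] += 0.5 * mass           # share to upper shifted cell
    return mass_shifted / dz_shifted         # convert back to a density

rho_d = np.array([1.2, 1.1, 1.0, 0.9])       # dry density in V_rho
m_r = np.full(nz + 1, 0.01)                  # constant mixing ratio in V_theta

rho_r = M(m_r) * Q(rho_d)                    # pointwise product, as in (5)
# The moist mass (6) matches the mass implied on the primary mesh:
assert np.isclose(np.sum(rho_r * dz_shifted), 0.01 * np.sum(rho_d * dz))
```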
In the absence of orography (described in the next section), \(\overline{\mathbb{V}}_{X}\subset\mathbb{V}_{X}\subset\widehat{\mathbb{V}}_{X}\). Fields on a coarser mesh can therefore be exactly represented, or _identified_, on a finer mesh, with this operation denoted by \(\mathcal{I}\). The identification operators only use information from a single coarse cell to determine the value of a field in a cell on a finer mesh. In contrast, the _reconstruction_ operator \(\mathcal{R}\) uses a stencil that takes field values from neighbouring coarse cells to obtain a higher-order reconstruction of the field. These operators are discussed in more detail in Section 4. Some of the operators and their interactions are illustrated in Figure 2, while all of the operators are listed in Table 2.

### Orography

GungHo uses terrain-following coordinates to describe the orography, so that the vertical coordinates of the mesh's vertices are modified to capture the effect of the planet's surface. In general, the top and bottom faces of cells are sloped, while the lateral faces are aligned with the model's vertical direction. When the model uses multiple meshes, the orography is first defined through the coordinates of the vertices on the finest mesh. The vertices of cells in the coarser meshes are chosen to be coincident with the corresponding vertices on the finer mesh. It should be noted that once the meshes have been modified to describe orography, cells in one layer on one mesh may overlap with cells of a different layer from another mesh. The volume of the fine cells nested within a coarse cell may not necessarily equal the volume of the coarse cell. This strategy is illustrated in Figure 3.

## 3 Properties of Formulation

Following the approach of Herrington et al. (2019a) and Herrington et al. (2019b), before introducing our formulation for coupling the components across different meshes, we list properties that we consider desirable for the formulation to possess.

\begin{table} \begin{tabular}{l|c|c|c} Operator & Notation & Domain & Co-domain \\ \hline Dynamical core & \(\mathcal{D}\) & \(\mathbb{V}_{X}\) & \(\mathbb{V}_{X}\) \\ Fine physics scheme & \(\widehat{\mathcal{P}}\) & \(\widehat{\mathbb{V}}_{X}\) & \(\widehat{\mathbb{V}}_{X}\) \\ Coarse physics scheme & \(\overline{\mathcal{P}}\) & \(\overline{\mathbb{V}}_{X}\) & \(\overline{\mathbb{V}}_{X}\) \\ Chemistry/aerosol model & \(\overline{\mathcal{C}}\) & \(\left(\overline{\mathbb{V}}_{Y},\overline{\mathbb{V}}_{X}\right)\) & \(\overline{\mathbb{V}}_{Y}\) \\ Restriction & \(\mathcal{A}\) & \(\widehat{\mathbb{V}}_{X}\) & \(\mathbb{V}_{X}\) \\ Prolongation & \(\mathcal{B}\) & \(\mathbb{V}_{X}\) & \(\widehat{\mathbb{V}}_{X}\) \\ Identification & \(\mathcal{I}\) & \(\mathbb{V}_{X}\) & \(\widehat{\mathbb{V}}_{X}\) \\ Reconstruction & \(\mathcal{R}\) & \(\mathbb{V}_{X}\) & \(\widehat{\mathbb{V}}_{X}\) \\ Shifting operator for density & \(\mathcal{Q}\) & \(\mathbb{V}_{\rho}\) & \(\widetilde{\mathbb{V}}_{\rho}\) \\ Shifting operator for mixing ratio & \(\mathcal{M}\) & \(\mathbb{V}_{\theta}\) & \(\widetilde{\mathbb{V}}_{\rho}\) \\ \end{tabular} \end{table} Table 2: A list of the operators used in the formulation of Section 4, showing the domain and co-domain of each.
Figure 2: A representation of a general atmospheric model with different components on different meshes, showing the three configurations considered in this work. The dynamical core, described by operator \(\mathcal{D}\), evolves the prognostic variables \(\boldsymbol{X}\). This is coupled to physical parametrisations \(\widehat{\mathcal{P}}\) and \(\overline{\mathcal{P}}\), which are computed on finer and coarser meshes respectively. The final model component is \(\overline{\mathcal{C}}\), which describes the evolution of \(\overline{\boldsymbol{Y}}\), the chemical and aerosol variables. These chemicals and aerosols may be used as auxiliary variables by a physics scheme (for instance a radiation scheme).

Throughout Sections 3 and 4, the properties will generally be discussed for mapping between \(\mathbb{V}_{X}\) and \(\widehat{\mathbb{V}}_{X}\), as the same operators are used for mapping between \(\mathbb{V}_{X}\) and \(\overline{\mathbb{V}}_{X}\).

1. **Reversibility**. The combination of restriction and prolongation operators must be chosen so that mapping a field from a coarser mesh to a finer mesh and back results in an unchanged field, i.e.
\[\mathcal{A}\left[\mathcal{B}\left[\mathbf{X}\right]\right]=\mathbf{X}.\] (9)
This does not hold if the roles of \(\mathcal{A}\) and \(\mathcal{B}\) are reversed, as information is lost when a field on a finer mesh is restricted to a coarser mesh.

2. **Preservation of a steady-state**. Consider a physical parametrisation that is computed upon a different mesh to the dynamical core. If this physical parametrisation does not change the prognostic variables _on the mesh of the physical parametrisation_, then the prognostic variables on the mesh of the dynamical core must not be changed by the combined process of mapping the prognostic fields to the physical parametrisation, computing the physical parametrisation and then mapping back.

3. **Conservation of mass of chemicals and aerosols**. When chemicals and aerosols are transported on the same mesh as the dynamical core, the masses of chemicals and aerosols are conserved. This should still be true if these chemicals and aerosols are represented on a coarser mesh than the dynamical core, so that the transport of chemicals and aerosols conserves
\[\int_{\overline{\Omega}}\mathcal{A}\left[\rho_{d}\right]\overline{a}_{Y}\,\mathrm{d}V,\] (10)
where \(\overline{\Omega}\) is the domain described by the coarser mesh.

4. **Preservation of constant chemical and aerosol mixing ratios**. The transport of chemicals and aerosols on a coarse mesh must preserve a constant mixing ratio. This can be described as _consistent transport_, as it implies that the chemical/aerosol densities evolve consistently with the density of dry air.

5. **Local conservation of mass of moisture species**. The dynamical core and physical parametrisations conserve the mass of moisture, in the absence of physical sources and sinks. This conservation is local, in the sense that there is a local closed mass budget, as moisture obeys a conservative form of the transport equation. The mapping operators for moisture should also conserve the mass of moisture locally, within a coarse cell and over the fine cells contained within it.

6. **Preservation of constant mixing ratios of moisture species**.
If a mixing ratio field takes a constant value \(C\), then this must be preserved by the mixing ratio mapping operators (denoted by subscript \(m\)), so that
\[\mathcal{A}_{m}\left[C\right]=C,\quad\text{and}\quad\mathcal{B}_{m}\left[C\right]=C.\] (11)

7. **Avoid generation of negative moisture mixing ratios**. Negative values of moisture mixing ratios are unphysical and so must not be generated by the mapping formulation. This is a weaker requirement than local shape preservation, which was considered by Herrington et al. (2019b), because the physical parametrisations themselves do not enforce local shape preservation, whereas they do ensure that negative values are not generated.

8. **Preservation of linear correlation of moisture mixing ratios**. If two moisture mixing ratios are linearly correlated on one mesh, so that \(m_{1}=\alpha m_{2}+\beta\) for constants \(\alpha\) and \(\beta\), then this linear correlation should hold after the two fields are mapped to another mesh. As described by Lauritzen and Thuburn (2012), these correlations can be important for determining the evolution of these variables. This is also a property held by the approach of Herrington et al. (2019b).

9. **Accuracy**. The order of accuracy of the prolongation mapping should match the accuracy of the dynamical core. For GungHo, this means second-order accuracy in space, so that a field varying linearly in space should be exactly represented.

Figure 3: An illustration of the strategy for describing the domain's orography for different meshes, through a vertical cross-section of one layer of elements. The solid lines represent elements from the finest mesh, while dashed lines represent a mesh with intermediate resolution and the dotted lines show the shape of the coarsest mesh. The discretisation uses terrain-following coordinates, so the mesh's vertical coordinates are distorted to describe the orography. The cell vertices of any coarser mesh are chosen to coincide with the appropriate vertices on the finest mesh, which defines the representation of the orography on the coarser meshes.

As discussed by Herrington et al. (2019b), conservation of other properties such as axial angular momentum, entropy or energy may be desirable but can be difficult to attain. However, GungHo does not inherently conserve these properties, so we do not see it as essential that they should be conserved by the formulation presented in the next section.

## 4 Formulation

To satisfy the desirable properties listed in Section 3, we place two requirements on the operators in the formulation:

**Requirement 1**.: _The restriction operator \(\mathcal{A}\) must act as the inverse of the identification operator \(\mathcal{I}\), so that for any prognostic variable \(\boldsymbol{X}\),_

\[\mathcal{A}\left[\mathcal{I}\left[\boldsymbol{X}\right]\right]=\boldsymbol{X}. \tag{12}\]

**Requirement 2**.: _The restriction operator \(\mathcal{A}\) and the prolongation operator \(\mathcal{B}\) must preserve a constant zero field, \(\boldsymbol{0}\):_

\[\mathcal{A}\left[\boldsymbol{0}\right]=\boldsymbol{0}\quad\text{and}\quad\mathcal{B}\left[\boldsymbol{0}\right]=\boldsymbol{0}. \tag{13}\]

Note that Requirement 2 applies to all fields, while the stronger constraint of Property 6 applies to just moisture mixing ratios.
Before discussing the restriction and prolongation operators for each of the prognostic variables, it is helpful to present features that are common to the operators for each of the scalar prognostic variables (the wind field is treated separately). To obtain the reversibility discussed in Property 1, the prolongation operators are chosen for all scalar variables (with an additional subtlety for the moisture variables discussed in Section 4.5) so that

\[\mathcal{B}\left[\boldsymbol{X}\right]\equiv\mathcal{R}\left[\boldsymbol{X}\right]-\mathcal{I}\left[\mathcal{A}\left[\mathcal{R}\left[\boldsymbol{X}\right]\right]\right]+\mathcal{I}\left[\boldsymbol{X}\right]. \tag{14}\]

This has the same form as the recovery operator used by Bendall et al. (2019) and Bendall and Wimmer (2023) to obtain reversibility and mass conservation when recovering fields from lower- to higher-order finite element spaces. Given Requirement 1, it can be seen that this structure for \(\mathcal{B}\) will satisfy Property 1, as

\[\mathcal{A}\left[\mathcal{B}\left[\boldsymbol{X}\right]\right]=\mathcal{A}\left[\mathcal{R}\left[\boldsymbol{X}\right]\right]-\mathcal{A}\left[\mathcal{I}\left[\mathcal{A}\left[\mathcal{R}\left[\boldsymbol{X}\right]\right]\right]\right]+\mathcal{A}\left[\mathcal{I}\left[\boldsymbol{X}\right]\right]=\mathcal{A}\left[\mathcal{R}\left[\boldsymbol{X}\right]\right]-\mathcal{A}\left[\mathcal{R}\left[\boldsymbol{X}\right]\right]+\boldsymbol{X}=\boldsymbol{X}, \tag{15}\]

so the choice of (14) ensures that Property 1 is obtained. With the form of (14), the reconstruction operator \(\mathcal{R}\) defines the accuracy of the prolongation operator, while the remaining two terms can be considered as a correction to provide reversibility. To meet Property 9, \(\mathcal{R}\) should then be chosen to have the same order of accuracy as the dynamical core. This form also means that the extrema of \(\boldsymbol{X}\) will always lie within the extrema of \(\mathcal{B}\left[\boldsymbol{X}\right]\).

To obtain Property 2, we take the same approach as Herrington et al. (2019a). Denoting the field before and after the physical parametrisation by superscripts \(n\) and \(n+1\), so that \(\boldsymbol{X}^{n+1}=\mathcal{P}\left[\boldsymbol{X}^{n}\right]\), the increment corresponding to the physical parametrisation is simply

\[\Delta\mathcal{P}\left[\boldsymbol{X}^{n}\right]=\boldsymbol{X}^{n+1}-\boldsymbol{X}^{n}. \tag{16}\]

To perform a physical parametrisation on a different mesh to the dynamical core, the updated prognostic fields are computed through

\[\boldsymbol{X}^{n+1}=\boldsymbol{X}^{n}+\mathcal{A}\left[\Delta\widehat{\mathcal{P}}\left[\mathcal{B}\left[\boldsymbol{X}^{n}\right]\right]\right],\quad\text{or}\quad\boldsymbol{X}^{n+1}=\boldsymbol{X}^{n}+\mathcal{B}\left[\Delta\overline{\mathcal{P}}\left[\mathcal{A}\left[\boldsymbol{X}^{n}\right]\right]\right]. \tag{17}\]

Thus before physical parametrisations, prognostic variables are mapped from one mesh to another, while after physical parametrisations, increments are mapped between meshes. The situations considered by Property 2 can be expressed in terms of increments: if \(\overline{\mathcal{P}}\left[\mathcal{A}\left[\boldsymbol{X}\right]\right]=\mathcal{A}\left[\boldsymbol{X}\right]\) then \(\Delta\overline{\mathcal{P}}\left[\mathcal{A}\left[\boldsymbol{X}\right]\right]=\boldsymbol{0}\). Provided that Requirement 2 holds, in this situation \(\boldsymbol{X}^{n+1}=\boldsymbol{X}^{n}+\mathcal{B}\left[\boldsymbol{0}\right]=\boldsymbol{X}^{n}\). A similar relation holds if the physical parametrisation is performed on a finer mesh.
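A schematic of the increment-based coupling (16)-(17) is sketched below, assuming a toy one-dimensional state, a two-cell-mean restriction and a piecewise-constant prolongation (a pair satisfying \(\mathcal{A}[\mathcal{B}[\boldsymbol{X}]]=\boldsymbol{X}\)); `physics` stands in for an arbitrary coarse-mesh parametrisation.

```python
import numpy as np

# A toy sketch of the increment-based coupling (16)-(17). Restriction A is a
# two-cell mean and prolongation B a piecewise-constant copy; `physics` is a
# stand-in for a parametrisation computed on the coarse mesh.

def A(x_fine):
    return 0.5 * (x_fine[0::2] + x_fine[1::2])

def B(x_coarse):
    return np.repeat(x_coarse, 2)

def physics(x_coarse):
    return x_coarse + 0.1 * np.sin(x_coarse)   # arbitrary illustrative update

x_n = np.linspace(0.0, 1.0, 8)                 # state on the dynamics mesh
dP = physics(A(x_n)) - A(x_n)                  # coarse increment, cf. (16)
x_np1 = x_n + B(dP)                            # second form of (17)
print(x_np1)

# Property 2: a physics scheme that does nothing on its own mesh leaves the
# dynamics-mesh state unchanged, since B preserves the zero field.
assert np.allclose(x_n + B(np.zeros(4)), x_n)
```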
However, the construction of (17) makes satisfying the preservation of moisture positivity (Property 7) challenging when the physical parametrisation is computed on a coarser mesh. Although it is assumed that physical parametrisations do not generate negative moisture mixing ratio values on the mesh upon which they act, when the increment is mapped to the dynamical core mesh and added to the original mixing ratio field, spurious negative values can still be generated. The solution to this is discussed in Section 4.5.

The remainder of this section specifies the particular restriction and prolongation operators for each prognostic variable, with a subscript to the operator denoting the variable, e.g. \(\mathcal{A}_{u}\) for the restriction operator for the velocity \(\boldsymbol{u}\). The prolongation operators \(\mathcal{B}_{\rho}\), \(\mathcal{B}_{\theta}\) and \(\mathcal{B}_{\Pi}\) take the form of (14), so only the identification and restriction operators \(\mathcal{I}\) and \(\mathcal{A}\) need specifying.

### Mapping operators for the pressure and potential temperature fields

The mapping operators for the Exner pressure \(\Pi\) and potential temperature \(\theta\) are very similar. The only difference is that \(\Pi\) is expressed at points located in cell centres, while \(\theta\) is vertically staggered from this. As the vertical structure of the different meshes is the same, the operators involve only horizontal reconstruction or averaging. Since the properties in Section 3 relating to \(\Pi\) and \(\theta\) are the same, the operators for \(\Pi\) and \(\theta\) take the same form as one another. Therefore this section only presents the operators for \(\Pi\).

The restriction of \(\Pi\) from a fine mesh to a coarse mesh consists of taking the arithmetic mean of the values in the fine cells contained within each coarse cell. Let the Exner pressure field in the \(j\)-th fine cell within the \(i\)-th coarse cell in the \(k\)-th layer be denoted by \(\widehat{\Pi}|_{i,j}^{k}\), and the value in the corresponding coarse cell be \(\Pi|_{i}^{k}\). If there are \(N_{j}\) fine cells in the \(i\)-th coarse cell then the action of \(\mathcal{A}_{\Pi}\) is given by

\[\mathcal{A}_{\Pi}\left[\widehat{\Pi}\right]\equiv\Pi|_{i}^{k}=\frac{1}{N_{j}}\sum_{j=1}^{N_{j}}\widehat{\Pi}|_{i,j}^{k}. \tag{18}\]

The identification operator \(\mathcal{I}_{\Pi}\) is simply:

\[\mathcal{I}_{\Pi}\left[\Pi\right]\equiv\widehat{\Pi}|_{i,j}^{k}=\Pi|_{i}^{k}. \tag{19}\]

Then this combination of \(\mathcal{I}_{\Pi}\) and \(\mathcal{A}_{\Pi}\) satisfies Requirement 1, as

\[\mathcal{A}_{\Pi}\left[\mathcal{I}_{\Pi}\left[\Pi|_{i}^{k}\right]\right]=\frac{1}{N_{j}}\sum_{j=1}^{N_{j}}\Pi|_{i}^{k}=\Pi|_{i}^{k}. \tag{20}\]

The final operator is the reconstruction operator \(\mathcal{R}_{\Pi}\), which uses a stencil over the \(N_{l}\) neighbouring cells, with these cells indexed by \(l\). The operator is a simple weighted sum,

\[\mathcal{R}_{\Pi}\left[\Pi\right]\equiv\widehat{\Pi}|_{i,j}^{k}=\sum_{l=1}^{N_{l}}c_{i,j}^{l}\,\Pi|_{i}^{k,l}, \tag{21}\]

where the coefficients \(c_{i,j}^{l}\) sum to unity and can be chosen to give any particular reconstruction. To give an order of accuracy approaching second-order, in this work the coefficients correspond to a linear reconstruction. The operator \(\mathcal{B}_{\Pi}\) can then be found from (14) to be described by

\[\mathcal{B}_{\Pi}\left[\Pi\right]\equiv\widehat{\Pi}|_{i,j}^{k}=\Pi|_{i}^{k}+\sum_{l=1}^{N_{l}}c_{i,j}^{l}\,\Pi|_{i}^{k,l}-\frac{1}{N_{j}}\sum_{m=1}^{N_{j}}\sum_{l=1}^{N_{l}}c_{i,m}^{l}\,\Pi|_{i}^{k,l}. \tag{22}\]

With these choices of operator, both \(\mathcal{A}\) and \(\mathcal{B}\) preserve a constant Exner pressure or potential temperature field (and hence also satisfy Requirement 2).
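A one-dimensional sketch of the operators (18)-(22), with two fine cells per coarse cell and periodic linear-reconstruction weights chosen purely for illustration (the stencil coefficients used by GungHo will differ), is given below; it checks reversibility (Property 1) and constant preservation.

```python
import numpy as np

# A 1D sketch of the Exner-pressure operators (18)-(22), with N_j = 2 fine
# cells per coarse cell and an illustrative periodic linear reconstruction.

def A_pi(f):                    # restriction (18): arithmetic mean
    return 0.5 * (f[0::2] + f[1::2])

def I_pi(c):                    # identification (19): copy the coarse value
    return np.repeat(c, 2)

def R_pi(c):                    # reconstruction, cf. (21): weights sum to one
    left, right = np.roll(c, 1), np.roll(c, -1)
    fine = np.empty(2 * len(c))
    fine[0::2] = 0.75 * c + 0.25 * left
    fine[1::2] = 0.75 * c + 0.25 * right
    return fine

def B_pi(c):                    # reversible prolongation, (14)/(22)
    r = R_pi(c)
    return r - I_pi(A_pi(r)) + I_pi(c)

c = np.array([1.0, 2.0, 4.0, 3.0])
assert np.allclose(A_pi(B_pi(c)), c)            # Property 1: reversibility
assert np.allclose(B_pi(np.full(4, 7.0)), 7.0)  # constants (and zero fields,
                                                # Requirement 2) are preserved
```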
### Mapping operators for the density field

Key to achieving the local conservation of mass of moisture, chemical and aerosol species is choosing the dry density mapping operators so that they conserve mass within a coarse cell. As discussed in Section 2.1, in GungHo density fields are represented by values at cell centres, and the basis functions for these fields are constant within a cell. The mass in a cell is then simply given by the product of the value of the density field for that cell with the cell's volume. Let the \(k\)-th cell in the \(i\)-th column be denoted by \(e_{i}^{k}\), while the \(j\)-th cell on a finer mesh that is nested within it is \(\widehat{e}_{i,j}^{k}\). The restriction operator \(\mathcal{A}_{\rho}\) is defined by

\[\mathcal{A}_{\rho}[\widehat{\rho}]\equiv\rho|_{i}^{k}=\frac{1}{\int_{e_{i}^{k}}\mathrm{d}V}\sum_{j=1}^{N_{j}}\widehat{\rho}|_{i,j}^{k}\int_{\widehat{e}_{i,j}^{k}}\mathrm{d}V. \tag{23}\]

This ensures that mass is conserved within a coarse cell by the restriction process. If \(\sum_{j=1}^{N_{j}}\int_{\widehat{e}_{i,j}^{k}}\mathrm{d}V=\int_{e_{i}^{k}}\mathrm{d}V\) then a constant density field is preserved by this restriction, but as discussed in Section 2.5, this is not necessarily true when the mesh is distorted by orography. If the volume of the domain is different between the two meshes then it is not possible to both locally conserve mass and preserve a constant density. The identification operator \(\mathcal{I}_{\rho}\) must conserve mass within a coarse element, so that it is given by

\[\mathcal{I}_{\rho}\left[\rho\right]\equiv\widehat{\rho}|_{i,j}^{k}=\frac{\int_{e_{i}^{k}}\mathrm{d}V}{N_{j}\int_{\widehat{e}_{i,j}^{k}}\mathrm{d}V}\,\rho|_{i}^{k}, \tag{24}\]

which also combines with \(\mathcal{A}_{\rho}\) to satisfy Requirement 1. The reconstruction operator \(\mathcal{R}_{\rho}\) does not need to conserve mass, as conservation of mass is only required of \(\mathcal{B}_{\rho}\). Therefore the reconstruction operator \(\mathcal{R}_{\rho}\) is taken to be \(\mathcal{R}_{\Pi}\). Conservation of mass of \(\mathcal{B}_{\rho}\) follows from conservation of mass of \(\mathcal{A}_{\rho}\) and \(\mathcal{I}_{\rho}\).
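The density operators can be sketched in the same one-dimensional setting, now carrying explicit cell volumes so that mass is conserved even when the volumes vary from cell to cell; the values are illustrative only.

```python
import numpy as np

# A 1D sketch of the mass-conserving density mappings (23)-(24). Here each
# pair of fine volumes happens to sum to its coarse volume, so constants are
# also preserved; with orography this need not hold (Section 2.5).

V_fine = np.array([0.6, 0.5, 0.4, 0.55])
V_coarse = np.array([1.1, 0.95])
N_j = 2

def A_rho(rho_fine):            # restriction (23): fine masses / coarse volume
    mass = rho_fine * V_fine
    return (mass[0::2] + mass[1::2]) / V_coarse

def I_rho(rho_coarse):          # identification (24): share the coarse mass
    return np.repeat(rho_coarse * V_coarse, N_j) / (N_j * V_fine)

rho_f = np.array([1.2, 1.15, 1.0, 1.05])
# Mass is conserved by restriction, and A_rho inverts I_rho (Requirement 1):
assert np.isclose(np.sum(A_rho(rho_f) * V_coarse), np.sum(rho_f * V_fine))
assert np.allclose(A_rho(I_rho(np.array([1.0, 0.9]))), [1.0, 0.9])
```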
### Mapping operators for the wind field

In the formulation considered in this work, the physical parametrisations that provide increments to the wind are not computed on a different mesh to the dynamical core mesh, although scalar quantities contributing to those physical parametrisations may be calculated on different meshes. However, the wind field must still be mapped to other meshes, for the transport of aerosols on a coarser mesh and also as an auxiliary field to physical parametrisations that return increments to other scalar fields. The wind field is described in GungHo through its normal component to cell faces. We describe the faces of fine cells that coincide with the faces of coarse cells as being on the _exterior_ of coarse cells, while those faces that do not coincide are on the _interior_ of coarse cells.

The operators presented in this section are motivated by the consistent transport of chemicals and aerosols, which will be discussed in Section 4.4. Key to this is for the operators to conserve the velocity flux through the faces of coarse cells. Let the \(N_{f}\) faces of the element \(e_{i}^{k}\) be denoted by \(\Gamma_{i,f}^{k}\) (so that faces are indexed by \(f\)). Similarly, the faces of the element \(\widehat{e}_{i,j}^{k}\) are given by \(\widehat{\Gamma}_{i,j,f}^{k}\). However, the face \(\Gamma_{i,f}^{k}\) coincides with \(N_{g}\) faces of fine elements, which can be written as \(\widehat{\Gamma}_{i,f}^{k,g}\), where \(g\) is the index of the coincident fine faces. The value of \(N_{g}\) may be different for different faces \(\Gamma_{i,f}^{k}\). The variables \(u|_{i,f}^{k}\) and \(\widehat{u}|_{i,f}^{k,g}\) are the contravariant wind components that correspond to the faces \(\Gamma_{i,f}^{k}\) and \(\widehat{\Gamma}_{i,f}^{k,g}\). With this notation, the restriction operator \(\mathcal{A}_{u}\) is defined through

\[\mathcal{A}_{u}\left[\widehat{\mathbf{u}}\right]\equiv u|_{i,f}^{k}=\frac{1}{\int_{\Gamma_{i,f}^{k}}\mathrm{d}A}\sum_{g=1}^{N_{g}}\widehat{u}|_{i,f}^{k,g}\int_{\widehat{\Gamma}_{i,f}^{k,g}}\mathrm{d}A, \tag{25}\]

where \(\mathrm{d}A\) is the measure of the surface integral for a cell face. Only those fine mesh values that are on the exterior of coarse cells contribute to the restriction.

The prolongation operator \(\mathcal{B}_{u}\) takes a different form to those used for the scalar fields, as identification and reconstruction operators are not defined. The fine cell values on the exterior of coarse cells are obtained through

\[\mathcal{B}_{u}[\mathbf{u}]\equiv\widehat{u}|_{i,f}^{k,g}=\frac{\int_{\Gamma_{i,f}^{k}}\mathrm{d}A}{N_{g}\int_{\widehat{\Gamma}_{i,f}^{k,g}}\mathrm{d}A}\,u|_{i,f}^{k}. \tag{26}\]

The horizontal wind for the faces of fine cells that are interior to coarse cells is obtained through linear interpolation of the values from opposite faces of the coarse cell. As the \(\mathbb{V}_{u}\) basis functions are linear functions in the direction of the normal component, this prolongation emulates an identification operator. With these choices of operator, Requirement 2 is satisfied, as the zero vector is mapped from one mesh to another. Since the wind values for the faces on the interior of the coarse cell do not contribute to the restriction operator, these do not need to be considered. As the wind field does not directly have physics increments computed on different meshes, it is not necessary to build a higher-order reconstruction operator \(\mathcal{R}_{u}\).
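For the wind, the following sketch treats a single coarse face coinciding with two fine faces, checking that both the restriction (25) and the prolongation (26) preserve the total normal flux through the coarse face; the areas and values are illustrative only.

```python
import numpy as np

# A sketch of the flux-conserving wind mappings (25)-(26) for one coarse face
# coinciding with N_g = 2 fine faces; u holds normal components.

area_fine = np.array([0.45, 0.55])     # areas of the coincident fine faces
area_coarse = area_fine.sum()          # coarse face area
N_g = len(area_fine)

def A_u(u_fine):                       # restriction (25): area-weighted mean
    return np.sum(u_fine * area_fine) / area_coarse

def B_u(u_coarse):                     # prolongation (26), exterior faces only
    return (area_coarse / (N_g * area_fine)) * u_coarse

u_hat = np.array([2.0, 1.5])
# The coarse normal flux equals the sum of the fine fluxes it replaces:
assert np.isclose(A_u(u_hat) * area_coarse, np.sum(u_hat * area_fine))
# ...and prolongation preserves the total flux through the coarse face:
assert np.isclose(np.sum(B_u(3.0) * area_fine), 3.0 * area_coarse)
```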
### Conservative and consistent transport of chemicals and aerosols

Local mass conservation of tracers is achieved by transporting the density \(\rho_{Y}\) using a conservative form of the transport equation. If the mass fluxes of dry air and of the \(Y\)-th tracer species are defined as \(\mathbf{F}_{d}:=\rho_{d}\mathbf{u}\) and \(\mathbf{F}_{Y}:=\rho_{Y}\mathbf{u}\), equations (3b) and (3e) for the transport of dry density and tracers can be written as

\[\frac{\partial\rho_{d}}{\partial t}+\mathbf{\nabla}\mathbf{\cdot}\mathbf{F}_{d}=0,\quad\frac{\partial\rho_{Y}}{\partial t}+\mathbf{\nabla}\mathbf{\cdot}\mathbf{F}_{Y}=0, \tag{27}\]

where the sources and sinks of the tracers have been omitted. Following the approach taken by Lauritzen et al. (2011, 2014), Zangl et al. (2015) and Thuburn (2022), the tracer mass flux can be expressed as \(\mathbf{F}_{Y}=a_{Y}\mathbf{F}_{d}\), such that the tracer transport obeys

\[\frac{\partial\rho_{Y}}{\partial t}+\mathbf{\nabla}\mathbf{\cdot}(a_{Y}\mathbf{F}_{d})=0. \tag{28}\]

Using the same dry mass flux \(\mathbf{F}_{d}\) to transport both \(\rho_{d}\) and \(\rho_{Y}\) is key to ensuring consistent tracer transport. However, as \(\rho_{Y}\) is transported on a coarser mesh than \(\rho_{d}\), it is necessary to map \(\mathbf{F}_{d}\) to the coarser mesh. The approach described in this section is similar to the framework presented by Bendall et al. (2023), used for conservative and consistent transport of moisture species on a vertically-shifted mesh.

To begin, it is assumed that the discretised transport of \(\rho_{d}\in\mathbb{V}_{\rho}\) and \(\overline{\rho}_{Y}\in\overline{\mathbb{V}}_{\rho}\) can be expressed as two-time-level schemes:

\[\rho_{d}^{n+1}=\rho_{d}^{n}-\Delta t\mathbf{\nabla}\mathbf{\cdot}\mathbf{F}_{d}\quad\text{and}\quad\overline{\rho}_{Y}^{n+1}=\overline{\rho}_{Y}^{n}-\Delta t\overline{\mathbf{\nabla}}\mathbf{\cdot}\mathcal{F}\left[\overline{a}_{Y},\overline{\mathbf{F}}_{d}\right], \tag{29}\]

with the superscript \(n\) denoting a field at the \(n\)-th time level, and where the coarse tracer flux \(\overline{\mathbf{F}}_{Y}=\mathcal{F}\left[\overline{a}_{Y},\overline{\mathbf{F}}_{d}\right]\) has been calculated by a flux operator \(\mathcal{F}:\overline{\mathbb{V}}_{\rho},\overline{\mathbb{V}}_{u}\to\overline{\mathbb{V}}_{u}\). The divergence operators act such that, for \(\mathbf{u}\in\mathbb{V}_{u}\), \(\mathbf{\nabla}\mathbf{\cdot}\mathbf{u}\in\mathbb{V}_{\rho}\), with the discrete divergence in a cell given by the sum of the integrated fluxes through the faces of that cell, divided by the cell's volume:

\[\left(\mathbf{\nabla}\mathbf{\cdot}\mathbf{u}\right)|_{i}^{k}=\frac{1}{\int_{e_{i}^{k}}\mathrm{d}V}\sum_{f=1}^{N_{f}}u|_{i,f}^{k}\int_{\Gamma_{i,f}^{k}}\mathrm{d}A, \tag{31}\]

where the normal components are taken to be outward-pointing.

**Requirement 3**.: _For a constant mixing ratio \(\overline{a}_{Y}=C\), the flux operator must return \(\mathcal{F}\left[C,\overline{\mathbf{F}}_{d}\right]=C\overline{\mathbf{F}}_{d}\)._

The dry mass flux is mapped to the coarser mesh through \(\overline{\mathbf{F}}_{d}=\mathcal{A}_{u}\left[\mathbf{F}_{d}\right]\). Consistent transport (Property 4) additionally requires that the coarse dry density \(\mathcal{A}_{\rho}\left[\rho_{d}\right]\) evolves under the coarse scheme in the same way as the restriction of the evolved fine dry density, which motivates the following requirement:

**Requirement 4**.:
_For all \(\mathbf{F}_{d}\in\mathbb{V}_{u}\), the restriction and divergence operators commute, so that_

\[\mathcal{A}_{\rho}\left[\mathbf{\nabla}\mathbf{\cdot}\mathbf{F}_{d}\right]=\mathbf{\nabla}\mathbf{\cdot}\mathcal{A}_{u}\left[\mathbf{F}_{d}\right]. \tag{35}\]

The combination of restriction operators (23) and (25) satisfies Requirement 4, given the divergence operator (31): for \(\mathbf{u}\in\mathbb{V}_{u}\), the operation \(\mathcal{A}_{\rho}\left[\mathbf{\nabla}\mathbf{\cdot}\mathbf{u}\right]\) sums the integrated fluxes through the faces of the fine cells nested within a coarse cell, divided by the coarse cell's volume. The contributions from the faces interior to the coarse cell cancel in pairs, leaving only the sum of the integrated fluxes through the exterior faces, which by (25) is precisely \(\mathbf{\nabla}\mathbf{\cdot}\mathcal{A}_{u}\left[\mathbf{u}\right]\).

### Mapping operators for the moisture fields

As set out in Section 2.3, moisture mixing ratios \(m_{r}\in\mathbb{V}_{\theta}\) are converted to densities on the vertically-shifted mesh through the operators \(\mathcal{M}:\mathbb{V}_{\theta}\to\widetilde{\mathbb{V}}_{\rho}\) and \(\mathcal{Q}:\mathbb{V}_{\rho}\to\widetilde{\mathbb{V}}_{\rho}\), with \(\mathcal{M}\) obtaining mixing-ratio values at the shifted DoFs by vertical interpolation in the interior of the column, and \(\mathcal{Q}\) remapping the dry density to the vertically-shifted mesh in a mass-conserving way. The operator \(\mathcal{M}^{-1}:\widetilde{\mathbb{V}}_{\rho}\rightarrow\mathbb{V}_{\theta}\) is the inverse of \(\mathcal{M}\), and is straightforward to compute for this choice of \(\mathcal{M}\). Instead of interpolating, the values at the top and bottom of the column are obtained through linear extrapolation (with a correction to avoid the generation of negative values). The operators \(\mathcal{M}\) and \(\mathcal{M}^{-1}\) preserve a constant mixing ratio field, and are both linear.

Since moisture conservation is defined through the density \(\widetilde{\rho}_{r}\), it is necessary to specify how the restriction and identification operators interact with the shifted mesh. Using the definitions (23) and (24) of the operators \(\mathcal{A}_{\rho}\) and \(\mathcal{I}_{\rho}\), together with the definition of \(\mathcal{Q}\), it can be shown that \(\mathcal{Q}\) commutes with both \(\mathcal{A}_{\rho}\) and \(\mathcal{I}_{\rho}\), so that for any \(\widehat{\rho}\in\widehat{\mathbb{V}}_{\rho}\) or \(\rho\in\mathbb{V}_{\rho}\),

\[\mathcal{Q}\left[\mathcal{A}_{\rho}\left[\widehat{\rho}\right]\right]=\mathcal{A}_{\rho}\left[\mathcal{Q}\left[\widehat{\rho}\right]\right]\quad\text{and}\quad\mathcal{Q}\left[\mathcal{I}_{\rho}\left[\rho\right]\right]=\mathcal{I}_{\rho}\left[\mathcal{Q}\left[\rho\right]\right]. \tag{41}\]
\tag{41}\]

#### 4.5.1 Restriction and Identification operators

With these definitions, the restriction operator for the mixing ratio field is \(\mathcal{A}_{m}\), which can be written in terms of existing operators as \[\mathcal{A}_{m}\left[\widehat{m}_{r}\right]\equiv\mathcal{M}^{-1}\left[\mathcal{A}_{\rho}\left[\mathcal{M}\left[\widehat{m}_{r}\right]\times\mathcal{Q}\left[\widehat{\rho}_{d}\right]\right]/\mathcal{Q}\left[\mathcal{A}_{\rho}\left[\widehat{\rho}_{d}\right]\right]\right], \tag{42}\] while the related identification operator \(\mathcal{I}_{m}\) is given by \[\mathcal{I}_{m}\left[m_{r}\right]\equiv\mathcal{M}^{-1}\left[\mathcal{I}_{\rho}\left[\mathcal{M}\left[m_{r}\right]\times\mathcal{Q}\left[\rho_{d}\right]\right]/\mathcal{Q}\left[\mathcal{B}_{\rho}\left[\rho_{d}\right]\right]\right]. \tag{43}\] These choices are designed so that Requirement 1 is satisfied: the operators involve expressing the moisture field as a density and then restricting or identifying that density. As \(\mathcal{A}_{\rho}\left[\mathcal{I}_{\rho}\left[\rho\right]\right]=\rho\), it then follows that \(\mathcal{A}_{m}\left[\mathcal{I}_{m}\left[m_{r}\right]\right]=m_{r}\). By construction, these operators also provide Property 5, since the restriction and identification processes act upon a density field, so mass is naturally conserved by the mappings. As all of the constituent operators are linear, \(\mathcal{A}_{m}\) and \(\mathcal{I}_{m}\) are also linear and so satisfy Property 8. The restriction operator \(\mathcal{A}_{m}\) preserves a constant mixing ratio (Property 6) since \[\mathcal{A}_{m}[C]=\mathcal{M}^{-1}\left[\mathcal{A}_{\rho}\left[C\mathcal{Q}\left[\widehat{\rho}_{d}\right]\right]/\mathcal{Q}\left[\mathcal{A}_{\rho}\left[\widehat{\rho}_{d}\right]\right]\right]=\mathcal{M}^{-1}\left[C\mathcal{A}_{\rho}\left[\mathcal{Q}\left[\widehat{\rho}_{d}\right]\right]/\mathcal{A}_{\rho}\left[\mathcal{Q}\left[\widehat{\rho}_{d}\right]\right]\right]=C, \tag{44}\] as \(\mathcal{Q}\) commutes with \(\mathcal{A}_{\rho}\) and \(\mathcal{I}_{\rho}\). Finally, provided that \(\mathcal{B}_{\rho}[\rho_{d}]\) is positive (which it should be for well-behaved density fields), \(\mathcal{I}_{m}\) and \(\mathcal{A}_{m}\) cannot generate negative mixing ratio values, as none of the operators \(\mathcal{M}^{-1}\), \(\mathcal{M}\), \(\mathcal{Q}\), \(\mathcal{A}_{\rho}\) and \(\mathcal{I}_{\rho}\) can generate negative values.

#### 4.5.2 Prolongation operator

An initial prolongation operator is defined by \[\mathcal{B}_{m}^{\dagger}\left[m_{r}\right]\equiv\mathcal{R}_{\theta}\left[m_{r}\right]-\mathcal{I}_{m}\left[\mathcal{A}_{m}\left[\mathcal{R}_{\theta}\left[m_{r}\right]\right]\right]+\mathcal{I}_{m}\left[m_{r}\right]. \tag{45}\] As this has the same structure as (14), it satisfies Property 1. A modified prolongation operator is then \[\mathcal{B}_{m}\left[m_{r}\right]\equiv(1-\lambda)\mathcal{R}_{\theta}\left[m_{r}\right]-(1-\lambda)\mathcal{I}_{m}\left[\mathcal{A}_{m}\left[\mathcal{R}_{\theta}\left[m_{r}\right]\right]\right]+\mathcal{I}_{m}\left[m_{r}\right] \tag{46}\] where \(\lambda\) is a field in the same space on the same mesh as \(m_{r}\), and takes values between \(0\) and \(1\). The inclusion of \(\lambda\) is to prevent the generation of negative mixing ratios, which will be discussed in Section 4.5.3. This operator can also be expressed as: \[\mathcal{B}_{m}=(1-\lambda)\mathcal{B}_{m}^{\dagger}+\lambda\mathcal{I}_{m}.
\tag{47}\] Since \(\mathcal{A}_{m}\) is a linear operator, and \(\mathcal{B}_{m}\) is a linear combination of \(\mathcal{B}_{m}^{\dagger}\) and \(\mathcal{I}_{m}\), \(\mathcal{B}_{m}\) also satisfies Property 1. As with \(\mathcal{A}_{m}\) and \(\mathcal{I}_{m}\), the operator \(\mathcal{B}_{m}\) is linear and conserves mass within a coarse element. A constant mixing ratio is also preserved by \(\mathcal{B}_{m}^{\dagger}\) as \[\mathcal{B}_{m}^{\dagger}\left[C\right]=\mathcal{R}_{\theta}\left[C\right]- \mathcal{I}_{m}\left[\mathcal{A}_{m}\left[\mathcal{R}_{\theta}\left[C\right] \right]\right]+\mathcal{I}_{m}\left[C\right]=C-\mathcal{I}_{m}\left[\mathcal{A }_{m}\left[C\right]\right]+\mathcal{I}_{m}\left[C\right]=C-\mathcal{I}_{m} \left[C\right]+\mathcal{I}_{m}\left[C\right]=C, \tag{48}\] so \(\mathcal{B}_{m}^{\dagger}\) preserves a constant. The same steps can also be used to show that \(\mathcal{B}_{m}\) preserves a constant mixing ratio. #### 4.5.3 Prevention of negative mixing ratios In this formulation, there are two situations in which negative moisture mixing ratios can be generated by the mapping process, unless care is taken. The first is the prolongation of a mixing ratio field to a finer mesh. The second is the addition of an increment to a mixing ratio field, when that increment has been calculated on a coarser mesh. The solution in both situations involves combining the field which may be negative with one that is guaranteed not to be. This is described through the operator \(\Lambda:\mathbb{V}_{\theta},\mathbb{V}_{\theta}\rightarrow\mathbb{V}_{\theta}\). To determine the action of \(\Lambda\) and the value of \(\lambda\), consider \(m_{r}^{-}\in\mathbb{V}_{\theta}\), a mixing ratio field which may contain negative values, and \(m_{r}^{+}\in\mathbb{V}_{\theta}\), whose values are guaranteed not to be negative. It is possible to define an operator \(\Lambda\) which blends \(m_{r}^{-}\) and \(m_{r}^{+}\) to create a field \(m_{r}\) which is also guaranteed not to be negative, through \[m_{r}=\Lambda\left[m_{r}^{-},m_{r}^{+}\right]\equiv(1-\lambda)m_{r}^{-}+ \lambda m_{r}^{+}, \tag{49}\] where \(\lambda\in\overline{\mathbb{V}}_{\theta}\) is a field on a coarser mesh whose values lie between \(0\) and \(1\). To find appropriate values of \(\lambda\), consider the values of the mixing ratio field in one coarse cell. If \(m_{r}^{-}|_{i,j}^{k}<0\), then \(m_{r}|_{i,j}^{k}=0\) if \[\lambda|_{i}^{k}=\frac{-m_{r}^{-}|_{i,j}^{k}}{m_{r}^{+}|_{i,j}^{k}-m_{r}^{-}| _{i,j}^{k}}, \tag{50}\] which is found by rearranging (49). Negativity will be prevented by taking \[\lambda|_{i}^{k}=\begin{cases}\ (2019) and Kent et al. (2023), uses an iterative semi-implicit time stepping scheme with a nested outer-inner loop structure like that of ENDGame, Wood et al. (2014). Transport terms are treated explicitly in the outer loop using a Method of Lines (MoL) structure with finite volume spatial discretisation. Faster terms describing wave motions are treated implicitly in the inner loop, which consists of an iterative Newton solve. All variables are transported using the MoL scheme described in Melvin et al. (2019), with vertical-horizontal Strang split to reduce the number of required substeps when the vertical Courant number is large. Dry density is transported conservatively, while the potential temperature and wind are transported in advective forms. When included, moisture species are transported conservatively and consistently with the scheme described in Bendall et al. (2023). 
## 5 Results

LFRic-Atmosphere, described in Melvin et al. (2019) and Kent et al. (2023), uses an iterative semi-implicit time stepping scheme with a nested outer-inner loop structure like that of ENDGame, Wood et al. (2014). Transport terms are treated explicitly in the outer loop using a Method of Lines (MoL) structure with finite volume spatial discretisation. Faster terms describing wave motions are treated implicitly in the inner loop, which consists of an iterative Newton solve. All variables are transported using the MoL scheme described in Melvin et al. (2019), with a vertical-horizontal Strang split to reduce the number of required substeps when the vertical Courant number is large. Dry density is transported conservatively, while the potential temperature and wind are transported in advective forms. When included, moisture species are transported conservatively and consistently with the scheme described in Bendall et al. (2023). The cloud microphysics scheme, used in two of the test cases, is a simple evaporation-condensation scheme with latent heat feedback, so that the moisture species are water vapour and cloud liquid. The scheme is called after the transport step within the outer loop of the algorithm.

Figure 4: The procedure to compute physical parametrisations for moisture mixing ratios upon finer or coarser meshes, including the steps to prevent the generation of negative values. The mixing ratio field on the dynamical core mesh before the parametrisation is \(m_{r}^{n}\), while the resulting mixing ratio field is given by \(m_{r}^{n+1}\). The upper half of the diagram describes a physical parametrisation on a finer mesh, where negative values can be generated by the prolongation to the finer mesh. The lower half represents a physical parametrisation on a coarser mesh, where negative values could be caused by the addition of a tendency \(\Delta\overline{m}_{r}\) computed on a coarse mesh to the original field.

### Tracer transport on the sphere

In this test case, a dry density field \(\rho_{d}\) is transported by a prescribed wind on a fine resolution mesh, and a mixing ratio field \(\overline{a}_{Y}\) is transported on a coarse mesh. This mimics the transport of tracers (for instance aerosols and chemicals) at a lower resolution, driven by a higher resolution dynamical core. This test is a variant of the time-dependent, deformational and divergent flow on the surface of the sphere from Nair and Lauritzen (2010) and Lauritzen et al. (2012). The spherical flow is defined, with \(u_{0}=2\pi R/\tau\) and \(v_{0}=R/\tau\), as \[u=u_{0}\cos\left(\phi\right)-v_{0}\cos\left(\frac{\pi t}{\tau}\right)\sin\left(\phi\right)\cos\left(\lambda-\frac{u_{0}t}{R}\right) \tag{56}\] \[v=v_{0}\cos\left(\frac{\pi t}{\tau}\right)\sin\left(\lambda-\frac{u_{0}t}{R}\right), \tag{57}\] where \((\lambda,\phi)\) are the longitude and latitude, \(R=6.3781\times 10^{6}\) m is the radius of the earth and \(\tau=2000\) s is the length of the simulation. The finer and coarser meshes are C32 and C16 meshes respectively, where C\(n\) denotes a cubed-sphere mesh with \(n\times n\) cells per panel. In this case, the mesh is a two-dimensional spherical surface and the time step is \(\Delta t=4\) s. The dry density initially varies with latitude while the tracer mixing ratio takes the form of two Gaussian hills. The initial conditions are \[\rho_{d}=\rho_{0}+(\rho_{t}-\rho_{0})\cos\phi,\qquad\overline{a}_{Y}=a_{0}+a_{t}e^{-(L_{c_{1}}/R_{c_{1}})^{2}}+a_{t}e^{-(L_{c_{2}}/R_{c_{2}})^{2}}, \tag{58}\] where \(\rho_{0}=0.5\) kg m\({}^{-3}\), \(\rho_{t}=1.0\) kg m\({}^{-3}\), \(a_{0}=0.5\) kg kg\({}^{-1}\) and \(a_{t}=1.0\) kg kg\({}^{-1}\). \(L_{c_{1}}\) and \(L_{c_{2}}\) are the great circle distances between the local coordinate and the centres of the bubbles \((\lambda_{c_{1}},\phi_{c_{1}})=(-\pi/4,0)\), \((\lambda_{c_{2}},\phi_{c_{2}})=(\pi/4,0)\), calculated as \[L(\mathbf{x},\mathbf{x}_{c})=\arccos[\sin(\phi)\sin(\phi_{c})+\cos(\phi)\cos(\phi_{c})\cos(\lambda-\lambda_{c})], \tag{59}\] where \(\mathbf{x}=(\lambda,\phi)\) and \(\mathbf{x}_{c}=(\lambda_{c},\phi_{c})\). The evolution of the mixing ratio \(\overline{a}_{Y}\) is shown in Figure 5. This test can be used to demonstrate the conservation of mass of the tracer that is being transported on the coarse mesh. Figure 6 shows time series of the tracer mass, comparing the approach described in Section 4.4 with an advective form of the transport equation, showing that mass is indeed conserved.
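For orientation, the initial tracer field of (58) built from the great-circle distance (59) can be sketched as follows; the hill widths \(R_{c_{1}}\) and \(R_{c_{2}}\) are not quoted above, so the values used here are assumptions for illustration only.

```python
import numpy as np

a0, at = 0.5, 1.0     # background and hill amplitude [kg/kg], as in the text
Rc1 = Rc2 = 0.5       # hill widths in radians; assumed values
centres = [(-np.pi / 4, 0.0), (np.pi / 4, 0.0)]  # (lambda_c, phi_c)

def great_circle(lam, phi, lam_c, phi_c):
    """Great-circle distance (59), in radians, from (lam, phi) to a centre."""
    return np.arccos(np.sin(phi) * np.sin(phi_c)
                     + np.cos(phi) * np.cos(phi_c) * np.cos(lam - lam_c))

def tracer_initial(lam, phi):
    """Two-Gaussian-hill tracer mixing ratio of (58) at longitude lam and
    latitude phi (scalars or NumPy arrays)."""
    m = a0 * np.ones_like(np.asarray(lam, dtype=float))
    for (lam_c, phi_c), Rc in zip(centres, (Rc1, Rc2)):
        L = great_circle(lam, phi, lam_c, phi_c)
        m = m + at * np.exp(-(L / Rc) ** 2)
    return m
```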
Figure 5: The \(\overline{a}_{Y}\) field used in the transport test case on the surface of a sphere in Section 5.1. (Left) the initial condition, (centre) a computed state at \(t=\tau/2\), as the hills have been deformed by the flow, and (right) the computed solution at \(t=\tau\), as the tracers have returned to close to their initial condition. Contours are spaced by \(5\times 10^{-3}\) kg kg\({}^{-1}\). The superimposed arrows indicate the magnitude and direction of the transporting velocity field.

Figure 6: Time series demonstrating the conservation of mass by tracer transport on a coarser mesh. The evolution of tracer mass with initial conditions given by (58), comparing the transport of a tracer with a purely advective transport scheme against the conservative transport described in Section 4.4, showing that mass is indeed conserved in the latter case.

### Moist gravity wave

The next test is the moist gravity wave test case from Bendall et al. (2020), adapted from the inertia-gravity wave test case of Skamarock and Klemp (1994). The final state in this test is spatially smooth, so it can be used to meaningfully measure the errors in the discretisation at different resolutions. This allows the effect of computing the physical parametrisation at a different resolution to be quantified. Like the rising bubble case of Bryan and Fritsch (2002), the atmosphere is initially saturated and cloudy everywhere. Thus as air parcels move, water evaporates and condenses, which is captured in our model through the use of a Kessler physics scheme with no rain. This physics scheme can be performed on a different mesh to the dynamical core. The domain is a two-dimensional vertical slice of height and length (10 km, 300 km). The initial conditions are the same as in Bendall et al. (2020), except that the definition of the wet equivalent potential temperature differs. In particular, LFRic-Atmosphere uses a latent heat \(L_{v}\) which is constant with respect to temperature, and heat capacities \(c_{p}\) and \(c_{v}\) that include only the dry component of air. This means that the wet equivalent potential temperature is \[\theta_{e}=\theta e^{L_{v}m_{v}/c_{p}T}. \tag{60}\] The same perturbation and iterative procedure of Bendall et al. (2020) are used for the initial conditions. All runs used a mesh with 200 vertical levels and a time step of \(\Delta t\) = 1.2 s. The final state is shown on the right-hand side of Figure 7. To investigate the effects of using different meshes for the dynamical core and the physical parametrisation, we calculated the \(L^{2}\) error norm of the final \(\theta_{e}\) field relative to a high-resolution reference solution. The convergence plot in Figure 7 demonstrates that the errors are strongly dependent on the resolution of the dynamics mesh rather than the physics mesh. Increasing the horizontal resolution of the physics with respect to the dynamics has no noticeable effect. Decreasing the resolution is shown to significantly degrade the quality of the model solution only after a large enough resolution gap.
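As a small worked example of the diagnostic used above, the wet equivalent potential temperature (60) is a one-line computation; the numerical values of the constant latent heat \(L_{v}\) and dry heat capacity \(c_{p}\) are not given in the text, so the ones below are assumptions.

```python
import numpy as np

def theta_e(theta, m_v, T, L_v=2.501e6, c_p=1005.0):
    """Wet equivalent potential temperature of (60):
    theta_e = theta * exp(L_v * m_v / (c_p * T)).
    L_v [J/kg] and c_p [J/(kg K)] are assumed typical values, not the model's."""
    return theta * np.exp(L_v * m_v / (c_p * T))
```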
### Moist Baroclinic Wave

The moist baroclinic wave test case forced by orography of Hughes and Jablonowski (2023) produces unstable waves and features important to the development of weather systems. Instead of a perturbation being added to the wind field, the orography forces the unstable atmospheric state and induces Rossby and inertia-gravity waves. The moist configuration of Hughes and Jablonowski (2023) is used, but with no rain in the physics scheme. The test was run for 10 days at C96, C48 and C24 resolutions with dynamics and physics at the same horizontal resolution. Two more configurations were run with dynamics at C48 but physics at C96 and C24. The grid is 30 km in height with 30 levels, and a time step of \(\Delta t=900\) s is used. The vertically stretched extrusion of Ullrich et al. (2014) is used.

Figure 7: (Left) Plots of the \(L^{2}\) error norms in the final \(\theta_{e}\) field from the moist gravity wave test of Section 5.2. Errors are computed against a high resolution solution, and plotted as a function of the grid spacing of the physics mesh. Dashed lines join the points corresponding to computations with the same resolution for the dynamical core. The errors are largely independent of the resolution of the physics mesh, and instead depend strongly on the resolution of the dynamics mesh. (Right) the final \(\theta_{e}\) perturbation field, with both the dynamical core and the physics scheme using the same mesh. Contours are spaced at \(3\times 10^{-4}\) K.

Figure 8: Moist baroclinic wave test case, forced by orography at \(t=10\) days. The dynamics resolution is denoted by D\(n\) and physics resolution is denoted P\(n\). (Left) Contours of the Exner pressure in the lowest model level, every 0.005 (no unit) from 0.94 to 1.01, while the background colours show surface potential temperature with contours every 10 K. (Right) Cloud liquid field at the 9 km height with contours every 0.001 kg kg\({}^{-1}\).

In this test case the dynamical core resolution is much more influential in the evolution of the prognostic variables than the resolution of the physics scheme. The plots in Figure 8 display the first-level Exner and potential temperature fields, as well as the cloud at a height of 9 km. Figure 8 shows that for this test case, increasing the physics resolution with respect to the dynamics has a much smaller impact than changing the resolution of the dynamical core. For the cases with the dynamical core using a C48 mesh, the pressure contours, temperature field and cloud fields look similar. Running the physics at a coarser resolution than the dynamics only degrades the solution minimally. It can be seen that when the dynamical core is run at the C48 resolution, there is a more prominent area of low pressure with a strong pressure gradient at a longitude of around 140 degrees; this drives cyclonic motion that results in the more overturned tail in the right-most cloud structure. This suggests that the cloud structure is more strongly influenced by the fluid dynamics than by the resolution of the latent heating effects.

Figure 9: The Held-Suarez test case with zonally averaged fields over 800 days. The dynamics mesh is denoted by D\(n\) and physics mesh is denoted by P\(n\). (Left) Zonally-averaged potential temperature, with contours every 50 K, in a vertical slice. (Right) Zonal wind with contours every 5 m s\({}^{-1}\). The zero contour is omitted. The strength and extent of the jet appear to be largely dictated by the dynamics resolution, with D48 P24 appearing to be more comparable to D96 P96 and D48 P48 than D24 P24.

### Held-Suarez

The test case of Held and Suarez (1994) is a climate simulation which captures the average atmospheric state by forcing the wind and surface temperature. This includes simple wind drag and temperature relaxation forcings, which can be treated as physical parametrisations, and so computed on a different mesh to the dynamical core.
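As an illustration of these forcings, the surface wind drag of Held and Suarez (1994) has a simple closed form; the sketch below is ours, with the coefficient values (\(k_{f}=1\) day\({}^{-1}\), \(\sigma_{b}=0.7\)) quoted from the original specification rather than from the LFRic-Atmosphere configuration, so they should be treated as assumptions here.

```python
import numpy as np

SECONDS_PER_DAY = 86400.0
k_f = 1.0 / SECONDS_PER_DAY   # surface drag rate [1/s]; assumed Held-Suarez value
sigma_b = 0.7                 # assumed "boundary layer" top in sigma coordinates

def wind_drag_increment(u, sigma, dt):
    """Increment du = -k_v * u * dt from the Held-Suarez low-level wind drag,
    where the drag factor k_v vanishes above sigma = sigma_b and increases
    linearly towards the surface."""
    k_v = k_f * np.maximum(0.0, (sigma - sigma_b) / (1.0 - sigma_b))
    return -k_v * u * dt
```

In the two-mesh setting described below, a drag factor like `k_v` would be evaluated on the physics mesh and mapped to the dynamical core mesh before multiplying the wind field.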
The test generates two zonal jets in the mid-latitudes, and a vertical potential temperature gradient. For details of the set up of this test case in LFRic-Atmosphere, see Sergeev et al. (2023). The test was run for 1000 days, with a spin up time of 200 days, meaning the results are averaged over the last 800 days. The tests were run at differing time steps for different resolutions: C96 used a time step of \(\Delta t=900\) s, C48 had \(\Delta t=1800\) s and C24 used \(\Delta t=3600\) s. As with the baroclinic wave test, we also performed simulations with the dynamical core using a C48 mesh (and time step of \(\Delta t=1800\) s), but with the physical parametrisations on C24 and C96 meshes. The wind drag forcing is linear in the wind field, but is multiplied by a drag factor depending on the Exner field. We computed the drag factor on the physics mesh, and mapped this to the dynamical core mesh to multiply the wind field to get the resulting increment. A temporal off-centering of \(\alpha=0.55\) for the semi-implicit scheme was used. Typically with this test case, the strength and extent of the jets depend on the resolution. From Figure 9 we can see that the strength and extent of the jets in the D48 P48, D48 P24 and D48 P96 runs are comparable, implying that varying the resolution of the physics has minimal observable effect.

## 6 Summary

This work has presented a formulation for mapping LFRic-Atmosphere's prognostic variables between meshes, to allow different components of the atmospheric model to use different meshes. These meshes have the same vertical structure but different horizontal resolutions, with the resolution of the finer mesh such that its cells are nested within the cells of coarser meshes. With this new capability, computational resources can be targeted towards the components that deliver the greatest impact on the model's accuracy. At the same time, it may be possible to dramatically reduce the cost of some physical parametrisations without seeing a degradation in the quality of the solution. The formulation is designed to possess a set of properties described in Section 3, including mass conservation, preservation of constant mixing ratio fields and avoiding the generation of negative moisture concentrations. The results in Section 5 demonstrate that the formulation does have these properties. Tracers on a coarser mesh (representing chemicals and aerosols) are transported conservatively and in a way that preserves constant mixing ratios. Moisture species are mapped conservatively, without generating negative values and still preserving constant mixing ratios. An idealised moist gravity wave test allowed quantification of the errors in the discretisation, which were largely independent of the resolution of the physics process. The primary goals for future work are to apply this formulation to realistic NWP and climate models and to assess the scientific consequences of computing individual physical parametrisations at different resolutions to the dynamical core. The moist baroclinic wave and Held-Suarez test cases of Sections 5.3 and 5.4 present idealised versions of NWP and climate simulations; in these cases decreasing the physics resolution did not significantly degrade the solutions, but increasing the physics resolution offered no improvement in solution quality. One particular target is for LFRic-Atmosphere to emulate the Junior-Senior capability of the UKESM model, in which the UKCA chemistry and aerosol component is performed on a coarser mesh.
Although it is possible to use a different mesh for each physical parametrisation, some schemes are more closely related and share auxiliary variables and so it may be appropriate for these schemes to share a mesh. For instance, the radiation scheme interacts with the chemistry and aerosol variables, so we intend to explore using the same mesh for these components. While the test cases in Section 5 did not reveal benefits from using a higher resolution mesh for the physical parametrisations, there may be clearer effects in ensemble simulations (e.g. with stochastic physics schemes), and more realistic configurations, particularly for interactions with the land surface through boundary-layer and convection processes. One other interesting approach would use the same mesh for the physical parametrisations and dynamical core, but filter the prognostic fields that are passed to the physical parametrisations. This would address the problem described by Lander and Hoskins (1997) of the errors at the smallest scales being amplified by the physical parametrisations. ## Acknowledgements The work presented here was funded through Met Office work packages 2.2 and 3.3 of the ExCALIBUR research programme. The authors would like to thank John Thuburn and Marc Stringer for useful conversations during this project, and Nigel Wood for his suggestions on improving the manuscript. This work has also been facilitated by the many contributors to the LFRic-Atmosphere model and the underpinning LFRic-infrastructure, but particularly so by Ricky Wong for his design of the infrastructure for mapping wind fields between meshes.
2310.13889
An Experimental Study of Model-based Control for Planar Handed Shearing Auxetics Robots
Parallel robots based on Handed Shearing Auxetics (HSAs) can implement complex motions using standard electric motors while maintaining the complete softness of the structure, thanks to specifically designed architected metamaterials. However, their control is especially challenging due to varying and coupled stiffness, shearing, non-affine terms in the actuation model, and underactuation. In this paper, we present a model-based control strategy for planar HSA robots enabling regulation in task space. We formulate equations of motion, show that they admit a collocated form, and design a P-satI-D feedback controller with compensation for elastic and gravitational forces. We experimentally identify and verify the proposed control strategy in closed loop.
Maximilian Stölzle, Daniela Rus, Cosimo Della Santina
2023-10-21T02:10:25Z
http://arxiv.org/abs/2310.13889v2
# An Experimental Study of Model-based Control for Planar Handed Shearing Auxetics Robots

###### Abstract

Parallel robots based on Handed Shearing Auxetics (HSAs) can implement complex motions using standard electric motors while maintaining the complete softness of the structure, thanks to specifically designed architected metamaterials. However, their control is especially challenging due to varying and coupled stiffness, shearing, non-affine terms in the actuation model, and underactuation. In this paper, we present a model-based control strategy for planar HSA robots enabling regulation in task space. We formulate equations of motion, show that they admit a collocated form, and design a P-satI-D feedback controller with compensation for elastic and gravitational forces. We experimentally identify and verify the proposed control strategy in closed loop.

Keywords: Soft Robotics, Model-based Control, Underactuation

## 1 Motivation and related work

The deformability, adaptiveness, and compliance of invertebrates serve as an inspiration for continuum soft robots. While serial continuum soft robots have been intensively investigated in recent years [2], parallel soft robots [5] are less studied despite exhibiting exciting properties such as an improved stiffness-to-weight ratio. One recent development in this field is robots based on Handed Shearing Auxetics (HSAs) [6, 12] in which multiple HSA rods are connected together at their distal end through a rigid platform. Twisting of the proximal end of an HSA causes the rod to elongate and enables complex motion primitives in 3D space. Recent work has investigated the mechanical characterization [4], simulation [11], and kinematic modeling [3, 11] of HSA robots, but control has yet to be tackled. In this work, we make a first step towards achieving task-space control by designing model-based regulators for planar motions. Our approach takes into account essential characteristics of HSA robots, such as underactuation, shear strains, and varying stiffness. Kinematic models for parallel robots usually require separate configuration variables for each limb and the enforcement of kinematic constraints [1]. We propose to avoid this complexity by defining the Constant Strain (CS) of a virtual backbone in the center of the robot to be our configuration variable. Subsequently, we derive the system dynamics in Euler-Lagrangian form. We notice that the resulting planar dynamics are underactuated and that the actuation forces are non-affine with respect to the control inputs, which are the motor angles. The latter is a peculiarity of these systems, rarely observed in other robots. Based on the model knowledge, we devise a control strategy shown in Fig. 1(a) that first maps end-effector positions to desired configurations and steady-state (feedforward) control inputs and then also applies a P-satI-D [8] feedback action on the collocated form [9] of the system dynamics. In summary, we state our contributions as (i) a closed-form solution for the inverse kinematics of a planar CS formulation, (ii) an Euler-Lagrangian dynamical model for planar HSA robots and its expression in collocated form, (iii) a provably stable model-based control strategy for guiding the end-effector of the robot towards a desired position in Cartesian space, and (iv) experimental verification of both the model and the controller. A video accompanies this paper explaining the methodology and displaying video recordings of the control experiments3.
Footnote 3: [https://youtu.be/5A5yhMibctQ](https://youtu.be/5A5yhMibctQ)

## 2 Technical approach

In the following, we consider a parallel HSA robot moving in a plane. First, we derive the kinematic and dynamic models. Subsequently, we devise a planning and control strategy to move the end-effector (i.e., the platform) to a desired position in Cartesian space.

### Kinematic model

Following the discrete Cosserat approach [10], we characterize the configuration space of the virtual backbone by assuming a CS model \({}_{\mathcal{V}}\xi(t)=\left[{}_{\mathcal{V}}\kappa_{\mathrm{be}}\ {}_{\mathcal{V}}\sigma_{\mathrm{sh}}\ {}_{\mathcal{V}}\sigma_{\mathrm{ax}}\right]^{\mathrm{T}}=\mathbb{I}_{3}\,q(t)\in\mathbb{R}^{3}\), where \(\kappa_{\mathrm{be}}\), \(\sigma_{\mathrm{sh}}\), and \(\sigma_{\mathrm{ax}}\) denote the bending, shear, and axial strain respectively. Given \(q\), the pose \(\chi=\left[p_{x}\ p_{y}\ \theta\right]^{\mathrm{T}}\in SE(2)\), and a point coordinate along the backbone \(s\in[0,l^{0}]\), the forward and inverse kinematics are provided in closed form as \[\chi=\pi(q,s)=\left[\begin{array}{c}\sigma_{\mathrm{sh}}\,\frac{\mathrm{s_{be}}}{\kappa_{\mathrm{be}}}+\sigma_{\mathrm{ax}}\,\frac{\mathrm{c_{be}}-1}{\kappa_{\mathrm{be}}}\\ \sigma_{\mathrm{sh}}\,\frac{1-\mathrm{c_{be}}}{\kappa_{\mathrm{be}}}+\sigma_{\mathrm{ax}}\,\frac{\mathrm{s_{be}}}{\kappa_{\mathrm{be}}}\\ \kappa_{\mathrm{be}}\,s\end{array}\right],\qquad q=\varrho(\chi,s)=\frac{\theta}{2s}\,\left[\begin{array}{c}2\\ p_{y}-\frac{p_{x}\,\mathrm{s_{\theta}}}{\mathrm{c_{\theta}}-1}\\ -p_{x}-\frac{p_{y}\,\mathrm{s_{\theta}}}{\mathrm{c_{\theta}}-1}\end{array}\right], \tag{1}\] where we use the shorthand notations \(\mathrm{s_{be}}=\sin(\kappa_{\mathrm{be}}s)\), \(\mathrm{c_{be}}=\cos(\kappa_{\mathrm{be}}s)\), \(\mathrm{s_{\theta}}=\sin(\theta)\), and \(\mathrm{c_{\theta}}=\cos(\theta)\).

Figure 1: **Panel (a):** Block scheme of the closed-loop system: we plan the steady-state behavior such that the end-effector matches the given desired position \(p_{\mathrm{ee}}^{\mathrm{d}}\). The outputs of this planning are the steady-state actuation \(\phi^{\mathrm{ss}}\) and a suitable end-effector orientation \(\theta_{\mathrm{ee}}^{\mathrm{d}}\). After leveraging inverse kinematics to identify the desired and current configuration, \(q\) is mapped into a collocated form where the inputs are decoupled. Finally, we use a P-satI-D feedback controller on the actuation coordinates \(\varphi\). **Panel (b):** Visualization of the operational workspace of a planar HSA robot consisting of FPU rods. The colored area within the black dashed borders represents the positions the end-effector (visualized as a dot) can reach. The coloring denotes the mean magnitude of actuation (i.e., twisting of the rods). Furthermore, we plot three sample configurations: the unactuated straight configuration \(q=[0,0,0]^{\mathrm{T}}\) (blue), maximum clockwise bending \(q=[-11.2\,\mathrm{rad/m},0.08,0.30]^{\mathrm{T}}\) (red), and maximum counter-clockwise bending \(q=[11.2\,\mathrm{rad/m},-0.08,0.30]^{\mathrm{T}}\) (green).
Furthermore, the forward kinematics of the physical rods \(\mathcal{P}_{i},\,i\in\{1,2\}\) can be derived by first following the transformations of the virtual backbone and then adding a local translation \([\pm r_{\mathrm{off}},0]^{\mathrm{T}}\) with \(r_{\mathrm{off}}\) being the offset distance from the virtual backbone to the centerline of the HSA rod. After closing the kinematic chain, we identify a mapping \(\beta_{i}:{}_{\mathcal{V}}\xi\rightarrow{}_{\mathcal{P}_{i}}\xi\) from the strains of the virtual backbone to the strains in the physical rods: \(\beta_{i}({}_{\mathcal{V}}\xi)=\left[{}_{\mathcal{V}}\kappa_{\mathrm{be}},\,{}_{\mathcal{V}}\sigma_{\mathrm{sh}},\,{}_{\mathcal{V}}\sigma_{\mathrm{ax}}\pm r_{\mathrm{off}}\,{}_{\mathcal{V}}\kappa_{\mathrm{be}}\right]^{\mathrm{T}}\). Prior work has shown that the auxetic trajectory of HSAs can be modeled by coupling the rest length \(\tilde{l}_{i}\) to the twist strain \(\kappa_{\mathrm{tw},i}\) of the \(i\)th HSA rod [4, 11]: \(\tilde{l}_{i}=(1+\epsilon_{i})\,l^{0}=(1+h_{i}\,C_{\epsilon}\,\kappa_{\mathrm{tw},i})\,l^{0}\), where \(l^{0}\) is the printed length of the rod and \(C_{\epsilon}\) a positive constant. The handedness \(h_{i}\in\{-1,1\}\) describes whether positive or negative twist angles are needed to elongate the closed HSA. For a given vector of rod twist angles \(\phi\in\mathbb{R}^{2}\) and after defining \(\phi_{i}^{+}=h_{i}\phi_{i}\), the elongation of the \(i\)th rod is then \(\epsilon_{i}=C_{\epsilon}\frac{\phi_{i}^{+}}{l^{0}}\). We provide examples in Fig. 1(b) of the operational workspace that can be achieved with this kinematic model.

Figure 2: Experimental setup: the parallel robot consists of four HSA rods connected by a platform at their distal end. Four servo motors actuate the HSAs. We track the pose of the end-effector with a motion capture system by attaching reflective markers to the platform.

### Dynamic model

We aim to devise a dynamic model in the Euler-Lagrange form \(M(q)\ddot{q}+C(q,\dot{q})\dot{q}+G(q)+K(q-q^{0})+D\dot{q}=\alpha(q,\phi)\), where \(M(q),C(q,\dot{q}),K,D\in\mathbb{R}^{3\times 3}\) are the inertia, Coriolis (derived with Christoffel symbols), elastic and damping matrices respectively. \(q^{0}\in\mathbb{R}^{3}\) captures the rest configuration. The terms \(G(q)\) and \(\alpha(q,\phi)\in\mathbb{R}^{3}\) describe the gravitational and actuation forces acting on the generalized coordinates. The state of the robot at time \(t\) can therefore be described by \(x(t)=\left[q^{\mathrm{T}}(t)\ \dot{q}^{\mathrm{T}}\right]^{\mathrm{T}}\in\mathbb{R}^{6}\). The inertia matrix is found by following the standard procedure of integrating mass and rotational inertia along the HSA rods [2]. Additionally, we consider the inertial contribution of the platform mounted to the distal end of the robot. Under the small strain assumption, the elastic forces of the \(i\)th HSA rod can be modeled as \[{}_{\mathcal{P}_{i}}\tau_{\mathrm{K}}=\begin{bmatrix}S_{\mathrm{be},i}(\phi_{i})&S_{\mathrm{b,sh}}&0\\ S_{\mathrm{b,sh}}&S_{\mathrm{sh},i}(\phi_{i})&0\\ 0&0&S_{\mathrm{ax},i}(\phi_{i})\end{bmatrix}\left(\begin{bmatrix}{}_{\mathcal{P}_{i}}\kappa_{\mathrm{be}}\\ {}_{\mathcal{P}_{i}}\sigma_{\mathrm{sh}}\\ {}_{\mathcal{P}_{i}}\sigma_{\mathrm{ax}}\end{bmatrix}-\begin{bmatrix}\kappa_{\mathrm{be}}^{0}\\ \sigma_{\mathrm{sh}}^{0}\\ \sigma_{\mathrm{ax}}^{0}+\epsilon_{i}(\phi_{i})\end{bmatrix}\right), \tag{2}\]
where \({}_{\mathcal{P}_{i}}\xi^{0}=\left[\kappa_{\rm be}^{0}\,\sigma_{\rm sh}^{0}\,\,\sigma_{\rm ax}^{0}\right]^{\rm T}\) denotes the rest strain, and \(S_{{\rm be},i}(\phi_{i})\), \(S_{{\rm sh},i}(\phi_{i})\), \(S_{{\rm ax},i}(\phi_{i})\) are the bending, shear, and axial stiffnesses, which are defined as linear functions with respect to the twist angle of the rod \(\phi_{i}\) [4; 11]: \[S_{{\rm be},i}(\phi_{i})=\hat{S}_{\rm b}+C_{S_{\rm b}}\,\phi_{i}^{+},\quad S_{{\rm sh},i}(\phi_{i})=\hat{S}_{\rm sh}+C_{S_{\rm sh}}\,\phi_{i}^{+},\quad S_{{\rm ax},i}(\phi_{i})=\hat{S}_{\rm ax}+C_{S_{\rm ax}}\,\phi_{i}^{+}. \tag{3}\] The coefficient \(S_{{\rm b},{\rm sh}}\) accounts for the elastic coupling between the bending and the shear strain. Subsequently, we project the forces into the virtual backbone by premultiplying with \(J_{\beta}^{\rm T}=\frac{\partial\beta}{\partial q}^{\rm T}\) and then sum the contribution of all rods. Finally, we group all terms depending on the control input \(\phi\) in \(\alpha(q,\phi)\) and everything else in \(K\). After modeling the dissipative forces in each HSA as \({\rm diag}(\zeta_{\rm be},\zeta_{\rm sh},\zeta_{\rm ax})\,{}_{\mathcal{P}_{i}}\dot{\xi}\), we derive the damping matrix in configuration space as \(D=\sum_{i=1}^{2}J_{\beta,i}^{\rm T}\,{\rm diag}(\zeta_{\rm be},\zeta_{\rm sh},\zeta_{\rm ax})\,J_{\beta,i}\,=\,2\,{\rm diag}\left((\zeta_{\rm be}+r_{\rm off}^{2}\,\zeta_{\rm ax}),\zeta_{\rm sh},\zeta_{\rm ax}\right)\). We open-source the derivation of the Euler-Lagrangian dynamics and a JAX implementation of a simulator based on them on GitHub4. We stress that (a) the derived dynamical model is not affine in the control input and (b) the system is underactuated.

Footnote 4: [https://github.com/tud-cor-sr/jax-soft-robot-modelling](https://github.com/tud-cor-sr/jax-soft-robot-modelling)
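To illustrate how the twist-dependent stiffnesses (3) and the auxetic elongation combine in the elastic force (2), here is a minimal sketch; it is not the authors' JAX implementation, and the flat argument list and array packing of the coefficients are our simplifications.

```python
import numpy as np

def elastic_force(strain, rest_strain, phi_i, h_i, l0, S_hat, C_S, S_b_sh, C_eps):
    """Elastic force of one HSA rod following (2)-(3).
    strain, rest_strain: [kappa_be, sigma_sh, sigma_ax] of the rod.
    S_hat, C_S: length-3 arrays of stiffness offsets and twist slopes."""
    strain = np.asarray(strain, dtype=float)
    phi_plus = h_i * phi_i                      # handedness-corrected twist angle
    S_be, S_sh, S_ax = np.asarray(S_hat) + np.asarray(C_S) * phi_plus  # eq. (3)
    S = np.array([[S_be, S_b_sh, 0.0],
                  [S_b_sh, S_sh, 0.0],
                  [0.0, 0.0, S_ax]])
    # Twisting also shifts the axial rest strain through the auxetic elongation.
    eps = C_eps * phi_plus / l0
    rest = np.asarray(rest_strain, dtype=float) + np.array([0.0, 0.0, eps])
    return S @ (strain - rest)
```

Both the stiffness matrix and the rest strain depend on the motor angle \(\phi_{i}\), which is precisely why the actuation forces \(\alpha(q,\phi)\) are not affine in the control input.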
### Control

Our goal is to control the end-effector, which is defined as the distal surface of the platform, to a desired position in Cartesian space \(p_{\rm ee}^{\rm d}\in\mathbb{R}^{2}\). However, the mapping into configuration space is not trivial, as we do not know which end-effector orientation \(\theta_{\rm ee}\) is feasible at steady state. To tackle this challenge, we perform steady-state planning, identifying admissible configurations \(q^{\rm d}\) and matching steady-state actuations \(\phi^{\rm ss}\), which allow the robot's end-effector to statically remain at \(p_{\rm ee}^{\rm d}\). More details on the used planning procedure can be found in Section 3.4. In principle, we can command \(\phi=\phi^{\rm ss}\) to achieve regulation towards the desired end-effector position. Nevertheless, we add a feedback controller to compensate for any errors in \(\phi^{\rm ss}\) caused by unmodelled effects such as hysteresis. Unfortunately, the non-affine actuation \(\alpha(q,\phi)\) would complicate the design of such a feedback controller. Therefore, we perform a first-order Taylor expansion of the actuation forces with respect to \(\phi\), resulting in a configuration-dependent actuation matrix \(A_{\phi^{\rm ss}}(q)=\frac{\partial\alpha}{\partial\phi}\big{|}_{\phi=\phi_{\rm ss}}\in\mathbb{R}^{3\times 2}\). This allows us to re-write the right side of the Equations of Motion (EOM) as \(\tau_{q}=\alpha(q^{\rm ss},\phi^{\rm ss})+A_{\phi^{\rm ss}}(q)\,u\), where \(u=\phi-\phi^{\rm ss}\) is the new control input. To improve the robustness of the control loop, we compute \(u\) with a P-satI-D control law [8]. However, our system is underactuated and in a non-collocated form. Therefore, we apply a coordinate transformation \(h:q\to\varphi\in\mathbb{R}^{3}\) recently introduced by Pustina et al. [9], which maps the EOM into a form where \(\phi\) applies direct forces on the actuated configuration variables. The map is given by \(h(q)=\left[\int_{0}^{t}\dot{q}^{\rm T}A_{\phi^{\rm ss}}(q)\,{\rm d}\tau,\,\sigma_{\rm sh}\right]^{\rm T}=\left[h_{1}(q),\,h_{2}(q),\,\sigma_{\rm sh}\right]^{\rm T}\) with \[\begin{array}{l}h_{i}(q)=C_{\rm S,ax}\,\frac{h_{i}}{l^{0}}\left[2\,\varepsilon_{i}(\phi_{i}^{\rm ss})\,(\pm r_{\rm off}\kappa_{\rm be}+\sigma_{\rm ax})\mp r_{\rm off}^{2}\frac{\kappa_{\rm be}^{0}}{2}\pm r_{\rm off}\,\sigma_{\rm ax}^{0}\,\kappa_{\rm be}\mp r_{\rm off}\,\kappa_{\rm be}\,\sigma_{\rm ax}+\sigma_{\rm ax}^{0}\,\sigma_{\rm ax}\\ -\frac{\sigma_{\rm ax}^{2}}{2}\right]+C_{\rm S,b}\,\frac{h_{i}}{l^{0}}\left[\kappa_{\rm be}^{0}\,\kappa_{\rm be}-\frac{\kappa_{\rm be}^{2}}{2}\right]+C_{\rm S,sh}\,\frac{h_{i}}{l^{0}}\left[\sigma_{\rm sh}^{0}\,\sigma_{\rm sh}-\frac{\sigma_{\rm sh}^{2}}{2}\right]+\hat{S}_{\rm ax}\,\frac{h_{i}}{l^{0}}\,C_{\varepsilon}\Big{[}\pm r_{\rm off}\,\kappa_{\rm be}+\sigma_{\rm ax}\Big{]}.\end{array} \tag{4}\] The Jacobian \(\frac{\partial h}{\partial q}\) is used to formulate the dynamics \(M_{\varphi}\ddot{\varphi}+\eta(\varphi,\dot{\varphi})+G_{\varphi}+K_{\varphi}+D_{\varphi}\dot{\varphi}=A_{\varphi}\,\phi\) in the collocated variables [7], where \(A_{\varphi}=\left[\mathbb{I}_{2}\ 0^{2\times 1}\right]^{\rm T}\). In the following, we will denote with the subscript \(a\) the first two actuated coordinates \(\varphi_{\mathrm{a}}\). Finally, the full control law of the _P-satI-D_ is given in collocated form as \[\phi=\phi^{\mathrm{ss}}+K_{\mathrm{p}}(\varphi_{\mathrm{a}}^{\mathrm{d}}-\varphi_{\mathrm{a}})-K_{\mathrm{d}}\dot{\varphi}_{\mathrm{a}}+K_{\mathrm{i}}\int_{0}^{t}\tanh(\gamma\,(\varphi_{\mathrm{a},t^{\prime}}^{\mathrm{d}}-\varphi_{\mathrm{a},t^{\prime}}))\,\mathrm{d}t^{\prime}, \tag{5}\] where \(K_{\mathrm{p}},K_{\mathrm{d}},K_{\mathrm{i}}\in\mathbb{R}^{2\times 2}\) are the proportional, derivative, and integral gains respectively, and \(\gamma\in\mathbb{R}^{2\times 2}\) horizontally compresses the hyperbolic tangent. While the proposed P-satI-D control law compensates gravity through \(\phi^{\mathrm{ss}}\), we can extend the approach to include gravity cancellation (_P-satI-D + GC_) by evaluating \(G_{\varphi,\mathrm{a}}\) at the current configuration: \[\phi=\phi^{\mathrm{ss}}-G_{\varphi,\mathrm{a}}(q^{\mathrm{d}})+G_{\varphi,\mathrm{a}}(q)+K_{\mathrm{p}}(\varphi_{\mathrm{a}}^{\mathrm{d}}-\varphi_{\mathrm{a}})-K_{\mathrm{d}}\dot{\varphi}_{\mathrm{a}}+K_{\mathrm{i}}\int_{0}^{t}\tanh(\gamma\,(\varphi_{\mathrm{a},t^{\prime}}^{\mathrm{d}}-\varphi_{\mathrm{a},t^{\prime}}))\mathrm{d}t^{\prime}. \tag{6}\] The implementation of all control laws is available on GitHub5.

Footnote 5: [https://github.com/tud-cor-sr/hsa-planar-control](https://github.com/tud-cor-sr/hsa-planar-control)

## 3 Experimental validation

### Experimental setup

We evaluate the system model and our proposed control approach on a robot consisting of four HSA rods.

Figure 3: Verification of the system model and the identified system parameters on an unseen trajectory with the HSA being randomly actuated through a GBN sequence: the solid line denotes the actual trajectory. In contrast, the dashed line visualizes the trajectory simulated with the system model. We report results for both FPU and EPU-based HSAs.

The material choice of the HSA is crucial and has a
significant influence on the resulting mechanical characteristics of the robot (e.g., blocked force, holding torque, bending stiffness, etc.) [12]. Furthermore, specific material requirements are dictated by the nature of the design of the HSA rod. The structure of the metamaterial is made of struts connected by living hinges. These living hinges need to be thin, flexible, and accommodate high strains [12]. Therefore, we decided to 3D-print the HSAs via digital projection lithography either from the photopolymer resin Carbon FPU 50 (stiffer) or the elastomeric polyurethane EPU 40 resin (softer). Each HSA rod is actuated by a Dynamixel MX-28 servo motor. The Dynamixel motors are set to use position control mode. The robot is mounted platform-down on a cage with an Optitrack motion capture system, which measures the SE(3) pose of the platform at 200 Hz. Our algorithms run within a ROS2 framework6. The pose measurements are first projected into the plane of actuation and serve as an input to the closed-form inverse kinematics introduced in (1). We use a Savitzky-Golay filter with a window duration of 0.1 s to numerically differentiate \(\chi_{\mathrm{ee}}(t)\) and \(q(t)\), and thereby obtain \(\dot{\chi}_{\mathrm{ee}}(t)\) and \(\dot{q}(t)\).

Footnote 6: [https://github.com/tud-cor-sr/ros2-hsa](https://github.com/tud-cor-sr/ros2-hsa)

### System identification

Next, we strive to identify the parameters used in our dynamic model. We assume the robot's geometric and mass density properties to be known or easily measurable. As knowledge about the damping coefficients is not required by the control law, only the experimental identification of elongation and stiffness characteristics remains. For this, we measure the response of the system to step and staircase actuation sequences. Afterward, the parameters are regressed using least squares. For the FPU-based robot, we identify \(C_{\varepsilon}^{\mathrm{FPU}}=0.0079\) m/rad, \(S_{\mathrm{be}}^{\mathrm{FPU}}=-2.5\cdot 10^{-5}+3.9\cdot 10^{-7}\,\frac{\phi_{1}^{+}}{l^{0}}\) Nm\({}^{2}\), \(S_{\mathrm{sh}}^{\mathrm{FPU}}=0.043+0.0029\,\frac{\phi_{1}^{+}}{l^{0}}\) N, \(S_{\mathrm{ax}}^{\mathrm{FPU}}=0.74+0.0098\,\frac{\phi_{1}^{+}}{l^{0}}\) N, and \(S_{\mathrm{b,sh}}^{\mathrm{FPU}}=-5.0\cdot 10^{-4}\) Nm/rad, where \(l^{0}=0.059\) m. Furthermore, we regress \(C_{\varepsilon}^{\mathrm{EPU}}=0.0098\) m/rad, \(S_{\mathrm{be}}^{\mathrm{EPU}}=5.7\cdot 10^{-4}-9.7\cdot 10^{-6}\,\frac{\phi_{1}^{+}}{l^{0}}\) Nm\({}^{2}\), \(S_{\mathrm{sh}}^{\mathrm{EPU}}=0.59-0.00047\,\frac{\phi_{1}^{+}}{l^{0}}\) N, \(S_{\mathrm{ax}}^{\mathrm{EPU}}=5.7+0.015\,\frac{\phi_{1}^{+}}{l^{0}}\) N, and \(S_{\mathrm{b,sh}}^{\mathrm{EPU}}=-0.000\,48\) Nm/rad for the EPU HSAs, which have the same length as the FPU HSAs. Finally, we identify the axial rest strain \(\sigma_{\mathrm{ax}}^{0}\) before the start of each experiment. We notice that the EPU-based HSA robot is approximately one order of magnitude more flexible compared to the FPU-based robot.

### Model verification

We verify the accuracy of the proposed system model and the identified parameters on trajectories unseen during system identification. We generate the trajectories by actuating the robot with a Generalized Binary Noise (GBN) [13] sequence with a settling time of 0.5 s, randomly sampling \(\phi(k)\sim\mathcal{U}(0,\phi_{\mathrm{max}})\) at each time step \(k\). We simulate the model evolution with a Dormand-Prince 5(4) integrator and a time step of 0.1 ms. Fig.
3(a) shows the model exhibiting excellent accuracy for representing the behavior of FPU-based HSA robots. For EPU-based HSA robots, we observe in Fig. 3(d) more significant errors in the shear estimate. Specifically, the CS model does not seem sufficient anymore for capturing the shape of the robot, particularly for larger bending angles. Therefore, we suggest for future work to employ kinematic models with more Degrees of Freedom (DOF), such as the Piecewise Constant Strain (PCS) model proposed, for example, in [11].

### Steady-state planning

Our approach, as detailed in Section 2.3, requires us, for a given desired end-effector position \(p_{\rm ee}^{\rm d}\), to identify a statically-feasible configuration \(q^{\rm d}\) with the matching steady-state actuation \(\phi^{\rm ss}\). We perform online static inversion to identify admissible desired configurations \(q^{\rm d}\) and matching steady-state control inputs \(\phi^{\rm ss}\) during our experiments involving the FPU HSA robots. First, we substitute the inverse kinematics \(\varrho_{\rm ee}(\chi_{\rm ee})\) into the static EOM. Then, we find the roots of the equation \(G\circ\varrho_{\rm ee}(\chi_{\rm ee}^{\rm d})+K\circ\varrho_{\rm ee}(\chi_{\rm ee}^{\rm d})-\alpha(\varrho_{\rm ee}(\chi_{\rm ee}^{\rm d}),\phi^{\rm ss})\) with respect to \((\theta_{\rm ee},\phi_{1},\phi_{2})\) using nonlinear least-squares while enforcing constraints on the sign of \(\phi\). We solve this optimization problem with projected gradient descent. In contrast, the static inversion optimization problem is not well-behaved for the identified EPU system parameters. Instead, we rely on rolling out the dynamics over a duration \(t_{\rm ss}\) to steady state and then optimize the steady-state input \(\phi^{\rm ss}\) such that the final end-effector error \(\|p_{\rm ee}^{\rm d}-p_{\rm ee}^{\rm ss}\|\) is as small as possible. We formalize this optimization problem in a least-squares fashion \[\begin{split}\phi^{\rm ss}=\underset{\phi}{\rm argmin}&\frac{1}{2}\,\|p_{\rm ee}^{\rm d}-p_{\rm ee}^{\rm ss}(\phi)\|_{2}^{2},\\ {\rm s.t.}& x^{\rm ss}=x(t_{0})+\int_{t_{0}}^{t_{\rm ss}}f(x(t),\phi)\,{\rm d}t,\quad\chi_{\rm ee}^{\rm ss}=\begin{bmatrix}p_{\rm ee}^{\rm ss}\\ \theta_{\rm ee}^{\rm ss}\end{bmatrix}=\pi_{\rm ee}(q^{\rm ss}),\end{split} \tag{7}\] where \(\dot{x}(t)=f(x(t),\phi)\) are the nonlinear state-space dynamics based on the EOM derived in Section 2.2 and \(\phi\in\mathbb{R}^{2}\) is constant in time. We solve (7) online using the Levenberg-Marquardt algorithm. Finally, we choose \(q^{\rm d}=q^{\rm ss}\) and \(\chi_{\rm ee}^{\rm d}=\pi_{\rm ee}(q^{\rm d})\).

### Closed-loop control

Next, we implement the closed-loop control strategy laid out in Section 2.3. After evaluating the control law at a rate of \(40\,\rm Hz\) and saturating the control inputs to the ranges \([0,3.40]\,\rm rad\) for FPU and \([0,4.71]\,\rm rad\) for EPU, respectively, we map \(\phi\in\mathbb{R}^{2}\) to desired positions of the four motors. For this, we take into account the handedness of the HSAs and apply the same actuation magnitude to both rods on the same side of the virtual backbone. After tuning the gains for the feedback part of the model-based control laws in (5) and (6), we select \(K_{\rm p}=\rm diag(0.3,0.3)\), \(K_{\rm i}=\rm diag(0.05,0.05)\,1/s\), \(K_{\rm d}=\rm diag(0.01,0.01)\,s\), and
\(\gamma=\text{diag}(100,100)\).

Figure 4: Step response of the _baseline PID_, _P-satI-D_ (with gravity compensation), and _P-satI-D + GC_ (with gravity cancellation) controllers on an FPU-based HSA robot.

Furthermore, we report the performance of a model-free PID controller as a baseline. Here, the control input in task-space is given by \(u_{\text{ts}}=\left[u_{\text{ts,x}}\:u_{\text{ts,y}}\right]^{\text{T}}=K_{\text{p}}^{\text{PID}}\left(p_{\text{ee}}^{\text{d}}-p_{\text{ee}}\right)-K_{\text{d}}^{\text{PID}}\,\dot{p}_{\text{ee}}+K_{\text{i}}^{\text{PID}}\int_{0}^{t}p_{\text{ee,t}^{\prime}}^{\text{d}}-p_{\text{ee,t}^{\prime}}\,\,\text{d}t^{\prime}\), which is then mapped to the actuation via \(\phi=\left[u_{\text{ts,x}}+u_{\text{ts,y}},-u_{\text{ts,x}}+u_{\text{ts,y}}\right]^{\text{T}}\). Here, we select \(K_{\text{p}}^{\text{PID}}=\text{diag}(10,10)\,\text{rad}/\text{m}\), \(K_{\text{i}}^{\text{PID}}=\text{diag}(110,110)\,\text{rad}/\text{m}/\text{s}\), and \(K_{\text{d}}^{\text{PID}}=\text{diag}(0.25,0.25)\,\text{rad}\,\text{s}/\text{m}\).

**Evaluation:** We define a reference trajectory \(p_{\text{ee}}^{\text{d}}(k),k\in\{1,\dots,n_{k}\}\) with a duration of \(110\,\text{s}\), consisting of eleven step functions. We report the Root Mean-Squared Error (RMSE) metric \(\sqrt{\sum_{k=1}^{n_{k}}\frac{\|p_{\text{ee}}^{\text{d}}(k)-p_{\text{ee}}(k)\|_{2}^{2}}{n_{k}}}\) for assessing the control performance, where \(p_{\text{ee}}(k)\) is the actual trajectory of the end-effector.

**Control of an FPU-based HSA robot:** The _baseline PID_ achieves an RMSE of \(5.86\,\text{mm}\) with respect to the reference trajectory. The _P-satI-D_ based on (5) (with gravity compensation) exhibits an RMSE of \(4.17\,\text{mm}\). Similarly, the _P-satI-D + GC_ based on (6) (with gravity cancellation) displays an RMSE of \(4.13\,\text{mm}\). We present a comparison of the three different controllers for a step response in Fig. 4 and plot the entire trajectories of the _baseline PID_ and the _P-satI-D_ in Figures 5 and 6, respectively. Additionally, we discretize various continuous reference trajectories into setpoints: star trajectory (\(873\) setpoints and duration of \(109\,\text{s}\)), the flame of the TU Delft logo (\(680\) setpoints and duration of \(85\,\text{s}\)), the contour of the MIT-CSAIL logo (\(1046\) setpoints and duration of \(131\,\text{s}\)), and the outline of a bat at three different sizes (\(1510\) setpoints and \(189\,\text{s}\) duration). The resulting Cartesian evolutions of the _P-satI-D_ controller tracking these continuous references are displayed in Fig. 7.

Figure 5: Experimental results for tracking a reference trajectory of eleven step functions with the baseline PID controller on an FPU-based HSA robot. **Panel (a):** End-effector position with the dotted and solid lines denoting the task-space reference and actual position, respectively. **Panel (b):** The planned (dotted) and the actual (solid) configuration. **Panel (c):** The planned (dotted) and the actual (solid) actuation coordinates of the collocated system. **Panel (d):** The saturated planar control inputs are visualized with solid lines, and the computed steady-state actuation with dotted lines.

The step response in Fig. 4 shows how the two model-based controllers _P-satI-D_ and _P-satI-D + GC_ are able to leverage the planned \(\phi^{\text{ss}}\) and \(q^{\text{d}}\) to achieve a fast response time of roughly \(1.2\,\text{s}\). In contrast, the baseline PID needs to wait for the integral error to build up and thus has a much slower response time of approximately \(4.2\,\text{s}\).
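For concreteness, a minimal discrete-time sketch of the P-satI-D law (5) with the gains reported above; the explicit integrator state and the time step handling are our assumptions rather than details given in the paper.

```python
import numpy as np

K_p = np.diag([0.3, 0.3])
K_i = np.diag([0.05, 0.05])      # 1/s
K_d = np.diag([0.01, 0.01])      # s
gamma = np.diag([100.0, 100.0])

def p_sat_i_d_step(phi_ss, varphi_a, varphi_a_dot, varphi_a_des, int_state, dt):
    """One step of the P-satI-D law (5) on the collocated coordinates.
    int_state accumulates the saturated integral and is returned updated."""
    err = varphi_a_des - varphi_a
    int_state = int_state + np.tanh(gamma @ err) * dt   # saturated integral term
    phi = phi_ss + K_p @ err - K_d @ varphi_a_dot + K_i @ int_state
    return phi, int_state
```

The hyperbolic tangent bounds the integrand, so the integral term grows at most linearly in time and cannot wind up as aggressively as in the baseline PID.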
Furthermore, overshooting caused by the baseline PID is usually more extensive than that caused by the model-based controllers. We conclude that _P-satI-D_ (gravity compensation) and _P-satI-D + GC_ (gravity cancellation) exhibit quite similar behavior. Sometimes, _P-satI-D_ exhibits undershooting at the beginning of the transient and _P-satI-D + GC_ overshooting towards the end of the transient (see Fig. 4(a)).

**Control of an EPU-based HSA robot:** Tracking the reference trajectory of eleven step functions with an EPU-based robot, the _baseline PID_ controller has an RMSE of 4.40 mm. The _P-satI-D_ (with gravity compensation) is able to achieve an RMSE of 3.63 mm. The _P-satI-D + GC_ controller exhibits similar performance (RMSE of 3.71 mm). We visualize the step response of all three controllers in Fig. 9 and the entire trajectory of the _P-satI-D_ controller in Fig. 10. Again, we notice that the response time of the model-based controllers (0.54 s) is much shorter than the response time of the baseline PID (3.84 s). Furthermore, the importance of a model-based control law is motivated by the oscillations in the transient of the baseline PID (see x-coordinate in Fig. 9(a)). The steady-state error for the model-based controllers on the EPU material is slightly higher compared to the FPU material, as seen in Figures 9(a) & 10(a). In Section 3.3, we noticed that the shear model does not fully capture the actual system behavior. This then results in an error in the planned desired configuration \(q^{\rm d}\), which the controller is not able to resolve because of the underactuation of the robot (see Fig. 10(b)).

Figure 6: Experimental results for tracking a reference trajectory of eleven step functions with the P-satI-D controller on an FPU-based HSA robot. **Panel (a):** End-effector position with the dotted and solid lines denoting the task-space reference and actual position, respectively. **Panel (b):** The planned (dotted) and the actual (solid) configuration. **Panel (c):** The planned (dotted) and the actual (solid) actuation coordinates of the collocated system. **Panel (d):** The saturated planar control inputs are visualized with solid lines, and the computed steady-state actuation with dotted lines.

## 4 Experimental insights

In this work, we have shown effective, model-based regulation with planar HSA robots. The conducted experiments gave us deep insights into the special characteristics of HSAs and how well our model is able to capture them. We see excellent agreement for predicting the dynamical behavior of HSA robots made of FPU material. For EPU-based HSA robots, we observe that the model does not fully capture the shear dynamics.

Figure 8: Sequence of stills for the large bat trajectory performed with the P-satI-D controller on the FPU robot. The red and black dots visualize the desired and current end-effector positions, respectively. The past trajectory is plotted in red (reference) and black (actual). The blue line renders the shape of the virtual backbone.

Figure 7: Cartesian evolution of the proposed P-satI-D controller (solid lines) tracking various continuous reference trajectories (dotted lines) on the FPU robot.
2307.01842
Universality in the tripartite information after global quenches: spin flip and semilocal charges
We study stationary states emerging after global quenches in which the time evolution is under local Hamiltonians that possess semilocal conserved operators. In particular, we study a model that is dual to quantum XY chain. We show that a localized perturbation in the initial state can turn an exponential decay of spatial correlations in the stationary state into an algebraic decay. We investigate the consequences on the behavior of the (R\'enyi-$\alpha$) entanglement entropies, focusing on the tripartite information of three adjacent subsystems. In the limit of large subsystems, we show that in the stationary state with the algebraic decay of correlations the tripartite information exhibits a non-zero value with a universal dependency on the cross ratio, while it vanishes in the stationary state with the exponential decay of correlations.
Vanja Marić
2023-07-04T17:44:56Z
http://arxiv.org/abs/2307.01842v2
# Universality in the tripartite information after global quenches: spin flip and semilocal charges

###### Abstract

We study stationary states emerging after global quenches in which the time evolution is under local Hamiltonians that possess semilocal conserved operators. In particular, we study a model that is dual to quantum XY chain. We show that a localized perturbation in the initial state can turn an exponential decay of spatial correlations in the stationary state into an algebraic decay. We investigate the consequences on the behavior of the (Renyi-\(\alpha\)) entanglement entropies, focusing on the tripartite information of three adjacent subsystems. In the limit of large subsystems, we show that in the stationary state with the algebraic decay of correlations the tripartite information exhibits a non-zero value with a universal dependency on the cross ratio, while it vanishes in the stationary state with the exponential decay of correlations.

## I Introduction

This is the third paper in a series of works about the tripartite information in the stationary states after global quenches, the first two being ref. [1] and ref. [2], which will be referred to in the following as Paper I and Paper II respectively. Paper II proves the results announced in Paper I concerning quantum quenches from ground states of critical Hamiltonians and bipartitioning protocols. The present paper completes the proofs of the results announced in Paper I and deals with quenches in which semilocal conservation laws affect the dynamics. A quantum quench is a protocol in which the system is prepared in the ground state of some local Hamiltonian and is suddenly left to evolve under a different local Hamiltonian. It is perhaps the simplest way to induce non-equilibrium dynamics and as such it has been thoroughly investigated [3; 4; 5]. In global quenches, in particular, the two Hamiltonians are macroscopically different. It has been established that in global quenches quite general isolated quantum many-body systems locally relax to thermal states, as explained by the eigenstate thermalization hypothesis [6; 7; 8; 9]. Integrable systems are an exception [10]. Their stationary properties are instead captured by generalized Gibbs ensembles (GGE) [11; 12; 13; 14], which carry memory of additional conserved charges. While most of the established results on global quenches concern translationally invariant systems, there has also been a lot of effort to relax the assumption of translational invariance. Bipartitioning protocols [15] have been studied substantially, in which the initial state consists of two macroscopically different parts, for example a domain wall. The theory of generalized hydrodynamics [16; 17; 18] (GHD) explains that in bipartitioning protocols the system can at late times be described in terms of locally quasi-stationary states that depend on the ratio of the distance from the inhomogeneity and the time. At infinite time (in the thermodynamic limit) around the initial inhomogeneity, in particular, the system relaxes to a stationary state, usually called a non-equilibrium stationary state (NESS). Among the standard tools to capture the universal properties of systems are the entanglement entropies. At criticality the entanglement entropy of a connected block exhibits a simple logarithmic scaling with the subsystem size, where the prefactor is proportional to the central charge of the underlying conformal field theory (CFT) [19; 20; 21; 22; 23].
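For reference, the logarithmic scaling mentioned here has the well-known explicit form (a standard CFT result, quoted for orientation, with \(\ell\) the length of the block and \(c^{\prime}_{\alpha}\) a non-universal constant) \[S_{\alpha}(\ell)=\frac{c}{6}\left(1+\frac{1}{\alpha}\right)\log\ell+c^{\prime}_{\alpha},\] which for the von Neumann entropy (\(\alpha\to 1\)) reduces to \(S_{1}(\ell)=\frac{c}{3}\log\ell+c^{\prime}_{1}\).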
The entanglement entropies have been thoroughly studied also after global quenches and it has been established that they saturate to a value that is extensive with the size of the subsystem [24; 25; 26; 27; 28; 29; 30; 31; 32]. In Papers I and II it has been pointed out that the entanglement entropy of a connected block after a global quench can exhibit, both in translationally invariant quench protocols and in NESS, besides the extensive term, also a subdominant term that grows logarithmically with the subsystem size (see eq. (6)), similarly to the universal leading term in CFT. Such subextensive logarithmic terms have been found also in other works on NESS [33; 34; 35]. Because of the similarity of the subleading logarithmic term to the universal leading term in CFT, it is desirable to consider a quantity that removes the extensive and boundary contributions of the entropies, leaving behind a potentially universal quantity. In Papers I and II a special linear combination of the entropies with such a property, namely the tripartite information [36] of three adjacent blocks, has been considered for a wide class of quench protocols in non-interacting spin chains. It has been shown that quenches from critical points and bipartitioning protocols can exhibit stationary states with a non-zero tripartite information, which is accompanied by the subdominant logarithmic term in the entanglement entropy and the existence of spatial correlation functions that decay algebraically, similarly to CFT. In 1+1-dimensional CFT the tripartite information of three adjacent blocks is a model-dependent function with a universal dependency on the cross ratio [23; 37]. Remarkably, tripartite information in the stationary states of the aforementioned global quench protocols exhibits the same universal dependency. A property making it different from CFT is the existence of a nonzero "residual tripartite information", introduced in Paper I. Localized perturbations of initial states are normally not expected to have any macroscopic effects at large times. For systems of non-interacting fermions it has been proven under quite general conditions that the equilibration towards a GGE is resilient to localized perturbations, given that the initial state has a finite correlation length and the evolution Hamiltonian is translationally invariant [38]. However, memory effects can arise following a _local_ quench in quantum spin chains, related to the non-locality of the mapping between spins and fermions [39; 40; 41; 42; 43] and to jammed states [44; 45]. Recently it has been found that the system can keep the memory of localized perturbations even following a _global_ quench, if it possesses semilocal conserved operators [46]. In such a system a single spin flip in the initial product state can induce a subextensive logarithmic term in the entropy after a global quench [35], which hints at a possible nontrivial tripartite information. Semilocal charges are conserved operators whose density does not necessarily commute with distant localized operators. Sometimes their densities can be interpreted as semi-infinite strings. Semilocal charges enable the existence of string order after global quenches and, in general, they have to be included in the GGE to capture correctly the properties of the stationary state [35]. Furthermore, in such systems a single localized perturbation in the initial state can change the magnetization in the stationary state [46] and induce time growth of macroscopic entanglement [47]. 
Similar semilocal symmetries have also been discussed in the context of bistability of driven-dissipative fermionic systems [48]. Here we study a system with semilocal charges and show that in such a system a single spin flip in the initial product state can turn an exponential decay of spatial correlations in the stationary state into an algebraic decay. We then study the consequences on the behavior of the entanglement entropies, focusing on the tripartite information. We show that a single spin flip induces the behavior of the tripartite information found in bipartitioning protocols and quenches from ground states of critical systems. The paper is organized as follows. In the remainder of the introduction we discuss the tripartite information (section I.1) and we introduce the model and the quench protocols under consideration (section I.2). In section II we present the results of the paper. Their derivation is presented afterwards, in section III. Conclusions are drawn in section IV. ### Tripartite Information Given a subsystem \(A\) described by a reduced density matrix \(\rho_{A}\), the von Neumann entanglement entropy is defined as \[S_{1}(A)=-\mathrm{tr}\left(\rho_{A}\log\rho_{A}\right). \tag{1}\] It corresponds to the limit \(\alpha\to 1\) of the Rényi entanglement entropies \[S_{\alpha}(A)=\frac{1}{1-\alpha}\log\mathrm{tr}\left(\rho_{A}^{\alpha}\right). \tag{2}\] Typically, Rényi entropies for \(\alpha=2,3,\ldots\) are more accessible to computations than the von Neumann entropy and sometimes the latter can be obtained from the former using the replica trick [20]. In pure states the entanglement entropies are a measure of quantum entanglement. In mixed states, such as thermal states and GGEs, this is no longer the case, as the entropies carry contributions that are extensive with the size of the subsystem and that are largely due to classical correlations. Such extensive contributions are cancelled in the (Rényi-\(\alpha\)) mutual information. Given two subsystems, \(A\) and \(C\), the mutual information [36] is defined as \[I_{2}^{(\alpha)}(A,C)=S_{\alpha}(A)+S_{\alpha}(C)-S_{\alpha}(AC)\;. \tag{3}\] Here and in the rest of the paper \(AC\) stands for the union \(A\cup C\) of two sets \(A,C\). For large disjoint blocks \(A,C\) in the configuration of figure 1 the mutual information cancels both the extensive and the boundary contributions of the entropies. The mutual information for \(\alpha=1\) is non-negative and is a measure of total correlations, classical and quantum, between \(A\) and \(C\) [49]. Moreover, mutual information provides an upper bound for connected correlation functions [50]. We mention that when \(A\) and \(C\) constitute the whole lattice, one being the complement of the other, there are area laws [50; 51; 52; 53; 54; 55] that render the mutual information finite in thermal states. Figure 1: We compute the tripartite information of adjacent blocks \(A\), \(B\) and \(C\) in stationary states after global quenches. Another quantity that cancels the extensive and the boundary contributions of the entropies is the (Rényi-\(\alpha\)) tripartite information [36], defined for three subsystems \(A,B,C\) as \[I_{3}^{(\alpha)}(A,B,C)=S_{\alpha}(A)+S_{\alpha}(B)+S_{\alpha}(C)-S_{\alpha}(AB) -S_{\alpha}(AC)-S_{\alpha}(BC)+S_{\alpha}(ABC)\;, \tag{4}\] which can be expressed in terms of the mutual information as \[I_{3}^{(\alpha)}(A,B,C)=I_{2}^{(\alpha)}(A,B)+I_{2}^{(\alpha)}(A,C)-I_{2}^{( \alpha)}(A,BC)\;. 
\tag{5}\] For three adjacent blocks, presented in figure 1, unlike the mutual information, it has the desirable property of remaining bounded in the limit \(1\ll|B|\ll|A|,|C|\), which will be of central interest. Here and in the following \(|A|\) stands for the size of \(A\). Namely, let us suppose that the entanglement entropy of a (large) connected block scales as \[S_{\alpha}(A)=a_{\alpha}|A|+b_{\alpha}\log|A|+c_{\alpha}\;, \tag{6}\] for some constants \(a_{\alpha},b_{\alpha},c_{\alpha}\). This is the case both in CFT (\(a_{\alpha}=0,b_{\alpha}\neq 0\)) and in the stationary states studied in this work (\(a_{\alpha}\neq 0\)). Then, for the configuration in figure 1 the relation between the tripartite information of \(A,B,C\) and the mutual information of \(A,C\) is (in the limit of large blocks) \[I_{2}^{(\alpha)}(A,C)=-a_{\alpha}\log(1-x)+I_{3}^{(\alpha)}(A,B,C)\;, \tag{7}\] where \[x=\frac{|A||C|}{(|A|+|B|)(|B|+|C|)} \tag{8}\] is the cross ratio. The mutual and the tripartite information can thus be simply obtained one from another, and the tripartite information describes a subleading bounded contribution to the mutual information in the limit \(x\to 1^{-}\), which corresponds to the limit \(1\ll|B|\ll|A|,|C|\). Note that, in general, the tripartite information can have any sign [56; 57] (even for \(\alpha=1\)). The tripartite information has been studied in many settings. In the context of topological order in two space dimensions it is usually called simply "topological entanglement entropy" [58]. It has been studied in quantum field theory, both in 1+1 dimensions [59; 57; 56; 57; 70; 71; 72], mainly CFT, and in higher dimensions [73; 74; 75; 56]. In holographic theories it has been shown that the tripartite information is never positive [76]. The tripartite information has also been studied in continuously monitored chains [77], on Hamming graphs [78] and, with the partition at different time slices, as a diagnostic tool for quantum scrambling [79; 80; 81; 82]. Finally, it has also been addressed after global quenches in closed quantum systems [1; 2; 83; 84]. In this work we focus on the tripartite information of three large adjacent subsystems embedded in an infinite chain (see figure 1). We refer the reader to section 2 of Paper II for a discussion of the behavior of the tripartite information in this setting for different systems. Here we note that it is expected to vanish when the spin correlation functions decay exponentially with distance, as is the case in thermal states [85] and ground states of gapped Hamiltonians [86]. An exception is provided by ground states of conformal critical systems, where the tripartite information is a model-dependent function of the cross ratio in (8) [23; 37]. For example, in the ground state of the XX chain the Rényi-2 tripartite information reads [37; 57; 60] \[I_{3}^{(2)}(A,B,C)=-\log 2+\log\left(1+\sqrt{1-x}+\sqrt{x}\right)\;. \tag{9}\] Papers I and II identified different global quench protocols that result in a stationary state in which the spin correlation functions decay algebraically (in space) and allow for non-zero tripartite information: bipartitioning protocols and quenches from ground states of critical systems. In the stationary states of these protocols the entropy of a large single (connected) block \(A\) satisfies scaling (6) with both \(a_{\alpha}\) and \(b_{\alpha}\) nonzero. 
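As a concrete numerical illustration of the definitions above, the following minimal Python sketch (ours, not part of the paper's numerics) evaluates the cross ratio (8) and the closed-form Rényi-2 tripartite information (9) of the XX ground state for a few configurations of three adjacent blocks.

```python
import numpy as np

def cross_ratio(la, lb, lc):
    """Cross ratio x of eq. (8) for adjacent blocks of lengths |A|, |B|, |C|."""
    return la*lc / ((la + lb)*(lb + lc))

def i3_xx_ground_state(x):
    """Renyi-2 tripartite information in the XX ground state, eq. (9)."""
    return -np.log(2) + np.log(1 + np.sqrt(1 - x) + np.sqrt(x))

for la, lb, lc in ((16, 8, 16), (64, 2, 64), (100, 1, 100)):
    x = cross_ratio(la, lb, lc)
    print(f"|A|={la}, |B|={lb}, |C|={lc}: x={x:.4f}, I3={i3_xx_ground_state(x):+.4f}")
```

Note that as \(x\to 1^{-}\) this CFT expression tends to zero, in contrast with the quench stationary states discussed next, where the limit is \(-\log 2\).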
The tripartite information of three large adjacent blocks in these stationary states exhibits a universal dependency on the cross ratio (8), similarly to critical systems. For example, the quench from the domain wall state \(|\ldots\uparrow\uparrow\uparrow\downarrow\downarrow\downarrow\ldots\rangle\) under the XY chain Hamiltonian results in the stationary state with Rényi-2 tripartite information \[I_{3}^{(2)}(A,B,C)=-\log 2+\log\left(1+\sqrt{1-x}\right)\;, \tag{10}\] which corresponds to (9) with the last term in the logarithm dropped. Differently from CFT, the tripartite information in the aforementioned stationary states is not symmetric under the interchange \(x\leftrightarrow 1-x\). This property is particularly transparent in the limit \(x=1^{-}\), which corresponds to the limit in which the central subsystem is much smaller than the others, i.e. the limit \(1\ll|B|\ll|A|,|C|\). In this limit the tripartite information has a nonzero value, termed "residual tripartite information" in Paper I. The nonzero value is \(-\log 2\) and it is common to all studied quench protocols with non-zero tripartite information. Moreover, this value is independent of the Rényi index \(\alpha\), so it applies also to the von Neumann tripartite information. It should be contrasted to the zero value found in equilibrium at any temperature, irrespective of criticality, or in other non-equilibrium settings, such as after quenches from ground states of gapped Hamiltonians. We note that there are also trivial examples with nonzero residual tripartite information, such as the GHZ state \(|\mathrm{GHZ}\rangle=(|\ldots\uparrow\uparrow\uparrow\ldots\rangle+|\ldots \downarrow\downarrow\downarrow\ldots\rangle)/\sqrt{2}\), which has tripartite information \(+\log 2\) independently of the size of the subsystems (as long as the subsystems do not comprise the whole system, in which case the tripartite information would be zero since the state is pure), but such examples differ from the systems we are interested in because they violate clustering. We note that algebraic decay of some spin correlation functions is not a sufficient condition for a non-zero tripartite information, as illustrated by the global quench from the ground state of the Ising chain with a critical transverse field (see Papers I and II). The latter reaches a stationary state in which some spin correlation functions decay with distance \(r\) as \(1/r^{4}\), yet the tripartite information is zero. This phenomenology seems to be related to the degree of the power law, since quenches with non-zero tripartite information possess correlations that decay only as \(1/r^{2}\). ### Model and Quench Protocols Model. We study the time evolution governed by the dual XY chain [87; 46], given by the Hamiltonian \[\mathbf{H}=\sum_{\ell=-\infty}^{\infty}\mathbf{\sigma}_{\ell-1}^{x}(J_{x}\mathbf{I}-J_{y} \mathbf{\sigma}_{\ell}^{z})\mathbf{\sigma}_{\ell+1}^{x}\;, \tag{11}\] where we assume \(0<|J_{y}|<|J_{x}|\). A particularly interesting property of the model is the existence of semilocal charges [46; 35], introduced in the following. We also mention that the special point \(J_{x}=J_{y}\) has been studied for its kinetic constraints [88; 89; 90]. 
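To make the model concrete, here is a small self-contained Python sketch (ours; dense matrices on a short open chain, which is only illustrative since the paper works in the thermodynamic limit) that builds the bulk terms of the Hamiltonian (11) and checks its invariance under the global spin flip discussed next.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.diag([1., -1.])

def embed(site_ops, L):
    """Embed a dictionary {site: 2x2 matrix} into an L-site chain."""
    return reduce(np.kron, [site_ops.get(l, I2) for l in range(L)])

def dual_xy_bulk(L, Jx, Jy):
    """Bulk terms of the dual XY Hamiltonian (11) on an open chain."""
    return sum(Jx*embed({l-1: sx, l+1: sx}, L)
               - Jy*embed({l-1: sx, l: sz, l+1: sx}, L)
               for l in range(1, L - 1))

L, Jx, Jy = 6, 1.0, 0.4
H = dual_xy_bulk(L, Jx, Jy)
P = embed({l: sz for l in range(L)}, L)   # global spin flip
print(np.allclose(H @ P, P @ H))          # True: H is spin-flip invariant
```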
The Hamiltonian is invariant under spin flip \(\mathcal{P}_{\sigma}^{z}\), where \[\mathcal{P}_{\sigma}^{z}[\mathbf{O}]=\lim_{n\to\infty}\left(\prod_{\ell=-n}^{n} \mathbf{\sigma}_{\ell}^{z}\right)\mathbf{O}\left(\prod_{\ell^{\prime}=-n}^{n}\mathbf{ \sigma}_{\ell^{\prime}}^{z}\right) \tag{12}\] for some localized operator \(\mathbf{O}\). Here and in the following we say that a local operator \(\mathbf{Q}=\sum_{\ell\in\mathbb{Z}}\mathbf{q}_{\ell}\), where the density \(\mathbf{q}_{\ell}\) is localized (meaning finite support), is even/odd under some transformation \(\mathcal{P}\) if \(\mathcal{P}[\mathbf{q}_{\ell}]=\pm\mathbf{q}_{\ell}\) respectively. We will use the bold notation exclusively for operators defined on the whole, infinite, chain. A charge \(\mathbf{Q}\) is an operator commuting with the Hamiltonian (\([\mathbf{Q},\mathbf{H}]=0\)). While for local charges \(\mathbf{Q}=\sum_{\ell\in\mathbb{Z}}\mathbf{q}_{\ell}\) the density \(\mathbf{q}_{\ell}\) is localized around site \(\ell\), semilocal charges of this model are characterised by density \(\mathbf{q}_{\ell}\) with support on all sites on one side of \(\ell\). Namely, defining the operators \(\mathbf{\Pi}_{\sigma,+}^{z}(\ell)\) by \[\mathbf{\Pi}_{\sigma,+}^{z}(\ell)\mathbf{\sigma}_{j}^{x,y}=\begin{cases}\mathbf{\sigma}_ {j}^{x,y}\mathbf{\Pi}_{\sigma,+}^{z}(\ell)&j<\ell\\ -\mathbf{\sigma}_{j}^{x,y}\mathbf{\Pi}_{\sigma,+}^{z}(\ell)&j\geq\ell\end{cases}, \qquad\left[\mathbf{\Pi}_{\sigma,+}^{z}(\ell),\mathbf{\sigma}_{j}^{z}\right]=0,\qquad \left(\mathbf{\Pi}_{\sigma,+}^{z}(\ell)\right)^{2}=\mathbf{1}, \tag{13}\] which can be thought of as semi-infinite strings \(\mathbf{\sigma}_{\ell}^{z}\mathbf{\sigma}_{\ell+1}^{z}\mathbf{\sigma}_{\ell+2}^{z}\ldots\), an example of a semilocal charge of the model in (11) is \[\mathbf{Q}^{(0,-)}=\frac{1}{2}\sum_{\ell=-\infty}^{\infty}\mathbf{\sigma}_{\ell-1}^{x} \left(\mathbf{I}-\mathbf{\sigma}_{\ell}^{z}\right)\mathbf{\sigma}_{\ell+1}^{y}\mathbf{\Pi}_{ \sigma,+}^{z}(\ell+2)\;. \tag{14}\] A complete set of one-site shift invariant semilocal charges of the model is given in section III.1.1. Their construction is directly related to the Kramers-Wannier duality. Semilocal charges are particularly interesting because they allow the system to keep the memory of localized perturbations in the initial state. Namely, while localized perturbations can affect the expectation value of localized charge densities \(\mathbf{q}_{\ell}\) at most for several sites \(\ell\), the effect on semilocal charges can be drastic. Quench protocols. In this work we study and compare two different global quench protocols: 1) _all-spin-up_ quench protocol: time evolution of a translationally invariant product state, \(|\Psi(t)\rangle=e^{-i\mathbf{H}t}\,|\Uparrow\rangle\); 2) _flipped-spin_ quench protocol: time evolution of the state in 1) with a flipped spin at a single site, \(|\Psi(t)\rangle=e^{-i\mathbf{H}t}\mathbf{\sigma}_{0}^{x}\,|\Uparrow\rangle\). Here and in the rest of the paper by \(\Uparrow\) we denote an infinite string of \(\uparrow\). We can thus also write \(\mathbf{\sigma}_{0}^{x}\,|\Uparrow\rangle=|\Uparrow\downarrow_{0}\Uparrow\rangle\). In the flipped-spin protocol the system will at large times reach quasi-stationary states at different rays \(\zeta=d/t\), where \(d\) is the distance from the site with the flipped spin. When speaking about the stationary state we will always refer to the NESS reached at infinite time around the initial inhomogeneity, corresponding to the ray \(\zeta=0\). 
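The two protocols can be illustrated with a toy exact-diagonalization sketch (ours; a short open chain and dense time evolution, so finite-size and boundary effects are not under control) that evolves both initial states and prints the local magnetization profiles.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.diag([1., -1.])

def embed(site_ops, L):
    return reduce(np.kron, [site_ops.get(l, I2) for l in range(L)])

def dual_xy_bulk(L, Jx, Jy):
    return sum(Jx*embed({l-1: sx, l+1: sx}, L)
               - Jy*embed({l-1: sx, l: sz, l+1: sx}, L)
               for l in range(1, L - 1))

L, Jx, Jy, t = 8, 1.0, 0.4, 1.5
up = np.zeros(2**L); up[0] = 1.0            # protocol 1: |up ... up>
flipped = embed({L//2: sx}, L) @ up         # protocol 2: one flipped spin
U = expm(-1j*t*dual_xy_bulk(L, Jx, Jy))
for name, psi in (("protocol 1", U @ up), ("protocol 2", U @ flipped)):
    mags = [(psi.conj() @ embed({l: sz}, L) @ psi).real for l in range(L)]
    print(name, np.round(mags, 3))
```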
Kramers-Wannier transformation. We consider a transformation that differs from the standard Kramers-Wannier duality map, responsible for the self-duality of the transverse-field Ising model, just in an additional rotation, \[\mathbf{\tau}_{j}^{x}=\mathbf{\sigma}_{j-1}^{x}\mathbf{\sigma}_{j}^{x}\,,\qquad\mathbf{\tau}_{j }^{z}\mathbf{\tau}_{j+1}^{z}=\mathbf{\sigma}_{j}^{z}\,, \tag{15}\] that we will also refer to as the Kramers-Wannier transformation. We will refer to the representations of the theory in terms of the \(\mathbf{\tau}\) operators as the dual picture. The transformation preserves the algebra of Pauli matrices, i.e. the \(\mathbf{\tau}\) operators satisfy the same algebra as the \(\mathbf{\sigma}\) ones. Relation (15) specifies the transformation only for localized operators even under \(\mathcal{P}_{\sigma}^{z}\), which are transformed to operators that are localized also in the dual picture and that are, moreover, even under \(\mathcal{P}_{\tau}^{x}\), where \[\mathcal{P}_{\tau}^{x}[\mathbf{O}]=\lim_{n\to\infty}\left(\prod_{\ell=-n}^{n}\mathbf{ \tau}_{\ell}^{x}\right)\mathbf{O}\left(\prod_{\ell^{\prime}=-n}^{n}\mathbf{\tau}_{ \ell^{\prime}}^{x}\right)\,. \tag{16}\] In this work, the full mapping is needed only for the discussion of semilocal charges. Details are given in section III.1. The Kramers-Wannier transformation maps the dual XY chain (11) to the XY chain \[\mathbf{H}=\sum_{\ell=-\infty}^{\infty}\left(J_{x}\mathbf{\tau}_{\ell}^{x}\mathbf{\tau}_ {\ell+1}^{x}+J_{y}\mathbf{\tau}_{\ell}^{y}\mathbf{\tau}_{\ell+1}^{y}\right)\, \tag{17}\] a very well known model mappable to free fermions [91], with substantially developed techniques for quantum quenches [92; 93; 94; 95; 96]. The state of all spin up maps into itself, while the state with a spin flip maps into a domain wall state. Namely, we have the identification \(|\Uparrow\rangle^{(\sigma)}=|\Uparrow\rangle^{(\tau)}\) and \(|\Uparrow\downarrow_{0}\Uparrow\rangle^{(\sigma)}=|\Uparrow\Downarrow\rangle^{( \tau)}\) with the domain wall between sites \(0\) and \(1\). Note that these are not unique identifications and one could, for example, identify \(|\Uparrow\rangle^{(\sigma)}\) with \(|\Downarrow\rangle^{(\tau)}\) instead of \(|\Uparrow\rangle^{(\tau)}\) or with any linear combination of the two. In a finite system [46] there is a unique choice, but for our purposes of studying local relaxation different choices are equivalent. In any case, a single spin flip changes the state macroscopically in the dual picture. Reduced density matrices and tripartite information. We stress that it is a highly non-trivial question how the Kramers-Wannier transformation affects the tripartite information. For example, the Jordan-Wigner transformation is also a duality transformation and there are important differences between the entanglement entropy of disjoint blocks of spins and fermions [97; 68], and therefore also in the tripartite information. In fact, this difference is crucial for the phenomenology discovered in Papers I and II. However, it turns out that the Kramers-Wannier transformation in the studied stationary states does not affect the tripartite information of three large adjacent blocks. For a system in a state \(|\Psi\rangle\), the reduced density matrix for subsystem \(X\) is given by \[\rho_{X}=\frac{1}{2^{|X|}}\sum_{\gamma_{\ell}\in\{0,x,y,z\},\ell\in X}\, \langle\Psi|\prod_{\ell\in X}\mathbf{\sigma}_{\ell}^{\gamma_{\ell}}\,|\Psi \rangle\bigotimes_{\ell\in X}\sigma^{\gamma_{\ell}}. 
\tag{18}\] Here the sum is over an orthogonal basis of operators on \(X\), given by all possible products of Pauli matrices, and we use the standard convention \(\sigma^{0}\equiv\mathbb{I}\). While the sites \(\ell\) of some physical subsystem \(X\) are associated to the indices of the operators \(\mathbf{\sigma}_{\ell}^{\gamma}\), in the dual picture we associate the notion of subsystem to the indices of the \(\mathbf{\tau}_{\ell}^{\gamma}\) operators. Accordingly, in the dual picture it is natural to consider the density matrix \[\rho_{X}^{\tau}=\frac{1}{2^{|X|}}\sum_{\gamma_{\ell}\in\{0,x,y,z\},\ell\in X }\,\langle\Psi|\prod_{\ell\in X}\mathbf{\tau}_{\ell}^{\gamma_{\ell}}\,|\Psi \rangle\bigotimes_{\ell\in X}\sigma^{\gamma_{\ell}}. \tag{19}\] For the studied quench protocols the density matrix in (19) can be assessed using the techniques developed for the model in (17). However, it is a non-trivial question how the density matrices (18) and (19), or the corresponding entanglement entropies, are related. These questions are some of the main problems we tackle in this work. As already mentioned, for a single block of spins \(X\) this problem has been addressed in [35]. The result is that density matrices (18) and (19) give different entanglement entropies in general, but the difference is bounded by a constant independent of the subsystem size \(|X|\). However, from this result we cannot draw conclusions about the tripartite information (4), which also receives a contribution from the entropy of disjoint blocks and which is, moreover, itself bounded. Thus, technically, the main problem of this work is to study the density matrix (19) when \(X\) consists of disjoint blocks \(A,C\), as presented in section III. In the end we are able to conclude that in the studied stationary states the differences in the entropies corresponding to density matrices (18) and (19) cancel in the tripartite information of three large adjacent blocks. ## II Results ### Correlation functions Computing the spin-correlation functions in the stationary states of protocols 1 and 2 is a straightforward application of the Kramers-Wannier duality and the Jordan-Wigner transformation for the dual model, as commented on in section III.2. The connected correlation functions of the \(z\)-components of spins in the stationary state following quench protocol 1 are exactly zero, \[\text{protocol 1:}\qquad\left\langle\mathbf{\sigma}_{0}^{z}\mathbf{\sigma}_{r}^{z} \right\rangle-\left\langle\mathbf{\sigma}_{0}^{z}\right\rangle\left\langle\mathbf{ \sigma}_{r}^{z}\right\rangle=0,\qquad r\geq 1, \tag{20}\] while in the stationary state of protocol 2 they exhibit algebraic decay with distance, \[\text{protocol 2:}\qquad\left\langle\mathbf{\sigma}_{0}^{z}\mathbf{\sigma}_{r}^{z} \right\rangle-\left\langle\mathbf{\sigma}_{0}^{z}\right\rangle\left\langle\mathbf{ \sigma}_{r}^{z}\right\rangle\simeq\frac{16}{\pi^{4}}\frac{1}{r^{4}}, \tag{21}\] where \(\simeq\) means asymptotically equal. The connected spin-correlation functions of other spin components are exactly zero (for large enough distance) both in the stationary state following quench protocol 1 and the one following protocol 2, \[\text{protocols 1 and 2:}\qquad\left\langle\mathbf{\sigma}_{0}^{\gamma}\mathbf{ \sigma}_{r}^{\gamma}\right\rangle-\left\langle\mathbf{\sigma}_{0}^{\gamma}\right \rangle\left\langle\mathbf{\sigma}_{r}^{\gamma}\right\rangle=0,\qquad\gamma=x,y,\ r\geq 5. \tag{22}\] These results deal only with the correlation functions of operators with support on one site, which already establish that there are algebraically decaying correlations in the stationary state of quench protocol 2. 
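The Pauli-string expansion (18) can be checked directly against a partial trace on a tiny chain; the following Python sketch (ours; brute force, only feasible for a handful of sites) does exactly that.

```python
import numpy as np
from functools import reduce
from itertools import product

P = {"0": np.eye(2), "x": np.array([[0., 1.], [1., 0.]]),
     "y": np.array([[0., -1j], [1j, 0.]]), "z": np.diag([1., -1.])}

def pauli_string(labels, sites, L):
    """Pauli string with the given labels on the given sites, identity elsewhere."""
    full = ["0"]*L
    for s, g in zip(sites, labels):
        full[s] = g
    return reduce(np.kron, [P[g] for g in full])

def rho_subsystem(psi, X, L):
    """Eq. (18): reduced density matrix of X from the Pauli expansion."""
    rho = np.zeros((2**len(X),)*2, dtype=complex)
    for labels in product("0xyz", repeat=len(X)):
        ev = psi.conj() @ pauli_string(labels, X, L) @ psi
        rho += ev * reduce(np.kron, [P[g] for g in labels])
    return rho / 2**len(X)

L, X = 4, [1, 2]
rng = np.random.default_rng(1)
psi = rng.normal(size=2**L) + 1j*rng.normal(size=2**L)
psi /= np.linalg.norm(psi)
rho = rho_subsystem(psi, X, L)
full = np.outer(psi, psi.conj()).reshape([2]*(2*L))
pt = np.einsum("aijdakld->ijkl", full).reshape(4, 4)   # partial trace over sites 0 and 3
print(np.allclose(rho, pt))                            # True
```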
For the stationary state of quench protocol 1 we provide evidence that there are no connected correlation functions decaying algebraically by computing the Rényi-2 mutual information. The results are presented in figure 2. Figure 2: Rényi-2 mutual information between subsystems \(A\) and \(C\) (see figure 1) for \(|A|=|C|=16\) and varying distance \(|B|\), in the stationary states following quench protocols 1 and 2. In the stationary state of protocol 1 mutual information decays exponentially with distance (\(y\)-axis is in the log scale), while in the stationary state of protocol 2 it decays algebraically. In the stationary state of protocol 1 the mutual information \(I_{2}^{(2)}(A,C)\) decays exponentially with the distance between \(A\) and \(C\), while it decays only algebraically in the stationary state of protocol 2. Assuming that the von Neumann mutual information \(I_{2}^{(1)}(A,C)\) behaves analogously, we can conclude that all connected correlation functions decay exponentially in protocol 1 [50]. We note that in the stationary state of quench protocol 2 it is possible to construct operators whose connected correlation functions decay only as \(1/r^{2}\), as opposed to the \(1/r^{4}\) decay of the spin correlation functions in eq. (21). This is the case for the operator \(\mathbf{\xi}_{\ell}\equiv\mathbf{\sigma}_{\ell-1}^{x}\mathbf{\sigma}_{\ell}^{z}\mathbf{ \sigma}_{\ell+1}^{x}\), with the correlation functions \[\text{protocol 2:}\qquad\left\langle\mathbf{\xi}_{0}\mathbf{\xi}_{2r-1}\right\rangle -\left\langle\mathbf{\xi}_{0}\right\rangle\left\langle\mathbf{\xi}_{2r-1}\right\rangle \simeq-\frac{4}{\pi^{2}r^{2}},\qquad\left\langle\mathbf{\xi}_{0}\mathbf{\xi}_{2r} \right\rangle-\left\langle\mathbf{\xi}_{0}\right\rangle\left\langle\mathbf{\xi}_{2r} \right\rangle=0,\;r\geq 1. \tag{23}\] ### String order As discussed in ref. [35] in the context of the all-spin-up quench protocol, string order can survive a global quench. Accordingly, the order in the stationary state was termed "non-equilibrium symmetry-protected topological order". To be more precise, a string of adjacent \(\mathbf{\sigma}_{\ell}^{z}\) operators has a non-zero expectation value in the stationary state even in the limit of infinite length of the string. This is related to the fact that the expectation value of a string of \(\mathbf{\sigma}^{z}\) in the dual picture becomes a two-point correlation function (not the connected one) of localized operators. We note that in quench protocol 2 this string order parameter vanishes. Explicitly, we have \[\text{protocol 1:}\qquad\lim_{r\to\infty}\left\langle\prod_{\ell=-r}^ {r}\mathbf{\sigma}_{\ell}^{z}\right\rangle =\frac{1}{4}\left(1+\frac{J_{y}}{J_{x}}\right)^{2}, \tag{24}\] \[\text{protocol 2:}\qquad\lim_{r\to\infty}\left\langle\prod_{\ell=-r}^ {r}\mathbf{\sigma}_{\ell}^{z}\right\rangle =0. \tag{25}\] Here we point out that there is a string order also in the stationary state of protocol 2, but given by a different string order parameter, which, on the other hand, vanishes in protocol 1. Figure 3: The analytical prediction for the Rényi-\(\alpha\) tripartite information of three large adjacent blocks (see figure 1) in the stationary state of the flipped-spin quench protocol, given by (29), as a function of the cross ratio (8), for different values of \(\alpha\). The curves do not differ much. The Rényi-\(\alpha\) tripartite information for any \(\alpha\) is equal to zero for \(x=0^{+}\) and equal to \(-\log 2\) for \(x=1^{-}\). By replica trick the result applies also to the von Neumann tripartite information. A nonzero value in the limit \(x=1^{-}\) was termed “residual tripartite information” in ref. [1]. 
Namely, we have \[\text{protocol 1:}\qquad\lim_{r\to\infty}\left\langle\mathbf{\sigma}_{-r-3}^ {x}\mathbf{\sigma}_{-r-2}^{z}\mathbf{\sigma}_{-r-1}^{y}\left(\prod_{\ell=-r}^{r}\mathbf{ \sigma}_{\ell}^{z}\right)\mathbf{\sigma}_{r+1}^{y}\mathbf{\sigma}_{r+2}^{z}\mathbf{\sigma} _{r+3}^{x}\right\rangle =0\;, \tag{26}\] \[\text{protocol 2:}\qquad\lim_{r\to\infty}\left\langle\mathbf{\sigma}_{-r-3}^ {x}\mathbf{\sigma}_{-r-2}^{z}\mathbf{\sigma}_{-r-1}^{y}\left(\prod_{\ell=-r}^{r}\mathbf{ \sigma}_{\ell}^{z}\right)\mathbf{\sigma}_{r+1}^{y}\mathbf{\sigma}_{r+2}^{z}\mathbf{\sigma} _{r+3}^{x}\right\rangle =-\frac{1}{\pi^{2}}\left(1+\frac{J_{y}}{J_{x}}\right)^{2}\;. \tag{27}\] The computation of the string order parameters is commented on in section III.2 and appendix A. ### Tripartite information Based on the representation of the reduced density matrix in the dual picture, derived in section III.1.3, we argue in section III.1.4 that the tripartite information in the stationary states of the studied global quenches is not affected by the Kramers-Wannier transformation, in the limit of large subsystems. In this way we reduce the problem of computing the tripartite information in the stationary state of quench protocols 1 and 2 to computing the tripartite information in the stationary state of the quench protocol in which the time evolution is with the XY model and the initial state is, respectively, 1) the state of all spin up \(\left|\Uparrow\right\rangle\), 2) the domain wall state \(\left|\Uparrow\Downarrow\right\rangle\). These problems have already been studied in detail in Papers I and II, so we quote the results derived there and compare them with exact numerical results for the second Rényi entropy, obtained using the methods developed in this work. In this way we find the following tripartite information in the stationary states of the studied quench protocols, in the limit of large subsystems: protocol 1: \[I_{3}^{(\alpha)}(A,B,C) =0,\] (28) protocol 2: \[I_{3}^{(\alpha)}(A,B,C) =\frac{1}{\alpha-1}\log\Bigl{[}\sum_{\begin{subarray}{c}\delta_ {j}\in\{0,\frac{1}{2}\}\\ j=1,\ldots,\alpha-1\end{subarray}}\left(\frac{\Theta(\vec{\delta}\,|\,\hat{\tau}_{x})}{\Theta(\vec{0}\,|\,\hat{\tau}_{x})}\right)^{2}\Bigr{]}-\log 2\;.\] (29) Here \(\hat{\tau}_{x}\) is the \((\alpha-1)\times(\alpha-1)\) period matrix of the Riemann surface \(\mathcal{R}_{\alpha}\) with elements [37] \[[\hat{\tau}_{x}]_{\ell n}=\frac{2i}{\alpha}\sum_{k=1}^{\alpha-1}\sin(\tfrac{ \pi k}{\alpha})\cos(\tfrac{2\pi k(\ell-n)}{\alpha})\tfrac{P_{k/\alpha-1}(2x-1)}{P_{k/\alpha-1}(1-2x)}\,, \tag{30}\] where \(P_{\mu}(z)\) denotes the Legendre functions and \(\Theta(\vec{z}\,|\,M)=\sum_{\vec{m}\in\mathbb{Z}^{\alpha-1}}e^{i\pi\vec{m}^{t}M\vec{m} +2\pi i\vec{m}\cdot\vec{z}}\) is the Siegel theta function. The formula (29) is plotted in figure 3 for several values of \(\alpha\). For \(\alpha=2\) it reduces simply to eq. (10). We note that eq. (29) was obtained in Paper I by establishing a correspondence between some contributions to the entanglement entropy of disjoint blocks in the expansion of ref. [68] and already known CFT results [98], while the results for \(\alpha=2\) and some simpler approximate formulas for higher \(\alpha\) were obtained by a direct computation in Paper II. Note also that the result (29) is independent of the parameters of the model (11). Figure 4: Rényi-2 tripartite information of three adjacent blocks \(A,B,C\) (see figure 1) of equal length, that we vary. In the stationary state of the all-spin-up quench protocol the tripartite information vanishes exponentially with the size of the subsystems, while in the stationary state of the flipped-spin quench protocol it tends to a non-zero value, for which we have an analytical prediction, given by eq. (10) (solid line). 
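As a cross-check of eqs. (29)-(30), the following Python sketch (ours; one-dimensional truncated theta sums and Legendre functions via the Gauss hypergeometric representation, so purely illustrative) verifies numerically that at \(\alpha=2\) the general formula reduces to the closed form (10).

```python
import numpy as np
from scipy.special import hyp2f1

def legendre_p(nu, z):
    """Legendre function P_nu(z) = 2F1(-nu, nu+1; 1; (1-z)/2)."""
    return hyp2f1(-nu, nu + 1.0, 1.0, (1.0 - z)/2.0)

def tau_hat(x):
    """The 1x1 period matrix (30) at alpha = 2 (purely imaginary)."""
    return 1j * legendre_p(-0.5, 2*x - 1) / legendre_p(-0.5, 1 - 2*x)

def theta(delta, tau, cutoff=60):
    """One-dimensional Siegel theta Theta(delta | tau), truncated sum."""
    m = np.arange(-cutoff, cutoff + 1)
    return np.exp(1j*np.pi*m**2*tau + 2j*np.pi*m*delta).sum().real

for x in (0.1, 0.5, 0.9):
    t = tau_hat(x)
    i3 = np.log(1 + (theta(0.5, t)/theta(0.0, t))**2) - np.log(2)
    closed = np.log(1 + np.sqrt(1 - x)) - np.log(2)
    print(f"x={x}: general formula {i3:.10f}, eq. (10) {closed:.10f}")
```

The agreement rests on the classical identity relating the ratio of theta constants to the elliptic modulus, with the period matrix (30) at \(\alpha=2\) reducing to the standard ratio of complete elliptic integrals.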
The tripartite information of three large adjacent blocks is zero in the stationary state of the all-spin-up quench protocol, while in the stationary state of the flipped-spin quench protocol it is nonzero. This conclusion is confirmed by numerical results for the second Rényi entropy, presented in figure 4. There it can be seen that in the stationary state of the all-spin-up quench protocol the tripartite information goes to zero exponentially fast with the subsystem size, while in the stationary state of protocol 2 the tripartite information reaches the asymptotic result (10). The tripartite information in the stationary state of the flipped-spin protocol exhibits a universal dependency on the cross ratio (8), as confirmed by numerical results in figure 5. The tripartite information is a function only of the cross ratio, notwithstanding that the initial state and the time evolution Hamiltonian are not related to CFT. Note that the tripartite information is zero in the limit \(x=0^{+}\), as in CFT. Differently from CFT, we find nonzero residual tripartite information \[\text{protocol 2:}\qquad I_{3}^{(\alpha)}(A,B,C)\stackrel{{ x \to 1^{-}}}{{=}}-\log 2\;, \tag{31}\] which corresponds to the limit of small separation between \(A\) and \(C\) with respect to their size, i.e. to the limit \(1\ll|B|\ll|A|,|C|\). The result is the same for all \(\alpha\) so it holds also for the von Neumann tripartite information (by replica trick). ## III Methods ### Kramers-Wannier transformation The Kramers-Wannier transformation is fully specified, as discussed in [35], by the relations \[\begin{split}\mathbf{\Pi}_{\tau,-}^{x}(\ell)=&\mathbf{\sigma }_{\ell}^{x}\;,\\ \mathbf{\tau}_{\ell}^{y}=&\mathbf{\sigma}_{\ell-1}^{x}\mathbf{ \sigma}_{\ell}^{y}\mathbf{\Pi}_{\sigma,+}^{z}(\ell+1)\,,\\ \mathbf{\tau}_{\ell}^{z}=&\mathbf{\Pi}_{\sigma,+}^{z}(\ell) \,,\end{split} \tag{32}\] where the operator \(\mathbf{\Pi}_{\sigma,+}^{z}(\ell)\) is defined in eq. (13) and \(\mathbf{\Pi}_{\tau,-}^{x}(\ell)\) is defined by \[\mathbf{\Pi}_{\tau,-}^{x}(\ell)\mathbf{\tau}_{j}^{y,z}=\begin{cases}-\mathbf{\tau}_{j}^{y,z} \mathbf{\Pi}_{\tau,-}^{x}(\ell)&j\leq\ell\\ \mathbf{\tau}_{j}^{y,z}\mathbf{\Pi}_{\tau,-}^{x}(\ell)&j>\ell\end{cases},\qquad\left[ \mathbf{\Pi}_{\tau,-}^{x}(\ell),\mathbf{\tau}_{j}^{x}\right]=0,\qquad\left(\mathbf{\Pi}_{\tau,- }^{x}(\ell)\right)^{2}=\mathbf{1}\;. \tag{33}\] Figure 5: The Rényi-2 tripartite information in the stationary state of the flipped-spin quench protocol for different configurations with fixed value of the cross ratio \(x\), defined by (8). From top to bottom the configurations are \((|A|,|B|,|C|)\propto(5,5,1),(1,1,1),(2,1,3),(4,1,7),(10,1,10)\). Increasing the size of the subsystems the tripartite information reaches the analytical prediction (10), given by the solid line. 
The operators \(\mathbf{\Pi}^{z}_{\sigma,+}(\ell)\) and \(\mathbf{\Pi}^{x}_{\tau,-}(\ell)\) are semilocal(ized) in the \(\sigma\) and \(\tau\) bases respectively. They can be thought of as semi-infinite strings \(\mathbf{\sigma}^{z}_{\ell}\mathbf{\sigma}^{z}_{\ell+1}\mathbf{\sigma}^{z}_{\ell+2}\ldots\) and \(\ldots\mathbf{\tau}^{x}_{\ell-2}\mathbf{\tau}^{x}_{\ell-1}\mathbf{\tau}^{x}_{\ell}\) respectively. In particular, operators \(\mathbf{O}\) that are localized in the \(\tau\)-representation and odd under \(\mathcal{P}^{x}_{\tau}\) (\(\mathcal{P}^{x}_{\tau}[\mathbf{O}]=-\mathbf{O}\)) are in the \(\sigma\) representation even under \(\mathcal{P}^{z}_{\sigma}\) (\(\mathcal{P}^{z}_{\sigma}[\mathbf{O}]=\mathbf{O}\)) and semilocal. #### III.1.1 Semilocal charges In this section we give a complete set of one-site shift invariant semilocal charges of the dual XY chain (11). A complete set of one-site shift invariant charges of the quantum XY chain (17) is given by (see e.g. [95; 99]) \[\mathbf{Q}^{(0,+)} =\mathbf{H}, \tag{34}\] \[\mathbf{Q}^{(1,+)} =\frac{1}{2}\sum_{\ell=-\infty}^{\infty}\left[J_{x}\mathbf{\tau}^{x} _{\ell}\mathbf{\tau}^{x}_{\ell+2}+J_{y}\mathbf{\tau}^{y}_{\ell}\mathbf{\tau}^{y}_{\ell+2} -(J_{x}+J_{y})\mathbf{I}\right]\mathbf{\tau}^{z}_{\ell+1},\] (35) \[\mathbf{Q}^{(n,+)} =\frac{1}{2}\sum_{\ell=-\infty}^{\infty}\left[\left(J_{x}\mathbf{ \tau}^{x}_{\ell}\mathbf{\tau}^{x}_{\ell+n+1}+J_{y}\mathbf{\tau}^{y}_{\ell}\mathbf{\tau}^{y} _{\ell+n+1}\right)\prod_{j=1}^{n}\mathbf{\tau}^{z}_{\ell+j}+\left(J_{x}\mathbf{\tau}^{x }_{\ell}\mathbf{\tau}^{x}_{\ell+n-1}+J_{y}\mathbf{\tau}^{y}_{\ell}\mathbf{\tau}^{y}_{\ell+ n-1}\right)\prod_{j=1}^{n-2}\mathbf{\tau}^{z}_{\ell+j}\right],\] (36) \[\text{for}\quad n=2,3,4,\ldots\quad,\] \[\mathbf{Q}^{(0,-)} =\frac{1}{2}\sum_{\ell=-\infty}^{\infty}\left(\mathbf{\tau}^{x}_{ \ell}\mathbf{\tau}^{y}_{\ell+1}-\mathbf{\tau}^{y}_{\ell}\mathbf{\tau}^{x}_{\ell+1}\right),\] (37) \[\mathbf{Q}^{(n,-)} =\frac{1}{2}\sum_{\ell=-\infty}^{\infty}\left[\left(\mathbf{\tau}^{x} _{\ell}\mathbf{\tau}^{y}_{\ell+n+1}-\mathbf{\tau}^{y}_{\ell}\mathbf{\tau}^{x}_{\ell+n+1} \right)\prod_{j=1}^{n}\mathbf{\tau}^{z}_{\ell+j}\right],\qquad n=1,2,3,\ldots \tag{38}\] These charges all mutually commute (\([\mathbf{Q}^{(n,\pm)},\mathbf{Q}^{(m,\pm)}]=0\) for \(n,m\in\mathbb{N}_{0}\)). We note that the quantum XY chain also possesses charges that are two-site, and not one-site, shift invariant, which are non-Abelian in general [99]. These charges are not relevant for the quench protocols considered in this work and will not be discussed. We also note that in interacting systems in general one has to go beyond local charges to consider quasilocal charges [100], but this is not the case in the studied non-interacting system. Applying the duality transformation to the charges, we obtain the charges of the dual XY chain (11). While the charges even under \(\mathcal{P}^{x}_{\tau}\) are local also in the \(\sigma\)-representation, a simple example being provided by the Hamiltonian, the charges odd under \(\mathcal{P}^{x}_{\tau}\) are instead semilocal, as discussed in ref. [35] (which labels charges in a different way). Specifically, the charges odd under \(\mathcal{P}^{x}_{\tau}\) are charges \(\mathbf{Q}^{(n,+)}\) for odd \(n\) and charges \(\mathbf{Q}^{(n,-)}\) for even \(n\). These charges form a complete set of one-site shift invariant semilocal charges of the model in (11). The \(\sigma\)-representation of the charge \(\mathbf{Q}^{(0,-)}\) has already been given in eq. (14). 
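The conservation of these charges is easy to check numerically; the sketch below (ours; periodic boundary conditions on a short ring, which avoid the boundary terms of an open chain) verifies that \(\mathbf{Q}^{(0,-)}\) of eq. (37) commutes with the XY Hamiltonian (17).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1j], [1j, 0.]])

def bond(a, b, l, L):
    """Operator a on site l and b on site l+1 (mod L), identity elsewhere."""
    mats = [I2]*L
    mats[l % L] = a
    mats[(l + 1) % L] = b
    return reduce(np.kron, mats)

L, Jx, Jy = 8, 1.0, 0.4
H = sum(Jx*bond(sx, sx, l, L) + Jy*bond(sy, sy, l, L) for l in range(L))
Q = 0.5*sum(bond(sx, sy, l, L) - bond(sy, sx, l, L) for l in range(L))
print(np.linalg.norm(H @ Q - Q @ H))   # ~1e-13: Q^{(0,-)} is conserved
```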
The remaining one-site shift invariant semilocal charges of the model in (11) read \[\mathbf{Q}^{(1,+)} =\frac{1}{2}\sum_{\ell=-\infty}^{\infty}\left[\left(-J_{x}\mathbf{ \sigma}^{x}_{\ell}\mathbf{\sigma}^{y}_{\ell+1}+J_{y}\mathbf{\sigma}^{y}_{\ell}\mathbf{ \sigma}^{x}_{\ell+1}\right)\mathbf{\sigma}^{x}_{\ell-1}\mathbf{\sigma}^{y}_{\ell+2}-(J _{x}+J_{y})\mathbf{\sigma}^{z}_{\ell+1}\mathbf{\sigma}^{z}_{\ell+2}\right]\mathbf{\Pi}^{z}_ {\sigma,+}(\ell+3) \tag{39}\] \[\mathbf{Q}^{(n,+)} =\frac{1}{2}\sum_{\ell=-\infty}^{\infty}\Bigg{[}\left(-J_{x}\mathbf{ \sigma}^{x}_{\ell}\mathbf{\sigma}^{y}_{\ell+n}\prod_{j=1}^{(n-1)/2}\mathbf{\sigma}^{z}_ {\ell+2j-1}+J_{y}\mathbf{\sigma}^{y}_{\ell}\mathbf{\sigma}^{x}_{\ell+n}\prod_{j=1}^{(n -1)/2}\mathbf{\sigma}^{z}_{\ell+2j}\right)\mathbf{\sigma}^{x}_{\ell-1}\mathbf{\sigma}^{y}_{ \ell+n+1}+\] \[\left(-J_{x}\mathbf{\sigma}^{x}_{\ell}\mathbf{\sigma}^{y}_{\ell+n-2}\prod _{j=1}^{(n-3)/2}\mathbf{\sigma}^{z}_{\ell+2j-1}+J_{y}\mathbf{\sigma}^{y}_{\ell}\mathbf{ \sigma}^{x}_{\ell+n-2}\prod_{j=1}^{(n-3)/2}\mathbf{\sigma}^{z}_{\ell+2j}\right)\mathbf{ \sigma}^{x}_{\ell-1}\mathbf{\sigma}^{y}_{\ell+n-1}\mathbf{\sigma}^{z}_{\ell+n}\mathbf{\sigma} ^{z}_{\ell+n+1}\Bigg{]}\mathbf{\Pi}^{z}_{\sigma,+}(\ell+n+2)\] (40) \[\text{for}\quad n=3,5,7,\ldots\quad,\] \[\mathbf{Q}^{(n,-)} =\frac{1}{2}\sum_{\ell=-\infty}^{\infty}\left(\mathbf{\sigma}^{x}_{ \ell}\mathbf{\sigma}^{x}_{\ell+n}\prod_{j=1}^{n/2}\mathbf{\sigma}^{z}_{\ell+2j-1}+\mathbf{ \sigma}^{y}_{\ell}\mathbf{\sigma}^{y}_{\ell+n}\prod_{j=1}^{n/2-1}\mathbf{\sigma}^{z}_ {\ell+2j}\right)\mathbf{\sigma}^{x}_{\ell-1}\mathbf{\sigma}^{y}_{\ell+n+1}\mathbf{\Pi}^{z}_ {\sigma,+}(\ell+n+2)\] (41) \[\text{for}\quad n=2,4,6,\ldots\quad,\] where we use the convention that products of the form \(\prod_{j=1}^{0}\) are equal to identity. #### III.1.2 Semilocal Generalized Gibbs ensemble As pointed out in [35], the stationary state of quench protocol 1 is a _semilocal_ generalized Gibbs ensemble, meaning that it includes semilocal charges. Here we point out that the non-equilibrium stationary state emerging from the flipped-spin quench protocol is also described by a semilocal GGE. In both cases the GGE can be obtained directly by working in the dual picture and following the approach of ref. [95], as explained in appendix B. We find that the GGE \[\boldsymbol{\rho}=\frac{1}{\mathcal{Z}}\exp\left[-\sum_{n=0}^{\infty}\left( \lambda^{(n,+)}\boldsymbol{Q}^{(n,+)}+\lambda^{(n,-)}\boldsymbol{Q}^{(n,-)} \right)\right]\;, \tag{42}\] where \(\mathcal{Z}\) is the normalization, is specified by Lagrange multipliers \[\text{protocol 1:}\qquad\lambda^{(n,-)}=0,\qquad\lambda^{(n,+)}= \begin{cases}0,&n\text{ even}\\ \frac{8}{\pi}\int_{0}^{\pi/2}\operatorname{artanh}\left[\frac{2(J_{x}+J_{y}) \cos k}{\varepsilon(k)}\right]\frac{1}{\varepsilon(k)}\cos(nk)\ dk,&n\text{ odd}\end{cases}\;, \tag{43}\] \[\text{protocol 2:}\qquad\lambda^{(n,-)}=\begin{cases}\operatorname{ sgn}(J_{x}J_{y})\frac{8}{\pi}\int_{0}^{\pi/2}\operatorname{artanh}\left[\frac{2(J_{x}+J_{y})\cos k}{ \varepsilon(k)}\right]\sin[(n+1)k]\ dk,&n\text{ even}\\ 0,&n\text{ odd}\end{cases}\;,\quad\lambda^{(n,+)}=0, \tag{44}\] for \(n\in\mathbb{N}_{0}\). The Lagrange multipliers are completely different in the two quench protocols, but in both cases only those associated to semilocal charges are nonzero. 
In the dual picture these are the charges odd under \(\mathcal{P}^{x}_{\tau}\), which will be important, in section III.1.4, for arguing that the tripartite information is not influenced by the Kramers-Wannier transformation. We note that the nonzero Lagrange multipliers decay rather slowly with \(n\), i.e. only algebraically, which is related to the logarithmic singularity of the function \(\operatorname{artanh}[2(J_{x}+J_{y})\cos k/\varepsilon(k)]\), appearing under the integral, at \(k=0\). #### III.1.3 The reduced density matrix in the dual picture In this section we find a convenient representation of the reduced density matrix (18) in view of comparing it with (19). For this task we introduce the Kramers-Wannier transformation for a finite system. Let us consider a system of size \(L\), described by Pauli matrices \(\sigma_{\ell}^{\gamma}\equiv\mathbb{I}^{\otimes(\ell-1)}\otimes\sigma^{ \gamma}\otimes\mathbb{I}^{\otimes(L-\ell)}\) for \(\gamma=0,x,y,z\) and \(\ell=1,2,\ldots,L\), where \(\sigma^{0}=\mathbb{I}\) is a \(2\times 2\) unit matrix. The transformation is given by \[\tau^{x}_{\ell}=\sigma^{x}_{\ell-1}\sigma^{x}_{\ell}\;,\qquad\tau^{z}_{\ell^{ \prime}}\tau^{z}_{\ell^{\prime}+1}=\sigma^{z}_{\ell^{\prime}}\,, \tag{45}\] for \(\ell=2,3,\ldots,L\) and \(\ell^{\prime}=1,2,\ldots,L-1\). It is in correspondence with the one in (15) for the infinite system. Eq. (45) does not specify the transformation of operators \(O\) that are odd under \(\Pi^{z}_{\sigma}\equiv\prod_{\ell=1}^{L}\sigma^{z}_{\ell}\) (\(\Pi^{z}_{\sigma}O\Pi^{z}_{\sigma}=-O\)) nor the transformation at the boundaries. The full mapping can be found in [46]. For our purposes it is just important that the mapping preserves the algebra of Pauli matrices and that for operators even under \(\Pi^{z}_{\sigma}\) that are not at the boundaries the mapping is analogous to the one for the infinite system. Note that to describe a connected block \(A\) we need to consider the mapping (45) for \(L\geq|A|+1\), to avoid the peculiarities of the mapping at the boundaries. For a system in state \(|\Psi\rangle\), the reduced density matrix for subsystem \(X\) is given by (18). In order to define a duality transformation in correspondence with the one in the infinite system we consider an enlarged space \(\tilde{X}=X\cup X^{\prime}\), for some suitable choice of \(X^{\prime}\) that will be discussed later. Note that we can extend \(\rho_{X}\) by identity to the enlarged space, defining \[\bar{\rho}_{X}\equiv\rho_{X}\otimes\left(\bigotimes_{\ell\in X^{\prime}}\frac {\mathbb{I}}{2}\right)\;. \tag{46}\] To obtain the entanglement entropy from the enlarged density matrix (46) we just need to remove a constant related to the size of the enlargement \(X^{\prime}\), \[S_{\alpha}(X)=\frac{1}{1-\alpha}\log\operatorname{tr}(\bar{\rho}_{X}^{\alpha}) -|X^{\prime}|\log 2\;. \tag{47}\] Having the duality transformation defined in the enlarged space, it is convenient to consider the density matrices \[\tilde{\rho}_{Y}=\frac{1}{2^{|\tilde{X}|}}\sum_{\gamma_{\ell}\in\{0,x,y,z\}, \ell\in Y}\left\langle\Psi\right|\prod_{\ell\in Y}\mathbf{\tau}_{\ell}^{\gamma_{ \ell}}\left|\Psi\right\rangle\prod_{\ell\in Y}\tau_{\ell}^{\gamma_{\ell}}. \tag{48}\] for subsystems \(Y\subseteq\tilde{X}\). Clearly, the density matrix (48) is in correspondence with the density matrix \(\rho_{Y}^{\tau}\), defined by (19), but they are different mathematical objects. 
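Relation (47) is elementary but easy to get wrong by a factor of \(\log 2\); the following Python sketch (ours) verifies it for a random pure state, with \(X^{\prime}\) consisting of a single maximally mixed site.

```python
import numpy as np

def renyi2(rho):
    """Renyi-2 entropy S_2 = -log tr(rho^2)."""
    return -np.log(np.trace(rho @ rho).real)

rng = np.random.default_rng(0)
# random pure state on 5 sites, split as X (2 sites) x rest (3 sites)
psi = rng.normal(size=(4, 8)) + 1j*rng.normal(size=(4, 8))
psi /= np.linalg.norm(psi)
rho_X = psi @ psi.conj().T                 # reduced density matrix of X
rho_bar = np.kron(rho_X, np.eye(2)/2)      # extend by one maximally mixed site, |X'| = 1
print(renyi2(rho_X), renyi2(rho_bar) - np.log(2))   # equal, as in eq. (47)
```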
Instead of comparing the density matrices (18) and (19) directly we will find it more convenient to compare the density matrices (46) and (48), for a suitable choice of \(Y\), drawing conclusions about the former afterwards. Single block. When the subsystem \(X\) is a connected block \(A\) (figure 6a) we consider the enlarged subsystem \(\tilde{A}=A\cup\{\max(A)+1\}\) (figure 6b). There is a simple representation for \(\bar{\rho}_{A}=\rho_{A}\otimes\frac{1}{2}\) in the dual picture, \[\bar{\rho}_{A}=\frac{1}{2^{3}}\sum_{j_{1},j_{2},j_{3}=0}^{1}\left(\prod_{m=1} ^{3}P_{m}^{j_{m}}\right)\tilde{\rho}_{\tilde{A}}\left(\prod_{m=1}^{3}P_{m}^{j _{m}}\right)^{\dagger}, \tag{49}\] with \[P_{1}=\prod_{\ell\in\tilde{A}}\tau_{\ell}^{x},\qquad P_{2}=\tau_{\min(\tilde{ A})}^{z},\qquad P_{3}=\tau_{\max(\tilde{A})}^{z}. \tag{50}\] Figure 6: (a) A connected block \(A\). (b) Block \(\tilde{A}\) we work with in the dual picture, related to the configuration in a). (c) The transformations \(P_{j}\) appearing in eq. (49). (d) Disjoint blocks \(A\) and \(C\). (e) Blocks \(\tilde{A},\tilde{B},\tilde{C}\) we work with in the dual picture, related to the configuration in d). (f) The transformations \(P_{j}\) appearing in eq. (54). This result has already been obtained in [35] and we revisit it here for comparison with the case of disjoint blocks. The density matrix \(\bar{\rho}_{A}\) is obtained from \(\tilde{\rho}_{\tilde{A}}\) by successive transformations of the form \(\rho\to(\rho+P\rho P)/2\), for three different Hermitian involutions \(P\). Their effect is to remove those terms from \(\tilde{\rho}_{\tilde{A}}\) which act non-trivially on the site \(\max(A)+1\) (in the \(\sigma\)-representation), leaving only terms with support on \(A\). Note that transformation \(P_{1}\) has support on the whole subsystem \(\tilde{A}\) and acts by flipping all \(\tau^{z}_{\ell}\), while the remaining transformations affect only the boundaries. All three transformations are represented graphically in figure 6c. The representation (49) follows from the following properties, which are direct consequences of the Kramers-Wannier transformation (15): 1. Any operator \(\mathbf{O}_{A}\) localized (in the \(\sigma\) representation) in \(A\) and even under \(\mathcal{P}^{z}_{\sigma}\) is mapped by the Kramers-Wannier transformation to an operator that in the \(\tau\) representation is localized in \(\tilde{A}\) and is even under \(\mathcal{P}^{x}_{\tau}\) (\(\mathcal{P}^{x}_{\tau}[\mathbf{O}_{A}]=\mathbf{O}_{A}\)). More technically, if \(\mathbf{O}_{A}=\prod_{\ell\in A}\mathbf{\sigma}^{\gamma_{\ell}}_{\ell}\), where \(\gamma_{\ell}\in\{0,x,y,z\}\), is even under \(\mathcal{P}^{z}_{\sigma}\) then we have \(\mathbf{O}_{A}=s\prod_{\ell^{\prime}\in\tilde{A}}\mathbf{\tau}^{\gamma^{\prime}_{\ell^{\prime}}}_{\ell^{\prime}}\), for some \(\gamma^{\prime}_{\ell^{\prime}}\in\{0,x,y,z\}\), with \(s\in\{\pm 1\}\) allowing for a minus sign. 2. Any operator \(\mathbf{O}_{\tilde{A}}=\prod_{\ell\in\tilde{A}}\mathbf{\tau}^{\gamma_{\ell}}_{\ell}\), where \(\gamma_{\ell}\in\{0,x,y,z\}\), that is even under \(\mathcal{P}^{x}_{\tau}\) and which in the \(\sigma\) representation has support outside \(A\), anticommutes with \(\mathbf{\sigma}^{z}_{\min(A)-1}\) (\(\{\mathbf{O}_{\tilde{A}},\mathbf{\sigma}^{z}_{\min(A)-1}\}=0\)) or \(\mathbf{\sigma}^{z}_{\max(A)+1}\) (\(\{\mathbf{O}_{\tilde{A}},\mathbf{\sigma}^{z}_{\max(A)+1}\}=0\)), or both. 
A simple example of property 2 is given by the operators \(\mathbf{\tau}^{x}_{\min(\tilde{A})}=\mathbf{\sigma}^{x}_{\min(A)-1}\mathbf{\sigma}^{x}_{ \min(A)}\) and \(\mathbf{\tau}^{x}_{\max(\tilde{A})}=\mathbf{\sigma}^{x}_{\max(A)}\mathbf{\sigma}^{x}_{ \max(A)+1}\). Disjoint blocks. When the subsystem \(X\) consists of disjoint blocks \(A\) and \(C\) (figure 6d) we do not have such a simple relation as (49). The additional complication is that the duality transformation can introduce strings between the blocks. For example, \[\mathbf{\sigma}^{x}_{\max(A)}\mathbf{\sigma}^{x}_{\min(C)}=\prod_{\ell=\max(A)+1}^{ \min(C)}\mathbf{\tau}^{x}_{\ell}. \tag{51}\] These strings are similar to the ones appearing in expressing the reduced density matrix of disjoint blocks in terms of Jordan-Wigner fermions [68] and we tackle the problem in a similar way. We have to consider the enlargement of \(X=A\cup C\) to the whole subsystem between \(A\) and \(C\), and to the site adjacent to \(C\) on the right. We thus work with \(X^{\prime}=\{\max(A)+1,\max(A)+2,\ldots,\min(C)-1\}\cup\{\max(C)+1\}\). It is also convenient to define the subsystems \(\tilde{A}\equiv A\cup\{\max(A)+1\}\), \(\tilde{B}\equiv B-\{\max(B)\}\) and \(\tilde{C}\equiv C\cup\{\max(C)+1\}\) (figure 6e). Here we assume \(|B|\geq 2\). Similarly to the single block case, we consider the density matrix (48) with \(Y=\tilde{A}\tilde{C}\), which can be written as \[\tilde{\rho}_{\tilde{A}\tilde{C}}=\frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}} \sum_{O_{\tilde{A}},O_{\tilde{C}}}\bra{\Psi}\mathbf{O}_{\tilde{A}}\mathbf{O}_{\tilde{C }}\ket{\Psi}O_{\tilde{A}}O_{\tilde{C}}, \tag{52}\] where the sum is over all possible products \(O_{\tilde{A}}=\prod_{\ell\in\tilde{A}}\tau^{\gamma_{\ell}}_{\ell}\) with \(\gamma_{\ell}\in\{0,x,y,z\}\) and \(O_{\tilde{C}}\), defined analogously for subsystem \(\tilde{C}\). The problem with the density matrix (52) is that it cannot describe operators such as (51), which include strings between the blocks in the dual picture. To describe them we introduce the traceless operator \[\omega_{\tilde{A}\tilde{C}}=\frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{O_ {\tilde{A}},O_{\tilde{C}}}\bra{\Psi}\mathbf{O}_{\tilde{A}}\mathbf{S}^{x}\mathbf{O}_{\tilde {C}}\ket{\Psi}(O_{\tilde{A}}O_{\tilde{C}})S^{x}, \tag{53}\] which is similar to \(\tilde{\rho}_{\tilde{A}\tilde{C}}\), but includes the string \(S^{x}\equiv\prod_{\ell\in\tilde{B}}\tau^{x}_{\ell}\) (\(\mathbf{S}^{x}\equiv\prod_{\ell\in\tilde{B}}\mathbf{\tau}^{x}_{\ell}\)). We find that the reduced density matrix \(\bar{\rho}_{AC}=\rho_{AC}\otimes(\bigotimes_{\ell\in B\cup\{\max(C)+1\}}\mathbb{ I}/2)\) is equal to \[\begin{split}\bar{\rho}_{AC}=&\frac{1}{2^{6}}\sum_{j _{1},j_{2},\ldots,j_{6}=0}^{1}\left(\prod_{m=1}^{6}P_{m}^{j_{m}}\right)\tilde{ \rho}_{\tilde{A}\tilde{C}}\left(\prod_{m=1}^{6}P_{m}^{j_{m}}\right)^{\dagger}\\ +&\frac{1}{2^{6}}\sum_{j_{1},j_{2},\ldots,j_{6}=0}^{1} (-1)^{j_{4}+j_{5}}\left(\prod_{m=1}^{6}P_{m}^{j_{m}}\right)\omega_{\tilde{A} \tilde{C}}\left(\prod_{m=1}^{6}P_{m}^{j_{m}}\right)^{\dagger}\end{split} \tag{54}\] where \[\begin{split}P_{1}&=\prod_{\ell\in\tilde{A}}\tau^{x}_{ \ell},\qquad P_{2}=\tau^{z}_{\min(\tilde{A})},\qquad P_{3}=\tau^{z}_{\max( \tilde{A})},\\ P_{4}&=\prod_{\ell\in\tilde{C}}\tau^{x}_{\ell}, \qquad P_{5}=\tau^{z}_{\min(\tilde{C})},\qquad P_{6}=\tau^{z}_{\max(\tilde{C})}. \end{split} \tag{55}\] The transformations are completely analogous to the single block case and are graphically represented in figure 6f. 
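The string identity (51) is a simple telescoping consequence of the finite-size map (45); the sketch below (ours) verifies it with explicit matrices on a short chain.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])

def sigma_x(l, L):
    return reduce(np.kron, [sx if j == l else I2 for j in range(L)])

L, a, c = 6, 1, 4                                    # a = max(A), c = min(C)
tau_x = lambda l: sigma_x(l - 1, L) @ sigma_x(l, L)  # finite-size map (45)
string = reduce(np.matmul, [tau_x(l) for l in range(a + 1, c + 1)])
print(np.allclose(string, sigma_x(a, L) @ sigma_x(c, L)))   # True: eq. (51)
```

The check works because the \(\sigma^{x}\) factors at intermediate sites appear twice and square to the identity.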
The derivation of representation (54) is more tedious than in the single block case, but similar, so we cover just the main ingredients. It is derived by applying properties 1 and 2 to blocks \(A\) and \(C\) separately. The additional complication is that the non-locality of the transformation can introduce strings between the blocks, as in (51). To tackle it we notice that any product of Pauli matrices \(\mathbf{O}_{X}\) localized (in the \(\sigma\) representation) in \(X=A\cup C\) and even under \(\mathcal{P}_{\sigma}^{z}\) (\(\mathcal{P}_{\sigma}^{z}[\mathbf{O}_{X}]=\mathbf{O}_{X}\)) can be written as a product \(\mathbf{O}_{X}=\mathbf{O}_{A}\mathbf{O}_{C}\), where \(\mathbf{O}_{A}\) and \(\mathbf{O}_{C}\) are localized in \(A\) and \(C\) respectively and are either both even (\(\mathcal{P}_{\sigma}^{z}[\mathbf{O}_{A,C}]=\mathbf{O}_{A,C}\)) or both odd (\(\mathcal{P}_{\sigma}^{z}[\mathbf{O}_{A,C}]=-\mathbf{O}_{A,C}\)). The even operators \(\mathbf{O}_{A}\) and \(\mathbf{O}_{C}\) are, by property 1, localized in \(\tilde{A}\) and \(\tilde{C}\) respectively, so an application of property 2 analogous to the single block case results in the first term in (54). The second term is a contribution from the terms where both \(\mathbf{O}_{A}\) and \(\mathbf{O}_{C}\) are odd. To treat such terms we notice that they can be rewritten as \(\mathbf{O}_{A}=(\mathbf{O}_{A}\mathbf{\sigma}_{\ell_{A}}^{x})\mathbf{\sigma}_{\ell_{A}}^{x}\) and \(\mathbf{O}_{C}=(\mathbf{O}_{C}\mathbf{\sigma}_{\ell_{C}}^{x})\mathbf{\sigma}_{\ell_{C}}^{x}\), where \(\ell_{A},\ell_{C}\) is an arbitrary site belonging to \(A,C\) respectively. The operators in the brackets are even under \(\mathcal{P}_{\sigma}^{z}\) so they can be dealt with by applying properties 1 and 2, while the product \(\mathbf{\sigma}_{\ell_{A}}^{x}\mathbf{\sigma}_{\ell_{C}}^{x}=\mathbf{\tau}_{\ell_{A}+1}^{x}\mathbf{\tau}_{\ell_{A}+2}^{x}\cdots\mathbf{\tau}_{\ell_{C}}^{x}\) is a string, common to all odd \(\mathbf{O}_{A}\), \(\mathbf{O}_{C}\). This gives the second term in (54). A subtlety is that the sign factor \((-1)^{j_{4}+j_{5}}\) appears, because the operators \(\mathbf{\sigma}_{\max(A)}^{z}=\mathbf{\tau}_{\max(\tilde{A})}^{z}\mathbf{\tau}_{\max(\tilde{A})+1}^{z}\) and \(\mathbf{\sigma}_{\min(\tilde{C})-1}^{z}=\mathbf{\tau}_{\min(\tilde{C})-1}^{z}\mathbf{\tau}_{\min(\tilde{C})}^{z}\) arising from property 2 anticommute with the string \(\mathbf{S}^{x}\). #### III.1.4 Invariance of the tripartite information Based on representations (49) and (54) for the reduced density matrices we now argue that the tripartite information in the stationary states of the studied protocols is not influenced by the Kramers-Wannier transformation. The first thing to notice is that, in the model (17), which does not have semilocal charges, the string order does not survive the quench (see [35] for a general discussion). Thus, since \(\omega_{\tilde{A}\tilde{C}}\), defined in (53), includes strings between the blocks, in the limit of large subsystems it is expected to be negligible with respect to \(\tilde{\rho}_{\tilde{A}\tilde{C}}\). 
Therefore, the second line in the expression (54) for the reduced density matrix of disjoint blocks in the dual picture can be neglected, leaving us with \[\bar{\rho}_{AC}\sim\frac{1}{2^{6}}\sum_{j_{1},j_{2},\ldots,j_{6}=0}^{1}\left( \prod_{m=1}^{6}P_{m}^{j_{m}}\right)\tilde{\rho}_{\tilde{A}\tilde{C}}\left( \prod_{m=1}^{6}P_{m}^{j_{m}}\right)^{\dagger}, \tag{56}\] where \(\sim\) stands for the limit \(|A|,|B|,|C|\gg 1\). The transformations \(P\) acting on the boundaries act in the same way in the single block case and in the case of disjoint blocks (compare figures 6c and 6f). Moreover, when computing different entropies appearing in the definition (4) of the tripartite information, these boundary transformations always act (in the dual picture) on the sites \(\ell=\min A,\max\tilde{A},\min\tilde{C},\max\tilde{C}\). For example, when computing \(S_{\alpha}(B)\) we have transformations that act on the sites \(\ell=\min B=\max\tilde{A}\) and \(\ell=\max(B)+1=\min\tilde{C}\). Since the tripartite information is constructed in such a way as to cancel the contributions of the boundaries, these transformations are not expected to affect it. Therefore, the same tripartite information should be obtained by starting from the density matrix \[\bar{\rho}_{A}^{\prime}\equiv\frac{1}{2}\sum_{j=0}^{1}P_{3}^{j}\tilde{\rho}_{ \tilde{A}}P_{3}^{j} \tag{57}\] for the reduced density matrix of a single block and \[\bar{\rho}_{AC}^{\prime}\equiv\frac{1}{2^{2}}\sum_{j_{3},j_{6}=0}^{1}\left( \prod_{m=3,6}P_{m}^{j_{m}}\right)\tilde{\rho}_{\tilde{A}\tilde{C}}\left(\prod _{m=3,6}P_{m}^{j_{m}}\right) \tag{58}\] for the reduced density matrix of disjoint blocks. The stationary state of both of the studied quench protocols can be written as \[\mathbf{\rho}_{\rm GGE}=\frac{1}{\mathcal{Z}}e^{-\mathbf{Q}}, \tag{59}\] where \(\mathcal{Z}\) is the normalization and \(\mathbf{Q}\) includes the relevant charges with appropriate Lagrange multipliers, given by eq. (43) and eq. (44). The operator \(\mathbf{Q}\) can be interpreted as a Hamiltonian and its eigenvalues \(E_{i}\) as energies, which come in pairs \(\pm E_{i}\). Since the GGE entropy is extensive with the system size, the energies \(E_{i}\) are also expected to be extensive. It follows that for any integer \(M\) such that \(1\leq M\leq\alpha-1\) the ratios \[\frac{\mathrm{tr}\left[e^{-(\alpha-2M)\mathbf{Q}}\right]}{\mathrm{tr}\left[e^{- \alpha\mathbf{Q}}\right]}=\frac{\sum_{E_{i}\geq 0}\cosh[(\alpha-2M)E_{i}]}{\sum_{E_{i} \geq 0}\cosh(\alpha E_{i})} \tag{60}\] are exponentially suppressed with the system size, while for \(M=0,\alpha\) they are equal to unity. The reduced density matrices \(\tilde{\rho}_{\tilde{A}}\) and \(\tilde{\rho}_{\tilde{A}\tilde{C}}\) are obtained by tracing out the GGE density matrix in the dual picture. Let us focus on the single block case. When the block \(\tilde{A}\) is large enough the reduced density matrix \(\tilde{\rho}_{\tilde{A}}\) is expected to be similar to the GGE in the bulk. For thermal states the entanglement Hamiltonian is at large temperatures well approximated by the Hamiltonian of the subsystem [101]. Similarly, we expect that the properties of the entanglement Hamiltonian in the GGE can be captured well by the restriction of \(\mathbf{Q}\) to the subsystem. 
The consequence is that for \(j_{1},\ldots,j_{\alpha-1}\in\{0,1\}\), such that \(M\) of them are non-zero, with \(1\leq M\leq\alpha-1\), we have \[\frac{\mathrm{tr}\left[\rho\left(P_{m_{1}}^{j_{1}}\rho P_{m_{1}}^{j_{1}} \right)\left(P_{m_{2}}^{j_{2}}\rho P_{m_{2}}^{j_{2}}\right)\ldots\left(P_{m_{ \alpha-1}}^{j_{\alpha-1}}\rho P_{m_{\alpha-1}}^{j_{\alpha-1}}\right)\right]}{ \mathrm{tr}\rho^{\alpha}}\sim 0\;, \tag{61}\] where \(m_{1},\ldots,m_{\alpha-1}=3\). Here we have used the property that \(\mathbf{Q}\) includes only charges odd under \(\mathcal{P}_{x}^{\tau}\), the observation that the transformation \(\bullet\to P_{3}\bullet P_{3}\) is a finite-size analogue of the transformation \(\mathcal{P}_{x}^{\tau}\), which changes the sign in front of the odd charges, and the suppression of the ratios (60). In the case of disjoint blocks, the transformations \(P_{3},P_{6}\), appearing in (58), act only on a part of the subsystem, but their support is still extensive in the size of one of the subsystems, and we expect the ratios in (61) with \(m_{1},\ldots,m_{\alpha-1}\in\{3,6\}\) to be suppressed for similar reasons. From (61) it follows that \[\mathrm{tr}\left[(\bar{\rho}_{A}^{\prime})^{\alpha}\right]\sim 2^{1-\alpha} \mathrm{tr}\left[(\tilde{\rho}_{\tilde{A}})^{\alpha}\right],\qquad\mathrm{tr} \left[(\bar{\rho}_{AC}^{\prime})^{\alpha}\right]\sim 2^{2(1-\alpha)} \mathrm{tr}\left[(\tilde{\rho}_{\tilde{A}\tilde{C}})^{\alpha}\right]\;, \tag{62}\] where in the disjoint blocks case the squared prefactor appears because in eq. (58) we have two transformations. Now, since the prefactors in (62) cancel in the tripartite information, it follows that the same tripartite information is obtained if we use \(\tilde{\rho}_{\tilde{A}}\) instead of \(\bar{\rho}_{A}^{\prime}\) (analogously for the other single block contributions to the tripartite information in (4)) and \(\tilde{\rho}_{\tilde{A}\tilde{C}}\) instead of \(\bar{\rho}_{AC}^{\prime}\). Since \(\tilde{\rho}_{\tilde{A}},\tilde{\rho}_{\tilde{A}\tilde{C}}\) are in correspondence with the density matrices \(\rho_{\tilde{A}}^{\tau},\rho_{\tilde{A}\tilde{C}}^{\tau}\), defined by (19), we can simply use the latter as well. The conclusion is that for the stationary states of the studied quench protocols the tripartite information is not influenced by the Kramers-Wannier transformation: the differences in the Renyi entropies arising from the differences between the density matrices (18) and (19) cancel in the tripartite information of three large adjacent blocks. The same conclusion is expected to hold quite generically for stationary states described by semilocal Gibbs ensembles. For arbitrary states the conclusion does not need to hold. For example, for states with non-negligible string contributions in (54) the presented argument breaks down right at the start. We note that the argument presented here is largely intuitive, despite being also quite technical. However, exact numerical results for the second Renyi entropy, obtained using fermionic techniques, provide an implicit check of the argument's validity.

### Jordan-Wigner transformation

The XY chain (17) is a quadratic form in Majorana fermions, defined as \[\mathbf{a}_{2\ell-1}\equiv\mathbf{a}_{\ell}^{x}=\left(\prod_{j<\ell}\mathbf{\tau}_{j}^{z} \right)\mathbf{\tau}_{\ell}^{x},\qquad\mathbf{a}_{2\ell}\equiv\mathbf{a}_{\ell}^{y}=\left( \prod_{j<\ell}\mathbf{\tau}_{j}^{z}\right)\mathbf{\tau}_{\ell}^{y}\;. \tag{63}\]
The Majorana fermions are self-adjoint and satisfy the algebra \[\{\mathbf{a}_{\ell},\mathbf{a}_{n}\}=2\delta_{\ell n}\mathbf{I}\;. \tag{64}\] Due to translational invariance, the XY chain Hamiltonian can be written as \[\mathbf{H}=\frac{1}{4}\sum_{\ell,n=-\infty}^{\infty}\left(\mathbf{a}_{\ell}^{x}\ \ \mathbf{a}_{\ell}^{y}\right)\int_{-\pi}^{\pi}\tfrac{\mathrm{d}k}{2\pi}e^{i(\ell-n)k} \mathcal{H}(k)\begin{pmatrix}\mathbf{a}_{n}^{x}\\ \mathbf{a}_{n}^{y}\end{pmatrix}, \tag{65}\] where the \(2\times 2\) matrix \({\cal H}(k)\), called the Hamiltonian symbol [102], generates the couplings through its Fourier coefficients. The symbol is given by \[{\cal H}(k)=2(J_{x}-J_{y})\sin(k)\,\sigma^{x}-2(J_{x}+J_{y})\cos(k)\,\sigma^{y}. \tag{66}\] The positive eigenvalues of the symbol, given by \[\varepsilon(k)=2\sqrt{J_{x}^{2}+J_{y}^{2}+2J_{x}J_{y}\cos(2k)}, \tag{67}\] are the energies of the quasiparticle excitations of the model. In a translationally invariant state \(\left|\Psi\right>\) the correlation matrix \[\Gamma_{\ell,n}=\delta_{\ell,n}I-\left<\Psi\right|\mathbf{a}_{\ell}\mathbf{a}_{n}\left| \Psi\right> \tag{68}\] can be expressed in terms of its symbol \(\Gamma(k)\), which is a \(2\times 2\) matrix, as \[\Gamma_{2\ell+i,2n+j}=\int_{-\pi}^{\pi}\tfrac{\mathrm{d}k}{2\pi}e^{i(\ell-n)k }\Gamma_{ij}(k)\;,\qquad\ell,n\in\mathbb{Z},\;i,j\in\{1,2\}. \tag{69}\] In the dual picture the all-spin-up quench protocol is a homogeneous quench protocol, while the flipped-spin protocol is a bipartitioning protocol. Accordingly, the correlation matrix symbol describing their stationary states has already been derived in the literature (see e.g. [2; 35]). For the stationary state of the all-spin-up quench protocol it reads \[\text{protocol 1:}\qquad\Gamma(k)=\frac{\mathrm{tr}[\sigma^{y}{\cal H}(k)]{ \cal H}(k)}{2\varepsilon^{2}(k)}\,, \tag{70}\] which can be obtained simply as a time average. The non-equilibrium stationary state following the quench from the domain wall state is also translationally invariant, notwithstanding that the initial state is not, and its correlation matrix symbol reads \[\text{protocol 2:}\qquad\Gamma(k)=-\text{sgn}[v(k)]\frac{\mathrm{tr}[\sigma^{y} {\cal H}(k)]\mathbb{I}}{2\varepsilon(k)}\,, \tag{71}\] where \(v(k)=\varepsilon^{\prime}(k)\) is the velocity of the quasiparticle excitations. The symbol (71) is derived using the generalized hydrodynamic equation, which provides the connection between stationary states at different rays \(\zeta=d/t\), where \(d\) is the distance from the position of the domain wall in the initial state. The spin correlation functions of section II.1 and the string order parameters of section II.2 are computed straightforwardly from the correlation matrix by expressing the operators in question in terms of fermions (63), where we also use some simplifying properties of the correlation matrix in question, as discussed in appendix A (a short numerical sketch is given at the end of this subsection). The Renyi entanglement entropy of a single block is obtained, as already done in [35], by expressing the density matrix (49) in terms of fermionic Gaussians, i.e. exponentials of quadratic forms of fermionic operators. Similarly, the entanglement entropy of disjoint blocks is obtained by expressing the density matrix (54) in terms of Gaussians. The procedure is tedious, but the methods essentially do not go beyond ref. [68], dealing with the entanglement entropy of disjoint blocks for the model (17).
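As an illustration of how the stationary correlations follow from the symbols, the following Python sketch evaluates the Fourier coefficients (69) of the symbol (70) by simple numerical quadrature; the resulting \(2\times 2\) blocks can be compared with the closed-form correlators (72) listed in appendix A. The grid size and the couplings are arbitrary choices.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def symbol_protocol1(k, Jx, Jy):
    # correlation-matrix symbol (70): tr[s^y H(k)] H(k) / (2 eps(k)^2)
    H = 2*(Jx - Jy)*np.sin(k)*SX - 2*(Jx + Jy)*np.cos(k)*SY
    eps2 = 4*(Jx**2 + Jy**2 + 2*Jx*Jy*np.cos(2*k))
    return np.trace(SY @ H)*H/(2*eps2)

def gamma_block(r, Jx, Jy, nk=4096):
    # 2x2 block (69) of the correlation matrix for separation l - n = r
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    return np.mean([np.exp(1j*r*k)*symbol_protocol1(k, Jx, Jy) for k in ks], axis=0)

# e.g. <a^x_{l+r} a^y_l> = -[gamma_block(r)]_{12} for r != 0, cf. (68) and (72)
print(gamma_block(2, Jx=1.0, Jy=0.4))
```

We now return to the entanglement entropies.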
The procedure of expressing the density matrices in terms of fermionic Gaussians is reported in appendix C, while the exact formulas for computing the second Renyi entropy are given in appendix D.

## IV Conclusions

We have shown that the stationary state of the translationally invariant all-spin-up quench protocol and the stationary state reached in the flipped-spin quench protocol (NESS) have different properties. The former exhibits exponential decay of spatial correlations and zero tripartite information of three large adjacent subsystems, while in the latter there are correlations that decay only algebraically and the tripartite information is non-zero. On the other hand, string order is present in both cases. Importantly, a single spin flip in the initial state is responsible for the differences in the behavior of the spatial correlations and the tripartite information between the two stationary states. This result complements the recent findings that a spin flip in the initial state affects the magnetization [46] and the subleading term in the single block entanglement entropy [35] in the stationary state. The mechanism responsible for all this phenomenology is the existence of semilocal charges, whose expectation value can be strongly affected by localized perturbations. We have also derived explicitly the expressions for the generalized Gibbs ensembles describing the stationary states of the all-spin-up and the flipped-spin quench protocols. The set of semilocal charges associated with nonzero Lagrange multipliers is completely different for the two protocols. We remark that if the initial state contains several flipped spins instead of just one, the properties of the stationary state will strongly depend on the parity of the number of flipped spins. Flipping an even number of spins in the initial state is expected to yield a stationary state with the same properties as in the all-spin-up quench protocol, while an odd number of flipped spins is expected to yield the discovered phenomenology of the flipped-spin quench protocol. The reason is that flipping an even number of spins does not excite semilocal charges, while an odd number of flips does. The tripartite information in the stationary state of the flipped-spin quench protocol is interesting in its own right. Although the initial state is a product state and the time evolution is with a gapped Hamiltonian, the tripartite information in the stationary state shares universal properties with conformal field theory. It depends on the lengths of the subsystems only through the cross ratio (8). It would be nice to have an explanation for this phenomenon independent of the details of the computation. Furthermore, we have shown that the tripartite information in the studied stationary states is not affected by the Kramers-Wannier transformation. In particular, the flipped-spin quench protocol yields the same tripartite information as the quench protocol in which the initial state is the domain wall state and the time evolution is with the XY chain, a model that does not possess semilocal charges. The latter is a problem studied in detail in Papers I and II. As there, the stationary state exhibits a nonzero residual tripartite information. Namely, in the limit of small size of subsystem \(B\) with respect to the sizes of subsystems \(A,C\) (but still much larger than the lattice spacing) the tripartite information is nonzero. The value is equal to \(-\log 2\) independently of the parameters of the Hamiltonian (11) governing the time evolution.
This value should be contrasted with the zero value found in equilibrium at any temperature, irrespective of criticality, or in other non-equilibrium settings, such as after quenches from ground states of gapped Hamiltonians. Finally, we mention that the tripartite information is negative, which in other contexts [76; 84] has been interpreted as an indication that quantum entanglement dominates over classical correlations. Maybe computing other entanglement measures, such as entanglement negativity [103; 104; 105], could shed light on the nature of this phase.

## V Acknowledgments

This work was supported by the European Research Council under the Starting Grant No. 805252 LoCoMacro. I thank Maurizio Fagotti for providing me the idea for this work, collaboration on related topics and useful discussions. I also thank Saverio Bocini for useful discussions.

## Appendix A Correlation functions of Majorana fermions

Majorana correlation functions, determined by the symbols (70) and (71), have some simplifying properties. We assume \(0<|J_{y}|<|J_{x}|\). In the stationary state of quench protocol 1 the correlations are \[\langle\mathbf{a}_{\ell+r}^{x}\mathbf{a}_{\ell}^{y}\rangle=\begin{cases}0,&r<0\text{ or }r\in 2\mathbb{Z}+1\\ \frac{i}{2}\big{(}1+\frac{J_{y}}{J_{x}}\big{)},&r=0\\ \frac{i}{2}\left(1-\frac{J_{y}^{2}}{J_{x}^{2}}\right)\left(-\frac{J_{y}}{J_{x} }\right)^{\frac{r}{2}-1},&r>0\text{ and }r\in 2\mathbb{Z}\end{cases}, \tag{72}\] as obtained by expanding (70) in a Fourier series, where it is convenient to exploit the formula for the geometric series. The correlations vanish for all odd \(r\) and for negative even \(r\). Moreover, the correlations decay exponentially with \(r\), which is related to the fact that the symbol (70) is smooth. Also, trivially, the correlations \(\langle\mathbf{a}_{\ell+r}^{\alpha}\mathbf{a}_{\ell}^{\beta}\rangle\) for \(\alpha=\beta\) vanish for all \(r\neq 0\). In the stationary state of quench protocol 2 the correlations \(\langle\mathbf{a}_{\ell+r}^{\alpha}\mathbf{a}_{\ell}^{\beta}\rangle\) vanish for \(\alpha\neq\beta\), while for \(\alpha=\beta\) they vanish for nonzero even \(r\). The remaining ones decay to zero algebraically with \(r\), since (71) is discontinuous. Using partial integration we get \[\langle\mathbf{a}_{\ell+2r-1}^{x}\mathbf{a}_{\ell}^{x}\rangle=\langle\mathbf{a}_{\ell+2r-1 }^{y}\mathbf{a}_{\ell}^{y}\rangle\simeq-\text{sgn}(J_{y})\frac{i}{\pi r}. \tag{73}\] In computing the string order parameter we will also use the exact result for \(r=1\), \[\langle\mathbf{a}_{\ell+1}^{x}\mathbf{a}_{\ell}^{x}\rangle=\langle\mathbf{a}_{\ell+1}^{y} \mathbf{a}_{\ell}^{y}\rangle=-\text{sgn}(J_{y})\frac{i}{\pi}\left(1+\frac{J_{y}}{J _{x}}\right)\,, \tag{74}\] obtained by basic integral manipulations. The spin correlation functions in section II.1 and the string order parameters in section II.2 are computed straightforwardly using the Kramers-Wannier transformation (15) and the listed properties of Majorana correlators (a numerical check of the asymptotics (73) is sketched below).
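A minimal Python sketch of this check evaluates the relevant Fourier coefficient of the scalar symbol in (71) numerically and prints it next to the asymptotic form (73); the couplings and the grid size are arbitrary choices, and the agreement is only expected at large \(r\).

```python
import numpy as np

def corr_xx_protocol2(s, Jx, Jy, nk=100000):
    # <a^x_{l+s} a^x_l> for site separation s != 0 in the NESS of protocol 2:
    # minus the s-th Fourier coefficient of g(k), where Gamma(k) = g(k)*Id
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    eps = 2*np.sqrt(Jx**2 + Jy**2 + 2*Jx*Jy*np.cos(2*k))
    sgn_v = np.sign(-8*Jx*Jy*np.sin(2*k)/eps)     # sign of v(k) = eps'(k)
    g = -sgn_v*(-2*(Jx + Jy)*np.cos(k))/eps       # tr[s^y H(k)]/2 = -2(Jx+Jy)cos(k)
    return -np.mean(np.exp(1j*s*k)*g)

Jx, Jy = 1.0, 0.4
for r in (1, 5, 25):  # compare with the asymptotic value -sgn(J_y)*i/(pi*r)
    print(r, corr_xx_protocol2(2*r - 1, Jx, Jy), -np.sign(Jy)*1j/(np.pi*r))
```

We now return to the string order parameters.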
In particular, the string operators read \[\prod_{\ell=-r}^{r}\mathbf{\sigma}_{\ell}^{z}=\mathbf{\tau}_{-r}^{z}\mathbf{ \tau}_{r+1}^{z}=-\mathbf{a}_{-r}^{x}\mathbf{a}_{-r}^{y}\mathbf{a}_{r+1}^{x}\mathbf{a}_{r+1}^{y}\;, \tag{16}\] \[\mathbf{\sigma}_{-r-3}^{z}\mathbf{\sigma}_{-r-2}^{z}\mathbf{\sigma}_{-r-1}^{y} \left(\prod_{\ell=-r}^{r}\mathbf{\sigma}_{\ell}^{z}\right)\mathbf{\sigma}_{r+1}^{y}\mathbf{ \sigma}_{r+2}^{z}\mathbf{\sigma}_{r+3}^{x}=\mathbf{\tau}_{-r-2}^{y}\mathbf{\tau}_{-r-1}^{x} \mathbf{\tau}_{r+2}^{x}\mathbf{\tau}_{r+3}^{y}=\mathbf{a}_{-r-2}^{x}\mathbf{a}_{-r-1}^{x}\mathbf{a}_ {r+2}^{y}\mathbf{a}_{r+3}^{y}\;. \tag{17}\]

## Appendix B Procedure for finding the Lagrange multipliers

In this section we show how the Lagrange multipliers in the generalized Gibbs ensembles describing the studied stationary states are obtained, based on methods developed in [95; 99]. The one-site shift invariant charges of the quantum XY chain (17), given in section III.1.1, can be written as a quadratic form in Majorana fermions, \[\mathbf{Q}^{(n,\pm)}=\frac{1}{4}\sum_{\ell,m=-\infty}^{\infty}\left(\mathbf{a}_{\ell}^ {x}\quad\mathbf{a}_{\ell}^{y}\right)\int_{-\pi}^{\pi}\tfrac{\mathrm{d}k}{2\pi}e^{i (\ell-m)k}\mathcal{Q}^{(n,\pm)}(k)\begin{pmatrix}\mathbf{a}_{m}^{x}\\ \mathbf{a}_{m}^{y}\end{pmatrix}, \tag{18}\] specified by the symbol \(\mathcal{Q}^{(n,\pm)}(k)\), which is a \(2\times 2\) matrix function. The charges are constructed so that the symbol \(\mathcal{Q}^{(n,\pm)}(k)\) commutes with the Hamiltonian symbol \(\mathcal{H}(k)\), because this implies that the charges commute with the Hamiltonian (\([\mathbf{Q}^{(n,\pm)},\mathbf{H}]=0\)). Moreover, the symbol is chosen to have a finite number of Fourier coefficients, to ensure the locality of the charge. In particular, the charges of section III.1.1 are given by the symbols \[\mathcal{Q}^{(n,+)}(k)=\cos(nk)\mathcal{H}(k)\;, \tag{19}\] \[\mathcal{Q}^{(n,-)}(k)=\sin[(n+1)k]\mathbb{I}\;, \tag{20}\] for \(n\in\mathbb{N}_{0}\). A translationally invariant fermionic Gaussian can be written as \[\mathbf{\rho}=\frac{1}{\mathcal{Z}}\exp\left[\frac{1}{4}\sum_{\ell,n=-\infty}^{ \infty}\left(\mathbf{a}_{\ell}^{x}\quad\mathbf{a}_{\ell}^{y}\right)\int_{-\pi}^{\pi} \frac{\mathrm{d}k}{2\pi}e^{i(\ell-n)k}\mathcal{W}(k)\begin{pmatrix}\mathbf{a}_{n}^ {x}\\ \mathbf{a}_{n}^{y}\end{pmatrix}\right], \tag{21}\] where \(\mathcal{Z}\) is the normalization and \(\mathcal{W}(k)\) is related to the symbol of the correlation matrix (see (69)) through \[\mathcal{W}(k)=2\;\mathrm{artanh}\left[\Gamma(k)\right]\;. \tag{22}\] On the other hand, \(\mathcal{W}(k)\) is related to the Lagrange multipliers in the GGE (42) through \[\mathcal{W}(k)=-\sum_{n=0}^{\infty}\left(\lambda^{(n,+)}\mathcal{Q}^{(n,+)}(k )+\lambda^{(n,-)}\mathcal{Q}^{(n,-)}(k)\right)\;. \tag{23}\] Thus, the Lagrange multipliers \(\{\lambda^{(n,+)},\lambda^{(n,-)}\}_{n}\) are obtained from the correlation matrix symbol, given by (70) and (71) for the studied quench protocols, by comparing (22) and (23). In this way the Lagrange multipliers in (43) and (44) are obtained, where the given integrals are simplifications of the ones from \(0\) to \(2\pi\) arising from the Fourier coefficients.

## Appendix C Reduced density matrix in the fermionic picture

The usefulness of representations (49) and (54) is that they enable us to pass to the fermionic language. The goal of this section is to use these representations to express the density matrices in a form useful for computing the Renyi entropies, which is the subject of appendix D.
Assuming \(\min A=1\) without loss of generality, the mapping starts by defining the Majorana fermions restricted to \(\tilde{A}\tilde{B}\tilde{C}\), \[a_{2\ell-1}\equiv a_{\ell}^{x}=\left(\prod_{j=1}^{\ell-1}\tau_{j}^{z}\right) \tau_{\ell}^{x},\qquad a_{2\ell}\equiv a_{\ell}^{y}=\left(\prod_{j=1}^{\ell-1 }\tau_{j}^{z}\right)\tau_{\ell}^{y}\,, \tag{104}\] for \(\ell=1,2,\ldots,|\tilde{A}\tilde{B}\tilde{C}|\), which are analogous to those defined on the whole chain and satisfy the analogous algebra \(\{a_{\ell},a_{n}\}=2\delta_{\ell n}I\).

_Gaussians of fermionic operators._ In the \(2^{d}\)-dimensional Fock space associated to Majorana fermions \(a_{1},a_{2},a_{3},\ldots,a_{2d}\) (with anticommutation relations \(\{a_{\ell},a_{n}\}=2\delta_{\ell,n}\)) a generic normalized Gaussian operator \(\rho(\Gamma)\) reads \[\rho(\Gamma)=\frac{1}{\mathcal{Z}}\exp\left(\frac{\vec{a}^{\dagger}W\vec{a}}{ 4}\right)\,, \tag{105}\] where \(\vec{a}^{\dagger}=(a_{1},a_{2},\ldots,a_{2d})\) and \(W\) is a \(2d\times 2d\) dimensional antisymmetric matrix with complex entries. The Gaussian is completely specified by its two-point correlation matrix, given by \[\Gamma_{j,\ell}=\delta_{j,\ell}-\text{tr}\left[\rho(\Gamma)\ a_{j}a_{\ell} \right],\qquad 1\leq j,\ell\leq 2d, \tag{106}\] which is related to the matrix \(W\) in the exponent through the formula [68] \[\Gamma=\tanh\left(\frac{W}{2}\right). \tag{107}\] The normalization is given by \[\mathcal{Z}=\pm\left[\det\left(e^{\frac{W}{2}}+e^{-\frac{W}{2}}\right)\right] ^{\frac{1}{2}}=\pm\left[\det\left(\frac{I-\Gamma}{2}\right)\right]^{-\frac{1}{ 2}}, \tag{108}\] where the sign in front of the square root is not specified by the formula and depends on further details of \(\Gamma\). When \(\rho\) is a density matrix, \(\Gamma\) and \(W\) are Hermitian, so the sign is positive. We will encounter non-Hermitian \(\Gamma\) and \(W\) only in the case of disjoint blocks, analogously to ref. [68].

_Single block._ In the case of a single block, it is an established fact [106, 107, 108, 22] that reduced density matrices such as \(\tilde{\rho}_{\tilde{A}}\) in (49), defined by (48), are Gaussians, i.e. exponentials of quadratic forms of Majorana fermions, determined by the correlation matrix. As discussed in [35] and in appendix D, each term in the sum in (49) is still a Gaussian, with the correlation matrix depending on the transformations \(P\). The Gaussian structure can then be exploited for the computation of the Renyi entropies.

_Disjoint blocks._ In the case of disjoint blocks we have derived representation (54) for the reduced density matrix. Each term in the sum in (54) can be written as a sum of Gaussians. The Gaussian structure can then be exploited to compute the Renyi entropies. Expressing \(\tilde{\rho}_{\tilde{A}\tilde{C}}\), appearing in (54), in terms of Gaussians is a procedure identical to expressing the reduced density matrix in the XY chain in terms of Gaussians, as done in ref. [68]. We review it briefly because we use the same techniques for \(\omega_{\tilde{A}\tilde{C}}\).
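As a side remark, relation (107) can be verified by brute force on small systems, building the Majorana matrices explicitly. The following Python sketch does this for a random Hermitian antisymmetric \(W\) (so that \(\rho\) is a valid density matrix); it relies on scipy for the matrix exponential and hyperbolic tangent.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm, tanhm

def majoranas(d):
    # explicit 2^d-dimensional Majorana matrices a_1, ..., a_{2d}
    # via a Jordan-Wigner construction
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    ops = []
    for l in range(d):
        for P in (X, Y):
            ops.append(reduce(np.kron, [Z]*l + [P] + [I2]*(d - l - 1)))
    return ops

d = 3
A = np.random.default_rng(0).normal(size=(2*d, 2*d))
W = 1j*(A - A.T)                      # antisymmetric and Hermitian
a = majoranas(d)
quad = sum(W[j, l]*a[j] @ a[l] for j in range(2*d) for l in range(2*d))/4
rho = expm(quad)
rho /= np.trace(rho)
# correlation matrix (106): Gamma_{jl} = delta_{jl} - tr[rho a_j a_l]
Gamma = np.array([[np.eye(2*d)[j, l] - np.trace(rho @ a[j] @ a[l])
                   for l in range(2*d)] for j in range(2*d)])
assert np.allclose(Gamma, tanhm(W/2))  # relation (107), up to numerical precision
```

We now return to expressing \(\tilde{\rho}_{\tilde{A}\tilde{C}}\) in terms of Gaussians.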
We have \[\begin{split}\tilde{\rho}_{\tilde{A}\tilde{C}}&= \frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{\text{even}\,F_{\tilde{A}},F_ {\tilde{C}}}\bra{\Psi}F_{\tilde{A}}F_{\tilde{C}}\ket{\Psi}\left(F_{\tilde{A}}F_ {\tilde{C}}\right)^{\dagger}\\ &+\frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{\text{odd}\,F_ {\tilde{A}},F_{\tilde{C}}}\bra{\Psi}F_{\tilde{A}}F_{\tilde{C}}S^{z}\ket{\Psi} \left(F_{\tilde{A}}F_{\tilde{C}}S^{z}\right)^{\dagger}\,\end{split} \tag{109}\] where the first (second) sum is over all \(F_{\tilde{A}}\) and \(F_{\tilde{C}}\) that are both a product of an even (odd) number of fermions in \(\tilde{A}\) and \(\tilde{C}\) respectively, i.e. \(F_{\tilde{A}}\) and \(F_{\tilde{C}}\) in the first (second) sum commute (anticommute) with the fermionic parity operators \(\prod_{\ell\in\tilde{A}}\tau_{\ell}^{z}\) and \(\prod_{\ell\in\tilde{C}}\tau_{\ell}^{z}\) of the respective blocks. Explicitly, \[F_{\tilde{A}}=\prod_{\ell\in\tilde{A}}(a_{\ell}^{x})^{j_{\ell}^{x}}(a_{\ell}^ {y})^{j_{\ell}^{y}},\qquad F_{\tilde{C}}=\prod_{\ell\in\tilde{C}}(a _{\ell}^{x})^{j_{\ell}^{x}}(a_{\ell}^{y})^{j_{\ell}^{y}}\,, \tag{110}\] where the indices \(j_{\ell}^{x,y}\) are either \(0\) or \(1\) and the order of the operators in the product does not matter in (109). In the second term in (109) the string \(S^{z}\), where we denote \(S^{\gamma}=\prod_{\ell\in\tilde{B}}\tau_{\ell}^{\gamma}\) for \(\gamma=0,x,y,z\), appears in order to cancel the string that arises due to the non-locality of the Jordan-Wigner transformation. The two different sums can be expressed conveniently using the (anti)commutation relation with the string over the block \(\tilde{A}\), \[P_{0}\equiv S^{z}_{\tilde{A}}=\prod_{\ell\in\tilde{A}}\tau^{z}_{\ell}. \tag{100}\] We have \[\tilde{\rho}_{\tilde{A}\tilde{C}}=\frac{\rho_{0}+S^{z}_{\tilde{A}}\rho_{0}S^{z}_ {\tilde{A}}}{2}+S^{z}\frac{\rho_{z}-S^{z}_{\tilde{A}}\rho_{z}S^{z}_{\tilde{A}}}{ 2}, \tag{101}\] where we denote \[\rho_{\gamma}=\frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{F_{\tilde{A}},F_ {\tilde{C}}}\bra{\Psi}F_{\tilde{A}}F_{\tilde{C}}S^{\gamma}\ket{\Psi}\left(F_{ \tilde{A}}F_{\tilde{C}}\right)^{\dagger} \tag{102}\] for \(\gamma=0,x,y,z\). Here the sum is over all possible products of fermions in \(\tilde{A}\) and \(\tilde{C}\). The operator \(\omega_{\tilde{A}\tilde{C}}\), appearing in (54), can be expressed in terms of fermions in a similar way. For even \(|\tilde{B}|\) it is completely analogous to \(\tilde{\rho}_{\tilde{A}\tilde{C}}\), \[\begin{split}\omega_{\tilde{A}\tilde{C}}&=S^{x} \frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{\text{even}\,F_{\tilde{A}},F_ {\tilde{C}}}\bra{\Psi}F_{\tilde{A}}F_{\tilde{C}}S^{x}\ket{\Psi}\left(F_{ \tilde{A}}F_{\tilde{C}}\right)^{\dagger}\\ &+S^{x}\frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{\text{odd }\,F_{\tilde{A}},F_{\tilde{C}}}\bra{\Psi}F_{\tilde{A}}F_{\tilde{C}}S^{z}S^{x }\ket{\Psi}\left(F_{\tilde{A}}F_{\tilde{C}}S^{z}\right)^{\dagger}\,\end{split} \tag{103}\] where the first (second) sum is again over all \(F_{\tilde{A}}\) and \(F_{\tilde{C}}\) that are both a product of an even (odd) number of fermions, and \(S^{z}\) stems from the non-locality of the Jordan-Wigner transformation.
For odd \(|\tilde{B}|\) it is slightly different, \[\begin{split}\omega_{\tilde{A}\tilde{C}}&=S^{x} \frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{\text{odd }\,F_{\tilde{A}},\,\text{even}\,F_{\tilde{C}}}\bra{\Psi}F_{\tilde{A}}F_{ \tilde{C}}S^{x}\ket{\Psi}\left(F_{\tilde{A}}F_{\tilde{C}}\right)^{\dagger}\\ &+S^{x}\frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{\text{ even}\,F_{\tilde{A}},\,\text{odd}\,F_{\tilde{C}}}\bra{\Psi}F_{\tilde{A}}F_{ \tilde{C}}S^{z}S^{x}\ket{\Psi}\left(F_{\tilde{A}}F_{\tilde{C}}S^{z}\right)^{ \dagger}\.\end{split} \tag{104}\] Now the numbers of fermions in \(F_{\tilde{A}}\) and \(F_{\tilde{C}}\) have opposite parities. The string \(S^{z}\) has to be included when \(F_{\tilde{C}}\) consists of an odd number of fermions or, equivalently, when \(F_{\tilde{A}}\) consists of an even number of fermions. Overall, we have \[\omega_{\tilde{A}\tilde{C}}=S^{x}\frac{\rho_{x}+(-1)^{|\tilde{B}|}S^{z}_{ \tilde{A}}\rho_{x}S^{z}_{\tilde{A}}}{2}+S^{y}\frac{\rho_{y}-(-1)^{|\tilde{B}|}S ^{z}_{\tilde{A}}\rho_{y}S^{z}_{\tilde{A}}}{2}, \tag{105}\] where the string \(S^{y}\) is a result of multiplying \(S^{x}\), which comes from the non-local character of the Kramers-Wannier transformation, with \(S^{z}\), which comes from the non-locality of the Jordan-Wigner transformation. Finally, (105) can be expressed in terms of Gaussians, since the operators (102) can be expressed in terms of Gaussians, as covered in the following.

_Operators with strings as Gaussians._ Let us consider the operators defined in (102). We note that they satisfy \[S^{\gamma}\rho_{\gamma}=\frac{1}{2^{|\tilde{A}\tilde{B}\tilde{C}|}}\sum_{F_{ \tilde{A}},F_{\tilde{C}}}\bra{\Psi}F_{\tilde{A}}F_{\tilde{C}}\mathcal{S}^{ \gamma}\ket{\Psi}\left(F_{\tilde{A}}F_{\tilde{C}}\mathcal{S}^{\gamma}\right)^ {\dagger}, \tag{106}\] where we have the freedom of replacing the string between the blocks \(S^{\gamma}\) by \(\mathcal{S}^{\gamma}=F^{\prime}_{\tilde{A}}F^{\prime}_{\tilde{C}}S^{\gamma}\) for any fixed \(F^{\prime}_{\tilde{A}}\) and \(F^{\prime}_{\tilde{C}}\) that are products of fermionic operators in \(\tilde{A}\) and \(\tilde{C}\) respectively. The exact choice is discussed in appendix E. This replacement is allowed since it corresponds simply to changing the dummy summation index. Because of this freedom we can choose \(\mathcal{S}^{\gamma}\) to be a product of an even number of Majorana fermions (irrespectively of the parity of \(|\tilde{B}|\)) and therefore a Gaussian. Moreover, choosing \(\mathcal{S}^{\gamma}\) such that it has a non-zero expectation value \(\langle\mathcal{S}^{\gamma}\rangle\), the expectation values \[\langle F_{\tilde{A}}F_{\tilde{C}}\mathcal{S}^{\gamma}\rangle=\langle\mathcal{ S}^{\gamma}\rangle\operatorname{tr}\left[F_{\tilde{A}}F_{\tilde{C}}\rho( \Gamma_{\gamma})\right], \tag{107}\] are given by the Gaussian \(\rho(\Gamma_{\gamma})\) specified by the correlation matrix \[\left(\Gamma_{\gamma}\right)_{j,\ell}=\delta_{j\ell}-\frac{\langle a_{f(j)}a_{f( \ell)}\mathcal{S}^{\gamma}\rangle}{\langle\mathcal{S}^{\gamma}\rangle}\;,\qquad j,\ell=1,2,\ldots,2|\tilde{A}\tilde{C}|, \tag{101}\] where \[f(j)=\begin{cases}j,&1\leq j\leq 2|\tilde{A}|\\ 2[\min\left(\tilde{C}\right)-|\tilde{A}|]+j-2,&2|\tilde{A}|+1\leq j \leq 2|\tilde{A}\tilde{C}|\end{cases} \tag{102}\] is just a compact notation to work with the indices related to disjoint blocks.
Here we have used the Wick theorem, the property that the product of two Gaussians (\(\mathcal{S}^{\gamma}\) and \(\rho(\Gamma)\)) is a Gaussian [68], and we have divided by the expectation value of the string \(\langle\mathcal{S}^{\gamma}\rangle\) to ensure the normalization \(\operatorname{tr}\left[\rho(\Gamma_{\gamma})\right]=1\). The correlation matrix (101) and, accordingly, the matrix \(W\) in (100), are Hermitian for \(\gamma=0\), but in general this is not the case. In general they are complex antisymmetric matrices. We note that the Wick theorem holds also in this more general case. It is desirable to have a proof of the Wick theorem directly in the Majorana-fermion formalism we work with, so we provide one in appendix G. We have thus shown that \(S^{\gamma}\rho_{\gamma}\) can be expressed in terms of a normalized Gaussian, \[S^{\gamma}\rho_{\gamma}=\left(\mathcal{S}^{\gamma}\right)^{\dagger}\left\langle \mathcal{S}^{\gamma}\right\rangle\rho(\Gamma_{\gamma})\;. \tag{103}\] The correlation matrix (101) could, in principle, be evaluated from its definition, but since this procedure would require the evaluation of a pfaffian for each matrix element, in appendix F we derive an alternative expression (a similar trick was used in [68] to avoid singularities). Suppose that the string is given by \(\mathcal{S}^{\gamma}=a_{i_{1}}a_{i_{2}}\ldots a_{i_{m}}\) for some positive even integer \(m\). We find \[\Gamma_{\gamma}=\Gamma_{0}-UN^{-1}U^{\mathrm{T}}, \tag{104}\] where \[U_{\ell,n}=\langle a_{f(\ell)}a_{i_{n}}\rangle,\qquad\ell=1,2,\ldots,2|\tilde{A }\tilde{C}|,\;n=1,2,\ldots,m, \tag{105}\] and \[N_{\ell,n}=\left\langle a_{i_{\ell}}a_{i_{n}}\right\rangle\;,\qquad\ell,n=1,2,\ldots,m\;. \tag{106}\] Note that the squared absolute value of the expectation value of the string is given by \[|\left\langle\mathcal{S}^{\gamma}\right\rangle|^{2}=|\det N|. \tag{107}\]

## Appendix D The second Renyi entropy

In the previous section we have derived fermionic representations of the reduced density matrices. In this section we use these expressions to find a way of computing the Renyi entropies exactly. For simplicity, we focus only on the second Renyi entropy.

_Products of Gaussians._ First we cover some basic formulas. For the computation of the second Renyi entropy we only need the trace of the product of two normalized Gaussians, defined in (100). It has been argued in [68] that for two normalized Gaussians \(\rho(\Gamma_{1})\) and \(\rho(\Gamma_{2})\), with correlation matrices \(\Gamma_{1}\) and \(\Gamma_{2}\), the trace is given by \[\operatorname{tr}\left[\rho(\Gamma_{1})\rho(\Gamma_{2})\right]\equiv\mathcal{ G}(\Gamma_{1},\Gamma_{2})\equiv\prod_{\{\lambda\}/2}\lambda\;, \tag{108}\] where the product is over all eigenvalues \(\lambda\) of the matrix \((I+\Gamma_{1}\Gamma_{2})/2\), with the degeneracy, which is always even, reduced by half. Note that up to an unresolved sign the following formula holds (see also [109]) \[\mathcal{G}(\Gamma_{1},\Gamma_{2})=\pm\sqrt{\det\left(\frac{I+\Gamma_{1} \Gamma_{2}}{2}\right)}\;. \tag{109}\]

_Single block._ The expression for computing exactly the second Renyi entropy of a single block has already been derived in [35]. We revisit it here since we need the result for computing the tripartite information, and as a warm-up for the case of disjoint blocks.
In the case of a single block the density matrix \(\tilde{\rho}_{\tilde{A}}\) in (49) is a Gaussian, \[\tilde{\rho}_{\tilde{A}}=\rho(\Gamma), \tag{104}\] where the correlation matrix is given by \(\Gamma_{\ell,n}=\delta_{\ell,n}-\langle a_{\ell}a_{n}\rangle\) for \(\ell,n=1,2,\ldots,2|\tilde{A}|\). The transformations \(P_{j}\) in (49) preserve the Gaussianity of the state but modify the correlation matrix. The latter can be found simply by studying the effects on the two-point products of Majorana fermions. Namely, the operator \(P_{j}\rho(\Gamma)P_{j}\), for \(j=1,2,3\), is the (normalized) Gaussian \(\rho(V_{j}\Gamma V_{j})\), where \(V_{j}\) are diagonal matrices of size \(2|\tilde{A}|\) defined by \[\left(V_{1}\right)_{\ell\ell}=\begin{cases}-1,&\ell=1,2\\ 1,&\text{otherwise}\end{cases}\,\qquad\left(V_{2}\right)_{\ell\ell}=\begin{cases}-1,& \ell=2|\tilde{A}|-1,2|\tilde{A}|\\ 1,&\text{otherwise}\end{cases}\,\qquad\left(V_{3}\right)_{\ell\ell}=(-1)^{ \lfloor\frac{\ell}{2}\rfloor}\, \tag{105}\] where \(\lfloor\cdot\rfloor\) denotes the floor function. The Renyi entropy now follows by applying the formula (108) for the trace of a product of two Gaussians. Introducing the compact notation \[\mathcal{V}[j_{1},j_{2},j_{3}](\Gamma)=\left(V_{1}^{j_{1}}V_{2}^{j_{2}}V_{3}^ {j_{3}}\right)^{\mathrm{T}}\Gamma\left(V_{1}^{j_{1}}V_{2}^{j_{2}}V_{3}^{j_{3}} \right)\, \tag{106}\] we have \[S_{2}(A)=2\log 2-\log\left[\sum_{j_{1},j_{2},j_{3}=0}^{1}\mathcal{G} \Big{(}\Gamma,\mathcal{V}[j_{1},j_{2},j_{3}](\Gamma)\Big{)}\right], \tag{107}\] where \(\mathcal{G}\) is given by (109) with the \(+\) sign. Note that the additive constant \(2\log 2\) comes from the second term in (47) (for \(|X^{\prime}|=1\)) and from the factor \(1/2^{3}\) in (49).

_Disjoint blocks._ The second Renyi entropy of disjoint blocks can be computed using eq. (54), relating the \(\sigma\) and \(\tau\) representations, together with eqs. (101) and (105), which allow us to express the density matrix as a sum of Gaussian fermionic operators. In order to cancel the string operators without additional phase factors, using \(\left(\mathcal{S}^{\gamma}\right)^{\dagger}\mathcal{S}^{\gamma}=1\), we are going to start from the trivial relation \(\operatorname{tr}(\bar{\rho}_{AC}^{2})=\operatorname{tr}(\bar{\rho}_{AC} \bar{\rho}_{AC}^{\dagger})\). Using the property \[\operatorname{tr}\left(\frac{\rho_{1}\pm P\rho_{1}P}{2}\frac{\rho_{2}\pm P\rho _{2}P}{2}\right)=\operatorname{tr}\left(\rho_{1}\frac{\rho_{2}\pm P\rho_{2}P} {2}\right), \tag{108}\] which holds for any operators \(\rho_{1},\rho_{2}\) and any Hermitian involution \(P\), we also have the simplification \[\operatorname{tr}\left(\bar{\rho}_{AC}^{2}\right)=\operatorname{tr}\left( \tilde{\rho}_{\tilde{A}\tilde{C}}\bar{\rho}_{AC}^{\dagger}+\omega_{\tilde{ A}\tilde{C}}\bar{\rho}_{AC}^{\dagger}\right). \tag{109}\] It is crucial that each of the four terms appearing in (101) and (105), multiplied by the string, is a Gaussian. Similarly to the single block case, the information on the effects of the transformations \(S_{\tilde{A}}^{z}\) and \(P_{j}\), for \(j=1,2,\ldots,6\), appearing in (54), on the Gaussian operators \(\rho_{\gamma}\), for \(\gamma=0,x,y,z\), is encoded in the effects on the correlation functions (a numerical sketch of the single-block formulas is given below).
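For concreteness, here is a Python sketch of the trace formula (108) and of the single-block formula (107), using the matrices \(V_{1},V_{2},V_{3}\) of (105). The pairing of the doubly degenerate eigenvalues by sorting is a numerical heuristic, and the correlation matrix \(\Gamma\) is assumed to be given.

```python
import numpy as np

def G(G1, G2):
    # trace of a product of two normalized Gaussians, formula (108):
    # product over half of the (doubly degenerate) spectrum of (I + G1 G2)/2
    lam = np.sort_complex(np.linalg.eigvals((np.eye(len(G1)) + G1 @ G2)/2))
    return np.prod(lam[::2])   # degenerate pairs end up adjacent after sorting

def renyi2_single_block(Gamma):
    # second Renyi entropy of a single block, formula (107)
    d = len(Gamma)                                        # d = 2|A~|
    V1 = np.diag([-1.0 if i < 2 else 1.0 for i in range(d)])
    V2 = np.diag([-1.0 if i >= d - 2 else 1.0 for i in range(d)])
    V3 = np.diag([(-1.0)**((i + 1)//2) for i in range(d)])  # (-1)^floor(l/2), l = i+1
    total = 0.0
    for j1 in (0, 1):
        for j2 in (0, 1):
            for j3 in (0, 1):
                v = np.diag(np.diag(V1)**j1 * np.diag(V2)**j2 * np.diag(V3)**j3)
                total += G(Gamma, v.T @ Gamma @ v).real
    return 2*np.log(2) - np.log(total)
```

We now return to the disjoint-block transformations.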
Namely, for a normalized Gaussian \(\rho(\Gamma)\), the operator \(P_{j}\rho(\Gamma)P_{j}\), for \(j=0,1,\ldots,6\), is a normalized Gaussian \(\rho(V_{j}\Gamma V_{j})\), where \(V_{j}\) are diagonal matrices of size \(2|\tilde{A}\tilde{C}|\) defined by \[\left(V_{0}\right)_{\ell\ell} =\begin{cases}-1,&\ell\leq 2|\tilde{A}|\\ 1,&\text{otherwise}\end{cases}\,\qquad\left(V_{3}\right)_{\ell\ell}=\begin{cases}(-1)^{ \lfloor\frac{\ell}{2}\rfloor},&\ell\leq 2|\tilde{A}|\\ (-1)^{|\tilde{A}|},&\text{otherwise}\end{cases}\,\qquad\left(V_{6}\right)_{\ell \ell}=\begin{cases}1,&\ell\leq 2|\tilde{A}|\\ (-1)^{\lfloor\frac{\ell}{2}\rfloor},&\text{otherwise}\end{cases}\, \tag{110}\] \[\left(V_{j}\right)_{\ell\ell} =\begin{cases}-1,&\ell=2r_{j}-1,2r_{j}\\ 1,&\text{otherwise}\end{cases}\qquad\text{for $j=1,2,4,5$, where $r_{1}=1,\ r_{2}=|\tilde{A}|,\ r_{4}=|\tilde{A}|+1,\ r_{5}=|\tilde{A} \tilde{C}|$}. \tag{111}\] To write the final formula for the second Renyi entropy, let us introduce the signs \(s_{j}^{\gamma}\), for \(j=0,1,\ldots,6\), which are positive or negative depending on whether \(P_{j}\) commutes or anticommutes with \(\mathcal{S}^{\gamma}\), respectively, i.e. \[s_{j}^{\gamma}\equiv\begin{cases}1,&\left[P_{j},\mathcal{S}^{\gamma}\right]=0\\ -1,&\left\{P_{j},\mathcal{S}^{\gamma}\right\}=0\end{cases}. \tag{112}\] We recall that the string operators \(\mathcal{S}^{\gamma}\) are chosen so that \(\langle\mathcal{S}^{\gamma}\rangle\neq 0\) (see appendix E for the exact choice). Note that if \(\mathcal{S}^{\gamma}=S^{\gamma}\), all the signs \(s_{j}^{\gamma}\) are positive. Let us also introduce the compact notation \[s_{j_{1},j_{2},\ldots,j_{6}}^{\gamma}=(s_{1}^{\gamma})^{j_{1}}(s_{2}^{\gamma})^ {j_{2}}\ldots(s_{6}^{\gamma})^{j_{6}}\;, \tag{101}\] and, similarly to the single block case, \[\mathcal{V}[j_{0},j_{1},\ldots,j_{6}](\Gamma)=\left(V_{0}^{j_{0}}V_{1}^{j_{1}} \ldots V_{6}^{j_{6}}\right)^{\mathrm{T}}\Gamma\left(V_{0}^{j_{0}}V_{1}^{j_{1}} \ldots V_{6}^{j_{6}}\right)\;. \tag{102}\]
The formula for the second Renyi entropy then reads \[\begin{split}& S_{2}(AC)=5\log 2-\log\bigg{\{}\sum_{j_{1,2, \ldots,6}=0}^{1}\bigg{(}\\ &\mathcal{G}\Big{(}\Gamma_{0},\mathcal{V}[0,j_{1},\ldots,j_{6}] (\Gamma_{0})\Big{)}+\mathcal{G}\Big{(}\Gamma_{0},\mathcal{V}[1,j_{1},\ldots, j_{6}](\Gamma_{0})\Big{)}\\ &+(-1)^{j_{4}+j_{5}}s_{j_{1},j_{2},\ldots,j_{6}}^{x}|\,\langle \mathcal{S}^{x}\rangle\,|^{2}\,\Big{[}\mathcal{G}\Big{(}\Gamma_{x}, \mathcal{V}[0,j_{1},\ldots,j_{6}](\Gamma_{x}^{\dagger})\Big{)}+s_{0}^{x}(-1) ^{|\tilde{B}|}\mathcal{G}\Big{(}\Gamma_{x},\mathcal{V}[1,j_{1},\ldots,j_{6}]( \Gamma_{x}^{\dagger})\Big{)}\Big{]}\\ &+(-1)^{j_{4}+j_{5}}s_{j_{1},j_{2},\ldots,j_{6}}^{y}|\,\langle \mathcal{S}^{y}\rangle\,|^{2}\,\Big{[}\mathcal{G}\Big{(}\Gamma_{y}, \mathcal{V}[0,j_{1},\ldots,j_{6}](\Gamma_{y}^{\dagger})\Big{)}-s_{0}^{y}(-1) ^{|\tilde{B}|}\mathcal{G}\Big{(}\Gamma_{y},\mathcal{V}[1,j_{1},\ldots,j_{6}]( \Gamma_{y}^{\dagger})\Big{)}\Big{]}\\ &+s_{j_{1},j_{2},\ldots,j_{6}}^{z}|\,\langle\mathcal{S}^{z} \rangle\,|^{2}\,\Big{[}\mathcal{G}\Big{(}\Gamma_{z},\mathcal{V}[0,j_{1},\ldots,j_{6}](\Gamma_{z}^{\dagger})\Big{)}-s_{0}^{z}\mathcal{G}\Big{(}\Gamma_{z}, \mathcal{V}[1,j_{1},\ldots,j_{6}](\Gamma_{z}^{\dagger})\Big{)}\Big{]}\;\bigg{)} \bigg{\}}, \tag{103}\] Note that the additive constant \(5\log 2\) comes from the second term in (47), the factor \(1/2^{6}\) in (54), the factor \(1/2\) in (100) and (102), and from taking into account that in applying the formula (108) a factor \(2^{|\tilde{B}|}\) has to be included to account for the degeneracy in the enlarged space we work with. The final expression (103) is rather lengthy. Evaluating the second Renyi entropy exactly requires the evaluation of 512 traces of products of Gaussians. In practical implementations some terms quickly become negligible with respect to the others, beyond machine precision, as we increase the size of the subsystems. To speed up our numerical implementation, once some terms become negligible as we increase the subsystem size, we simply drop them for larger subsystems.

## Appendix E Choice of the strings for the reduced density matrix of disjoint blocks

In appendices C and D we have not specified the choice of the string operators \(\mathcal{S}^{\gamma}\), \(\gamma=x,y,z\), introduced in (106) and appearing in the final formula (103) for the second Renyi entropy. As discussed after eq. (106), the string operators are given by \[\mathcal{S}^{\gamma}=F_{\tilde{A}}^{\prime}F_{\tilde{C}}^{\prime}S^{\gamma}, \tag{104}\] where \(S^{\gamma}=\prod_{\ell\in\tilde{B}}\tau_{\ell}^{\gamma}\) and \(F_{\tilde{A}}^{\prime},F_{\tilde{C}}^{\prime}\) are products of fermions whose indices are restricted to the blocks \(\tilde{A},\tilde{C}\) respectively. The operators \(F_{\tilde{A}}^{\prime},F_{\tilde{C}}^{\prime}\) have to be chosen in such a way that \(\langle\mathcal{S}^{\gamma}\rangle\neq 0\). We accomplish this task by exploiting the properties of the Majorana correlators given in appendix A. In the stationary state of the flipped-spin quench protocol the string operators \[\mathcal{S}^{\gamma} =\left(\prod_{\ell=|\tilde{A}|-|\tilde{B}|}^{|\tilde{A}|-1}\sigma_ {\ell}^{\gamma}\right)S^{\gamma},\qquad\gamma=x,y, \tag{105}\] \[\mathcal{S}^{z} =\begin{cases}S^{z},&|\tilde{B}|\;\text{even}\\ \tau_{|\tilde{A}|}^{z}S^{z},&|\tilde{B}|\;\text{odd}\end{cases} \tag{106}\] do the job when \(|\tilde{A}|>|\tilde{B}|\).
For \(|\tilde{A}|\leq|\tilde{B}|\) in the studied examples (the uppermost curve in figure 5) the same \(\mathcal{S}^{z}\) works, while we have been able to show directly that all correlation functions of the form \(\langle F_{\tilde{A}}F_{\tilde{C}}S^{\gamma}\rangle\) vanish for \(\gamma=x,y\). Accordingly, the terms in (103) corresponding to \(\gamma=x,y\) were dropped. In producing the data for the all-spin-up quench protocol in figures 2 and 4 we have used the same \(\mathcal{S}^{z}\) as for the flipped-spin protocol, while the terms for \(\gamma=x,y\) do not contribute and have been dropped (apart from the exception \(|A|=|B|=|C|=3\), for which we have defined the strings properly, but this particular point is not important anyway).

## Appendix F Correlation matrix with strings

Here we derive the expression (104) for the correlation matrix, which is suitable for numerical implementations. We suppose that the string is given by \(\mathcal{S}^{\gamma}=a_{i_{1}}a_{i_{2}}\ldots a_{i_{m}}\) for some positive even integer \(m\). We use the Wick theorem (see appendix G) to write the elements in terms of pfaffians, \[\left(\Gamma_{\gamma}\right)_{j,\ell}=\delta_{j,\ell}-\mathrm{pf}\begin{pmatrix} M&Q\\ -Q^{\mathrm{T}}&N\end{pmatrix}/\mathrm{pf}(N)\;, \tag{110}\] where \(M\) is a \(2\times 2\) antisymmetric matrix \[M=\begin{pmatrix}0&\langle a_{f(j)}a_{f(\ell)}\rangle\\ -\left\langle a_{f(j)}a_{f(\ell)}\right\rangle&0\end{pmatrix}, \tag{111}\] \(N\) is defined in (106), and \(Q\) is a \(2\times m\) matrix \[Q=\begin{pmatrix}\langle a_{f(j)}a_{i_{1}}\rangle&\langle a_{f(j)}a_{i_{2}} \rangle&\ldots&\langle a_{f(j)}a_{i_{m}}\rangle\\ \langle a_{f(\ell)}a_{i_{1}}\rangle&\langle a_{f(\ell)}a_{i_{2}}\rangle& \ldots&\langle a_{f(\ell)}a_{i_{m}}\rangle\end{pmatrix}\;. \tag{112}\] Using the pfaffian identity (see e.g. [110]) \[\mathrm{pf}\begin{pmatrix}M&Q\\ -Q^{\mathrm{T}}&N\end{pmatrix}=\mathrm{pf}(N)\mathrm{pf}\left(M+QN^{-1}Q^{T}\right) \tag{113}\] and the property \(\mathrm{pf}\left(M+QN^{-1}Q^{T}\right)=\left(M+QN^{-1}Q^{T}\right)_{1,2}\), which is just the definition of the pfaffian for a \(2\times 2\) matrix, we get (104).

## Appendix G Wick theorem for fermionic Gaussians constructed with complex antisymmetric matrices

Here we state and prove the Wick theorem, for Gaussians constructed with complex antisymmetric matrices, directly in the formalism with Majorana fermions. The proof takes ingredients from refs. [111; 112]. We consider a \(2^{d}\)-dimensional Fock space associated with Majorana fermions \(a_{1},a_{2},\ldots,a_{2d}\), which satisfy the algebra \(\{a_{\ell},a_{n}\}=2\delta_{\ell,n}\). We consider the Gaussian state (105) and we are interested in the "expectation values" \(\mathrm{tr}[\rho(\Gamma)a_{i_{1}}a_{i_{2}}\ldots a_{i_{n}}]\), where \(i_{1},i_{2},\ldots,i_{n}\in\{1,2,\ldots,2d\}\) are not necessarily distinct indices. For odd \(n\) the expectation value is zero. For even \(n\), the Wick theorem, given in the following, expresses the expectation value as a sum over all contractions, conveniently expressed as a pfaffian. In the proof we will use the relations (107) (see [68]) and (108) (see [68; 109]), proven independently of the Wick theorem given here.
_Wick theorem._ For a Gaussian state \(\rho=\exp(\frac{1}{4}\vec{a}^{\dagger}W\vec{a})/\mathcal{Z}\), with \(W\) a complex antisymmetric matrix (not necessarily Hermitian) such that \(\mathcal{Z}=\mathrm{tr}\exp(\frac{1}{4}\vec{a}^{\dagger}W\vec{a})\neq 0\), and Majorana fermions \(a_{i_{1}},a_{i_{2}},\ldots,a_{i_{n}}\) (not necessarily distinct), with \(n\) an even number, we have \[\langle a_{i_{1}}a_{i_{2}}\ldots a_{i_{n}}\rangle=\mathrm{pf}\begin{pmatrix}0& \langle a_{i_{1}}a_{i_{2}}\rangle&\langle a_{i_{1}}a_{i_{3}}\rangle&\ldots& \langle a_{i_{1}}a_{i_{n}}\rangle\\ -\langle a_{i_{1}}a_{i_{2}}\rangle&0&\langle a_{i_{2}}a_{i_{3}}\rangle&\ldots& \langle a_{i_{2}}a_{i_{n}}\rangle\\ -\langle a_{i_{1}}a_{i_{3}}\rangle&-\langle a_{i_{2}}a_{i_{3}}\rangle&0& \ldots&\langle a_{i_{3}}a_{i_{n}}\rangle\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ -\langle a_{i_{1}}a_{i_{n}}\rangle&-\langle a_{i_{2}}a_{i_{n}}\rangle&- \langle a_{i_{3}}a_{i_{n}}\rangle&\ldots&0\end{pmatrix}\;, \tag{114}\] where \(\langle O\rangle\) stands for \(\mathrm{tr}(O\rho)\).

_Proof._ For some operators \(O_{1},O_{2},\ldots,O_{n}\) whose anticommutators are \(c\)-numbers we have \[\begin{split}&\mathrm{tr}[O_{1}O_{2}\ldots O_{n}\rho]=\sum_{k=2 }^{n}(-1)^{k}\{O_{1},O_{k}\}\mathrm{tr}[(O_{2}O_{3}\ldots O_{k-1})(O_{k+1}O_{k+2 }\ldots O_{n})\rho]\\ &-\mathrm{tr}[O_{2}O_{3}\ldots O_{n}O_{1}\rho],\end{split} \tag{115}\] which is obtained by commuting \(O_{1}\) with \(O_{k}\) for \(k=2,3,\ldots,n\) and using the property that each anticommutator is a \(c\)-number to bring it outside the trace. Here we abuse the notation slightly, as we replace operators proportional to the identity by numbers. Relation (115) holds, in particular, for \(O_{k}=a_{i_{k}}\), \(k=2,3,\ldots,n\), and \(O_{1}=a_{i}\) for any \(i\in\{1,2,\ldots,2d\}\). Now the idea is to commute \(O_{1}\) with \(\rho\) in the last term in (115), use the cyclic property of the trace, and move the term to the left hand side of the equation. We use the standard nested commutators identity for complex square matrices (see e.g. Proposition 3.35 in [113]) \[e^{A}Be^{-A}=B+[A,B]+\frac{1}{2}[A,[A,B]]+\ldots+\frac{1}{n!}[A,[A,\ldots[A,B] \ldots]]+\ldots, \tag{108}\] where in the \(n\)-th term \(n\) commutators appear. It is easy to show \[\left[\frac{\vec{a}^{\dagger}W\vec{a}}{4},a_{i}\right]=-\sum_{j=1}^{2d}W_{ij}a _{j}, \tag{109}\] from which we obtain recursively for \(n\) nested commutators \[\left[\frac{\vec{a}^{\dagger}W\vec{a}}{4},\left[\frac{\vec{a}^{\dagger}W\vec{ a}}{4},\ldots\left[\frac{\vec{a}^{\dagger}W\vec{a}}{4},a_{i}\right]\ldots \right]\right]=\left(-1\right)^{n}\sum_{j=1}^{2d}\left(W^{n}\right)_{ij}a_{j}. \tag{110}\] From identity (108) we now get \[a_{i}\rho=\rho\sum_{j=1}^{2d}\left(e^{W}\right)_{ij}a_{j}. \tag{111}\] Using (111) and the cyclic property of the trace, from (115) we get \[\sum_{j=1}^{2d}\left(1+e^{W}\right)_{ij}\mathrm{tr}[a_{j}a_{i_{2}}a_{i_{3}} \ldots a_{i_{n}}\rho]=\sum_{k=2}^{n}(-1)^{k}\{a_{i},a_{i_{k}}\}\mathrm{tr}[a_ {i_{2}}a_{i_{3}}\ldots a_{i_{k-1}}a_{i_{k+1}}\ldots a_{i_{n}}\rho] \tag{112}\] for \(i=1,2,\ldots,2d\). From the assumption of the theorem we have \(\mathcal{Z}\neq 0\), and since \(\mathcal{Z}^{2}=\det(e^{\frac{W}{2}}+e^{-\frac{W}{2}})\) it follows that \(1+e^{W}\) is invertible.
We can thus multiply (112) by \((1+e^{W})_{\ell i}^{-1}\) and sum over \(i\) to get \[\mathrm{tr}[a_{\ell}a_{i_{2}}\ldots a_{i_{n}}\rho]=\sum_{k=2}^{n}(-1)^{k}C_{ \ell i_{k}}\mathrm{tr}[a_{i_{2}}a_{i_{3}}\ldots a_{i_{k-1}}a_{i_{k+1}}\ldots a _{i_{n}}\rho], \tag{113}\] where \(C_{\ell j}=\sum_{i=1}^{2d}(1+e^{W})_{\ell i}^{-1}\{a_{i},a_{j}\}\). Using the Majorana anticommutation relations \(\{a_{i},a_{j}\}=2\delta_{ij}\) explicitly, we get the simplification \(C_{\ell j}=2(1+e^{W})_{\ell j}^{-1}\). Since the correlation matrix is given by \(\Gamma=\tanh(W/2)\), it follows that \(C_{\ell j}=\left\langle a_{\ell}a_{j}\right\rangle\). We have thus shown \[\mathrm{tr}[a_{i_{1}}a_{i_{2}}\ldots a_{i_{n}}\rho]=\sum_{k=2}^{n}(-1)^{k} \left\langle a_{i_{1}}a_{i_{k}}\right\rangle\mathrm{tr}[a_{i_{2}}a_{i_{3}} \ldots a_{i_{k-1}}a_{i_{k+1}}\ldots a_{i_{n}}\rho]\;. \tag{114}\] The theorem follows by applying the recursive definition of the pfaffian for an \(n\times n\) antisymmetric matrix \(M\) with \(n\) even, \[\mathrm{pf}(M)=\sum_{k=2}^{n}(-1)^{k}M_{1k}\mathrm{pf}(M_{\hat{1},\hat{k}}), \tag{115}\] where \(M_{\hat{1},\hat{k}}\) stands for the matrix with both the first and the \(k\)-th rows and columns removed.
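The recursive definition (115) translates directly into code. A minimal (non-optimized) Python sketch, with a consistency check against the identity \(\mathrm{pf}(M)^{2}=\det(M)\):

```python
import numpy as np

def pf(M):
    # recursive pfaffian of an even-dimensional antisymmetric matrix,
    # expanding along the first row as in (115)
    M = np.asarray(M)
    n = M.shape[0]
    if n == 0:
        return 1.0
    acc = 0.0
    for k in range(1, n):   # column index k here is 0-based
        minor = np.delete(np.delete(M, (0, k), axis=0), (0, k), axis=1)
        acc += (-1)**(k + 1)*M[0, k]*pf(minor)
    return acc

# sanity check: pf(M)^2 = det(M) for an antisymmetric M
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
M = A - A.T
assert np.isclose(pf(M)**2, np.linalg.det(M))
```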
2303.12477
The Continuous Stochastic Gradient Method: Part II -- Application and Numerics
In this contribution, we present a numerical analysis of the continuous stochastic gradient (CSG) method, including applications from topology optimization and convergence rates. In contrast to standard stochastic gradient optimization schemes, CSG does not discard old gradient samples from previous iterations. Instead, design dependent integration weights are calculated to form a linear combination as an approximation to the true gradient at the current design. As the approximation error vanishes in the course of the iterations, CSG represents a hybrid approach, starting off like a purely stochastic method and behaving like a full gradient scheme in the limit. In this work, the efficiency of CSG is demonstrated for practically relevant applications from topology optimization. These settings are characterized by both, a large number of optimization variables \textit{and} an objective function, whose evaluation requires the numerical computation of multiple integrals concatenated in a nonlinear fashion. Such problems could not be solved by any existing optimization method before. Lastly, with regards to convergence rates, first estimates are provided and confirmed with the help of numerical experiments.
Max Grieshammer, Lukas Pflug, Michael Stingl, Andrian Uihlein
2023-03-22T11:39:58Z
http://arxiv.org/abs/2303.12477v1
# The Continuous Stochastic Gradient Method: Part II -- Application and Numerics

###### Abstract

In this contribution, we present a numerical analysis of the _continuous stochastic gradient_ (CSG) method, including applications from topology optimization and convergence rates. In contrast to standard stochastic gradient optimization schemes, CSG does not discard old gradient samples from previous iterations. Instead, design dependent integration weights are calculated to form a linear combination as an approximation to the true gradient at the current design. As the approximation error vanishes in the course of the iterations, CSG represents a hybrid approach, starting off like a purely stochastic method and behaving like a full gradient scheme in the limit. In this work, the efficiency of CSG is demonstrated for practically relevant applications from topology optimization. These settings are characterized by both a large number of optimization variables _and_ an objective function, whose evaluation requires the numerical computation of multiple integrals concatenated in a nonlinear fashion. Such problems could not be solved by any existing optimization method before. Lastly, with regards to convergence rates, first estimates are provided and confirmed with the help of numerical experiments.

**Keywords:** Stochastic Gradient Scheme, Convergence Analysis, Step Size Rule, Backtracking Line Search, Constant Step Size

Afterwards, Section 3 briefly covers techniques to estimate the gradient approximation error during the optimization, before we focus on the convergence rate of CSG in Section 4. While the expected rates stated therein are _not_ proven, we present detailed numerical examples to solidify our claims. Furthermore, we analyze how the convergence rate depends on the dimension of integration and how to avoid slow convergence if the objective function admits additional structure.

## 2 Nanoparticle Design Optimization

Since the design of a nanoparticle, i.e., its shape, size, material distribution, etc., heavily impacts its optical properties, the task of optimizing a nanoparticle design with respect to a specific optical property arises naturally [11]. In this section, we are interested in using hematite nanoparticles to optimize the color of a paint film [12]. Thus, we start by introducing our main framework for this application.

### Color Spaces

First off, we should explain what _optimal color_ means in our setting. There are several different methods to describe color mathematically, e.g., assigning each color an RGB representation vector \(\mathbf{v}\in\mathbb{R}^{3}\), where the three components of \(\mathbf{v}\) correspond to the red, green and blue values of the color. In our application, we are interested in the color of the paint film as it appears to the human eye. Therefore, the underlying color space should be chosen based on the following property: _If the Euclidean distance between the representation vectors of two colors is small, the colors should be almost indistinguishable to the human eye._ As it turns out, the RGB color space is a very poor choice with respect to this feature.
Hence, we instead choose the CIELAB color space [13], which was introduced by the International Commission on Illumination (Commission Internationale de l'Eclairage, CIE), as it was designed with this exact purpose in mind. The CIELAB representation of a color consists of three values \(\mathbf{L}\), \(\mathbf{a}\) and \(\mathbf{b}\). Here, \(\mathbf{L}\) corresponds to the lightness of a color and ranges from 0 (black) to 100 (white). The values of \(\mathbf{a}\) and \(\mathbf{b}\), typically within the range of \(\pm 150\), describe the color's position with respect to the opponent color pairs green-red and blue-yellow. A short overview is given in Figure 1. Another color space, which naturally arises from our setting, is the CIE 1931 XYZ color space [14]. The values of X, Y and Z can be calculated by integrating the optical properties of a particle over the spectrum of visible light (400nm - 700nm), which we denote by \(\Lambda\). Each of these integrations is weighted by the corresponding color matching functions \(x,y,z:\Lambda\rightarrow\mathbb{R}\). Thus, in our application, we will first calculate the CIE 1931 XYZ representation of the resulting color and then use the (nonlinear) color space transformation \(\Psi:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\) with \(\Psi(\text{X,Y,Z})=(\mathbf{L},\mathbf{a},\mathbf{b})^{\top}\) to work in the CIELAB color space. For this transformation, we define a reference white point \[\begin{pmatrix}\mathrm{X}_{r}\\ \mathrm{Y}_{r}\\ \mathrm{Z}_{r}\end{pmatrix}=\begin{pmatrix}94.72528492\\ 100\\ 107.13012997\end{pmatrix}\] and denote the relative XYZ values by \[\tilde{\mathrm{X}}=\tfrac{\mathrm{X}}{\mathrm{X}_{r}},\quad\tilde{\mathrm{Y}} =\tfrac{\mathrm{Y}}{\mathrm{Y}_{r}},\quad\text{and}\quad\tilde{\mathrm{Z}}= \tfrac{\mathrm{Z}}{\mathrm{Z}_{r}}.\] Utilizing the standard CIE parameters \(\epsilon=\frac{216}{24389}\) and \(\kappa=\frac{24389}{27}\), the LAB color values are then given by \[\mathbf{L}=116f(\tilde{\mathrm{Y}})-16,\quad\mathbf{a}=500\bigl{(}f(\tilde{ \mathrm{X}})-f(\tilde{\mathrm{Y}})\bigr{)}\quad\text{and}\quad\mathbf{b}=200 \bigl{(}f(\tilde{\mathrm{Y}})-f(\tilde{\mathrm{Z}})\bigr{)},\] where \(f:\mathbb{R}\to\mathbb{R}\) is defined as \[f(t)=\begin{cases}\sqrt[3]{t}&\text{if }t>\epsilon\\ \frac{\kappa t+16}{116}&\text{otherwise}\end{cases}.\]

### Mie Theory and Discrete Dipole Approximation

Given a nanoparticle shape and material, we can use the time-harmonic Maxwell's equations to calculate its optical properties. Specifically, in our setting, we are interested in the absorption (Abs), scattering (Sca) and geometry factor (Geo). The time required and the precision achieved are, of course, dependent on our model of the nanoparticle and the method used to solve Maxwell's equations. For our setting, we choose two different approaches.

Figure 1: Resulting color for various different values of \(\mathbf{a}\) and \(\mathbf{b}\). Positive values of \(\mathbf{a}\) result in red colors, while colors corresponding to negative values of \(\mathbf{a}\) appear green. Similarly, positive \(\mathbf{b}\) values yield yellow colors, while negative \(\mathbf{b}\) values shift the color into the blue spectrum. In this figure, we fixed \(\mathbf{L}=50\).

On the one hand, we will use the discrete dipole approximation (DDA) [15; 16; 17], in which the particle is discretized into an equidistant grid of dipole cells. Thus, DDA allows the analysis of arbitrary particle shapes and material distributions.
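The transformation \(\Psi\) above translates directly into code. A minimal Python sketch (scalar inputs only; the white point and the parameters \(\epsilon,\kappa\) are the ones given above):

```python
import numpy as np

# reference white point and CIE parameters as given above
XYZ_REF = np.array([94.72528492, 100.0, 107.13012997])
EPS, KAPPA = 216/24389, 24389/27

def _f(t):
    # the piecewise function f from the text
    return np.cbrt(t) if t > EPS else (KAPPA*t + 16)/116

def xyz_to_lab(X, Y, Z):
    # color space transformation Psi: (X, Y, Z) -> (L, a, b)
    fx, fy, fz = (_f(t) for t in np.array([X, Y, Z])/XYZ_REF)
    return 116*fy - 16, 500*(fx - fy), 200*(fy - fz)

print(xyz_to_lab(50.0, 40.0, 30.0))  # sample usage
```

We now return to the DDA approach introduced above.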
The downside lies in the computational complexity of the method, which scales with the total number of dipoles and therefore grows rapidly when increasing the resolution. While the CSG method is still capable of solving the resulting optimization problem in our experiments, the tremendous computational cost associated with the DDA approach severely impedes a detailed analysis of the problem. In particular, there is no computationally feasible, generic optimization scheme to compare our results with. However, we want to note that optimization in the DDA model has already been done in a slightly simpler setting, where the full integral over \(\Lambda\) was replaced by summation over a small number of different wavelengths [18]. On the other hand, Mie theory [19; 20] provides a numerically cheap alternative, at the price of a more restrictive setting. In Mie theory, one considers only radially symmetric particles. In this special setting, it is possible to find analytic solutions to the time-harmonic Maxwell's equations based on series expansions. Therefore, in our first approach, we will only consider core-shell particles, as the utilization of Mie theory allows for a much deeper analysis of the resulting optimization problem and a comparison to deterministic optimization approaches, which rely on discretization of the integrals.

### Nanoparticles in Paint Film - Kubelka-Munk Theory

As mentioned above, the XYZ color values of the paint film can be calculated by integration of the corresponding color matching functions \(x,y,z\) and the important optical properties of the nanoparticle. The precise method to obtain X, Y and Z is given by the Kubelka-Munk theory [21], augmented by a Saunderson correction [22]. For a paint film, in which nanoparticles with design \(u\) are present and which is illuminated by light with wavelength \(\lambda\in\Lambda\), the resulting color can be expressed by the \(K\) and \(S\) values \[K(u,\lambda)=\mathrm{Abs}(u,\lambda)\quad\text{and}\quad S(u,\lambda)=\mathrm{ Sca}(u,\lambda)\big{(}1-\mathrm{Geo}(u,\lambda)\big{)}\] via the reflectance \[R_{\infty}(u,\lambda)=1+\frac{8}{3}\frac{K(u,\lambda)}{S(u,\lambda)}-\sqrt{ \left(\frac{8}{3}\frac{K(u,\lambda)}{S(u,\lambda)}\right)^{2}+\frac{16}{3} \frac{K(u,\lambda)}{S(u,\lambda)}}\,.\] Now, X, Y and Z can be obtained by \[\mathrm{X}(u) =\int_{\Lambda}x(\lambda)\frac{(1-\rho_{0}-\rho_{1})R_{\infty}(u,\lambda)+\rho_{0}}{1-\rho_{1}R_{\infty}(u,\lambda)}\mathrm{d}\lambda,\] \[\mathrm{Y}(u) =\int_{\Lambda}y(\lambda)\frac{(1-\rho_{0}-\rho_{1})R_{\infty}(u,\lambda)+\rho_{0}}{1-\rho_{1}R_{\infty}(u,\lambda)}\mathrm{d}\lambda,\]
Instead of identical copies, when trying to produce nanoparticles of a specific design in large quantities, one usually ends up with a mixture of particles of different designs, following a certain probability distribution \(\mu_{u}\), which depends on the intended design \(u\). We model this aspect by assuming that, given a design \(u=(R,d)\), the particles present in the paint film follow a normal distribution (truncated to a reasonable design space \(\mathcal{R}\times\mathcal{D}\)) centered around \(u\), i.e.,

\[\tilde{R}\sim\mathcal{N}(R,\tfrac{1}{10}R)\quad\text{and}\quad\tilde{d}\sim\mathcal{N}(d,\tfrac{1}{10}d).\]

Therefore, the \(K\) and \(S\) values in the Kubelka-Munk model need to be replaced by their averaged counterparts

\[K(u,\lambda)=\iint_{\mathcal{R}\times\mathcal{D}}\mathrm{Abs}(\tilde{R},\tilde{d},\lambda)\mathrm{d}\mu_{u}(\tilde{R},\tilde{d})\]

and

\[S(u,\lambda)=\iint_{\mathcal{R}\times\mathcal{D}}\mathrm{Sca}(\tilde{R},\tilde{d},\lambda)\big(1-\mathrm{Geo}(\tilde{R},\tilde{d},\lambda)\big)\mathrm{d}\mu_{u}(\tilde{R},\tilde{d}),\]

Figure 2: Radially symmetric core-shell nanoparticle. The inner core (blue) has radius \(R\) in the range of 1nm - 75nm and consists of water. The thickness of the hematite shell (red) is denoted by \(d\) and ranges from 1nm to 250nm.

before calculating the reflectance \(R_{\infty}(u,\lambda)\) and integrating it over \(\Lambda\). The objective in our application is to produce a paint of bright red color. Thus, the complete optimization problem reads

\[\max_{u\in\mathcal{U}}\quad\tfrac{1}{20}\operatorname{\mathbf{L}}(u)+\tfrac{19}{20}\operatorname{\mathbf{a}}(u). \tag{1}\]

### Challenges

The highly condensed fashion in which (1) is formulated may obscure many of the difficulties that arise when trying to solve it. To get a better understanding of the problem, let us first analyze the abstract structure of the objective function \(J(u)=\tfrac{1}{20}\operatorname{\mathbf{L}}(u)+\tfrac{19}{20}\operatorname{\mathbf{a}}(u)\):

\[\begin{pmatrix}\operatorname{Abs}\\ \operatorname{Sca}\\ \operatorname{Geo}\end{pmatrix}\xrightarrow{\text{integrate }\mathcal{R}\times\mathcal{D}}\begin{pmatrix}K\\ S\end{pmatrix}\xrightarrow{\text{Kubelka-Munk}}R_{\infty}\xrightarrow{\text{integrate }\Lambda}\begin{pmatrix}\mathrm{X}\\ \mathrm{Y}\\ \mathrm{Z}\end{pmatrix}\xrightarrow{\text{color transf. }\Psi}\begin{pmatrix}\mathbf{L}\\ \mathbf{a}\\ \mathbf{b}\end{pmatrix}\to J(u).\]

Since calculating \(J(u)\) and \(\nabla J(u)\) requires integrating the optical properties in multiple dimensions, and since evaluating said properties for any combination of \(\tilde{R}\), \(\tilde{d}\) and \(\lambda\) requires solving the time-harmonic Maxwell's equations, standard deterministic approaches, e.g., full gradient methods, run into a prediscretization problem. On the one hand, the number of integration points needs to be sufficiently large for our setting. In Figure 3, a slice through the objective function for a fixed value of \(R\) and several different numbers of integration points is shown. While we actually do not care too much about the approximation error resulting from a small number of integration points, the artificial local maxima introduced into the objective function by the discretization severely impact the quality of the optimization. In other words, many solutions to the discretized problem are completely unrelated to solutions to (1).
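For reference, the evaluation chain displayed above can be assembled into a single routine. This sketch reuses `reflectance`, `saunderson` and `xyz_to_lab` from the previous snippets; the callables `K_fun` and `S_fun` stand in for the averaged Kubelka-Munk values of a design and are assumptions of this sketch.

```python
import numpy as np

def objective(u, lams, x_bar, y_bar, z_bar, K_fun, S_fun):
    """Evaluate J(u) = (1/20) L(u) + (19/20) a(u) along the chain
    (K, S) -> R_inf -> (X, Y, Z) -> (L, a, b) -> J(u).

    lams: wavelength grid on [400, 700] nm; x_bar, y_bar, z_bar: sampled
    color matching functions (assumed given)."""
    K = np.array([K_fun(u, lam) for lam in lams])
    S = np.array([S_fun(u, lam) for lam in lams])
    r = saunderson(reflectance(K, S))   # corrected reflectance per wavelength
    dlam = lams[1] - lams[0]
    X, Y, Z = (np.sum(cmf * r) * dlam for cmf in (x_bar, y_bar, z_bar))
    L, a, _ = xyz_to_lab(np.array([X, Y, Z]))
    return L / 20.0 + 19.0 * a / 20.0
```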
We want to note that, even though not all of the stationary points in Figure 3 correspond to stationary points of (1), the prediscretization still leads to very flat regions in the objective functions, which hinder the performance of many solvers. In Figure 4, this effect is displayed. On the other hand, the number of integration points is heavily restricted by the computational cost associated with the evaluation of Abs, Sca and Geo. While medium resolutions (\(25^{3}\sim 15000\) points in total) are still numerically tractable for simple Mie particles, they are outright impossible to achieve in the more general DDA setting, which we want to consider later. For comparison: the optimization in [18] was carried out using a discretization consisting of \(20\) points in total. We want to emphasize that standard SG-type schemes, or even the _Stochastic Composition Gradient Descent_ (SCGD) method [23], which was used for the comparison for composite objective functions in [2, Section 4.2], are not capable of solving (1), due to the special structure of \(J\).

Figure 4: Flat regions in the discretized objective functions. The underlying contour plot corresponds to the discretization of \(\Lambda\times\mathcal{R}\times\mathcal{D}\) into \(50\times 50\times 50\) points. For each figure, the green region consists of all points at which the euclidean norm of the gradient of the discretized objective function is smaller than \(0.05\). The discretizations of \(\Lambda\times\mathcal{R}\times\mathcal{D}\) are given in the titles, respectively.

Figure 3: Objective function values for fixed core radius of 3nm. Different graphs correspond to different discretizations. The label of a curve shows into how many points the integrals over \(\Lambda\), \(\mathcal{R}\) and \(\mathcal{D}\) have been split, respectively. Each of the discretizations introduces artificial stationary points into the objective function.

### Discretization

For the reasons mentioned above, we will only compare the results obtained by CSG to generic deterministic optimization schemes for various choices of discretization. Since the integration over \(\Lambda\) admits no special structure, we always choose an equidistant partition for this dimension of integration. However, for the integration over \(\mathcal{R}\times\mathcal{D}\), we can use our knowledge of \(\mu_{u}\) to achieve a better approximation to the true integral. Instead of dividing \(\mathcal{R}\times\mathcal{D}\) into an equidistant grid, we utilize the fact that \(\tilde{R}\) and \(\tilde{d}\) are normally distributed and independent of each other. Since, for a normal distribution, \(99.7\%\) of all weight is concentrated in the \(3\sigma\)-interval around the mean value, we may discretize only this portion of the full domain in each step. Moreover, we know the precise density function for both \(\tilde{R}\) and \(\tilde{d}\). Thus, given a design \(u_{n}=(R_{n},d_{n})\), we will partition \(\left(R_{n}-\frac{3}{10}R_{n},R_{n}+\frac{3}{10}R_{n}\right)\) and \(\left(d_{n}-\frac{3}{10}d_{n},d_{n}+\frac{3}{10}d_{n}\right)\) not into equidistant intervals, but instead into intervals of equal weight. This procedure is illustrated in Figures 5 and 6 and produces very good results even for a small number of sample points. However, as we have already seen in Figure 3, even this dedicated discretization scheme introduces additional problems into (1). Furthermore, we want to emphasize that choosing a reasonable discretization is a challenge of its own.
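A minimal sketch of this equal-weight partition, interpreting the second parameter of \(\mathcal{N}(R,\tfrac{1}{10}R)\) as the standard deviation and placing one node at the probability midpoint of each cell (one natural choice, and an assumption on our part):

```python
import numpy as np
from scipy.stats import norm

def equal_weight_nodes(mean, n):
    """Nodes for N(mean, mean/10) on its 3-sigma interval, placed so that
    every cell carries the same probability mass."""
    sigma = mean / 10.0
    # Probability mass of the 3-sigma interval (~99.7%).
    lo, hi = norm.cdf(-3), norm.cdf(3)
    # Split this mass into n cells of equal weight and take cell midpoints.
    probs = lo + (hi - lo) * (np.arange(n) + 0.5) / n
    return mean + sigma * norm.ppf(probs)

print(equal_weight_nodes(mean=50.0, n=5))  # nodes cluster near the mean
```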
Regarding this choice, not only is there no a priori indication of the general magnitude of the number of points needed, it is also unclear whether one should use the same number of points in each direction.

### Numerical Results

As mentioned above, the restriction to radially symmetric nanoparticles allows us to apply standard blackbox solvers to (1), in order to have a comparison for the CSG results. In our case, we chose the _fmincon_ implementation of an interior point algorithm, integrated in MATLAB, as it is an easy-to-use blackbox algorithm that yields reproducible results. Specifically, we compared the results of SCIBL-CSG with empirical weights on \(\mathcal{R}\times\mathcal{D}\) and exact hybrid weights on \(\Lambda\) (cf. [2, Section 3]) to the fmincon results for three different discretization schemes of \(\Lambda\times\mathcal{R}\times\mathcal{D}\). Two of these are equal in each dimension (\(10\times 10\times 10\) and \(7\times 7\times 7\)), while the last one is asymmetric (\(8\times 2\times 2\)). Once again, we want to stress that finding an appropriate discretization scheme already requires a thorough analysis of (1). The specific choices listed above represent three of the most promising candidates found during our investigation. As we consider this example to be a prototype for more advanced settings from topology optimization, e.g., switching the setting to the DDA model later, we compare the different approaches with respect to the number of inner gradient evaluations, since this is by far the most time-consuming step in these cases. To be precise, an evaluation represents the calculation of Abs, Sca, Geo, \(\nabla\,\mathrm{Abs}\), \(\nabla\,\mathrm{Sca}\) and \(\nabla\,\mathrm{Geo}\) for a single \((\lambda,\tilde{R},\tilde{d})\in\Lambda\times\mathcal{R}\times\mathcal{D}\). Since the produced iterates depend on the initial design, we randomly selected 500 starting points in the whole design domain \(\mathcal{U}=[1,75]\times[1,250]\). In each optimization run, the total number of evaluations was limited to 50.000 for fmincon and to 5.000 for SCIBL-CSG. To obtain an overview of the general performance of the different approaches, we take snapshots of all iterates after different amounts of evaluations. The results are given in Figure 9 and Figure 10 and give a good impression of how fast each method tends to find solutions to (1). Note that, for the sake of readability and better comparison, the final CSG iterates after 5.000 evaluations are shown in all graphs labeled with a higher number of total evaluations. By comparing Figure 9 and Figure 10 with Figure 4, we observe that the artificial flat regions discussed earlier indeed slow down the optimization progress for all choices of prediscretization. Furthermore, we note that only the highest resolution \(10\times 10\times 10\) overcomes this approximation error, at the cost of the largest number of evaluations needed. In contrast, the resolutions \(7\times 7\times 7\) and \(8\times 2\times 2\) converge much faster, but some of the final designs are not stationary points of (1). Out of the 500 optimization runs we performed, \(7\times 7\times 7\) converged to a wrong design, i.e., an artificial local maximum, 16 times (3.2%). For \(8\times 2\times 2\), a wrong design was found in 218 (43.6%) instances, see Figure 10. Lastly, we are interested in the performance of each method with respect to \(J(u_{n})\) over the course of the iterations.
Since each local solution to (1) admits a different objective function value, we focus only on the global maximum. For all approaches, we selected all runs whose final designs are closer to the global maximum of (1) than to any other stationary point. The results are shown in Figure 7 and Figure 8.

### Optimization in the DDA Model

As a final example from application, we drop the restriction to core-shell particles and consider hematite nanoparticles of arbitrary shape within the DDA model. While the setting is very similar to the setting analyzed above, there are some minor differences. First, we slightly change the weights appearing in the objective function:

\[\max_{u\in\mathcal{U}}\quad\tfrac{1}{2}\,\mathbf{L}(u)+\tfrac{1}{2}\,\mathbf{a}(u). \tag{2}\]

This change was made purely for aesthetics, as the weights in (1) favour radially symmetric solutions, while (2) admits local solutions with a more interesting design structure. Furthermore, we no longer assume a particle design distribution, since it is unclear what such a general shape distribution should look like. However, as the particles are no longer radially symmetric, we now have to consider the orientation of the particle with respect to the incoming light ray instead. Therefore, the \(K\) and \(S\) values explained in the introduction of this setting need to be averaged over all possible orientations, i.e.,

\[K(u,\lambda)=\frac{1}{|\mathbb{S}^{2}|}\iint_{\mathbb{S}^{2}}\mathrm{Abs}(u,\lambda,\nu)\mathrm{d}\nu\]

Figure 8: The medians presented in Figure 7 (solid lines) and the corresponding quantiles \(P_{0.25,0.75}\), indicated by the shaded areas. For better visibility, the number of evaluations is scaled logarithmically and the discretization \(8\times 2\times 2\) was discarded.

Figure 7: Median objective function value of all optimization runs in which the final design was closer to the global maximum of (1) than to any other stationary point. The values were obtained using a discretization into \(50\times 50\times 50\) points.

Figure 9: Iterates of the different optimization approaches for (1) in the whole design domain \(\mathcal{U}=[1,75]\times[1,250]\). For fmincon, the discretization of \(\Lambda\times\mathcal{R}\times\mathcal{D}\) is given in the titles, respectively. To measure the progress, the starting points are also shown. As mentioned above, an evaluation corresponds to the calculation of Abs, Sca, Geo, \(\nabla\,\)Abs, \(\nabla\,\)Sca and \(\nabla\,\)Geo for one combination \((\lambda,\tilde{R},\tilde{d})\in\Lambda\times\mathcal{R}\times\mathcal{D}\). Again, the underlying contours are obtained by discretizing \(\Lambda\times\mathcal{R}\times\mathcal{D}\) into \(50\times 50\times 50\) points.

Figure 10: Continuation of the results for (1) presented in Figure 9. Since CSG was stopped after 5.000 evaluations, the iterates do not change afterwards, but are still shown as a point of reference. In the last row, final designs obtained by \(7\times 7\times 7\) and \(8\times 2\times 2\), which do not correspond to stationary points of (1), are highlighted in blue.

and

\[S(u,\lambda)=\frac{1}{|\mathbb{S}^{2}|}\iint_{\mathbb{S}^{2}}\mathrm{Sca}(u,\lambda,\nu)\big(1-\mathrm{Geo}(u,\lambda,\nu)\big)\mathrm{d}\nu.\]

Here, \(\mathbb{S}^{2}\) denotes the unit sphere and the particle orientation \(\nu\) is assumed to be distributed uniformly at random over all possible directions. The design domain is a ball of \(300\)nm diameter, discretized into \(n_{0}=65752\) dipole cells.
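The orientation average can be approximated straightforwardly by Monte Carlo: directions are drawn uniformly on \(\mathbb{S}^{2}\) by normalizing Gaussian samples. In the following sketch, the callable `absorption` stands in for the DDA evaluation of Abs and is an assumption of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_directions(m):
    """Draw m directions uniformly on S^2 by normalizing Gaussian samples."""
    v = rng.standard_normal((m, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def averaged_K(absorption, u, lam, m=40):
    """Monte Carlo estimate of K(u, lam): the mean of Abs(u, lam, nu) over
    uniformly random orientations nu on the unit sphere."""
    return float(np.mean([absorption(u, lam, nu)
                          for nu in uniform_directions(m)]))

# Dummy absorption depending only on the polar angle; the exact mean of
# |nu_z| over the sphere is 1/2.
print(averaged_K(lambda u, lam, nu: abs(nu[2]), u=None, lam=550.0, m=10000))
```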
The design \(u\in[0,1]^{n_{0}}\) gives the relative amount of hematite to water in each cell. The optical properties of intermediate (grey) material \(u^{(i)}\in(0,1)\) are generated by linear interpolation between the respective properties of water and hematite. Generally, one would combine filtering techniques and greyness penalization to obtain a smooth final design without intermediate material (see, e.g., [24]). However, we explicitly refrain from doing so to present a clear analysis of the CSG performance, without interference from secondary layers of smoothing techniques. As mentioned above, the change to the DDA model significantly increases the computational cost of evaluating Sca, Abs and Geo for a given \((u,\lambda,\nu)\in\mathcal{U}\times\Lambda\times\mathbb{S}^{2}\). Thus, the deterministic approaches used in the previous setting are no longer computationally feasible. Furthermore, we want to use this example to analyze the impact of the chosen norm on \(\mathcal{U}\times\Lambda\times\mathbb{S}^{2}\), appearing in the nearest neighbor calculation, which was already mentioned in [2, Section 3.5]. To be precise, calculating the CSG integration weights requires the definition of an outer norm

\[\big\|(u^{*},\lambda^{*},\nu^{*})\big\|_{\mathrm{Out}}=c_{u}\|u^{*}\|_{\mathcal{U}}+c_{\lambda}\|\lambda^{*}\|_{\Lambda}+c_{\nu}\|\nu^{*}\|_{\mathbb{S}^{2}},\]

where \(\|\cdot\|_{\mathcal{U}}\), \(\|\cdot\|_{\Lambda}\) and \(\|\cdot\|_{\mathbb{S}^{2}}\) denote norms on the corresponding inner spaces and \(c_{u},c_{\lambda},c_{\nu}>0\). In this application, we choose the euclidean norm \(\|\cdot\|_{2}\) for each inner space. Additionally, we fix \(c_{u}=1\), but consider different coefficients \(c_{\lambda}\) and \(c_{\nu}\). For the optimization, we consider three different initial designs, which are shown in Figure 11, top row. The objective function value as well as the values of \(\mathbf{L}\), \(\mathbf{a}\) and \(\mathbf{b}\) for these designs were computed using the CSG method with fixed design, i.e., with constant step size \(\tau=0\), and verified by Monte Carlo integration (see, e.g., [25]). For one of the initial designs, the objective function value approximation of CSG and Monte Carlo integration with respect to the number of evaluations and different choices of \(\|\cdot\|_{\mathrm{Out}}\) is shown in Figure 12. Each design was optimized with SCIBL-CSG, using inexact hybrid weights for the integration over \(\mathbb{S}^{2}\) and exact hybrid weights for the integration over \(\Lambda\). For \(\|\cdot\|_{\mathrm{Out}}\), we considered four different choices of the parameters:

(a) \(c_{u}=1\), \(c_{\lambda}=100\) and \(c_{\nu}=100\)
(b) \(c_{u}=1\), \(c_{\lambda}=1\) and \(c_{\nu}=1\)
(c) \(c_{u}=1\), \(c_{\lambda}=\frac{1}{100}\) and \(c_{\nu}=1\)
(d) \(c_{u}=1\), \(c_{\lambda}=\frac{1}{100}\) and \(c_{\nu}=\frac{1}{100}\)

The results in case (a) for all three initial designs are presented in Figure 13, and the respective design evolution for the initial design _screwdriver (50%)_, shown in Figure 11, top row, is depicted in Figure 14. The corresponding final designs, obtained after 5.000 SCIBL-CSG iterations, are presented in Figure 11, bottom row. As a second measure for convergence in the design space, the evolution of the norm distance to the respective final designs is shown in Figure 15 for all three initial designs.
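For concreteness, the outer norm and the nearest-neighbor lookup it induces can be sketched as follows; the default coefficients correspond to case (a), all inner norms are euclidean as in the text, and the function names are ours.

```python
import numpy as np

def outer_norm(du, dlam, dnu, c_u=1.0, c_lam=100.0, c_nu=100.0):
    """Weighted outer norm on U x Lambda x S^2; defaults correspond to (a).
    du and dnu are difference vectors (NumPy arrays), dlam a scalar."""
    return (c_u * np.linalg.norm(du)
            + c_lam * abs(dlam)
            + c_nu * np.linalg.norm(dnu))

def nearest_sample(point, samples, **coeffs):
    """Index of the stored sample (u_k, lam_k, nu_k) closest to `point`
    with respect to the outer norm, as used in the weight computation."""
    u, lam, nu = point
    dists = [outer_norm(u - uk, lam - lk, nu - nk, **coeffs)
             for (uk, lk, nk) in samples]
    return int(np.argmin(dists))
```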
Comparing Figure 12 and Figure 13, we notice that CSG, using an appropriate outer norm, finds an optimized design almost as fast as it computes the objective function value for a given design. In other words: the full optimization process is only slightly more expensive than the simple evaluation of a single design. Moreover, CSG finds an optimal solution to (2) long before the Monte Carlo approximation to the initial objective function value has converged. It should, of course, also be noted that choosing \(\left\|\cdot\right\|_{\mathrm{Out}}\) should be done with caution, as Figure 16 shows. While case (a) is, to the best of our knowledge, _not_ optimal by any means, cases (b) and (c) clearly show worse results. Choosing

Figure 11: Representation of the initial designs (top row). Red boxes correspond to cells consisting purely of hematite, while grey boxes indicate an artificial intermediate material, consisting of 50% hematite and 50% water. For later reference, we denote the initial designs by _plate (100%)_, _plate (50%)_ and _screwdriver (50%)_, respectively. The different final designs, obtained by 5.000 iterations of SCIBL-CSG with outer norm (a), are shown in the bottom row. For better visibility, cells with less than 50% hematite are considered as pure water and left out of the visualization. For each final design, the number of cells discarded in this fashion is less than 100 (less than 0.15% of all cells).

\(\|\cdot\|_{\mathrm{Out}}\) extremely poorly, i.e., case (d), can even have devastating effects on the performance, see Figure 17. This, however, could also imply that the performance might be significantly improved if problem-specific inner and outer norms were chosen. Especially in even more complex settings, techniques to obtain such norms a priori, or even during the optimization process itself, represent one of the most important points for further research.

Figure 12: Objective function approximation for the _screwdriver (50%)_ design. The blue and orange curves show the results for CSG with fixed step size \(\tau=0\) and different coefficients of the outer norm \(\left\|\cdot\right\|_{\mathrm{Out}}\). For Monte Carlo, each inner integral over \(\mathbb{S}^{2}\) was approximated using 40 random directions. The true objective function value \(J^{*}\approx 37.84\) is indicated by the dashed line. The Monte Carlo results are truncated for the sake of readability, as it requires over 8.000 evaluations to reach a good approximation to \(J^{*}\).

Figure 13: CSG objective function approximations during the optimization process for all initial designs and choice (a) for \(\left\|\cdot\right\|_{\mathrm{Out}}\), i.e., \(c_{u}=1\), \(c_{\lambda}=100\) and \(c_{\nu}=100\). The dashed lines indicate the objective function values of each initial design, respectively.

Figure 14: Top left to bottom right: Design evolution during the optimization process for the _screwdriver (50%)_ initial design and outer norm (a). The design snapshots were taken every 200 iterations. Red boxes represent design cells consisting of pure hematite. Intermediate material is indicated via a color gradient, where a cell filled with 50% water and 50% hematite is colored grey. Based on this gradient, depending on the ratio of hematite and water in a cell, the cell color is shifted to red (more hematite) or blue (more water).

Figure 16: CSG objective function value approximation during the optimization process for the _plate (100%)_ initial design.
The dashed line shows the initial objective function value, whereas the different graphs correspond to the choices (a), (b) and (c) for \(\|\cdot\|_{\mathrm{Out}}\).

Figure 17: Results for the _plate (100%)_ initial design presented in Figure 16, augmented by the CSG objective function value approximation in the case that \(\|\cdot\|_{\mathrm{Out}}\) was chosen according to (d).

Figure 15: Euclidean distance (after dividing by \(\sqrt{\dim(\mathcal{U})}\) for scaling) between intermediate designs and the respective final design during the SCIBL-CSG optimization process, carried out with outer norm (a).

## 3 Online Error Estimation

Before we go into theoretical details, we first collect a few key properties and results concerning CSG, which were shown in [2]. In a first simple setting, we consider optimization problems of the form

\[\min\ J(u)\quad\text{s.t.}\quad u\in\mathcal{U}\subset\mathbb{R}^{d_{\text{o}}}\ \text{for some}\ d_{\text{o}}\in\mathbb{N}. \tag{3}\]

Additionally, we assume that \(\mathcal{U}\) is compact, and for some \(d_{\text{r}}\in\mathbb{N}\), there exists an open and bounded set \(\mathcal{X}\subset\mathbb{R}^{d_{\text{r}}}\) and a measure \(\mu\) with \(\text{supp}(\mu)\subset\mathcal{X}\), such that \(J\) can be written as \(J(u)=\int_{\mathcal{X}}j(u,x)\mu(\text{d}x)\). The detailed set of assumptions is given in [2, Section 2]. For now, it is only important that \(\nabla_{1}j:\mathcal{U}\times\mathcal{X}\to\mathbb{R}^{d_{\text{o}}}\) is bounded and Lipschitz continuous with Lipschitz constant \(L_{j}\). During the optimization process, CSG computes design-dependent integration weights \(\big(\alpha_{k}\big)_{k=1,\ldots,n}\) (cf. [2, Section 3]) to build an approximation \(\hat{G}_{n}\) to the true objective function gradient, based on the available samples from previous iterations \(\big(\nabla_{1}j(u_{k},x_{k})\big)_{k=1,\ldots,n}\). To be precise, we have

\[\nabla J(u)=\int_{\mathcal{X}}\nabla_{1}j(u,x)\mu(\text{d}x)\approx\sum_{k=1}^{n}\alpha_{k}\nabla_{1}j(u_{k},x_{k})=:\hat{G}_{n}.\]

It was shown in [2, Lemma 4.7] that

\[\|\nabla J(u_{n})-\hat{G}_{n}\|\to 0\quad\text{for }n\to\infty\text{ almost surely}.\]

Carefully investigating the methods to obtain the integration weights, we observe that

\[\left\|\nabla J(u_{n})-\hat{G}_{n}\right\|=\left\|\int_{\mathcal{X}}\nabla_{1}j(u_{n},x)\mu(\mathrm{d}x)-\hat{G}_{n}\right\|=\left\|\sum_{i=1}^{n}\int_{M_{i}}\nabla_{1}j(u_{n},x)\mu(\mathrm{d}x)-\sum_{i=1}^{n}\nabla_{1}j(u_{i},x_{i})\nu_{n}(M_{i})\right\|,\]

where \(\nu_{n}\) denotes the measure associated with the chosen integration weights, i.e., one of the measures listed in [2, Section 3.6], and

\[M_{k}:=\big\{x\in\mathcal{X}\,:\,\|u_{n}-u_{k}\|_{\mathcal{U}}+\|x-x_{k}\|_{\mathcal{X}}<\|u_{n}-u_{j}\|_{\mathcal{U}}+\|x-x_{j}\|_{\mathcal{X}}\ \text{for all}\ j\in\{1,\dots,n\}\setminus\{k\}\big\}.\]

By construction, \(M_{k}\) contains all points \(x\in\mathcal{X}\) such that \((u_{n},x)\) is closer to \((u_{k},x_{k})\) than to any other previous point we evaluated \(\nabla_{1}j\) at.
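A small sketch of the empirical variant of this construction: each past sample \(x_{i}\) is assigned to the cell \(M_{k}\) it falls into via the nearest-neighbor rule above, \(\alpha_{k}\) collects the resulting empirical mass, and \(\hat{G}_{n}\) is the weighted sum of the stored gradient samples. The last helper anticipates the quantity \(Z_{n}\) defined below and evaluates it on the sample points, as used for the online error estimate.

```python
import numpy as np

def empirical_weights(u_hist, x_hist, u_n):
    """Empirical CSG integration weights: each sample x_i contributes mass
    1/n to the index k minimizing ||u_n - u_k||_U + ||x_i - x_k||_X, i.e.,
    alpha_k approximates the empirical measure of the cell M_k.

    u_hist, x_hist: arrays of shape (n, d_o) and (n, d_r)."""
    u_hist, x_hist = np.asarray(u_hist, float), np.asarray(x_hist, float)
    n = len(x_hist)
    du = np.linalg.norm(u_hist - u_n, axis=1)       # ||u_n - u_k||_U
    alpha = np.zeros(n)
    for x in x_hist:
        dx = np.linalg.norm(x_hist - x, axis=1)     # ||x - x_k||_X
        alpha[np.argmin(du + dx)] += 1.0 / n
    return alpha

def csg_gradient(grad_hist, alpha):
    """hat{G}_n = sum_k alpha_k * grad_1 j(u_k, x_k)."""
    return alpha @ np.asarray(grad_hist, float)

def z_sup_estimate(u_hist, x_hist, u_n):
    """max_k Z_n(x_k), where Z_n(x) = min_k (||u_n - u_k|| + ||x - x_k||).
    Multiplied by an estimate of L_j, this yields the online error bound."""
    u_hist, x_hist = np.asarray(u_hist, float), np.asarray(x_hist, float)
    du = np.linalg.norm(u_hist - u_n, axis=1)
    return max(np.min(du + np.linalg.norm(x_hist - x, axis=1))
               for x in x_hist)
```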
For exact integration weights, we have \(\nu_{n}=\mu\) and thus

\[\begin{aligned}\left\|\nabla J(u_{n})-\hat{G}_{n}\right\|&=\left\|\sum_{i=1}^{n}\int_{M_{i}}\nabla_{1}j(u_{n},x)\mu(\mathrm{d}x)-\sum_{i=1}^{n}\int_{M_{i}}\nabla_{1}j(u_{i},x_{i})\mu(\mathrm{d}x)\right\|\\&\leq\sum_{i=1}^{n}\int_{M_{i}}\left\|\nabla_{1}j(u_{n},x)-\nabla_{1}j(u_{i},x_{i})\right\|\mu(\mathrm{d}x)\\&\leq\sum_{i=1}^{n}\int_{M_{i}}L_{j}\cdot\Big(\sup_{x\in M_{i}}Z_{n}(x)\Big)\mu(\mathrm{d}x)\\&=L_{j}\sum_{i=1}^{n}\mu(M_{i})\sup_{x\in M_{i}}Z_{n}(x)\\&\leq L_{j}\sup_{x\in\mathcal{X}}Z_{n}(x).\end{aligned}\]

Here, \(Z_{n}\) is given by

\[Z_{n}(x):=\min_{k\in\{1,\ldots,n\}}\big(\|u_{n}-u_{k}\|_{\mathcal{U}}+\|x-x_{k}\|_{\mathcal{X}}\big).\]

In other words, the approximation error can be bounded in terms of the Lipschitz constant of \(\nabla_{1}j\) and the quantity \(Z_{n}\), which relates to the size of the Voronoi cells [26] with positive integration weights. Both \(L_{j}\) and \(\sup_{x\in\mathcal{X}}Z_{n}(x)\) can be efficiently approximated during the optimization process, e.g., by finite differences of the samples \(\big(\nabla_{1}j(u_{i},x_{i})\big)_{i=1,\ldots,n}\) and by

\[\sup_{x\in\mathcal{X}}Z_{n}(x)\approx\max_{k=1,\ldots,n}Z_{n}(x_{k}),\]

yielding an online error estimation. Such an approximation may, for example, be used in stopping criteria.

## 4 Convergence Rates

Throughout this section, we assume [2, Assumptions 2.2 - 2.8] to be satisfied.

### Theoretical Background

In the convergence analysis presented in [2], we have already seen that the fashion in which the gradient approximation \(\hat{G}_{n}\) is calculated in CSG is crucial for \(\|\hat{G}_{n}-\nabla J(u_{n})\|\to 0\), and that this property of CSG is in turn the key to all advantages CSG offers in comparison to classic stochastic optimization methods, like convergence for constant step sizes, backtracking, more involved optimization problems, etc. The price we pay for this feature lies in the dependence of \(\hat{G}_{n}\) on the past iterates. For comparison, the search direction \(\hat{G}_{n}^{\mathrm{SG}}\) in a stochastic gradient descent method is given by

\[\hat{G}_{n}^{\mathrm{SG}}=\nabla_{1}j(u_{n},x_{n}).\]

Thus, it is independent of all previous steps and fulfills

\[\mathbb{E}_{\mathcal{X}}\left[\hat{G}_{n}^{\mathrm{SG}}\right]=\mathbb{E}_{\mathcal{X}}\big[\nabla_{1}j(u_{n},\cdot)\big]=\nabla J(u_{n}),\]

i.e., it is an unbiased sample of the full gradient. The combination of these properties allows for a straightforward convergence rate analysis, see, e.g., [27]. In contrast, \(\hat{G}_{n}\) is in general _not_ an unbiased approximation to \(\nabla J(u_{n})\) and moreover _not_ independent of \(\big(u_{i},x_{i}\big)_{i=1,\ldots,n-1}\). The main problem in finding the convergence rate of \(\|u_{n+1}-u_{n}\|\to 0\) is that this quantity depends on the approximation error \(\|\hat{G}_{n}-\nabla J(u_{n})\|\), which, as we have seen in Section 3, depends on \(Z_{n}\). Since \(Z_{n}\) itself is deeply connected to \(\min_{k}\|u_{n}-u_{k}\|\), we run into a circular argument. Therefore, up to now, we are not able to prove convergence rates for the CSG iterates. We can, however, state a prediction for this rate and provide numerical evidence.
**Claim 4.1**: _We claim that the CSG method, applied to problem (3), using a constant step size \(\tau<\frac{2}{L}\) and empirical integration weights, fulfills_

\[\|u_{n+1}-u_{n}\|=\mathcal{O}\left(\ln(n)\cdot n^{-\frac{1}{\max\{2,d_{\mathrm{r}}\}}}\right).\]

To motivate this claim, note that, in the proof of [2, Lemma 4.7], it was shown that there exists \(C>0\) such that

\[\left\|\hat{G}_{n}-\nabla J(u_{n})\right\|\leq C\left(\int_{\mathcal{X}}Z_{n}(x)\mu(\mathrm{d}x)+d_{W}(\mu_{n},\mu)\right),\]

where \(d_{W}\) denotes the Wasserstein distance of the two measures \(\mu_{n}\) and \(\mu\). By [28, Theorem 1], the empirical measure \(\mu_{n}\) satisfies

\[\mathbb{E}\big[d_{W}(\mu_{n},\mu)\big]\leq C(d_{\mathrm{r}})\cdot\left(\int_{\mathcal{X}}\|x\|_{\mathcal{X}}^{3}\mu(\mathrm{d}x)\right)^{\frac{1}{3}}\cdot\begin{cases}\frac{1}{\sqrt{n}}&\text{if }d_{\mathrm{r}}=1,\\ \frac{\ln(1+n)}{\sqrt{n}}&\text{if }d_{\mathrm{r}}=2,\\ n^{-\frac{1}{d_{\mathrm{r}}}}&\text{if }d_{\mathrm{r}}\geq 3.\end{cases}\]

This result is the main motivation for Claim 4.1. It can be shown that the rate \(n^{-1/d_{\mathrm{r}}}\) for \(d_{\mathrm{r}}\geq 3\) is sharp if \(\mu\) corresponds to a uniform distribution on \(\mathcal{X}\). Thus, in this case, it is reasonable to assume that a uniform distribution also corresponds to the worst-case rate of \(\int_{\mathcal{X}}Z_{n}(x)\mu(\mathrm{d}x)\to 0\). Assuming that the difference in designs appearing in \(Z_{n}\) is negligible due to the overall convergence of CSG, we obtain the rate

\[\sup_{x\in\mathcal{X}}\,Z_{n}(x)=\mathcal{O}\left(\ln(n)\cdot n^{-\frac{1}{\max\{2,d_{\mathrm{r}}\}}}\right).\]

To see this, we fill \(\mathcal{X}\subset\mathbb{R}^{d_{\mathrm{r}}}\) with balls (w.r.t. the norm \(\|\cdot\|_{\mathcal{X}}\)) of radius \(\varepsilon>0\) and denote by \(N(\varepsilon)\in\mathbb{N}\) the number of cells. Due to the dimension of \(\mathcal{X}\), we have \(N(\varepsilon)=\mathcal{O}\big(\varepsilon^{-d_{\mathrm{r}}}\big)\). Now, to achieve \(\sup_{x\in\mathcal{X}}Z_{n}(x)<\varepsilon\), we need each of these cells to contain at least one of the sample points \((x_{i})_{i=1,\ldots,n}\). It is well known that the expected number of samples we need to draw for this to happen is given by

\[N(\varepsilon)\sum_{k=1}^{N(\varepsilon)}\frac{1}{k}=\mathcal{O}\left(-\varepsilon^{-d_{\mathrm{r}}}\ln(\varepsilon)\right),\]

where we used

\[\sum_{k=1}^{n}\frac{1}{k}=\mathcal{O}\big(\ln(n)\big)\quad\text{for }n\to\infty.\]

In other words, the convergence rates of \(\int_{\mathcal{X}}Z_{n}(x)\mu(\mathrm{d}x)\to 0\) and \(d_{W}(\mu_{n},\mu)\to 0\) are comparable. Now that we have motivated the rates claimed in Claim 4.1 for the approximation error \(\|\hat{G}_{n}-\nabla J(u_{n})\|\), we use the following proposition to show that the rates of \(\|u_{n+1}-u_{n}\|\to 0\) cannot be worse.

**Proposition 4.2**: _Assume that the approximation error \(\|\hat{G}_{n}-\nabla J(u_{n})\|\) satisfies_

\[\|\hat{G}_{n}-\nabla J(u_{n})\|=\mathcal{O}\left(\ln(n)\cdot n^{-\frac{1}{\max\{2,d_{\mathrm{r}}\}}}\right).\]

_Then, under the assumptions of Claim 4.1, it holds_

\[\|u_{n+1}-u_{n}\|=\mathcal{O}\left(\ln(n)\cdot n^{-\frac{1}{\max\{2,d_{\mathrm{r}}\}}}\right).\]

_Proof_ Assume for contradiction that this is not the case. Thus, there exists \(N\in\mathbb{N}\) such that

\[\left\|\nabla J(u_{n})-\hat{G}_{n}\right\|\leq\tfrac{1}{2}\left(\tfrac{1}{\tau}-\tfrac{L}{2}\right)\|u_{n+1}-u_{n}\|_{\mathcal{U}}\quad\text{for all }n\geq N.
\tag{4}\]

By the descent lemma [29, Lemma 5.7], the characteristic property of the projection operator [29, Theorem 6.41] and the Cauchy-Schwarz inequality, we obtain

\[\begin{aligned}J(u_{n+1})-J(u_{n})&\leq\nabla J(u_{n})^{\top}(u_{n+1}-u_{n})+\tfrac{L}{2}\|u_{n+1}-u_{n}\|_{\mathcal{U}}^{2}\\&=\hat{G}_{n}^{\top}(u_{n+1}-u_{n})+\tfrac{L}{2}\|u_{n+1}-u_{n}\|_{\mathcal{U}}^{2}+\left(\nabla J(u_{n})-\hat{G}_{n}\right)^{\top}(u_{n+1}-u_{n})\\&\leq\left(\tfrac{L}{2}-\tfrac{1}{\tau}\right)\left\|u_{n+1}-u_{n}\right\|_{\mathcal{U}}^{2}+\left\|\nabla J(u_{n})-\hat{G}_{n}\right\|\cdot\left\|u_{n+1}-u_{n}\right\|_{\mathcal{U}}\\&=\left(\left(\tfrac{L}{2}-\tfrac{1}{\tau}\right)\left\|u_{n+1}-u_{n}\right\|_{\mathcal{U}}+\left\|\nabla J(u_{n})-\hat{G}_{n}\right\|\right)\left\|u_{n+1}-u_{n}\right\|_{\mathcal{U}}.\end{aligned}\]

Combining this with (4) gives \(J(u_{n+1})\leq J(u_{n})\) for all \(n\geq N\), since \(\tfrac{L}{2}<\tfrac{1}{\tau}\). Thus, the sequence of objective function values \(\big(J(u_{n})\big)_{n\in\mathbb{N}}\) is monotonically decreasing for all \(n\geq N\). By continuity of \(J\) and compactness of \(\mathcal{U}\), \(J\) is bounded and \(J(u_{n})\to\bar{J}\) for some \(\bar{J}\in\mathbb{R}\). Therefore,

\[-\infty<\bar{J}-J(u_{N})=\sum_{n=N}^{\infty}\big(J(u_{n+1})-J(u_{n})\big)\leq\tfrac{1}{2}\left(\tfrac{L}{2}-\tfrac{1}{\tau}\right)\sum_{n=N}^{\infty}\left\|u_{n+1}-u_{n}\right\|_{\mathcal{U}}^{2}.\]

Hence, the series

\[\sum_{n=N}^{\infty}\left\|u_{n+1}-u_{n}\right\|_{\mathcal{U}}^{2}\]

converges, contradicting \(\left\|u_{n+1}-u_{n}\right\|\neq\mathcal{O}\left(\ln(n)\cdot n^{-\frac{1}{\max\{2,d_{\mathrm{r}}\}}}\right)\).

### Numerical Verification

We want to verify the claimed rates numerically. For this purpose, we consider two optimization problems that can easily be scaled to high dimensions. The first problem is given by

\[\min_{u\in\mathcal{U}}\quad\frac{1}{2}\int_{\mathcal{X}}\left\|u-x\right\|_{2}^{2}\mathrm{d}x, \tag{5}\]

where \(\mathcal{X}=\left(-\tfrac{1}{2},\tfrac{1}{2}\right)^{d_{\mathrm{r}}}\) and \(\mathcal{U}=[-5,5]^{d_{\mathrm{r}}}\), i.e., \(\mathcal{U}\) and \(\mathcal{X}\) have the same dimension. The second problem,

\[\min_{u\in\mathcal{U}}\quad\frac{1}{2}\int_{-0.5}^{0.5}\left\|u-x\cdot\mathds{1}_{d_{\mathrm{o}}}\right\|_{2}^{2}\mathrm{d}x, \tag{6}\]

fixes \(d_{\mathrm{r}}=1\), while \(\mathcal{U}=[-5,5]^{d_{\mathrm{o}}}\). Here, \(\mathds{1}_{d_{\mathrm{o}}}\) represents the vector \((1,1,\ldots,1)^{\top}\in\mathbb{R}^{d_{\mathrm{o}}}\). Note that, in both settings, we have \(L_{j}=1\). Thus, by Section 3, we have

\[\left\|\hat{G}_{n}-\nabla J(u_{n})\right\|\leq\sup_{x\in\mathcal{X}}\,Z_{n}(x)\approx\max_{k=1,\ldots,n}Z_{n}(x_{k}).\]

The optimal solution to (5) and (6) is given by the zero vector \(u^{*}=0\in\mathcal{U}\). In our analysis, for different values of the dimensions \(d_{\mathrm{r}},d_{\mathrm{o}}\in\mathbb{N}\), problems (5) and (6) were initialized with \(500\) random starting points. The constant step size of CSG was chosen as \(\tau=\tfrac{1}{2}\). We track \(\left\|u_{n}-u^{*}\right\|\) and \(\max_{k=1,\ldots,n}Z_{n}(x_{k})\) during the optimization process and compare the median of the \(500\) runs to the rates predicted in Claim 4.1. The results can be seen in Figures 18 to 21. Note that, for the plots of the predicted rates, we omitted the factor \(\ln(n)\).
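To make the experimental setup concrete, the following toy sketch (our own reimplementation, not the code used for the figures) runs CSG with empirical weights and constant step size \(\tau=\tfrac{1}{2}\) on problem (5), where \(\nabla_{1}j(u,x)=u-x\) and \(u^{*}=0\); the projection onto \(\mathcal{U}=[-5,5]^{d_{\mathrm{r}}}\) is a componentwise clipping.

```python
import numpy as np

rng = np.random.default_rng(0)

def csg_toy(d_r, steps, tau=0.5):
    """CSG with constant step size and empirical weights on problem (5):
    J(u) = 0.5 * int_X ||u - x||^2 dx, X = (-1/2, 1/2)^d_r, so that
    grad_1 j(u, x) = u - x and the optimal solution is u* = 0."""
    u = rng.uniform(-5.0, 5.0, d_r)
    us, xs, gs, errs = [], [], [], []
    for n in range(1, steps + 1):
        x = rng.uniform(-0.5, 0.5, d_r)        # draw x_n ~ mu (uniform)
        us.append(u.copy()); xs.append(x); gs.append(u - x)
        du = np.linalg.norm(np.array(us) - u, axis=1)
        alpha = np.zeros(n)
        for xi in xs:                           # empirical weights
            dx = np.linalg.norm(np.array(xs) - xi, axis=1)
            alpha[np.argmin(du + dx)] += 1.0 / n
        G = alpha @ np.array(gs)                # gradient approximation
        u = np.clip(u - tau * G, -5.0, 5.0)     # projected update
        errs.append(np.linalg.norm(u))          # ||u_n - u*||
    return errs

print(csg_toy(d_r=2, steps=200)[-1])            # small residual near u* = 0
```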
The graphs of the predicted rates are therefore straight lines, where the slope \(-\frac{1}{\max\{2,d_{\mathrm{r}}\}}\) is equal to the asymptotic slope of the predicted rate, since

\[\ln(n)\cdot n^{-\frac{1}{\max\{2,d_{\mathrm{r}}\}}}=\mathcal{O}\left(n^{-\frac{1}{\max\{2,d_{\mathrm{r}}\}}+\varepsilon}\right)\quad\text{for all }\varepsilon>0.\]

In the equidimensional, i.e., \(\dim(\mathcal{X})=\dim(\mathcal{U})\), setting (5), the experimentally obtained values for \(Z_{n}\) almost perfectly match the claimed rates. For \(\|u_{n}-u^{*}\|\), the observed rates also match the predictions for very small and large dimensions. For \(d_{\mathrm{r}}=3,4,5\), the convergence obtained in the experiments was even slightly faster than predicted. Investigating the results for (6), it is clearly visible that increasing the design dimension \(d_{\mathrm{o}}\), while keeping the parameter dimension \(d_{\mathrm{r}}\) fixed, has no influence on the obtained rates of convergence, indicating that CSG is able to efficiently handle large-scale optimization problems.

### Circumventing Slow Convergence

As we have seen so far, the convergence rate of the CSG method worsens with increasing dimension of integration \(d_{\mathrm{r}}\in\mathbb{N}\). However, it is possible to circumvent this behavior if the problem admits additional structure. Assume that there exist suitable \(\mathcal{X}_{1},\mathcal{X}_{2},\mu_{1},\mu_{2},f_{1}\) and \(f_{2}\) such that the objective function appearing in (3) can be rewritten as

\[J(u)=\int_{\mathcal{X}}j(u,x)\mu(\mathrm{d}x)=\int_{\mathcal{X}_{1}}f_{1}\left(u,x,\int_{\mathcal{X}_{2}}f_{2}(u,y)\mu_{2}(\mathrm{d}y)\right)\mu_{1}(\mathrm{d}x).\]

Assume further that \(\mathcal{X}_{1},\mathcal{X}_{2},\mu_{1},\mu_{2},f_{1}\) and \(f_{2}\) satisfy the corresponding equivalents of [2, Assumptions 2.2 - 2.8]. Now, we can independently calculate integration weights \((\alpha_{k})_{k=1,\ldots,n}\) and \((\beta_{k})_{k=1,\ldots,n}\) for the integrals over \(\mathcal{X}_{2}\) and \(\mathcal{X}_{1}\), respectively. The corresponding CSG approximations (indicated by hats) are then given by

\[f^{(n)}:=\int_{\mathcal{X}_{2}}f_{2}(u,y)\mu_{2}(\mathrm{d}y)\approx\sum_{i=1}^{n}\alpha_{i}f_{2}(u_{i},y_{i})=:\hat{f}_{n},\]
\[g^{(n)}:=\int_{\mathcal{X}_{2}}\nabla_{1}f_{2}(u,y)\mu_{2}(\mathrm{d}y)\approx\sum_{i=1}^{n}\alpha_{i}\nabla_{1}f_{2}(u_{i},y_{i})=:\hat{g}_{n},\]
\[\nabla J(u_{n})\approx\sum_{i=1}^{n}\beta_{i}\Big(\nabla_{1}f_{1}(u_{i},x_{i},\hat{f}_{i})+\partial_{3}f_{1}(u_{i},x_{i},\hat{f}_{i})\cdot\hat{g}_{i}\Big)=:\hat{G}_{n}.\]

The same steps as performed in the proof of [2, Lemma 4.7] yield the existence of a constant \(C_{1}>0\), depending only on the Lipschitz constants of \(\nabla f_{1}\) and

Figure 18: The bold lines represent the median values of \(\max_{k=1,\ldots,n}Z_{n}(x_{k})\) for the equidimensional problem (5) with respect to the iteration counter. The different colors indicate the different dimensions \(d_{\rm r}\in\{1,2,\ldots,500\}\). The dotted lines correspond to the respective predicted rates \(n^{-\frac{1}{\max\{2,d_{\rm r}\}}}\). Since the predictions for \(d_{\rm r}=1\) and \(d_{\rm r}=2\) are equal, only the case \(d_{\rm r}=2\) is shown.

Figure 20: Results for the median of \(\max_{k=1,\ldots,n}Z_{n}(x_{k})\) in setting (6) for different dimensions \(d_{\rm o}\in\{1,2,\ldots,1000\}\), indicated by different colors. As we conjectured, the asymptotic slope of all curves is equal, since \(d_{\rm r}=1\) is fixed.
As a point of reference, we added the graph of \(n^{-0.65}\), represented by the dotted line.

\(\nabla f_{2}\), such that

\[\Big\|\nabla J(u_{n})-\hat{G}_{n}\Big\|\leq C_{1}\Big(d_{W}(\mu_{1},\nu_{n}^{\beta})+\sup_{x\in\mathcal{X}_{1}}\min_{k=1,\ldots,n}\big(\|u_{n}-u_{k}\|_{\mathcal{U}}+\|x-x_{k}\|_{\mathcal{X}_{1}}+|\hat{f}_{n}-\hat{f}_{k}|\big)\Big). \tag{7}\]

Here, \(\nu_{n}^{\beta}\) corresponds to the measure related to the integration weights \((\beta_{k})_{k=1,\ldots,n}\), see [2, Assumption 2.8]. Now, denoting by \(C_{2}>0\) a constant depending on the Lipschitz constant \(L_{f_{2}}\) of \(f_{2}\), we decompose the last term:

\[\begin{aligned}|\hat{f}_{n}-\hat{f}_{k}|&\leq|\hat{f}_{n}-f_{n}|+|\hat{f}_{k}-f_{k}|+|f_{n}-f_{k}|\\&\leq|\hat{f}_{n}-f_{n}|+|\hat{f}_{k}-f_{k}|+L_{f_{2}}\|u_{n}-u_{k}\|_{\mathcal{U}}\\&\leq C_{2}\Big(\|u_{n}-u_{k}\|_{\mathcal{U}}+\sup_{y\in\mathcal{X}_{2}}\min_{i=1,\ldots,n}\big(\|u_{n}-u_{i}\|_{\mathcal{U}}+\|y-y_{i}\|_{\mathcal{X}_{2}}\big)\\&\qquad+\sup_{y\in\mathcal{X}_{2}}\min_{i=1,\ldots,k}\big(\|u_{k}-u_{i}\|_{\mathcal{U}}+\|y-y_{i}\|_{\mathcal{X}_{2}}\big)+d_{W}(\mu_{2},\nu_{n}^{\alpha})+d_{W}(\mu_{2},\nu_{k}^{\alpha})\Big)\\&=C_{2}\Big(\|u_{n}-u_{k}\|_{\mathcal{U}}+\sup_{y\in\mathcal{X}_{2}}Z_{n}(y)+\sup_{y\in\mathcal{X}_{2}}Z_{k}(y)+d_{W}(\mu_{2},\nu_{n}^{\alpha})+d_{W}(\mu_{2},\nu_{k}^{\alpha})\Big).\end{aligned} \tag{8}\]

Assuming that the convergence of the sequence \((u_{n})_{n\in\mathbb{N}}\) generated by the CSG method implies

\[\mathcal{O}\left(\sup_{y\in\mathcal{X}_{2}}Z_{n}(y)\right)=\mathcal{O}\left(\sup_{y\in\mathcal{X}_{2}}Z_{k}(y)\right)\quad\text{and}\quad\mathcal{O}\big(d_{W}(\mu_{2},\nu_{n}^{\alpha})\big)=\mathcal{O}\big(d_{W}(\mu_{2},\nu_{k}^{\alpha})\big),\]

we insert (8) into (7) to obtain

\[\big\|\nabla J(u_{n})-\hat{G}_{n}\big\|\leq C(C_{1},C_{2})\Big(d_{W}(\mu_{1},\nu_{n}^{\beta})+d_{W}(\mu_{2},\nu_{n}^{\alpha})+\sup_{x\in\mathcal{X}_{1}}Z_{n}(x)+\sup_{y\in\mathcal{X}_{2}}Z_{n}(y)\Big).\]

Therefore, by the same arguments as in Section 4.1, we claim

\[\big\|\nabla J(u_{n})-\hat{G}_{n}\big\|=\mathcal{O}\left(\ln(n)\cdot n^{-\frac{1}{\max\{2,\dim(\mathcal{X}_{1}),\dim(\mathcal{X}_{2})\}}}\right),\]
\[\|u_{n+1}-u_{n}\|=\mathcal{O}\left(\ln(n)\cdot n^{-\frac{1}{\max\{2,\dim(\mathcal{X}_{1}),\dim(\mathcal{X}_{2})\}}}\right).\]

In conclusion, we claim that, assuming the objective function can be rewritten in terms of nested expectation values

\[J(u)=\int_{\mathcal{X}_{1}}f_{1}\left(u,x_{1},\int_{\mathcal{X}_{2}}f_{2}\left(u,x_{2},\int_{\mathcal{X}_{3}}f_{3}(\cdots)\mu_{3}(\mathrm{d}x_{3})\right)\mu_{2}(\mathrm{d}x_{2})\right)\mu_{1}(\mathrm{d}x_{1}),\]

the convergence rate of the CSG method depends only on the _largest_ dimension of the occurring \(\mathcal{X}_{i}\), which may be much lower when compared to \(\dim(\mathcal{X})\). Since this is again a claim and not a rigorous proof, we validate this assumption numerically. For this, we once more consider (5) and initialize it with 500 random starting points.
This time, however, we utilize the fact that the objective function can be written as

\[J(u)=\frac{1}{2}\int_{\mathcal{X}}\|u-x\|_{2}^{2}\mathrm{d}x=\frac{1}{2}\int_{\mathcal{X}}\Big(\sum_{i=1}^{d_{\mathrm{r}}}(u_{i}-x_{i})^{2}\Big)\mathrm{d}x=\frac{1}{2}\sum_{i=1}^{d_{\mathrm{r}}}\int_{-\frac{1}{2}}^{\frac{1}{2}}(u_{i}-x_{i})^{2}\mathrm{d}x_{i}.\]

Thus, we can group the independent coordinates into subintegrals of arbitrary dimension, allowing us to study our claim for a large number of different regroupings without having to change the whole problem formulation. The results for several different decompositions and 500 random starting points in the case \(d_{\mathrm{r}}=100\) are shown in Figure 22. The improved rates of convergence are clearly visible, independently of whether the subgroup dimensions are equal or not. As claimed above, the highest remaining dimension of integration determines the overall convergence rate of CSG.

## 5 Conclusion and Outlook

In this contribution, we presented a numerical analysis of the CSG method. The practical performance of CSG was tested for two applications from nanoparticle design optimization with varying computational complexity. For the low-dimensional problem formulation, CSG was shown to perform better than the commercial _fmincon_ blackbox solver. The high-dimensional setting provided an example for which classic optimization schemes (stochastic as well as deterministic) from the literature do not provide optimal solutions within reasonable time.

Figure 22: Median total error \(\|u_{n}-u^{*}\|\) of the CSG iterates for (5), for \(d_{\mathrm{r}}=100\). The integral over \(\mathcal{X}=\left(-\frac{1}{2},\frac{1}{2}\right)^{d_{\mathrm{r}}}\) has been decomposed into several integrals of smaller dimension. The labels in the bottom left give details about the decomposition, e.g., the orange line corresponds to splitting the whole integral into one integral of dimension 75 and 5 integrals of dimension 5. The dotted line indicates the expected rate of convergence obtained by the CSG method without splitting up the integral.

Convergence rates for CSG with constant step size were proposed and analytically motivated. They were shown to agree with numerically obtained convergence rates in several different instances. Moreover, in the case that the objective function admits additional structure, techniques to circumvent slow convergence for high-dimensional integration domains were presented. While the proposed convergence rates for CSG agree with our experimental results, it remains an open question whether they can be proven rigorously. Furthermore, even though the choice of a metric for the nearest neighbor approximation in the integration weights is irrelevant for the convergence results, a problem-specific metric could significantly improve the performance of CSG by exploiting additional structure, which might be lost by utilizing an arbitrary metric. How to automatically obtain such a metric during the optimization process requires further research.

Data Availability Statement. The simulation datasets generated during the current study are available from the corresponding author on reasonable request.

Conflict of Interests. The authors have no relevant financial or non-financial interests to disclose.
2307.00584
Cops and robber on variants of retracts and subdivisions of oriented graphs
\textsc{Cops and Robber} is one of the most studied two-player pursuit-evasion games played on graphs, where multiple \textit{cops}, controlled by one player, pursue a single \textit{robber}. The main parameter of interest is the \textit{cop number} of a graph, which is the minimum number of cops that can ensure the \textit{capture} of the robber. \textsc{Cops and Robber} is also well-studied on directed/oriented graphs. In directed graphs, two kinds of moves are defined for players: \textit{strong move}, where a player can move both along and against the orientation of an arc to an adjacent vertex; and \textit{weak move}, where a player can only move along the orientation of an arc to an \textit{out-neighbor}. We study three variants of \textsc{Cops and Robber} on oriented graphs: \textit{strong cop model}, where the cops can make strong moves while the robber can only make weak moves; \textit{normal cop model}, where both cops and the robber can only make weak moves; and \textit{weak cop model}, where the cops can make weak moves while the robber can make strong moves. We study the cop number of these models with respect to several variants of retracts on oriented graphs and establish that the strong and normal cop number of an oriented graph remains invariant in their strong and distributed retracts, respectively. Next, we go on to study all three variants with respect to the subdivisions of graphs and oriented graphs. Finally, we establish that all these variants remain computationally difficult even when restricted to the class of 2-degenerate bipartite graphs.
Harmender Gahlawat, Zin Mar Myint, Sagnik Sen
2023-07-02T14:45:18Z
http://arxiv.org/abs/2307.00584v1
# Cops and robber on variants of retracts and subdivisions of oriented graphs

###### Abstract

Cops and Robber is one of the most studied two-player pursuit-evasion games played on graphs, where multiple _cops_, controlled by one player, pursue a single _robber_. The main parameter of interest is the _cop number_ of a graph, which is the minimum number of cops that can ensure the _capture_ of the robber. Cops and Robber is also well-studied on directed/oriented graphs. In directed graphs, two kinds of moves are defined for players: _strong move_, where a player can move both along and against the orientation of an arc to an adjacent vertex; and _weak move_, where a player can only move along the orientation of an arc to an _out-neighbor_. We study three variants of Cops and Robber on oriented graphs: _strong cop model_, where the cops can make strong moves while the robber can only make weak moves; _normal cop model_, where both cops and the robber can only make weak moves; and _weak cop model_, where the cops can make weak moves while the robber can make strong moves. We study the cop number of these models with respect to several variants of retracts on oriented graphs and establish that the strong and normal cop number of an oriented graph remains invariant in their strong and distributed retracts, respectively. Next, we go on to study all three variants with respect to the subdivisions of graphs and oriented graphs. Finally, we establish that all these variants remain computationally difficult even when restricted to the class of 2-degenerate bipartite graphs.

**Keywords:** Cops and Robber, Oriented Graphs, Retracts, Subdivisions.

## 1 Introduction

Among the games modeled on graph search, the two-player combinatorial pursuit-evasion game called Cops and Robber is one of the most popularly studied in the literature [7]. The game was introduced independently by Quilliot [36], and Nowakowski and Winkler [34] on simple graphs. It gained a lot of popularity following its inception, primarily due to its various applications in topics like artificial intelligence [27, 32], constraint satisfaction problems and database theory [21, 22], distributed computing [15, 10] and network decontamination [16], as well as for its deep impact on graph theory and algorithms [1, 37]. As a result, several variants of the game have been introduced and studied, many of which have deep connections and significant impacts on some of the aforementioned topics. For example, several variants of the game are shown to have correspondence with width parameters like treewidth [37], pathwidth [35], tree-depth [19], hypertree-width [2], cycle-rank [19], and directed tree-width [28]. Even though most of the variants are modeled on simple graphs, there exist natural variants defined and studied on directed graphs and oriented graphs as well [33, 31, 18]. Recently, Das et al. [13] studied three natural variations of the game on oriented graphs, namely, the _strong_, _normal_, and _weak cop_ models. In this article, we continue to build on their work by focusing on finding fundamental structural results for these models. We especially concentrate on exploring the game's interaction with variants of retracts and particular types of subdivisions of oriented graphs. Our structural results corresponding to the subdivisions also establish the computational hardness results for these variants. The primary goal of this paper is to contribute to building a theory of Cops and Robber on oriented graphs.
### Cops and Robber on Oriented Graphs

An _oriented graph_ \(\overrightarrow{G}\) is a directed graph having no loop or parallel arcs in opposite directions. An oriented graph may indeed have parallel arcs in the same direction between two vertices; however, for our purposes, such parallel arcs are redundant. Therefore, without loss of generality, unless otherwise stated, we will assume that the underlying graph \(G\) of the oriented graph \(\overrightarrow{G}\) is simple, finite, connected, and contains at least two vertices. With this, our "playing field" (oriented graphs) is ready, and thus, let us try to understand the game. Let us assume \(\overrightarrow{G}\) is the oriented graph on which we play the game. To begin with, Player 1 (the cop player) will place \(k\) _cops_ on the vertices of \(\overrightarrow{G}\), and then, Player 2 (the robber player) will place a _robber_ on a vertex of \(\overrightarrow{G}\). After this initial set-up, the players will take turns, starting with Player 1, to move the cops (resp., the robber) from one vertex to another following the game rules (depending on the game model). If, after a finite number of turns, a cop and the robber end up on the same vertex, that is, if a cop _captures_ the robber, then Player 1 wins. Otherwise, Player 2 wins. This describes the game in general; however, the rules for moving the cops (resp., the robber) will be described while presenting the game models. On an oriented graph, two kinds of moves are of interest: a _strong move_, where a cop (or the robber) can shift from a vertex \(u\) to its neighbor \(v\) irrespective of the direction of the arc joining \(u\) and \(v\), and a _weak move_, where a cop (or the robber) can shift from a vertex \(u\) to its neighbor \(v\) only if there is an arc from \(u\) to \(v\). The three models of the game are determined by the allowed moves for Players 1 and 2. We list them below for convenience.

1. _The strong cop model:_ In their turn, Player 1 can make at most one strong move for each of the cops, while Player 2 can only make at most one weak move for the robber.

2. _The normal cop model:_ In their turn, Player 1 (resp., Player 2) can make at most one weak move for each of the cops (resp., the robber).

3. _The weak cop model:_ In their turn, Player 1 can make at most one weak move for each of the cops, while Player 2 can make at most one strong move for the robber.

We recall some necessary parameters [13] before continuing the study. The _strong cop number_ (resp., _normal cop number_, _weak cop number_) of an oriented graph \(\overrightarrow{G}\), denoted by \(c_{s}(\overrightarrow{G})\) (resp., \(c_{n}(\overrightarrow{G})\), \(c_{w}(\overrightarrow{G})\)), is the minimum number of cops needed by Player 1 to ensure a winning strategy on \(\overrightarrow{G}\). Moreover, \(\overrightarrow{G}\) is _strong-cop win_ (resp., _normal-cop win_, _weak-cop win_) if its strong cop number (resp., normal cop number, weak cop number) is 1. From the definitions, one can observe the relation:

\[c_{s}(\overrightarrow{G})\leq c_{n}(\overrightarrow{G})\leq c_{w}(\overrightarrow{G}). \tag{1}\]

Given a family \(\mathcal{F}\) of oriented graphs, the parameters are defined by

\[c_{\alpha}(\mathcal{F})=\max\{c_{\alpha}(\overrightarrow{G})\,:\,\overrightarrow{G}\in\mathcal{F}\}, \tag{2}\]

for all \(\alpha\in\{n,s,w\}\).

**Remark 1.1**.: _If both Player \(1\) and Player \(2\) are allowed to make strong moves, then this game is the same as the game of Cops and Robber on the underlying undirected graph.
Moreover, given an undirected graph \(G\), its cop number, denoted by \(c(G)\), is the minimum number of cops needed by Player \(1\) to have a winning strategy for a game played on \(G\). If the cop number of a graph \(G\) is \(1\), then we say that \(G\) is cop win._

### Motivation and Context

The normal cop model is well-studied in the context of directed/oriented graphs, while the two other variations are recent [13]. Hamidoune [24] considered the game on Cayley digraphs. Frieze et al. [18] studied the game on digraphs and gave an upper bound of \(\mathcal{O}\left(\frac{n(\log\log n)^{2}}{\log n}\right)\) for the cop number of digraphs. Hahn and MacGillivray [23] gave an algorithmic characterization of the cop-win finite reflexive digraphs and showed that any \(k\)-cop game can be reduced to a \(1\)-cop game, resulting in an algorithmic characterization of \(k\)-cop-win finite reflexive digraphs. However, these results do not give a structural characterization of such graphs. Later, Darlington et al. [11] tried to structurally characterize cop-win oriented graphs and gave a conjecture, which was later disproved by Khatri et al. [30]. This is evidence that the problem is not so straightforward to solve. Recently, the cop number of planar Eulerian digraphs and related families was studied in several articles [14, 25, 26]. Bradshaw et al. [9] proved that the cop number of directed and undirected Cayley graphs on abelian groups has an upper bound of the form \(\mathcal{O}(\sqrt{n})\). Modifying this construction, they obtained families of graphs and digraphs with cop number \(\Theta(\sqrt{n})\). In general, the problem of determining the cop number of a directed graph is known to be EXPTIME-complete due to Kinnersley [31], which positively settled a conjecture by Goldstein and Reingold [20]. Overall, the cop number is well-studied but, surprisingly, less understood on directed/oriented graphs. This article attempts to address this issue by studying some fundamentals in this domain.

### Our Contributions and Organization

In Section 2, we present some useful preliminaries. In Section 3, we deal with variants of retracts. To elaborate, the graph \(G-v\) is a _retract_ of \(G\) if there are vertices \(u,v\in V(G)\) satisfying \(N[v]\subseteq N[u]\). Here, we also say that \(v\) is a _corner vertex_. One key step in establishing the full characterization of cop win (undirected) graphs was a lemma which proved that a graph is cop win if and only if its retract is also cop win. The characterization of weak-cop win oriented graphs also used a similar lemma for weak-retracts (defined in Section 3). We prove the analogs of the key lemmas for the strong and normal models, even though we are yet to succeed in providing an exact characterization of strong- (resp., normal-)cop win oriented graphs. In Sections 4 and 5, we study the effect of two different subdivisions, namely, the _strong subdivision_ and the _weak subdivision_, on cop numbers. The precise definitions are provided in Sections 4 and 5, respectively. For the classical Cops and Robber game, several classical results study the effect of subdivisions on the cop number of an undirected graph, establishing that the cop number of a graph does not decrease if we subdivide each of its edges a constant number of times [6, 29]. On the other hand, in [13], a special case of the strong subdivision was used as a tool to prove results and provide interesting examples.
In this article, we study the effect of these two subdivisions on the cop numbers and establish the relations between the cop number parameters involving these subdivisions. In Section 6, we prove that, unless \(P=NP\), determining the strong, normal, and weak cop numbers is not polynomial-time solvable, even if we restrict the input graphs to the class of 2-degenerate bipartite oriented graphs. In Section 7, we conclude the article, including the mention of some open problems.

## 2 Preliminaries

This paper considers the game on oriented graphs whose underlying graph is simple, finite, and connected. Let \(\overrightarrow{G}\) be an oriented graph whose underlying graph is \(G\). We also say that \(\overrightarrow{G}\) is an _orientation_ of \(G\). Let \(\overrightarrow{uv}\) be an arc of \(\overrightarrow{G}\). We say that \(u\) is an _in-neighbor_ of \(v\) and \(v\) is an _out-neighbor_ of \(u\). Let \(N_{\overrightarrow{G}}^{-}(u)\) and \(N_{\overrightarrow{G}}^{+}(u)\) denote the set of in-neighbors and out-neighbors of \(u\), respectively. Moreover, let \(N_{\overrightarrow{G}}^{+}[v]=N^{+}(v)\cup\{v\}\) and \(N_{\overrightarrow{G}}^{-}[v]=N^{-}(v)\cup\{v\}\). When it is clear from the context, by \(N_{\overrightarrow{G}}(v)\) we denote \(N^{+}(v)\cup N^{-}(v)\), and by \(N_{\overrightarrow{G}}[v]\) we denote \(N_{\overrightarrow{G}}(v)\cup\{v\}\). Similarly, for an undirected graph \(H\) and a vertex \(v\in V(H)\), let \(N_{H}(v)\) denote the set of neighbors of \(v\) and let \(N_{H}[v]=N_{H}(v)\cup\{v\}\). Moreover, when it is clear from the context, to ease the presentation, we drop the subscript \(\overrightarrow{G}\) (and \(H\)) from these notations. A vertex without any in-neighbor is a _source_, and a vertex without any out-neighbor is a _sink_. A vertex \(v\) is said to be _dominating_ if \(N^{+}[v]=V(\overrightarrow{G})\). Let \(v\) be a vertex of \(\overrightarrow{G}\) and let \(S\) be a subset of vertices of \(\overrightarrow{G}\) (i.e., \(S\subseteq V(\overrightarrow{G})\)). Then, we say that \(v\) is a _source in \(S\)_ if \(S\subseteq N^{+}[v]\). Moreover, we say that \(|N^{+}(v)|\) is the _out-degree_ of \(v\), \(|N^{-}(v)|\) is the _in-degree_ of \(v\), and \(|N^{+}(v)|+|N^{-}(v)|\) is the _degree_ of \(v\). An undirected graph \(G\) is \(k\)_-degenerate_ if every induced subgraph \(H\) of \(G\) has a vertex of degree at most \(k\). An oriented graph is \(k\)_-degenerate_ if its underlying graph is \(k\)-degenerate.

## 3 Retracts

Retracts are shown to have close relationships with the game of Cops and Robber on undirected graphs [6]. In fact, the first characterization of cop win graphs used the concept of retracts [7]. Moreover, the characterization of weak-cop win graphs is based on the notion of _weak-retracts_ (defined below). Thus, it makes sense to study the (strong/weak/normal) cop number of oriented graphs with respect to retracts. Given an oriented graph \(\overrightarrow{G}\), let \(u\) and \(v\) be two adjacent vertices satisfying \(N[v]\subseteq N[u]\). In such a scenario, the oriented graph \(\overrightarrow{G}-v\) is a _strong-retract_ of \(\overrightarrow{G}\). Given an oriented graph \(\overrightarrow{G}\), let \(\overrightarrow{u_{1}v},\overrightarrow{u_{2}v},\cdots,\overrightarrow{u_{p}v}\) be \(p\) arcs satisfying \(N^{+}(v)\subseteq N^{+}(u_{i})\), for each \(i\in[p]\), and \(N^{-}(v)\subseteq\bigcup_{i=1}^{p}N^{-}[u_{i}]\).
In such a scenario, the oriented graph \(\overrightarrow{G}-v\) is a _distributed-retract_ of \(\overrightarrow{G}\). Given an oriented graph \(\overrightarrow{G}\), let \(\overrightarrow{uv}\) be an arc satisfying \(N(v)\subseteq N^{+}[u]\). In such a scenario, the oriented graph \(\overrightarrow{G}-v\) is a _weak-retract_ of \(\overrightarrow{G}\). In [13], it was proved that an oriented graph is weak-cop win if and only if its weak-retract is weak-cop win. Here, we extend this result by proving that the strong and normal cop numbers of an oriented graph remain invariant under strong-retracts and distributed-retracts, respectively. **Theorem 3.1**.: _Let \(\overrightarrow{G}^{\prime}\) be a strong-retract of \(\overrightarrow{G}\). Then \(c_{s}(\overrightarrow{G})=c_{s}(\overrightarrow{G}^{\prime})\)._ Proof.: Since \(\overrightarrow{G}^{\prime}\) is a strong-retract of \(\overrightarrow{G}\), we may assume that \(\overrightarrow{G}^{\prime}=\overrightarrow{G}-v\), where \(u\) and \(v\) are adjacent and \(N[v]\subseteq N[u]\). First suppose that \(c_{s}(\overrightarrow{G})=k\). We will use the winning strategy of \(k\) cops in \(\overrightarrow{G}\) to get a winning strategy for \(k\) cops in \(\overrightarrow{G}^{\prime}\), with the only difference being: whenever a cop, say \(\mathcal{C}\), has to move to the vertex \(v\), it moves to \(u\) instead. Observe that \(\mathcal{C}\) can make this move as \(N[v]\subseteq N[u]\) and \(\mathcal{C}\) can make strong moves. For the same reason, the next move of \(\mathcal{C}\) will be as it would have been in the winning strategy for \(\overrightarrow{G}\). That is, if in \(\overrightarrow{G}\), \(\mathcal{C}\) stays on \(v\) or moves to some \(w\in N(v)\), then, in \(\overrightarrow{G}^{\prime}\), \(\mathcal{C}\) will stay on \(u\) or move to \(w\), respectively. The second instance is possible as \(N(v)\subseteq N[u]\). Since the movement of \(\mathcal{R}\) is restricted to \(\overrightarrow{G}^{\prime}\), \(k\) cops will capture \(\mathcal{R}\) after a finite number of moves using this strategy. For the converse, suppose that \(c_{s}(\overrightarrow{G}^{\prime})=k\). Before proceeding to the proof, we define the _image of the robber_, denoted by \(I_{\mathcal{R}}\), a function from \(V(\overrightarrow{G})\to V(\overrightarrow{G}^{\prime})\), as follows: \[I_{\mathcal{R}}(x)=\begin{cases}x&\text{if }x\neq v,\\ u&\text{if }x=v.\end{cases}\] We will use the winning strategy of \(k\) cops in \(\overrightarrow{G}^{\prime}\) to get a winning strategy for \(k\) cops in \(\overrightarrow{G}\). Let us assume that the game is being played on \(\overrightarrow{G}\). However, moves of \(\mathcal{R}\) on \(\overrightarrow{G}\) are emulated via the image function \(I_{\mathcal{R}}(x)\) on \(\overrightarrow{G}^{\prime}\). The cops will move in \(\overrightarrow{G}^{\prime}\) to capture the image of the robber, and the cops on \(\overrightarrow{G}\) will play the exact same moves (this is possible as \(\overrightarrow{G}^{\prime}\) is a subgraph of \(\overrightarrow{G}\)). At the time of the capture of the image of the robber in \(\overrightarrow{G}^{\prime}\), if the robber is on any vertex other than \(v\) in \(\overrightarrow{G}\), then it gets captured there as well. If the robber is on the vertex \(v\) of \(\overrightarrow{G}\) at the time when its image gets captured on \(\overrightarrow{G}^{\prime}\), then in \(\overrightarrow{G}\), there is a cop on the vertex \(u\) at that point in time. 
Therefore, as \(N[v]\subseteq N[u]\), the robber will get captured in the next move. Next, we show that the normal cop number of an oriented graph remains invariant in its distributed-retracts. **Theorem 3.2**.: _Let \(\overrightarrow{G}^{\prime}\) be a distributed-retract of \(\overrightarrow{G}\). Then \(c_{n}(\overrightarrow{G})=c_{n}(\overrightarrow{G}^{\prime})\)._ Proof.: Since \(\overrightarrow{G}^{\prime}\) is a distributed-retract of \(\overrightarrow{G}\), we may assume that \(\overrightarrow{G}^{\prime}=\overrightarrow{G}-v\), and \(\overrightarrow{u_{1}v},\overrightarrow{u_{2}v},\cdots,\overrightarrow{u_{p}v}\) are \(p\) arcs satisfying \(N^{+}(v)\subseteq N^{+}(u_{i})\) for each \(i\in[p]\), and \(N^{-}(v)\subseteq\bigcup_{i=1}^{p}N^{-}[u_{i}]\). First, suppose that \(c_{n}(\overrightarrow{G})=k\). We are going to show that \(k\) cops have a winning strategy in \(\overrightarrow{G}^{\prime}\) as well. The idea is to play the game simultaneously on \(\overrightarrow{G}\) and \(\overrightarrow{G}^{\prime}\). The robber \(\mathcal{R}\) will originally move in \(\overrightarrow{G}^{\prime}\), while on \(\overrightarrow{G}\) it will simply mimic the moves (this is possible as \(\overrightarrow{G}^{\prime}\) is a subgraph of \(\overrightarrow{G}\)). On the other hand, the cops will use a winning strategy to capture the robber in \(\overrightarrow{G}\), and we will use this strategy to provide a winning strategy on \(\overrightarrow{G}^{\prime}\) as well. In fact, we will use the exact same strategy on \(\overrightarrow{G}^{\prime}\), with the only difference being: a cop \(\mathcal{C}\) will move to one of the \(u_{i}\)s in \(\overrightarrow{G}^{\prime}\) when its corresponding cop moves to the vertex \(v\) in \(\overrightarrow{G}\). The choice of this \(u_{i}\) will depend on the movement of \(\mathcal{C}\) in that particular turn. To elaborate, in \(\overrightarrow{G}\), \(\mathcal{C}\) must have moved from some \(v^{-}\in N^{-}(v)\) to \(v\). According to the definition of a distributed-retract, \(v^{-}\) belongs to \(N^{-}[u_{j}]\) for some \(j\in[p]\). Choose the minimum index \(j\) for which \(v^{-}\in N^{-}[u_{j}]\), and call it \(i\). The corresponding \(u_{i}\) is our choice for positioning \(\mathcal{C}\) in that particular turn. Observe that a cop can make its moves following the above strategy in \(\overrightarrow{G}^{\prime}\) while respecting the game rules. Since the movement of the robber \(\mathcal{R}\) is restricted to the vertices of \(\overrightarrow{G}^{\prime}\), it will get captured in both graphs under this strategy. Next, we will show the other direction, that is, we will suppose that \(c_{n}(\overrightarrow{G}^{\prime})=k\) and show that there is a winning strategy for \(k\) cops on \(\overrightarrow{G}\). To do so, we will play the game simultaneously on \(\overrightarrow{G}\) and \(\overrightarrow{G}^{\prime}\). The robber \(\mathcal{R}\) will originally move on \(\overrightarrow{G}\) and its shadow \(\mathcal{R}_{S}\) will move on \(\overrightarrow{G}^{\prime}\). Now, the \(k\) cops will capture \(\mathcal{R}_{S}\) on \(\overrightarrow{G}^{\prime}\) (as we know \(k\) cops have a winning strategy on \(\overrightarrow{G}^{\prime}\)). We will mimic the moves of the cops on \(\overrightarrow{G}^{\prime}\) on \(\overrightarrow{G}\) (this is possible since \(\overrightarrow{G}^{\prime}\) is a subgraph of \(\overrightarrow{G}\)). 
To begin with, let us describe the movements of \(\mathcal{R}_{S}\). Whenever \(\mathcal{R}\) is at any vertex other than \(v\), \(\mathcal{R}_{S}\) is also on that vertex. If \(\mathcal{R}\) starts at \(v\), then \(\mathcal{R}_{S}\) will start at \(u_{1}\). Moreover, during the play, if \(\mathcal{R}\) moves from a particular vertex \(v^{-}\) to \(v\), then \(\mathcal{R}_{S}\) will move to \(u_{i}\), where \(i\) is the minimum index satisfying \(v^{-}\in N^{-}[u_{i}]\). Observe that it is possible for \(\mathcal{R}_{S}\) to make its moves following the above-mentioned rules. After a finite number of moves, \(\mathcal{R}_{S}\) will get captured on \(\overrightarrow{G}^{\prime}\). At that point in time, either \(\mathcal{R}\) also gets captured on \(\overrightarrow{G}\), or it must be placed on \(v\) with a cop \(\mathcal{C}\) placed on \(u_{j}\) for some \(j\). In the latter case, \(\mathcal{R}\) will get captured in the next turn. In particular, the above result implies that cop win oriented graphs are distributed-retract invariant. To complement the above result, we prove a sufficient condition for an oriented graph not to be cop win. **Theorem 3.3**.: _If for every arc \(\overrightarrow{uv}\) in \(\overrightarrow{G}\), there exists an out-neighbor \(v^{+}\) of \(v\) that is not an out-neighbor of \(u\), then \(\overrightarrow{G}\) is not cop win._ Proof.: Suppose the cop \(\mathcal{C}\) is _attacking_ the robber \(\mathcal{R}\). That is, we may assume that \(\mathcal{C}\) is on \(u\) and \(\mathcal{R}\) is on \(v\) for some arc \(\overrightarrow{uv}\). We know that there exists some \(v^{+}\in N^{+}(v)\setminus N^{+}(u)\). Thus, the robber will move to such a \(v^{+}\) and avoid the capture. ## 4 Strong Subdivisions Let \(G\) be a simple, connected, and finite graph. Then, \(\overrightarrow{S}_{t}(G)\) is the oriented graph obtained by replacing each edge \(uv\) of \(G\) by two directed paths of _length_ (number of arcs) \(t\): one starting from \(u\) and ending at \(v\), and the other starting at \(v\) and ending at \(u\). The oriented graph \(\overrightarrow{S}_{t}(G)\) is called the _strong \(t\)-subdivision_ of \(G\). See Figure 1 for a reference. As we deal only with simple oriented graphs here, the value of \(t\) is at least \(2\). For ease of presentation of the proofs, we provide an explicit construction of \(\overrightarrow{S}_{t}(G)\) from \(G\). Consider an edge \(uv\in E(G)\). This edge is replaced by two directed paths of length \(t\) each: one from \(u\) to \(v\) of the form \(uv_{1}^{u}v_{2}^{u}\cdots v_{t-1}^{u}v\), and one from \(v\) to \(u\) of the form \(vu_{1}^{v}u_{2}^{v}\cdots u_{t-1}^{v}u\). Moreover, the vertices \(u\) and \(v\) are termed the _original vertices_, and the vertices of the form \(v_{i}^{u}\) and \(u_{j}^{v}\) are termed the _new vertices_. Furthermore, we define a function \(f:V(\overrightarrow{S}_{t}(G))\to V(G)\) such that for any \(x,y\in V(G)\), \(i\in[t-1]\) and \(x_{i}^{y}\in V(\overrightarrow{S}_{t}(G))\), \(f(x_{i}^{y})=x\) and \(f(x)=x\). Finally, we have the following easy observation that will be useful for us. **Observation 4.1**.: _For any two vertices \(x,y\in V(\overrightarrow{S}_{t}(G))\), if there is a directed path from \(x\) to \(y\) of length at most \(t\), then \(f(y)\in N_{G}[f(x)]\) and \(f(x)\in N_{G}[f(y)]\)._ Figure 1: Illustration of the subdivision of an edge \(uv\). Here, in the subdivided part, for each red vertex \(x\), \(f(x)=v\), and for each green vertex \(y\), \(f(y)=u\). 
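To make the construction above concrete, the following Python sketch (an illustration only, not part of the formal development; the tuple encoding \((v,u,i)\) for the new vertex \(v_{i}^{u}\) is our own choice) builds the arc set of \(\overrightarrow{S}_{t}(G)\) from an edge list of \(G\), together with the map \(f\).

```python
def strong_subdivision(edges, t):
    """Build the strong t-subdivision of G, given as a list of edges.

    Returns the arc set of S_t(G) and the map f sending each vertex of
    S_t(G) to an original vertex, following the explicit construction
    above: edge uv becomes the paths u v_1^u ... v_{t-1}^u v and
    v u_1^v ... u_{t-1}^v u, with f(v_i^u) = v and f(u_j^v) = u.
    """
    assert t >= 2, "simple oriented graphs require t >= 2"
    arcs, f = set(), {}
    for u, v in edges:
        f[u], f[v] = u, v                      # f fixes original vertices
        for a, b in ((u, v), (v, u)):          # one directed path per direction
            path = [a] + [(b, a, i) for i in range(1, t)] + [b]
            for x in path[1:-1]:
                f[x] = b                       # new vertices project to b
            arcs.update(zip(path, path[1:]))   # consecutive pairs are arcs
    return arcs, f

# Example: the path u - v - w with t = 3.
arcs, f = strong_subdivision([("u", "v"), ("v", "w")], t=3)
assert f[("v", "u", 1)] == "v" and (("v", "u", 1), ("v", "u", 2)) in arcs
```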
In what follows, we will provide both upper and lower bounds on the strong, normal, and weak cop numbers of \(\overrightarrow{S}_{t}(G)\) in terms of \(c(G)\). In [13], it was proved that the cop number of a graph \(G\) is a natural lower bound for the strong cop number of \(\overrightarrow{S}_{2}(G)\). Here, we generalize this result to \(\overrightarrow{S}_{t}(G)\). Specifically, we have the following lemma. **Lemma 4.2**.: _Let \(G\) be a simple graph. Then, for any \(t>1\), \(c_{s}(\overrightarrow{S}_{t}(G))\geq c(G)\)._ Proof.: Let \(c_{s}(\overrightarrow{S}_{t}(G))=k\). We will show that \(k\) cops have a winning strategy in \(G\) as well. To this end, we borrow the winning strategy of the cops from \(\overrightarrow{S}_{t}(G)\): as the game is played in \(G\), we play a game simultaneously in \(\overrightarrow{S}_{t}(G)\). _Game Setup:_ The \(k\) cops begin by placing themselves on the vertices of \(\overrightarrow{S}_{t}(G)\) as per the winning strategy. Accordingly, we place the cops on the vertices of \(G\) such that if a cop, say \(\mathcal{C}_{i}\), is placed on a vertex \(x\in V(\overrightarrow{S}_{t}(G))\), then we place \(\mathcal{C}_{i}\) on the vertex \(f(x)\) in \(V(G)\). Now, \(\mathcal{R}\) enters a vertex, say \(u\), in \(V(G)\). In \(\overrightarrow{S}_{t}(G)\) also, we place \(\mathcal{R}\) on the same vertex \(u\). _Move Translations:_ Now, the game proceeds as follows. Each round in \(G\) is translated to \(t\) rounds in \(\overrightarrow{S}_{t}(G)\). For each move of \(\mathcal{R}\) in \(G\) from a vertex \(u\) to a vertex \(v\), we make \(t\) moves of \(\mathcal{R}\) in \(\overrightarrow{S}_{t}(G)\) from \(u\) to \(v\) along the directed path \(uv_{1}^{u}\cdots v_{t-1}^{u}v\) if \(u\) and \(v\) are distinct vertices; else, \(\mathcal{R}\) stays at the same vertex for the next \(t\) moves in \(\overrightarrow{S}_{t}(G)\). The cops move according to their winning strategy in \(\overrightarrow{S}_{t}(G)\) during these \(t\) rounds. Notice that in these \(t\) moves, if a cop starts from a vertex \(x\) and finishes at a vertex \(y\), then there is a directed path either from \(x\) to \(y\) or from \(y\) to \(x\) of length at most \(t\). Therefore, due to Observation 4.1, \(f(y)\in N_{G}[f(x)]\). Thus, when a cop, say \(\mathcal{C}_{i}\), moves from a vertex \(x\) to a vertex \(y\) in these \(t\) moves in \(\overrightarrow{S}_{t}(G)\), we move \(\mathcal{C}_{i}\) from \(f(x)\) to \(f(y)\) in \(G\). _Capture:_ Then, the game goes on like this. Since \(k\) cops have a winning strategy in \(\overrightarrow{S}_{t}(G)\), they will capture \(\mathcal{R}\) after a finite number of rounds in \(\overrightarrow{S}_{t}(G)\). That is, after a finite number of rounds, a cop, say \(\mathcal{C}_{i}\), and \(\mathcal{R}\) will be on the same vertex \(x\) in \(\overrightarrow{S}_{t}(G)\). This translates to \(\mathcal{C}_{i}\) and \(\mathcal{R}\) both being on \(f(x)\) in \(G\). This completes our proof. Next, we provide an upper bound on the normal cop number (and hence, on the strong cop number) of \(\overrightarrow{S}_{t}(G)\) by establishing that it is at most \(c(G)+1\). In particular, we have the following lemma. **Lemma 4.3**.: _Let \(G\) be a simple graph. Then, \(c_{n}(\overrightarrow{S}_{t}(G))\leq c(G)+1\)._ Proof.: Let \(k\) cops have a winning strategy in \(G\). 
We will use this strategy to get a winning strategy for \(k+1\) cops in \(\overrightarrow{S}_{t}(G)\) for the normal cop model. Here also, we will play two games simultaneously. _Game Setup:_ The game begins with \(k\) cops placing themselves on the vertices of \(G\) as per the winning strategy. Let the \(k\) cops be marked as \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\). We place \(k\) cops on the same vertices in \(\overrightarrow{S}_{t}(G)\), i.e., if a cop \(\mathcal{C}_{i}\) is on a vertex \(x\) in \(G\), then \(\mathcal{C}_{i}\) is placed on the vertex \(x\) in \(\overrightarrow{S}_{t}(G)\) as well. Moreover, we have an extra _dummy cop_, denoted \(D_{1}\), in \(\overrightarrow{S}_{t}(G)\). We place \(D_{1}\) on an arbitrary vertex in \(\overrightarrow{S}_{t}(G)\). Now, \(\mathcal{R}\) enters a vertex, say \(x\), in \(\overrightarrow{S}_{t}(G)\). We place \(\mathcal{R}\) on \(f(x)\) in \(G\). _Move Translations:_ Now, the game proceeds as follows. The cops move in \(G\) as per their winning strategy. This move of the cops is translated to \(t\) moves of the cops in \(\overrightarrow{S}_{t}(G)\) as follows. If a cop \(\mathcal{C}_{i}\) moves from a vertex \(u\) to \(v\) in \(G\), then it moves from \(u\) to \(v\) in \(\overrightarrow{S}_{t}(G)\) along the directed path \(uv_{1}^{u}\cdots v_{t-1}^{u}v\). During these \(t\) moves, \(\mathcal{R}\) might move from its current position \(x\) to some vertex \(y\) such that there is a directed path from \(x\) to \(y\) of length at most \(t\). Therefore, due to Observation 4.1, \(f(y)\in N_{G}[f(x)]\). Thus, we move \(\mathcal{R}\) from \(f(x)\) to \(f(y)\) in \(G\). Then, the game goes on. _Capture:_ Since \(k\) cops have a winning strategy in \(G\), after a finite number of rounds, they can capture \(\mathcal{R}\) in \(G\). Consider the positions of the cops and the robber just before the capture. Let \(\mathcal{R}\) be on a vertex \(u\in V(G)\). Then, there is a cop at a vertex \(v\in N_{G}(u)\), and for every vertex \(w\in N_{G}(u)\setminus\{v\}\), there is a cop on some vertex in \(N_{G}[w]\). This position translates to \(\overrightarrow{S}_{t}(G)\) in the following manner. \(\mathcal{R}\) occupies either the vertex \(u\) or some vertex \(u_{j}^{w}\) where \(w\in N_{G}(u)\) and \(j<t\). Moreover, for every vertex \(v\in N_{G}(u)\), there is a cop that can reach \(v\) in at most \(t\) rounds. Now comes the role of the dummy cop. In each round, \(D_{1}\) moves towards \(\mathcal{R}\) (it can do so since \(\overrightarrow{S}_{t}(G)\) is strongly connected) and forces \(\mathcal{R}\) to move after every finite number of rounds. This way, first, \(\mathcal{R}\) is forced to move to \(u\), and then to move to some vertex \(v_{1}^{u}\) where \(v\in N_{G}(u)\). Since there is a cop that can reach \(v\) in at most \(t\) rounds, this cop, say \(\mathcal{C}_{i}\), will start moving towards \(v\). Moreover, since \(\mathcal{R}\) has to move after every finite number of rounds (due to the dummy cop), it will eventually reach \(v\) (after at most \(t-1\) of its moves), where it will be captured by \(\mathcal{C}_{i}\). Hence, \(c_{n}(\overrightarrow{S}_{t}(G))\leq c(G)+1\). Thus, we get the following theorem as a consequence of Lemma 4.2 and Lemma 4.3, which bounds both the strong cop number and the normal cop number of \(\overrightarrow{S}_{t}(G)\) in terms of \(c(G)\). **Theorem 4.4**.: _Let \(G\) be a simple graph. 
Then, \(c(G)\leq c_{s}(\overrightarrow{S}_{t}(G))\leq c_{n}(\overrightarrow{S}_{t}(G))\leq c(G)+1\)._ Theorem 4.4 also establishes a lower bound on the weak cop number of \(\overrightarrow{S}_{t}(G)\). In the following result, we establish an upper bound on the weak cop number of \(\overrightarrow{S}_{2}(G)\) in terms of \(c(G)\). In particular, we have the following result. **Theorem 4.5**.: _Let \(G\) be a simple graph. Then, \(c(G)\leq c_{w}(\overrightarrow{S}_{2}(G))\leq c(G)+2\)._ Proof.: The lower bound follows directly from Theorem 4.4 by taking \(t=2\). To prove the upper bound, we provide a winning strategy using \(c(G)+2\) weak cops against a strong robber. Let \(c(G)=k\). Here also, we will play the two games simultaneously. _Game Setup:_ The game begins with \(k\) cops placing themselves on the vertices of \(G\) as per the winning strategy. Let the \(k\) cops be marked as \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\). We place \(k\) cops on the same vertices of \(\overrightarrow{S}_{2}(G)\), i.e., if a cop \(\mathcal{C}_{i}\) is placed on a vertex \(x\) in \(G\), then \(\mathcal{C}_{i}\) is placed on the same vertex \(x\) in \(\overrightarrow{S}_{2}(G)\) as well. Moreover, we have two extra _dummy cops_, \(D_{1}\) and \(D_{2}\). Now, \(\mathcal{R}\) enters a vertex, say \(u\), in \(\overrightarrow{S}_{2}(G)\). Then, we place the robber on \(f(u)\) in \(G\). _Move Translations:_ Now, the game proceeds as follows. Each round in \(G\) is translated to two rounds in \(\overrightarrow{S}_{2}(G)\). Each round in \(G\) begins with the \(k\) cops moving as per their winning strategy. This move is translated to two moves of the cop player in \(\overrightarrow{S}_{2}(G)\) as follows: If a cop \(\mathcal{C}_{i}\) moves from a vertex \(u\) to a vertex \(v\) in \(G\), then \(\mathcal{C}_{i}\) moves from \(u\) to \(v\) in \(\overrightarrow{S}_{2}(G)\) along the directed path \(uv_{1}^{u}v\) in two moves. During these two moves of the cops, \(\mathcal{R}\) can also make two moves in \(\overrightarrow{S}_{2}(G)\). Suppose \(\mathcal{R}\) moves from a vertex \(u\) to a vertex \(v\), and then to a vertex \(w\) (\(u,v\), and \(w\) are not necessarily distinct). If one of \(v\) or \(w\) is an original vertex in \(\overrightarrow{S}_{2}(G)\), then \(\mathcal{R}\) moves to that vertex in \(G\); otherwise, \(\mathcal{R}\) does not move in \(G\). Notice that \(v\) and \(w\) cannot both be original vertices if they are distinct. Hence, at most one of them is an original vertex, and therefore, the next move of \(\mathcal{R}\) in \(G\) is well defined. Moreover, observe that, following this procedure, after a round, if \(\mathcal{R}\) is at a vertex \(u\) in \(G\), then \(\mathcal{R}\) is at a vertex in \(N_{\overrightarrow{S}_{2}(G)}[u]\) in \(\overrightarrow{S}_{2}(G)\). _Capture:_ Since \(k\) cops have a winning strategy in \(G\), they will be able to capture \(\mathcal{R}\) in \(G\) after a finite number of rounds. Let \(\mathcal{R}\) be at a vertex \(u\) just before the capture. At this instance, there is a cop at a vertex \(v\in N_{G}(u)\), and for every vertex \(w\in N_{G}(u)\setminus\{v\}\), there is a cop on some vertex in \(N_{G}[w]\). Consider the translation of this situation in \(\overrightarrow{S}_{2}(G)\): \(\mathcal{R}\) is at a vertex in \(\{u\}\cup N^{+}(u)\cup N^{-}(u)\). Now, we have the following claim. 
**Claim 4.5.1**.: _If \(\mathcal{R}\) is at the original vertex \(u\) at this instance, it will be captured after a finite number of rounds._ Proof of Claim.: First, we establish that \(\mathcal{R}\) will have to move after every finite number of rounds. To see this, suppose the robber occupies a vertex, say \(y\), and the dummy cop \(D_{2}\) occupies a vertex, say \(z\). If \(\mathcal{R}\) does not make a move, then \(D_{2}\) moves towards \(y\) along a shortest path between \(y\) and \(z\) in \(\overrightarrow{S}_{2}(G)\). Note that such a path always exists and has a finite length since \(\overrightarrow{S}_{2}(G)\) is a strongly connected finite digraph. Thus, if \(\mathcal{R}\) does not move from \(y\), \(D_{2}\) will eventually reach \(y\) and capture \(\mathcal{R}\) in a finite number of rounds. Now, if \(\mathcal{R}\) moves to the vertex \(u_{1}^{v}\), then the cop at \(v\) will capture \(\mathcal{R}\). If \(\mathcal{R}\) moves to the vertex \(v_{1}^{u}\), then the cop at \(v\) stays at its current vertex, and the dummy cop \(D_{2}\) keeps moving towards the vertex \(u\). Since the vertex \(v\) is occupied by a cop, \(\mathcal{R}\) cannot move to \(v\) (as long as a cop occupies \(v\)). Moreover, since \(D_{2}\) is moving towards \(u\), observe that if \(\mathcal{R}\) keeps oscillating between \(v_{1}^{u}\) and \(u\), it will be captured after a finite number of rounds. Hence, after a finite number of rounds, \(\mathcal{R}\) will have to move to a vertex \(w_{1}^{u}\) or \(u_{1}^{w}\). At this point, the cop at \(v\) moves to \(u_{1}^{v}\), ensuring that \(\mathcal{R}\) cannot return to \(u\) in the next move. Moreover, recall that there is a cop, say \(\mathcal{C}_{1}\), at a vertex in \(N_{G}[w]\). Now, \(\mathcal{C}_{1}\) will move towards \(w\). Since \(\mathcal{R}\) is at \(w_{1}^{u}\) or \(u_{1}^{w}\), and it has to move after every finite number of rounds, it will have to move to either \(u\) or \(w\), and it cannot move to \(u\) due to the cop at \(u_{1}^{v}\). Hence, \(\mathcal{R}\) will have to move to \(w\), where it will be captured by \(\mathcal{C}_{1}\). \(\diamond\) Due to Claim 4.5.1, we can assume that \(\mathcal{R}\) is at a new vertex in \(N^{+}(u)\cup N^{-}(u)\). Now again, \(D_{2}\) will move towards \(\mathcal{R}\) and will force \(\mathcal{R}\) to move after a finite number of rounds. At this point, \(\mathcal{R}\) is at an original vertex \(x\in N_{G}[u]\setminus\{v\}\) and there is a cop at a vertex \(y\) in \(N_{G}(x)\) (in case \(\mathcal{R}\) does not get captured at this step). Without loss of generality, let us rename the cops such that the cop at \(y\) is named \(D_{1}\). Next, we have the following claim. **Claim 4.5.2**.: _Let \(\mathcal{R}\) be at an original vertex \(u\) in \(\overrightarrow{S}_{2}(G)\) and \(D_{1}\) be at an original vertex \(v\) such that \(v\in N_{G}(u)\). Then, in a finite number of rounds, \(D_{1}\) and \(D_{2}\) can force \(\mathcal{R}\) to move to a vertex \(w\in N_{G}(u)\setminus\{v\}\) such that \(D_{1}\) is at the vertex \(u\)._ Proof of Claim.: If \(\mathcal{R}\) does not move, \(D_{2}\) moves towards \(\mathcal{R}\) and forces \(\mathcal{R}\) to move after a finite number of rounds. Similarly to the arguments presented in the proof of Claim 4.5.1, the cops force \(\mathcal{R}\) to first move to a vertex \(w_{1}^{u}\) or \(u_{1}^{w}\) such that \(w\in N_{G}(u)\setminus\{v\}\). At this point, \(D_{1}\) moves to \(u_{1}^{v}\), ensuring that \(\mathcal{R}\) cannot come back to \(u\). 
Since \(\mathcal{R}\) is forced by \(D_{2}\) to move after every finite number of rounds, it will have to move to \(w\) after a finite number of rounds, and \(D_{1}\) moves to \(u\) at this point. \(\diamond\) Now, the game proceeds as follows. The \(k\) cops, \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\), begin in \(G\) again as per their winning strategy in \(G\). Now, the move translation is slightly different. For each round in \(G\), the \(k\) cops move as per their winning strategy. In \(\overrightarrow{S}_{2}(G)\), the cops move as per the move translation explained above in two rounds. In these two rounds, if \(\mathcal{R}\) has moved to an original vertex, then we move \(\mathcal{R}\) in \(G\) accordingly. Otherwise, \(D_{1}\) and \(D_{2}\) force \(\mathcal{R}\) to move to an original vertex, as per Claim 4.5.2. The other cops, \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\), stay at their current positions during these moves in \(\overrightarrow{S}_{2}(G)\). Once \(\mathcal{R}\) has moved to an original vertex in \(\overrightarrow{S}_{2}(G)\), we move \(\mathcal{R}\) in \(G\) according to the move translation. In this case, observe that when \(\mathcal{R}\) gets captured in \(G\), it is at an original vertex in \(\overrightarrow{S}_{2}(G)\). Therefore, due to Claim 4.5.1, it gets captured in \(\overrightarrow{S}_{2}(G)\) as well after a finite number of rounds. This completes our proof. Here, we present an easy result that will be used to prove winning strategies for cops. **Lemma 4.6**.: _Let \(G\) be a simple graph. Consider the strong cop model in the graph \(\overrightarrow{S}_{t}(G)\). If, on a cop's move, \(\mathcal{R}\) occupies a vertex of the form \(v_{i}^{u}\), where \(i\in[t-1]\), and there is a cop \(\mathcal{C}\) occupying a vertex \(x\) such that there is a directed path from either \(x\) to \(v\) or \(v\) to \(x\) of length at most \((t-i)+1\), then \(\mathcal{R}\) will be captured in at most \(2t\) rounds._ Proof.: Assume the scenario to be as in the statement. \(\mathcal{C}\) begins by moving towards \(v\) to decrease its distance to \(v\) to \(t-i\). Notice that a strong cop can move against the orientation of the arcs as well. Now, in the next \(t-i\) rounds, the only possible moves for \(\mathcal{R}\) are either staying at the same vertex or moving towards \(v\) (since \(\mathcal{R}\) can only make weak moves). Hence, after these \(t-i\) moves, \(\mathcal{R}\) is either at the vertex \(v\) or at a vertex \(v_{j}^{u}\) such that \(j\geq i\). During these rounds, in each move, \(\mathcal{C}\) moves towards \(v\) and reaches \(v\) after \(t-i\) rounds. In this instance, if \(\mathcal{R}\) is at the vertex \(v\), it gets captured. Otherwise, \(\mathcal{R}\) is at a vertex \(v_{j}^{u}\) where \(j\geq i\). Moreover, the only moves possible for \(\mathcal{R}\) are to move towards \(v\). Now, \(\mathcal{C}\) can make strong moves towards \(v_{j}^{u}\) and capture \(\mathcal{R}\) in at most \(t\) more rounds. Next, we provide a sufficient condition under which the cop number of \(G\) and the strong cop number of its strong subdivision are the same. **Theorem 4.7**.: _Let \(G\) be a triangle-free undirected graph. Then \(c(G)=c_{s}(\overrightarrow{S}_{t}(G))\)._ Proof.: Let \(c(G)=k\). Due to Theorem 4.4, to establish our claim, it is sufficient to prove that \(k\) strong cops have a winning strategy in \(\overrightarrow{S}_{t}(G)\), i.e., \(c_{s}(\overrightarrow{S}_{t}(G))\leq k\). We provide such a strategy below. 
Similarly to the proof of Lemma 4.3, we play two games simultaneously in \(G\) and \(\overrightarrow{S}_{t}(G)\) and use the winning strategy of \(k\) cops in \(G\) to achieve the following in \(\overrightarrow{S}_{t}(G)\). Each cop \(\mathcal{C}_{i}\) is on an original vertex, and \(\mathcal{R}\) occupies a vertex \(u\) or a vertex of the form \(u_{j}^{w}\) (where \(j<t\)). Moreover, there is a cop, say \(\mathcal{C}\), on a vertex \(v\in N_{G}(u)\), and for each vertex \(x\in N_{G}(u)\), there is a cop on some vertex \(y\in N_{G}[x]\) in \(\overrightarrow{S}_{t}(G)\). Now, we distinguish the following two cases. 1. \(\mathcal{R}\) occupies the vertex \(u\): In this case, \(\mathcal{C}\) moves towards \(u\) following the directed path \(uv_{1}^{u}\cdots v_{t-1}^{u}v\) against the orientation of the arcs. Now, if \(\mathcal{R}\) does not move for the next \(t\) rounds, then it will be captured by \(\mathcal{C}\) in at most \(t\) rounds. If \(\mathcal{R}\) starts moving towards \(v\) (using the same path), observe that it will be captured after at most \(t\) rounds. If \(\mathcal{R}\) starts moving towards some vertex \(w\in N(u)\setminus\{v\}\), then observe that \(\mathcal{R}\) is at the vertex \(w_{1}^{u}\) and there is a cop at a distance of at most \(t\) from \(w\). Hence, due to Lemma 4.6, \(\mathcal{R}\) will be captured after at most \(2t\) cop moves. 2. \(\mathcal{R}\) occupies a vertex \(u_{i}^{x}\): In this case, \(\mathcal{C}\) starts moving towards \(u\) following the directed path \(uv_{1}^{u}\cdots v_{t-1}^{u}v\) against the orientation of the arcs. If \(\mathcal{R}\) does not reach \(u\) before \(\mathcal{C}\) reaches \(u\), then \(\mathcal{R}\) will be captured in the next \(t\) moves. If \(\mathcal{R}\) reaches \(u\) when \(\mathcal{C}\) is at some vertex \(v_{j}^{u}\), then notice that we have considered a similar scenario in the previous case. Hence, \(\mathcal{R}\) will be captured in at most \(2t\) rounds. Thus, \(k\) strong cops will capture \(\mathcal{R}\) in \(\overrightarrow{S}_{t}(G)\) after a finite number of rounds. Even though the characterization of strong-cop win graphs is an open problem, we can characterize all strong-cop win oriented graphs that are strong-subdivisions. **Theorem 4.8**.: _Let \(G\) be a graph. Then \(\overrightarrow{S}_{t}(G)\) is strong-cop win if and only if \(G\) is a tree._ Proof.: In one direction, let \(G\) be a tree. Then, we know that \(c(G)=1\). Moreover, since \(G\) is triangle-free, we have \(c_{s}(\overrightarrow{S}_{t}(G))=c(G)=1\) due to Theorem 4.7. Thus, \(\overrightarrow{S}_{t}(G)\) is strong-cop win. In the reverse direction, we show that if \(G\) is not a tree, then \(c_{s}(\overrightarrow{S}_{t}(G))>1\). If \(c(G)>1\), then \(c_{s}(\overrightarrow{S}_{t}(G))>1\) due to Lemma 4.2. Therefore, we assume \(c(G)=1\) and look for a contradiction. Next, we have the following easy claim, which we prove for the sake of completeness. **Claim 4.8.1**.: _Let \(G\) be a cop win graph that is not a tree. Then, \(G\) contains at least one triangle._ Proof of Claim.: Let \(v\) be a corner vertex in \(G\). Then \(G\) is cop win if and only if \(G-\{v\}\) is cop win [7]. Therefore, let \(H\) be the graph that we get by removing the _leaves_ (vertices of degree \(1\)) from \(G\) recursively and exhaustively. Since each leaf is a corner, we have that \(H\) is also a cop win graph. Since \(G\) was not a tree, \(H\) contains at least one cycle. Now, since \(H\) is cop win, it must contain at least one corner vertex, say \(u\). 
Let \(u\) be a corner vertex with \(N[u]\subseteq N[v]\) for some vertex \(v\). Now, since \(u\) has degree at least two, \(u\) has a neighbor \(w\) distinct from \(v\). Finally, since \(N[u]\subseteq N[v]\), we have that \(uvw\) is a triangle in \(H\) as well as in \(G\). \(\diamond\) Hence, due to Claim 4.8.1, we have that \(G\) contains at least one triangle, say \(uvw\). Now, consider the graph \(\overrightarrow{S}_{t}(G)\). The robber will stay in the subgraph of \(\overrightarrow{S}_{t}(G)\) corresponding to the triangle \(uvw\). The game begins with the cop, say \(\mathcal{C}\), placing itself on a vertex of the graph. Irrespective of the beginning position of \(\mathcal{C}\), there is at least one vertex \(x\in\{u,v,w\}\) such that \(x\) is neither an in-neighbor nor an out-neighbor of the current position of \(\mathcal{C}\). Now, \(\mathcal{R}\) stays on this vertex \(x\) unless \(\mathcal{C}\) moves to a vertex \(x_{t-1}^{y}\) or a vertex \(y_{1}^{x}\) where \(y\in N_{G}(x)\). At this instance, notice that there is at least one vertex \(z\in\{u,v,w\}\setminus\{x,y\}\). Now, \(\mathcal{R}\) moves to the vertex \(z_{1}^{x}\), and in the next \(t-1\) moves, keeps moving towards \(z\). Now, we claim that \(\mathcal{C}\) cannot capture \(\mathcal{R}\) in these \(t\) moves. This is because each vertex in the directed path between \(x\) and \(z\) is closer to \(\mathcal{R}\) than to \(\mathcal{C}\). Finally, observe that once \(\mathcal{C}\) reaches \(z\), we are in a situation identical to the one we started with. Hence, \(\mathcal{R}\) will wait at the vertex \(z\) unless it is under attack, and then move to some other vertex of the triangle \(uvw\). This way, \(\mathcal{R}\) will evade capture forever. This completes our proof. ## 5 Weak Subdivisions Let \(\overrightarrow{G}\) be an oriented graph. Let \(\overrightarrow{W}_{t}(\overrightarrow{G})\) be the oriented graph obtained by replacing each arc \(\overrightarrow{uv}\) of \(\overrightarrow{G}\) by a directed path of length \(t\) from \(u\) to \(v\) of the form \(uv_{1}^{u}\cdots v_{t-1}^{u}v\). The oriented graph \(\overrightarrow{W}_{t}(\overrightarrow{G})\) is called the _weak \(t\)-subdivision_ of \(\overrightarrow{G}\). Similarly to the definition of strong subdivisions, the vertices \(u\) and \(v\) are termed _original_ vertices, while the vertices of the form \(u_{j}^{w}\) (\(j<t\)) are termed _new_ vertices. Moreover, similarly to Section 4, we define a function \(g:V(\overrightarrow{W}_{t}(\overrightarrow{G}))\to V(\overrightarrow{G})\) such that for any \(x,y\in V(\overrightarrow{G})\), \(i\in[t-1]\) and \(x_{i}^{y}\in V(\overrightarrow{W}_{t}(\overrightarrow{G}))\), we have \(g(x_{i}^{y})=x\) and \(g(x)=x\). Finally, we have an observation similar to Observation 4.1. **Observation 5.1**.: _For any two vertices \(x,y\in V(\overrightarrow{W}_{t}(\overrightarrow{G}))\), if there is a directed path from \(x\) to \(y\) of length at most \(t\), then \(g(y)\in N_{\overrightarrow{G}}^{+}[g(x)]\)._ We know that, given a simple graph \(G\), its cop number does not decrease if we subdivide each edge of \(G\) [6]. We prove its oriented analog for each of the three models. First, we have the following lemma. **Lemma 5.2**.: _Let \(\overrightarrow{G}\) be an oriented graph. 
Then, \(c_{n}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{n}(\overrightarrow{G})\) and \(c_{w}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{w}(\overrightarrow{G})\)._ Proof.: Here we will play two games simultaneously in \(\overrightarrow{G}\) and \(\overrightarrow{W}_{t}(\overrightarrow{G})\). The proofs of \(c_{n}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{n}(\overrightarrow{G})\) and \(c_{w}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{w}(\overrightarrow{G})\) are almost identical. We provide the proof of \(c_{n}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{n}(\overrightarrow{G})\) first. Let \(c_{n}(\overrightarrow{W}_{t}(\overrightarrow{G}))=k\). We will use the strategy of these \(k\) cops in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) to get a strategy for \(k\) cops in \(\overrightarrow{G}\). _Setup:_ The game starts with the \(k\) cops placing themselves on the vertices of \(\overrightarrow{W}_{t}(\overrightarrow{G})\) as per the winning strategy. Now, if a cop \(\mathcal{C}_{i}\) is placed at a vertex \(x\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\), then we place \(\mathcal{C}_{i}\) on \(g(x)\) in \(\overrightarrow{G}\). Next, \(\mathcal{R}\) enters at a vertex in \(\overrightarrow{G}\). Then, we place \(\mathcal{R}\) on the same vertex in \(\overrightarrow{W}_{t}(\overrightarrow{G})\). _Move Translation:_ Now, the game proceeds as follows. To ease the presentation, let the cops skip their first move in \(\overrightarrow{G}\) as well as in \(\overrightarrow{W}_{t}(\overrightarrow{G})\); then each round in \(\overrightarrow{G}\) consists of \(\mathcal{R}\) moving, followed by the cops moving. Note that this does not hurt the winning strategy of the \(k\) cops in \(\overrightarrow{W}_{t}(\overrightarrow{G})\), since the cops can win irrespective of the starting position of \(\mathcal{R}\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\). Then, each round in \(\overrightarrow{G}\) is translated to \(t\) rounds in \(\overrightarrow{W}_{t}(\overrightarrow{G})\). If \(\mathcal{R}\) moves from a vertex \(u\) to \(v\) in \(\overrightarrow{G}\), then \(\mathcal{R}\) moves from \(u\) to \(v\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\), along the path \(uv_{1}^{u}\cdots v_{t-1}^{u}v\), in the next \(t\) rounds. In these \(t\) rounds, the cops move in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) as per their winning strategy. Note that each cop \(\mathcal{C}_{i}\) moves from a vertex \(x\) to a vertex \(y\) such that there is a directed path from \(x\) to \(y\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) of length at most \(t\). Hence, due to Observation 5.1, \(g(y)\in N_{\overrightarrow{G}}^{+}[g(x)]\). Thus, we move \(\mathcal{C}_{i}\) from vertex \(g(x)\) to \(g(y)\) in \(\overrightarrow{G}\). _Capture:_ Since \(k\) cops have a winning strategy in \(\overrightarrow{W}_{t}(\overrightarrow{G})\), they will capture \(\mathcal{R}\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) after a finite number of rounds. Notice that at this point, \(\mathcal{R}\) gets captured in \(\overrightarrow{G}\) as well. This completes the proof of \(c_{n}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{n}(\overrightarrow{G})\). Now, the proof of \(c_{w}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{w}(\overrightarrow{G})\) is similar, with the following changes. Let \(c_{w}(\overrightarrow{W}_{t}(\overrightarrow{G}))=k\). Then we similarly borrow the strategy of \(k\) cops in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) to get a winning strategy in \(\overrightarrow{G}\). Here the setup is exactly the same. 
Next, in a move, if \(\mathcal{R}\) makes a strong move from \(u\) to \(v\) in \(\overrightarrow{G}\), then note that \(\mathcal{R}\) can move from \(u\) to \(v\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) by making \(t\) strong moves. Finally, the cops will move in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) according to the winning strategy, and notice that when \(\mathcal{R}\) gets captured in \(\overrightarrow{W}_{t}(\overrightarrow{G})\), it gets captured in \(\overrightarrow{G}\) as well. In the following lemma, we prove that the strong cop number of an oriented graph does not decrease under the operation of weak subdivision. **Lemma 5.3**.: _Let \(\overrightarrow{G}\) be an oriented graph. Then, \(c_{s}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{s}(\overrightarrow{G})\)._ Proof.: The proof here is similar to the proof of Lemma 5.2. Here also, we will play two games simultaneously: one on \(\overrightarrow{W}_{t}(\overrightarrow{G})\) and one on \(\overrightarrow{G}\), in the following manner. Let \(c_{s}(\overrightarrow{W}_{t}(\overrightarrow{G}))=k\). _Setup:_ The game begins with \(k\) cops placing themselves on the vertices of \(\overrightarrow{W}_{t}(\overrightarrow{G})\) as per their winning strategy. Now, if a cop \(\mathcal{C}_{i}\) is placed on a vertex \(x\in V(\overrightarrow{W}_{t}(\overrightarrow{G}))\), then we place \(\mathcal{C}_{i}\) on a vertex of \(\overrightarrow{G}\) in the following manner. If \(x\) is an original vertex, then we place \(\mathcal{C}_{i}\) on \(x\). Else, \(x\) is of the form \(u_{j}^{v}\) (where \(j<t\)), and then we place \(\mathcal{C}_{i}\) on \(u\). (The choice of \(u\) and \(v\) is not important here. We can choose either of them, and the rest of the proof will remain the same.) Then, \(\mathcal{R}\) enters at a vertex, say \(w\), of \(\overrightarrow{G}\). We then place \(\mathcal{R}\) on \(w\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\). _Move Translation:_ Now, the game proceeds as follows. The cops miss their first move, as in the proof of Lemma 5.2. Hence, we may assume that each round contains first the move of \(\mathcal{R}\) and then the move of the cops. Hence, \(\mathcal{R}\) moves in \(\overrightarrow{G}\) from a vertex, say \(u\), to a vertex, say \(v\). This move gets translated to \(t\) moves of \(\mathcal{R}\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\). Now, \(\mathcal{R}\) moves in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) from \(u\) to \(v\) along the directed path between \(u\) and \(v\) of length \(t\). In these \(t\) rounds, the cops move as per their winning strategy. Now, these \(t\) moves of a cop \(\mathcal{C}_{i}\) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) are translated to a move of \(\mathcal{C}_{i}\) in \(\overrightarrow{G}\) in the following manner. Note that if \(\mathcal{C}_{i}\) moves from a vertex, say \(x\), to a vertex, say \(y\), then there is a directed path from either \(x\) to \(y\) or from \(y\) to \(x\) of length at most \(t\). If both \(x\) and \(y\) are original vertices, then \(\mathcal{C}_{i}\) is at the vertex \(x\) in \(\overrightarrow{G}\) and it moves to the vertex \(y\) in \(\overrightarrow{G}\) (note that \(x\) and \(y\) are adjacent in \(\overrightarrow{G}\)). Otherwise, the \(x\) to \(y\) path in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) contains at most one original vertex. If it does not contain any original vertex, then \(\mathcal{C}_{i}\) stays at the same vertex; else, \(\mathcal{C}_{i}\) moves to the original vertex contained in the path, say \(z\), in \(\overrightarrow{G}\). 
The only thing to observe here is that if \(\mathcal{C}_{i}\) is at a vertex of the form \(u_{j}^{v}\) (\(j<t\)) in \(\overrightarrow{W}_{t}(\overrightarrow{G})\), then \(\mathcal{C}_{i}\) is either at \(u\) or \(v\) in \(\overrightarrow{G}\), and hence, \(\mathcal{C}_{i}\) can always make the promised moves. _Capture:_ Finally, notice that when \(\mathcal{R}\) gets captured in \(\overrightarrow{W}_{t}(\overrightarrow{G})\) at an original vertex, then it gets captured in \(\overrightarrow{G}\) as well. Finally, the following theorem is implied by Lemma 5.2 and Lemma 5.3. **Theorem 5.4**.: _Let \(\overrightarrow{G}\) be an oriented graph. Then,_ 1. \(c_{n}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{n}(\overrightarrow{G})\)_,_ 2. \(c_{w}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{w}(\overrightarrow{G})\)_,_ 3. \(c_{s}(\overrightarrow{W}_{t}(\overrightarrow{G}))\geq c_{s}(\overrightarrow{G})\)_._ ## 6 Computational Complexity In this section, we establish that, assuming \(P\neq NP\), for an oriented graph \(\overrightarrow{G}\), there can be no polynomial time algorithm to decide whether \(c_{x}(\overrightarrow{G})=k\) where \(x\in\{s,n,w\}\), even when \(\overrightarrow{G}\) is restricted to be a 2-degenerate oriented bipartite graph. These results are not very surprising, since most pursuit-evasion games are indeed computationally difficult. We shall use the following well-known result on the approximation hardness of Minimum Dominating Set (MDS) [4]. For a graph \(G\), \(\gamma(G)\) denotes its _domination number_, that is, the size of a minimum dominating set of \(G\). **Proposition 6.1** ([4]).: _Unless \(P=NP\), there is no polynomial time approximation algorithm that approximates Minimum Dominating Set with an approximation ratio \(o(\log n)\)._ Fomin et al. [17] proved that Cops and Robber is NP-hard. They did so by providing a reduction from MDS on a graph \(G\) to Cops and Robber on a graph \(G^{\prime}\). Moreover, their construction yields the following result. **Proposition 6.2** ([17]).: _A graph \(G\) has a dominating set of size \(k\) if and only if \(G^{\prime}\) is \(k\)-copwin._ Next, consider the graph \(G^{\prime}\). From \(G^{\prime}\), we get the graph \(\overrightarrow{S}_{2}(G^{\prime})\). Hence, we have the following corollary as a consequence of Proposition 6.2, Theorem 4.4, and Theorem 4.5. **Corollary 6.3**.: _For any graph \(G\), we have_ \[\gamma(G)\leq c_{s}(\overrightarrow{S}_{2}(G^{\prime}))\leq c_{n}(\overrightarrow{S}_{2}(G^{\prime}))\leq c_{w}(\overrightarrow{S}_{2}(G^{\prime}))\leq\gamma(G)+2.\] Proof.: From Theorem 4.4 and Theorem 4.5, it follows that \(c(G^{\prime})\leq c_{s}(\overrightarrow{S}_{2}(G^{\prime}))\leq c_{n}(\overrightarrow{S}_{2}(G^{\prime}))\leq c_{w}(\overrightarrow{S}_{2}(G^{\prime}))\leq c(G^{\prime})+2\). Now, combining this with the fact that \(\gamma(G)=c(G^{\prime})\) (Proposition 6.2), we have that \(\gamma(G)\leq c_{s}(\overrightarrow{S}_{2}(G^{\prime}))\leq c_{n}(\overrightarrow{S}_{2}(G^{\prime}))\leq c_{w}(\overrightarrow{S}_{2}(G^{\prime}))\leq\gamma(G)+2\). Hence, if we can compute any of \(c_{s}(\overrightarrow{S}_{2}(G^{\prime}))\), \(c_{n}(\overrightarrow{S}_{2}(G^{\prime}))\), or \(c_{w}(\overrightarrow{S}_{2}(G^{\prime}))\) in polynomial time, then we have an additive \(+2\) approximation for Minimum Dominating Set, which would imply that \(P=NP\) (due to Proposition 6.1). Therefore, we have the following theorem. 
**Theorem 6.4**.: _Unless \(P=NP\), for an oriented graph \(\overrightarrow{G}\), there is no polynomial time algorithm to compute any of \(c_{s}(\overrightarrow{G})\), \(c_{n}(\overrightarrow{G})\), or \(c_{w}(\overrightarrow{G})\), even if we restrict ourselves to \(2\)-degenerate bipartite oriented graphs._ Proof.: The proof follows from Corollary 6.3 and the observation that \(\overrightarrow{S}_{2}(G)\) of any simple graph \(G\) is bipartite and \(2\)-degenerate. ## 7 Conclusions In this paper, we considered three variants of the Cops and Robber game on oriented graphs, namely the _strong cop model_, the _normal cop model_, and the _weak cop model_, with respect to subdivisions and retracts. We generalized and established various results relating the cop numbers in these variants to subdivisions and retracts. One interesting implication of our results concerning subdivisions is that computing the cop number in all three models is computationally difficult. More specifically, unless \(P=NP\), none of these problems can be solved in polynomial time on oriented graphs, even when the input is restricted to \(2\)-degenerate bipartite graphs. We also remark that the idea of the proof of Theorem 4.7 can be used to establish that if we subdivide each edge of a triangle-free undirected graph an equal number of times, the cop number does not change. We are still very far from a good understanding of the Cops and Robber game on oriented graphs. For example, the question of characterizing the strong-cop win graphs and normal-cop win graphs, our original motivation to study this problem, still remains open. In an attempt to characterize strong-cop win oriented graphs, in Theorem 3.1 we showed that a strong-retract of an oriented graph retains the strong cop number. One natural question here is to find (all) the non-trivial examples of oriented strong-cop win graphs \(\overrightarrow{G}\) which do not contain a strong-retract that is strong-cop win. Moreover, while the game is well-understood on several undirected graph classes like planar graphs [3], bounded-genus graphs [8], geometric intersection graphs [12], and minor-free graphs [5], we only know an upper bound of \(O(\sqrt{n})\) [33] on the cop number of strongly connected planar directed graphs and do not know any lower bound better than \(\omega(1)\) [33]. So, another interesting research direction is to explore the cop number of the (strongly connected) directed counterparts of the above-mentioned graph classes.
2308.10112
PDL: Regularizing Multiple Instance Learning with Progressive Dropout Layers
Multiple instance learning (MIL) is a weakly supervised learning approach that seeks to assign binary class labels to collections of instances known as bags. However, due to their weakly supervised nature, MIL methods are susceptible to overfitting and struggle to develop comprehensive representations of target instances. While regularization typically combats overfitting effectively, its integration with the MIL model has frequently been overlooked in prior studies. Meanwhile, current regularization methods for MIL show limitations in their capacity to uncover a diverse array of representations. In this study, we delve into the realm of regularization within the MIL model, presenting a novel approach in the form of a Progressive Dropout Layer (PDL). We aim not only to address overfitting but also to empower the MIL model to uncover intricate and impactful feature representations. The proposed method is orthogonal to existing MIL methods and can easily be integrated into them to boost performance. Our extensive evaluation across a range of MIL benchmark datasets demonstrates that incorporating the PDL into multiple MIL methods not only elevates their classification performance but also augments their potential for weakly-supervised feature localization.
Wenhui Zhu, Peijie Qiu, Xiwen Chen, Oana M. Dumitrascu, Yalin Wang
2023-08-19T21:20:30Z
http://arxiv.org/abs/2308.10112v2
# PDL: Regularizing Multiple Instance Learning with Progressive Dropout Layers ###### Abstract Multiple instance learning (MIL) is a weakly supervised learning approach that seeks to assign binary class labels to collections of instances known as bags. However, due to their weakly supervised nature, MIL methods are susceptible to overfitting and struggle to develop comprehensive representations of target instances. While regularization typically combats overfitting effectively, its integration with the MIL model has frequently been overlooked in prior studies. Meanwhile, current regularization methods for MIL show limitations in their capacity to uncover a diverse array of representations. In this study, we delve into the realm of regularization within the MIL model, presenting a novel approach in the form of a Progressive Dropout Layer (PDL). We aim not only to address overfitting but also to empower the MIL model to uncover intricate and impactful feature representations. The proposed method is orthogonal to existing MIL methods and can easily be integrated into them to boost performance. Our extensive evaluation across a range of MIL benchmark datasets demonstrates that incorporating the PDL into multiple MIL methods not only elevates their classification performance but also augments their potential for weakly-supervised feature localization. The codes are available at [https://github.com/ChongQingNoSubway/PDL](https://github.com/ChongQingNoSubway/PDL). ## 1 Introduction Weakly-annotated data is prevalent in numerous biomedical applications, including medical image segmentation [1], drug molecule discovery [2], and tumor detection [3]. The weak supervisory signal comprises multiple instances (e.g., multiple tumor regions in a whole slide image) but is characterized by general categories (e.g., benign/malignant). Learning from such weakly-annotated data is typically formulated as the _multiple instance learning_ (MIL) problem. Unlike standard supervised learning such as image classification, MIL assigns a label to a bag of instances rather than classifying each instance individually. Typical MIL frameworks consist of an instance-level feature projector and a MIL aggregator. The feature projector maps each instance onto a feature embedding, and these embeddings are then fed into the MIL aggregator for making predictions. Accordingly, prevailing MIL research focuses on advancing either the feature projector [4] or the MIL aggregator [5; 4; 6; 7; 8]. Despite the substantial efforts directed towards these two directions, MIL models continue grappling with the issue of _overfitting_ due to their weak supervisory signals. This enduring issue presents considerable challenges to their ability to attain intricate and expressive feature representations. Regularization plays a crucial role in addressing overfitting and improving the generalizability of neural networks [9; 10; 11]. This motivates us to explore the value of adopting regularization in MIL models. Previous MIL models often integrate two types of regularization in their implementations: (i) randomly dropping instances (DropInstance) and (ii) adding Dropout [12] to the last layer of the MIL model, as commonly applied in various image classification tasks [13; 12]. 
The DropInstance [4; 5], in which a specific proportion of instances is randomly dropped before entering the network (e.g., the feature projector or the MIL aggregator), creates various combinations of instances to serve as data augmentation. However, due to the limited number of positive instances (e.g., tumors), many combinations are similar, making it challenging to alleviate overfitting effectively. Furthermore, merely adding Dropout to the MIL aggregator's classifier lacks in-depth modeling of instance correlations and proves insufficient in addressing overfitting. Besides, the regularization methods above are applied either at the beginning or at the end of a MIL model; they cannot effectively assist the MIL model in discovering diverse and rich feature representations. In contrast, AttentionDropout [14] drops strongly activated elements in deep neural networks, enabling the network to learn more potential features. This inspires us to explore a MIL model's regularization in the middle (i.e., the instance-level feature extractor). Nevertheless, AttentionDropout lacks consideration of instance relations and localizations, and its main component is not directly applicable to MIL. Such challenges compel us to explore a MIL-specific dropout method capable of mitigating overfitting while discovering latent features. This paper proposes a novel progressive dropout layer (PDL) applied to the trainable middle layer within the instance projector. The PDL consists of two main components: i) a MIL-specific attention-based dropout (MIL-AttentionDropout) and ii) a progressive learning scheduler. The MIL-AttentionDropout assigns a distinct drop rate to each instance based on its importance, enabling us to leverage inter-instance correlations to discover richer and more representative features while introducing instance-combination stochasticity to mitigate overfitting. The progressive learning scheduler adjusts the global maximum drop rates within the MIL-AttentionDropout. We progressively increase the drop rates as training progresses, effectively guiding the localizations within the MIL framework. The main contributions of this paper are as follows: * We introduce an innovative MIL dropout method to combat overfitting, offering easy integration into existing methods as a widely applicable paradigm. * We incorporate the capability to uncover latent features, specifically addressing defined challenges within the context of MIL, such as misleading localizations. This offers valuable insights for future investigations. * Extensive experiments on various MIL benchmarks demonstrate that the proposed method is effective in mitigating overfitting and discovering latent features, boosting the performance of existing MIL methods in both classification and localization accuracy. ## 2 Related Works ### MIL methods: The introduction of MI-Net [8] has increased the prominence of bag-level MIL methods, where only bag-level labels are used to supervise training. This mitigates the ambiguity of propagating bag-level labels to each instance in instance-level MIL methods [15; 16; 17]. Accordingly, empirical studies have demonstrated that bag-level MIL methods generally exhibit superior performance compared to their instance-level counterparts [5; 8]. 
The mainstream focus of bag-level MIL methods lies in advancing instance-level aggregation by incorporating attention mechanisms [7], transformers [5], knowledge distillation [6], and non-local attention [4] to capture inter-instance correspondences within a bag. However, these methods introduce more complex model structures and enlarged parameter sets, increasing the risk of overfitting. Therefore, we propose an approach orthogonal to existing MIL methods that can mitigate overfitting and discover latent features in MIL methods without altering the fundamental architectures of these methods. ### Dropout methods: Dropout has been experimentally validated as a potent technique for mitigating overfitting in diverse computer vision studies [12; 13]. Numerous investigations have ventured beyond the conventional dropout approach, exploring this avenue of research extensively. Examples include spatial dropout [13], dropout based on contiguous regions [18], and AttentionDropout [14]. In the context of MIL, where aggregators operate on instance embedding features without spatial context during training, the majority of studies use vanilla dropout [12; 13]. This practice entails the random deactivation of neurons within the bag-level classifier. Nonetheless, it neglects the intrinsic correlations existing among individual instances. Unlike the standard Dropout method, AttentionDropout [14] generates spatial attention maps via channel pooling and eliminates elements surpassing a prescribed attention-weight threshold within a feature map. This encourages the neural network to explore latent features more extensively. However, applying it directly to MIL faces challenges due to the lack of instance-correlation modeling within instance-level embeddings and the use of hard thresholding, which overlooks inter-instance correlations. We present an exhaustive analysis and discussion of these Dropout techniques in this study. Given the challenges encountered when adapting existing dropout methods to MIL, we introduce a novel MIL-specific dropout layer. This innovative approach capitalizes on inter-instance correlations to dynamically modulate the dropout rates. ## 3 MIL Preliminaries and Problem Statement ### MIL Preliminaries In the classical binary MIL classification, the objective is to learn a mapping from a given bag of instances \(X=\{x_{k}\,|\,k=1,\cdots,K\}\) to a binary label \(Y\in\{0,1\}\). In most MIL applications, the instance-level labels \(\{y_{k}\ |\ k=1,\cdots,K\}\) are unknown, making it a weakly-supervised problem: \[Y=\begin{cases}0,\ \text{iff}\ \sum_{k}y_{k}=0,\\ 1,\ \text{otherwise}.\end{cases} \tag{1}\] We consider embedding-based MIL as an example, which typically consists of two main modules: i) an instance-level projector and ii) a MIL aggregator. First, the instance-level projector \(f_{\psi}(\cdot)\), parameterized by \(\psi\) (e.g., a multi-layer perceptron), projects each instance \(x_{k}\) within a bag into a feature embedding \(\textbf{v}_{k}=f_{\psi}(x_{k})\), with \(\textbf{v}_{k}\in\mathbb{R}^{L}\), where \(L\) denotes the embedding dimension. Secondly, the MIL aggregator is applied to combine instance embeddings into a bag prediction. The bag prediction is given as a Bernoulli distribution \(P(X)\) (i.e., the probability of \(Y=1\) given a bag \(X\)) by maximizing the log-likelihood. #### 3.1.1 Instance-level projector. The instance-level projector \(f_{\psi}(\cdot)\) usually follows one of two approaches. 
One approach was a pre-trained network (e.g., a ResNet trained on ImageNet [5], or trained in a self-supervised manner [4]), which directly projected instances into embedding features that were then fed to the MIL aggregator for training. The other approach added a multilayer perceptron between the pre-trained network and the MIL aggregator. It could be used for feature dimensionality reduction [6] and provided deep supervision across various embedding dimensions [8], as shown in Figure 1(A). It underwent joint training with the MIL aggregator. The second approach is referred to as the middle layer in subsequent discussions.

Figure 1: (A) The MI-Net utilized MIL pooling to enhance supervision across various instance embedding dimensions. (B) Illustration of the progressive dropout layer (PDL) mechanism under the MIL workflow, taking one layer as an example. PDL dynamically assigned a drop rate to each instance based on Average-Pooling Based Attention (APBA) and applied Instance-Based Dropout to instances with drop rate \(p^{\prime}_{k}\). The Progressive Learning Scheduler controlled the global maximum drop rate \(P\) to generate a set of \(\{p_{1},p_{2},\dots,p_{k}\}\). This mechanism enforced the model to discover latent features and mitigated overfitting.

#### 3.1.2 MIL pooling.

The MIL aggregator was typically presented as a permutation-invariant function \(\rho\), as implied by Eq. 1. The pooling operation was a prevailing choice for such a permutation-invariant function, namely MIL pooling. The MIL could then be formulated as \(P(X)=g_{\theta}(\rho(\{\mathbf{v}_{k}\mid k=1,\cdots,K\}))\), where \(g_{\theta}(\cdot)\) was a bag-level classifier parameterized by \(\theta\). We took attention-based pooling [7] as an example. Mathematically: \[\begin{split}\rho(\{\mathbf{v}_{k}\mid k=1,\cdots,K\})=\sum_{k=1}^{K}\alpha_{k}\mathbf{v}_{k},\\ \text{with}\ \ \alpha_{k}=\text{softmax}(\mathbf{w}_{1}^{T}\text{tanh}(\mathbf{w}_{2}\mathbf{v}_{k}^{T})),\end{split} \tag{2}\] where \(\mathbf{w}_{1}\in\mathbb{R}^{D\times 1}\) and \(\mathbf{w}_{2}\in\mathbb{R}^{D\times L}\) were learnable parameters. \(\alpha_{k}\) implied the importance of the \(k\)-th instance.

### Problem Statement

In the context of MIL, Dropout, following the original formulation [12; 13], can be represented as follows: \[O=(I\circ\xi),\text{ with }\xi_{i,j}\sim\mathcal{B}(p), \tag{3}\] where \(I\) and \(O\) were the \(B\times L\) matrices of input and output features for the current minibatch \(B\) in a dropout layer. The \(\circ\) denoted the elementwise (Hadamard) product of the input matrix with a \(B\times L\) matrix of independent noise variables \(\xi\) drawn from a Bernoulli distribution with probability \(1-p\), with \(p\) as the dropout rate. Considering the embedding features of a bag of instances \(\mathbf{V}=\{\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{K}\}\) with \(\mathbf{V}\in\mathbb{R}^{K\times L}\) obtained from the projector, the dropout mask \(\xi\) is applied to each position of the instance feature space \(L\). The central questions were how to preserve the integrity of single instances and how to build relationships among instances: the elementwise dropout operation disrupted the integrity of instances and increased the uncertainty of the relations between them. Adding more Dropouts [12; 13] into MIL aggregators \(g_{\theta}(\cdot)\) was an intuitive approach for addressing overfitting. However, the issue was exacerbated by the variety of model architectures, which posed challenges in effectively integrating Dropout as a viable paradigm.
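For concreteness, a minimal PyTorch sketch of the attention-based pooling of Eq. 2 and the vanilla dropout of Eq. 3 follows; the module name, tensor shapes, and default dimensions are our own illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Attention-based MIL pooling (Eq. 2): bag embedding = sum_k alpha_k v_k."""
    def __init__(self, L=512, D=128):
        super().__init__()
        self.w2 = nn.Linear(L, D, bias=False)  # w_2 in Eq. 2
        self.w1 = nn.Linear(D, 1, bias=False)  # w_1 in Eq. 2

    def forward(self, V):                          # V: (K, L) instance embeddings
        scores = self.w1(torch.tanh(self.w2(V)))   # (K, 1) unnormalized weights
        alpha = torch.softmax(scores, dim=0)       # attention over the K instances
        return (alpha * V).sum(dim=0), alpha.squeeze(-1)

# Vanilla dropout (Eq. 3): an elementwise Bernoulli mask on the K x L bag
# matrix, which ignores the instance structure discussed in the text.
V = torch.randn(20, 512)                              # a bag of K = 20 instances
keep = torch.bernoulli(torch.full_like(V, 1 - 0.25))  # keep probability 1 - p
V_dropped = V * keep                                  # instances corrupted elementwise
```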
Inspired by MI-Net (DS) [8], we proposed to employ Dropout in the middle layers, as shown in Figure 1(A). This paradigm offered two advantages: the flexibility to add Dropout without affecting MIL aggregators and the ability to obtain representations of embedding features in different dimensions. The deep supervision method also signified the feasibility of acquiring instance weights in various dimensions by applying MIL pooling. AttentionDropout [14] was a natural choice, and it has been extended to features of different dimensions, forcing the network to discover latent features by dropping strongly activated elements. However, it fell short in addressing overfitting, and its inherent capability to unveil latent features faced challenges within the context of MIL. For instance, its attention mechanism was not directly applicable to MIL, it overlooked instance correlations as shown in Eq. 3, and its threshold-based dropout led to misleading MIL localizations.

## 4 Progressive Dropout Layer

The number of dropout layers could be adjusted based on the quantity of fully connected layers within the middle layer. To illustrate the principle, we have chosen a single layer for explanation, as depicted in Figure 1(B). The PDL mainly included the MIL-specific Attention-Based Dropout and the Progressive Learning Scheduler. The synergy between these two components aimed to imbue the capacity to unveil latent features and tackle overfitting when the PDL is employed. In pursuit of this objective, we introduced a series of innovative MIL-specific approaches designed to overcome the constraints posed by existing dropouts that are not well-suited for MIL scenarios.

### MIL-Specific Attention-Based Dropout

Given a set of dimensionality-reduced embedding features \(\tilde{\mathbf{V}}=\{\tilde{\mathbf{v}}_{1},\tilde{\mathbf{v}}_{2},\ldots,\tilde{\mathbf{v}}_{K}\}\) with \(\tilde{\mathbf{V}}\in\mathbb{R}^{K\times C}\), concatenated from \(K\) instance features \(\tilde{\mathbf{v}}_{k}\in\mathbb{R}^{C}\), we introduced three key components within MIL-specific Attention-Based Dropout, following the workflow order in Figure 1(B).

#### 4.1.1 Average-pooling based attention.

Rather than applying spatial attention focusing on each position of the feature map (e.g., the embedding feature \(\tilde{\mathbf{V}}\) includes \(K\times C\) positions), we required an instance-level attention map at the current embedding dimension \(C\), which would be employed in the subsequent instance-based dropout to establish intrinsic connections within instances. The attention-based pooling in Eq. 2 was a primary approach to obtain the instance attention map in MIL. However, since this method introduced additional parameters \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\), it would exacerbate overfitting. To address this problem, we proposed an approximate attention method, Average-Pooling Based Attention (APBA). It could be formulated as \[\begin{split}A=\text{softmax}\left(\{\text{pool}(\tilde{\mathbf{v}}_{k})\mid k=1,\cdots,K\}\right)\\ \text{with pool}(\tilde{\mathbf{v}}_{k})=\frac{1}{C}\sum_{i=1}^{C}\tilde{\mathbf{v}}_{k}^{(i)}\end{split} \tag{4}\] which applied average pooling over the embedding dimension \(C\) of each instance and then passed the pooled values through the softmax activation function to obtain the corresponding weight of each instance. The critical positions of each instance were activated after passing through a ReLU activation, and the intensity of each position was directly correlated with its contribution toward determining the bag label. The APBA served as an aggregator method to summarize the activated positions. Specifically, instances with more activated positions were considered positive instances with higher attention weights. The APBA was thus a non-parameterized approximation method that could identify the required positive instances.
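A minimal sketch of APBA as defined in Eq. 4 (the function name and shapes are our own illustration):

```python
import torch

def apba(V_tilde):
    """Average-Pooling Based Attention (Eq. 4): average each instance over
    its C embedding positions, then softmax over the K instances."""
    pooled = V_tilde.mean(dim=1)         # (K,) one pooled value per instance
    return torch.softmax(pooled, dim=0)  # (K,) parameter-free attention map

A = apba(torch.relu(torch.randn(20, 64)))  # e.g. K = 20 instances, C = 64
```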
#### 4.1.2 Dynamic drop rate assignment to each instance.

When the network was forced to drop instances with high attention weights, it guaranteed the ability to discover latent features. Randomly dropping elements to help the network see different combinations of data effectively introduced a form of data augmentation that prevented network units from co-adapting [12]. Accordingly, keeping the stochasticity of the drop rates for instances with low attention weight was crucial in addressing overfitting. Based on this observation, we considered adjusting the drop rate of each instance based on the attention map. Here we proposed a non-linear interpolation method to generate a set of drop rates for the instances, \(\{p_{1},p_{2},\ldots,p_{k}\}\). Mathematically, it was formulated as \[P/E*\log_{G}(linspace(0,G^{E}-1,K)+1). \tag{5}\] It generated a set of \(K\) drop rates ranging from 0 to \(P\), where \(P\) denoted the global maximum drop rate, \(linspace(min,max,num)\) was a linear interpolation function returning \(num\) evenly spaced samples from the interval \([min,max]\), and \(E\) and \(G\) were hyperparameters controlling the spacing of the produced set. As shown in Figure 1(B), this method generated the drop rate set \(\{p_{1},p_{2},\ldots,p_{k}\}\) with spacing concentrated close to the end of the vector (\(P\)), which ensured that instances with high weights received a correspondingly high drop rate. In contrast, other instances received drop rates proportionate to their weight ranks to maintain stochasticity. Specifically, drop rates were allocated to instances in order of their attention weights, from high to low, corresponding to the rates \(\{p_{1},p_{2},\ldots,p_{k}\}\) taken from \(P\) down to 0. In this way, we assigned the drop rates \(\{p^{\prime}_{1},p^{\prime}_{2},\ldots,p^{\prime}_{k}\}\) of the instances \(\{\tilde{\mathbf{v}}_{1},\tilde{\mathbf{v}}_{2},\ldots,\tilde{\mathbf{v}}_{K}\}\) based on the attention map \(A\).

#### 4.1.3 Instance-based dropout.

As illustrated in Eq. 3, most dropout methods lacked consideration of instance correlations. To preserve the integrity of instances and eliminate the uncertainty between instances, we proposed Instance-Based Dropout (IBD), represented as \[\tilde{\mathbf{V}}^{{}^{\prime}}=(\tilde{\mathbf{V}}\circ\xi),\ with\ \xi_{k}\sim\mathcal{B}(p^{\prime}_{k}) \tag{6}\] where \(\tilde{\mathbf{V}}^{{}^{\prime}}\) was the \(K\times C\) output matrix of the \(K\) instance features \(\{\tilde{\mathbf{v}}_{1},\tilde{\mathbf{v}}_{2},\ldots,\tilde{\mathbf{v}}_{K}\}\). The \(\circ\) denoted the elementwise product of the input matrix with a \(K\times 1\) matrix of independent noise variables \(\xi\) drawn from a Bernoulli distribution with probability \(1-p^{\prime}_{k}\), where \(p^{\prime}_{k}\) was the dropout rate taken from the assigned set \(\{p^{\prime}_{1},p^{\prime}_{2},\ldots,p^{\prime}_{k}\}\). Although the dropout methods of [13; 14] were applied to each point of the entire feature map, our method operated on each instance, so that entire instance features would be dropped out.
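A minimal sketch of the drop-rate generation of Eq. 5 and the instance-based dropout of Eq. 6, assuming the LOG spacing and the rank-based assignment described above (function names and defaults are our own):

```python
import math
import torch

def drop_rates(K, P, G=10.0, E=0.5):
    """Eq. 5: K rates rising from 0 to P, spaced densely near P."""
    x = torch.linspace(0.0, G**E - 1.0, K)
    return (P / E) * torch.log10(x + 1.0) / math.log10(G)  # log base G

def instance_based_dropout(V_tilde, A, P, training=True):
    """Eq. 6: whole instances are kept or dropped. The highest-weight
    instance gets the largest rate (close to P), per Sec. 4.1.2."""
    if not training or P <= 0:
        return V_tilde
    K = V_tilde.shape[0]
    rates = drop_rates(K, P)            # ascending: 0 ... P
    order = torch.argsort(A)            # instance indices, low -> high weight
    p_inst = torch.empty(K)
    p_inst[order] = rates               # assign p'_k by attention rank
    xi = torch.bernoulli(1.0 - p_inst).unsqueeze(1)  # (K, 1) keep mask
    return V_tilde * xi                 # drop entire instance features
```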
### Progressive Learning Scheduler

A fixed threshold (an unchanged maximum drop rate \(P\) in Eq. 5 for every training epoch) would mislead the MIL localization, leading the network to recognize negative instances as target (positive) instances. Consider the following scenario: positive instances with high attention weights were dropped out from the very beginning when employing a fixed threshold (drop rate), while the classification task was still trained under the previous bag label. This led the network to identify negative instances as positive instances when it lacked prior knowledge for positive-instance recognition (see evidence in Figure 5). To address this, we proposed a progressive learning scheduler to guide the MIL-specific Attention-Based Dropout. The idea was to suppress the dropout layer during the initial training phases, thereby giving the network the ability to identify positive instances; after that, the dropout was progressively activated. We implemented it by allocating the parameter \(P\) in Eq. 5 over \(T\) epochs.

**Definition 1**.: Any function \(t\to P(t)\) with \(P(0)=0\) and \(\lim_{t\to T}P(t)=P_{max}\) was called a progressive function; it adjusted the \(P\) in Eq. 5 for each epoch \(t\). The initial condition \(P(0)=0\) suppressed the drop rate; dropout was then gradually introduced such that \(P(t)\leq P_{max}\) for any \(t\), with final convergence \(P(t)\to P_{max}\). The progressive process could be formulated as \[P(t)=P_{max}-\partial(t)P_{max} \tag{7}\] Here \(\partial(t)\) was a monotonically decreasing function with \(0\leq\partial(t)\leq 1\); any such function satisfying the above criteria could be employed as a progressive learning scheduler. The \(P(t)\) generated \(\{P_{1},P_{2},\ldots,P_{t}\}\) over the range from 0 to \(P_{max}\), corresponding to each epoch \(t\). Each global drop rate \(P_{t}\) was fed to the dropout layers to generate the set of instance drop rates \(\{p_{1},p_{2},\ldots,p_{k}\}\) (Eq. 5, with \(P_{t}\) as \(P\)).
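A minimal sketch of a scheduler satisfying Definition 1, assuming the LOG interpolation of Eq. 5 is reused for \(P(t)\), as in the Implementation details below (the function name and arguments are our own):

```python
import math

def progressive_P(t, T, P_max, G=10.0, E=0.5):
    """Eq. 7 with a LOG-shaped partial(t): P(0) = 0 and P(t) -> P_max as
    t -> T, so dropout is suppressed early and activated gradually."""
    t = min(t, T - 1)
    x = (G**E - 1.0) * t / max(T - 1, 1)  # linspace(0, G^E - 1, T) at epoch t
    return (P_max / E) * math.log10(x + 1.0) / math.log10(G)

# e.g. feed P_t to the dropout layers at the start of every epoch:
# for t in range(T): P_t = progressive_P(t, T=200, P_max=0.45); ...
```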
## 5 Experimental Results

### Dataset and Experiments Detail

In this study, we conducted extensive experiments to evaluate the performance of integrating the proposed PDL with several existing MIL methods, using four public MIL benchmark datasets.

\begin{table} \begin{tabular}{l|c c c c c} \hline \hline Methods & MUSK1 & MUSK2 & FOX & TIGER & ELEPHANT \\ \hline mi-Net [8] & 0.889 \(\pm\) 0.039 & 0.858 \(\pm\) 0.049 & 0.613 \(\pm\) 0.035 & 0.824 \(\pm\) 0.034 & 0.858 \(\pm\) 0.037 \\ MI-Net [8] & 0.887 \(\pm\) 0.041 & 0.859 \(\pm\) 0.046 & 0.622 \(\pm\) 0.038 & 0.830 \(\pm\) 0.032 & 0.862 \(\pm\) 0.034 \\ MI-Net with DS [8] & 0.894 \(\pm\) 0.042 & 0.874 \(\pm\) 0.043 & 0.630 \(\pm\) 0.037 & 0.845 \(\pm\) 0.039 & 0.872 \(\pm\) 0.032 \\ MI-Net with RC [8] & 0.898 \(\pm\) 0.043 & 0.873 \(\pm\) 0.044 & 0.619 \(\pm\) 0.047 & 0.836 \(\pm\) 0.037 & 0.857 \(\pm\) 0.040 \\ ABMIL [7] & 0.892 \(\pm\) 0.040 & 0.858 \(\pm\) 0.048 & 0.615 \(\pm\) 0.043 & 0.839 \(\pm\) 0.022 & 0.868 \(\pm\) 0.022 \\ ABMIL-Gated [7] & 0.900 \(\pm\) 0.050 & 0.863 \(\pm\) 0.042 & 0.603 \(\pm\) 0.029 & 0.845 \(\pm\) 0.018 & 0.857 \(\pm\) 0.027 \\ DP-MINN [19] & 0.907 \(\pm\) 0.036 & 0.926 \(\pm\) 0.043 & 0.655 \(\pm\) 0.052 & 0.897 \(\pm\) 0.028 & 0.894 \(\pm\) 0.030 \\ NLMIL [20] & 0.921 \(\pm\) 0.017 & 0.910 \(\pm\) 0.009 & 0.703 \(\pm\) 0.035 & 0.857 \(\pm\) 0.013 & 0.876 \(\pm\) 0.011 \\ ANLMIL [21] & 0.912 \(\pm\) 0.009 & 0.822 \(\pm\) 0.084 & 0.643 \(\pm\) 0.012 & 0.733 \(\pm\) 0.068 & 0.883 \(\pm\) 0.014 \\ DSMIL [4] & 0.932 \(\pm\) 0.023 & 0.930 \(\pm\) 0.020 & 0.729 \(\pm\) 0.018 & 0.869 \(\pm\) 0.008 & 0.925 \(\pm\) 0.007 \\ \hline ABMIL + PDL & 0.991 \(\pm\) 0.027 & 0.962 \(\pm\) 0.066 & **0.828 \(\pm\) 0.058** & 0.940 \(\pm\) 0.049 & 0.970 \(\pm\) 0.032 \\ ABMIL-Gated + PDL & **0.993 \(\pm\) 0.019** & **0.968 \(\pm\) 0.049** & 0.820 \(\pm\) 0.066 & **0.941 \(\pm\) 0.046** & 0.967 \(\pm\) 0.033 \\ DSMIL + PDL & 0.987 \(\pm\) 0.031 & 0.962 \(\pm\) 0.048 & 0.809 \(\pm\) 0.062 & 0.933 \(\pm\) 0.048 & **0.971 \(\pm\) 0.035** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison on the MIL benchmark datasets. Each experiment was performed five times with 10-fold cross-validation. We reported the mean classification accuracy (\(\pm\) the standard deviation of the mean). The integration of the PDL always resulted in enhanced performance.
\begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{Camelyon16} & \multicolumn{4}{c}{TCGA-NSCLC} \\ \cline{3-10} & & \multicolumn{2}{c}{ImageNet Pretrained} & \multicolumn{2}{c}{SimCLR Pretrained} & \multicolumn{2}{c}{ImageNet Pretrained} & \multicolumn{2}{c}{SimCLR Pretrained} \\ \cline{3-10} & & Accuracy & AUC & Accuracy & AUC & Accuracy & AUC & Accuracy & AUC \\ \hline \multirow{3}{*}{ABMIL [7]} & & 82.95 \(\pm\) 0.51 & 85.33 \(\pm\) 0.31 & 86.20 \(\pm\) 0.34 & 87.52 \(\pm\) 0.75 & 88.29 \(\pm\) 0.26 & 94.39 \(\pm\) 0.41 & 89.61 \(\pm\) 0.39 & 95.20 \(\pm\) 0.20 \\ & +PDL & 83.98 \(\pm\) 0.45 & 85.61 \(\pm\) 0.34 & 88.84 \(\pm\) 0.43 & 91.09 \(\pm\) 0.89 & 91.81 \(\pm\) 0.52 & 96.11 \(\pm\) 0.42 & 91.43 \(\pm\) 0.33 & 95.95 \(\pm\) 0.12 \\ & \(\Delta\) & **+1.03** & **+0.28** & **+2.64** & **+3.57** & **+3.52** & **+1.72** & **+1.82** & **+0.75** \\ \hline \multirow{3}{*}{ABMIL-Gated [7]} & & 82.64 \(\pm\) 0.88 & 85.22 \(\pm\) 0.12 & 86.31 \(\pm\) 1.19 & 88.39 \(\pm\) 0.76 & 88.76 \(\pm\) 0.63 & 94.48 \(\pm\) 0.54 & 89.52 \(\pm\) 0.33 & 95.01 \(\pm\) 0.14 \\ & +PDL & 84.81 \(\pm\) 0.69 & 86.86 \(\pm\) 0.69 & 88.68 \(\pm\) 0.43 & 91.25 \(\pm\) 0.46 & 92.19 \(\pm\) 0.43 & 96.20 \(\pm\) 0.36 & 96.02 \(\pm\) 0.26 & 96.03 \(\pm\) 0.01 \\ & \(\Delta\) & **+1.55** & **+1.64** & **+2.37** & **+2.86** & **+3.43** & **+1.72** & **+1.15** & **+1.02** \\ \hline \multirow{3}{*}{DSMIL [4]} & & 80.46 \(\pm\) 2.00 & 82.46 \(\pm\) 2.29 & 84.34 \(\pm\) 1.49 & 85.49 \(\pm\) 2.77 & 86.10 \(\pm\) 0.62 & 93.38 \(\pm\) 0.77 & 85.90 \(\pm\) 0.72 & 92.10 \(\pm\) 0.68 \\ & +PDL & 86.82 \(\pm\) 1.64 & 85.97 \(\pm\) 0.67 & 85.89 \(\pm\) 0.85 & 88.85 \(\pm\) 1.47 & 89.52 \(\pm\) 0.75 & 94.15 \(\pm\) 0.16 & 89.05 \(\pm\) 0.07 & 94.27 \(\pm\) 0.37 \\ & \(\Delta\) & **+6.36** & **+3.51** & **+1.55** & **+3.33** & **+3.24** & **+0.77** & **+3.15** & **+2.17** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparison on the Camelyon16 and TCGA-NSCLC datasets with ImageNet- and SimCLR-pretrained features. Each experiment was performed five times; \(\Delta\) denotes the improvement from integrating PDL.

#### 5.1.1 MIL benchmarks.

The benchmark datasets were MUSK1, MUSK2, FOX, TIGER, and ELEPHANT. These datasets were popularly studied to evaluate and compare the performance of MIL algorithms. MUSK1 and MUSK2 [22] focused on molecule classification: they contained a collection of bags, each consisting of instances representing conformations of a molecule. Differently, FOX, TIGER, and ELEPHANT [23] involved image classification: each bag represented an image and contained instances representing patches within that image.

#### 5.1.2 Camelyon16.

The primary objective of this dataset was to identify metastatic breast cancer in lymph node tissue. It consisted of high-resolution digital Whole Slide Images (WSIs) and was officially divided into a training set of 270 samples and a testing set of 129 samples. Following the preprocessing approach detailed in [4], each WSI was divided into non-overlapping patches of size \(224\times 224\). This procedure resulted in approximately 3.4 million patches at a magnification of \(\times 20\).

#### 5.1.3 TCGA-NSCLC.

The WSI dataset TCGA-NSCLC primarily distinguished two subtypes of lung cancer: lung squamous cell carcinoma and lung adenocarcinoma. As outlined in [4], 1037 WSIs were categorized into three sets: a training set encompassing 744 WSIs, a validation set comprising 83 WSIs, and a testing set containing 210 WSIs. After the preprocessing steps, approximately 10.3 million patches were extracted from these WSIs at a magnification level of \(\times 20\).

#### 5.1.4 MNIST-bags.
Following the dataset setting of [7], original MNIST images were grouped into bags, each containing a varying number of digit images. The number of target images within a bag could vary, and not all bags contained the target digit. The digit 9 was used as the target. This dataset was studied in the Discussion.

#### 5.1.5 Implementation details.

We employed the ResNet [24] architecture as the pre-trained instance projector in the WSI experiments to extract patch features. Two sets of pre-trained weights were utilized to ensure a comprehensive evaluation: weights pre-trained on ImageNet, and weights pre-trained on WSI patches following the contrastive learning framework SimCLR [25]. The SimCLR training settings were the same as in DSMIL [4]. The feature of each patch was embedded into a 512-dimensional vector. The MIL benchmark already provided pre-extracted embedding features. For MNIST bags, we followed ABMIL [7] and used the identical feature extractor. All baselines were implemented with the parameter configurations specified in their original papers. To incorporate the PDL module, we added three fully connected layers into the middle layer, with a PDL layer appended after each activation function, as illustrated in Figure 1(B); the middle-layer output dimension was 64, the maximum drop probability was \(P_{max}=0.45\), and the number of epochs was \(T=200\) for WSI and \(T=40\) for the MIL benchmark. In this paper, all progressive learning schedulers adopted the non-linear interpolation function of Eq. 5 (with \(K=T\) and \(P=P_{max}\)), which satisfied the progressive process of Eq. 7, and we set \(G=10\) and \(E=0.5\) throughout. All methods were implemented in PyTorch on an NVIDIA A100. Furthermore, we reported other interpolation experimental results (Eqs. 5 and 7) and all detailed experiment settings in the supplementary material.

### MIL Benchmark Results

As shown in Table 1, we integrated the PDL into MIL aggregator methods, including ABMIL, ABMIL-Gated, and DSMIL. Following the setting in DSMIL, all experiments were run five times with 10-fold cross-validation. The three aggregator methods with PDL outperformed all previous SOTA methods across all five MIL benchmark datasets. ABMIL + PDL achieved a state-of-the-art accuracy of 82.8% on FOX and improved accuracy by an average of 12.38% over the original ABMIL across the five datasets. ABMIL-Gated + PDL achieved state-of-the-art accuracies of 99.3% on MUSK1, 96.8% on MUSK2, and 94.1% on TIGER, improving accuracy by an average of 12.41% over the original ABMIL-Gated across the five datasets. DSMIL + PDL achieved a state-of-the-art accuracy of 97.1% on ELEPHANT, with PDL improving accuracy by an average of 5.54% across the five datasets.

### WSI Dataset Results

We presented the results on two WSI benchmark datasets, Camelyon16 and TCGA-NSCLC. As shown in Table 2, a comparative study was performed to evaluate the performance gains from integrating the PDL. Four state-of-the-art aggregators were considered: ABMIL, DSMIL, TransMIL, and DTFD-MIL. To establish the authenticity of the experiments, each experiment was performed five times, and the mean and variance were computed. Through extensive experiments, we observed the following: (1) PDL improved all aggregators on both datasets, with average accuracy and AUC improvements of 2.26% and 2.43% on Camelyon16, and of 2.63% and 1.23% on TCGA-NSCLC. In most cases, the variance was smaller than that of the original methods.
Mitigating the overfitting made the model more stable. (2) The SimCLR pre-trained feature extractor was better than the one pre-trained on ImageNet, which can be attributed to the fact that the SimCLR feature extractor had already learned contextual information about WSIs. In contrast, the PDL demonstrated a greater improvement on the feature extractor pre-trained on ImageNet, showing that the PDL could improve performance even in the presence of noisy features. Remarkably, it achieved performance comparable to SimCLR in some cases, even when applied to features extracted with ImageNet pre-training. (3) In most cases, the PDL improved Camelyon16 more than TCGA-NSCLC. The primary reason was that the TCGA-NSCLC dataset, aimed at detecting two types of lung cancer, included a substantial portion of normal patches, which differed from conventional binary MIL problems, particularly in the handling of normal patches.

## 6 Discussion

### Comparison between the PDL and other dropout methods.

We evaluated the PDL by comparing it with other dropout methods, including Dropout [13], SpatialDropout [26], DropInstance [6], and AttentionDropout [14]. All experiments used features extracted in the SimCLR pre-trained manner on the CAMELYON16 dataset, with ABMIL and ABMIL-Gated as baselines. All dropout modules were embedded into the two aggregator methods in the same way. We searched over a drop rate range of 0 to 0.4 for these methods, with AttentionDropout using 0.65 as its threshold. Table 3 shows the obtained optimal results: our PDL outperformed the other existing dropout methods.

\begin{table} \begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c}{ABMIL} & \multicolumn{2}{c}{ABMIL-Gated} \\ \cline{2-5} & Accuracy & AUC & Accuracy & AUC \\ \hline Baseline & 86.82 & 86.38 & 85.27 & 88.19 \\ \hline Dropout & 87.60 & 87.61 & 86.05 & 89.57 \\ DropInstance & 86.05 & 88.90 & 87.60 & 88.97 \\ AttentionDropout & 88.37 & 85.35 & 85.27 & 86.13 \\ SpatialDropout & 87.60 & 88.50 & 86.05 & 90.53 \\ PDL & **89.15** & **92.49** & **89.15** & **91.61** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison results between the PDL and other dropout methods on the CAMELYON16 dataset.

### Overfitting.

As shown in Figure 2, these MIL aggregator methods commonly exhibited overfitting, especially TransMIL and DTFD. The main reason was that these aggregator methods adopted more complex models and needed to learn a larger number of parameters. The proposed PDL demonstrated efficacy in addressing overfitting: all MIL aggregator methods integrated with PDL exhibited more stable decreases in loss throughout the training process.

### WSI localization.

As shown in Figure 3, we visualized a tumor-detection example based on the attention map of the ABMIL aggregator. The attention scores ranged from 0 to 1, with the brightness of green indicating the weight score; brighter green represents a higher score. Compared to ABMIL without PDL, ABMIL integrating PDL identified more lesion patches and denser lesion localizations, with higher weight scores assigned to the lesion patches. The PDL not only effectively addressed overfitting but also showcased prowess in uncovering latent lesion features.

### Could PDL effectively assign drop rates as expected?

The APBA is a non-parametric attention method in PDL, which obtains the attention map in the given embedding dimension.
We experimented on the MNIST dataset and validated whether PDL worked correctly by visualizing the attention maps and drop masks during training. We conducted experiments applying the ABMIL aggregator method and a two-layer PDL. The PDL captured the target instances '9' at the different embedding features (see Figure 4), which received the highest attention weights while being assigned the highest drop rates.

### Why was the progressive learning scheduler necessary?

Employing a fixed drop rate could misguide the localization of the MIL aggregator. The proposed progressive scheduler addressed this issue by allocating a gradually increasing drop rate during training. We experimented with MNIST, employing ABMIL as the aggregator method and comparing the fixed threshold and the progressive scheduler of PDL.

Figure 3: Tumor localization in WSIs comparing ABMIL with PDL and without PDL, based on Camelyon16.

Figure 2: The Camelyon16 experiment loss visualization before and after the integration of PDL during training.

As shown in Figure 5, we visualized the attention map of ABMIL: the fixed-threshold method incorrectly identified '8' as a positive instance, with an attention weight even beyond that of the target positive instance '9'. In the end, the PDL method with a fixed threshold achieved an accuracy of 0.87, while the PDL with a progressive scheduler attained an accuracy of 0.96. The wrong positive-instance recognition also impaired classification performance.

## 7 Conclusion

This study proposed a progressive dropout layer that may be integrated into prevalent MIL methods. Our investigation encompassed four datasets, achieving compelling results that substantiate its effectiveness in mitigating overfitting and discovering latent features. Interestingly, we found that different MIL methods employed individually pre-trained ResNet weights; the diverse weights yielded significant disparities in experimental results, particularly on WSI. In the future, we will work on a unified paradigm for instance-level projectors: joint training with existing MIL aggregators. The new framework will enable the application of data augmentation on patches to address overfitting.

## Appendix A Experiment Details

We provide details of integrating PDL into existing MIL methods, including ABMIL [7], DTFD-MIL [6], TransMIL [5], and DSMIL [4].

Figure 4: The APBA visualization of PDL during training; the numbers represent the attention weights, and the red blocks denote the top three assigned drop rates.

Figure 5: The attention localization of ABMIL.

### MIL Benchmark experiments setting

#### A.1.1 Embedding features

For MUSK1 and MUSK2, a bag was constructed for each molecule, with its conformations as instances. Each instance was represented by a 166-dimensional embedding feature. The FOX, TIGER, and ELEPHANT datasets contained 200 bags of instance features, with each instance represented by a 230-dimensional embedding feature.

#### A.1.2 Detail of integrating PDL into existing MIL architectures

We employed three fully connected layers in the middle layer, comprising 256, 128, and 64 hidden units. Each layer was equipped with a ReLU activation function and subsequently followed by a PDL layer. After the features passed through the middle layer, the ABMIL, ABMIL-Gated, and DSMIL aggregators processed the 64-dimensional embedding feature as their input. For comparison, the original baseline models adhered to the configuration outlined in [4], with the distinct embedding features of dimensions 230 and 166 employed directly as inputs.
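A minimal sketch of this middle-layer construction (our own illustration; `PDL` stands for a module class implementing the progressive dropout layer of Sec. 4):

```python
import torch.nn as nn

def middle_layer(in_dim, PDL):
    """Three FC layers (256/128/64), each followed by ReLU and a PDL layer,
    per Appendix A.1.2; `PDL` is assumed to be an nn.Module class."""
    dims = [in_dim, 256, 128, 64]
    blocks = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        blocks += [nn.Linear(d_in, d_out), nn.ReLU(), PDL()]
    return nn.Sequential(*blocks)

# e.g. middle_layer(166, PDL) for MUSK, whose 64-d output feeds the aggregator
```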
All experiments used the Adam optimizer with a learning rate of \(2e^{-4}\) and a weight decay of \(5e^{-3}\), and were trained with a cross-entropy loss for 40 epochs. The PDL parameters were identical to the description in the main paper (section Implementation details).

### WSIs experimental setting

#### A.2.1 Preprocessing WSI

To pre-process the WSI datasets, every WSI was cropped into 224 x 224 patches without overlap to form a bag, at a magnification of 20x. Background patches were discarded with a threshold of 30.

#### A.2.2 Embedding network pre-training

ResNet-18 was employed as the embedding network. It is worth noting that we initially utilized pre-trained weights from a ResNet-50 trained on ImageNet, following the paper [5]. However, due to the availability of various weight versions in PyTorch, some weights might be trainable in certain methods but not functional in others. As a result, we opted for ResNet-18, whose weights could be successfully trained across all MIL methods. In addition to the aforementioned pre-trained weights from ImageNet, we also incorporated weights obtained through a contrastive learning framework for self-supervised pre-training (SimCLR). The self-supervised SimCLR manner employed the contrastive learning framework [25] to pre-train the projector on the training set, wherein contrastive-loss training was implemented between extracted patches and their two corresponding random data-augmentation counterparts [4]. Each patch was projected into a 512-dimensional embedding feature.

#### A.2.3 Detail of integrating PDL into existing MIL architectures

We followed the parameter settings outlined in the original literature for the baseline experiments on the two WSI datasets, as shown in Table 4. To integrate PDL into these MIL methods, we employed three fully connected layers in the middle layer, comprising 256, 128, and 64 hidden units; each layer was equipped with a ReLU activation function and subsequently followed by a PDL layer. The input dimensions of these networks were adjusted to 64 while keeping the other parameters unchanged. All the experiments were trained for 200 epochs. The PDL parameters were identical to the description in the main paper (section Implementation details). The experiments integrated with PDL also followed the parameters in Table 4.

\begin{table} \begin{tabular}{c|c c c c c} \hline & ABMIL & ABMIL-Gated & DSMIL & TransMIL & DTFD-MIL \\ \hline Optimizer & AdamW & AdamW & AdamW & RAdam & Adam \\ \hline Learning rate & \(1e^{-4}\) & \(1e^{-4}\) & \(1e^{-4}\) & \(1e^{-4}\) & \(2e^{-4}\) \\ \hline Weight decay & \(1e^{-4}\) & \(1e^{-4}\) & \(5e^{-3}\) & \(5e^{-3}\) & \(2e^{-3}\) \\ \hline Optimizer scheduler & LookAhead & LookAhead & CosineAnnealingLR & LookAhead & MultiStepLR \\ \hline Loss function & CrossEntropy & CrossEntropy & CrossEntropy & CrossEntropy & CrossEntropy + distilling loss \\ \hline \end{tabular} \end{table} Table 4: Parameter settings for the baseline MIL methods. We employed AdamW [27], RAdam [28], and CosineAnnealingLR [29], following the settings in the respective MIL papers.
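As an illustration, a minimal PyTorch sketch of the DSMIL column of Table 4 follows; the placeholder model is our own, and the Lookahead wrapper used by the other baselines comes from an external package and is omitted here.

```python
import torch

model = torch.nn.Linear(64, 2)  # placeholder for a MIL aggregator
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=5e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
criterion = torch.nn.CrossEntropyLoss()
```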
### MNIST experiments setting

#### A.3.1 Embedding Network

Each bag comprised a random assortment of 28 x 28 grayscale images extracted from the MNIST dataset. The count of images in a bag followed a Gaussian distribution, rounded to the nearest integer. A bag received a positive label if it contained one or more images with the label '9'. The average number of instances per bag was set to 20, with a standard deviation of 2. We employed the same embedding network as that used in [7] (as shown in Figure 6). After passing through the embedding network, each instance was an 800-dimensional embedding feature.

#### A.3.2 Detail of integrating PDL into the existing MIL architecture

This experiment was used in the Discussion section to further analyze the rationale of PDL. We employed two fully connected layers in the middle layer, comprising 512 and 256 hidden units; each layer was equipped with a ReLU activation function and subsequently followed by a PDL layer. We used the ABMIL aggregator as the primary research object; it processed the 256-dimensional embedding feature as its input. The experiments used the Adam optimizer with a learning rate of \(5e^{-4}\) and a weight decay of \(1e^{-4}\), and were trained with a cross-entropy loss for 40 epochs. The PDL parameters were identical to the description in the main paper (section Implementation details). Notably, owing to the constrained number of positive instances within the MNIST dataset, we conducted these experiments employing two PDL layers.

## Appendix B Interpolation Methods

As mentioned for the two components in the main body of the paper, the same interpolation method (here LOG) was utilized to generate the set of drop rates for each instance and the set of global maximal drop rates for each epoch, used by "Dynamic drop rate assignment (DADR)" and the "Progressive Learning Scheduler (PLS)," respectively. Three interpolation methods can be used: \[\begin{split} COS:& P*(0.5*(1-cos(linspace(0,\pi,n))))\\ LOG:& P/E*log_{G}(linspace(0,G^{E}-1,n)+1)\\ EXP:& P/B*(G^{linspace(0,log_{G}(B+1),n)}-1) \end{split} \tag{8}\] where \(linspace(min,max,num)\) returns \(num\) evenly spaced samples from the interval \([min,max]\), \(n\) denotes the number of instances, and \(E\), \(G\), and \(B\) control the spacing of the produced non-linear vector. All three methods generate a probability vector \(\{p_{1},p_{2},\ldots,p_{n}\}\) from 0 to \(P\). As shown in Figure 7, their main difference is the spacing of the generated probability vector near the beginning and the end.

Figure 6: Embedding network for the MNIST experiment.
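A minimal NumPy sketch of the three interpolation schemes of Eq. 8 (the function name and signature are our own):

```python
import numpy as np

def interp_rates(P, n, method="LOG", G=10.0, E=0.5, B=0.5):
    """Eq. 8: n rates rising from 0 to P with differently concentrated spacing."""
    if method == "COS":
        return P * 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n)))
    if method == "LOG":
        x = np.linspace(0.0, G**E - 1.0, n)
        return (P / E) * np.log(x + 1.0) / np.log(G)   # log base G
    if method == "EXP":
        x = np.linspace(0.0, np.log(B + 1.0) / np.log(G), n)
        return (P / B) * (G**x - 1.0)
    raise ValueError(method)
```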
### Details of experimental setting

In these experiments, we set \(E=0.5\), \(G=10\), and \(B=0.5\), the same as in the main body of the paper. We employed ABMIL as the MIL aggregator on the CAMELYON16 dataset. The pre-trained embedding network was a ResNet-18 trained in the SimCLR self-supervised manner. The progressive learning scheduler adopted the three different interpolation methods to generate a set \(\{P_{1},P_{2},\dots,P_{t}\}\) over the range from 0 to \(P_{max}=0.45\) for 200 epochs (\(T=200\)); here we set \(P_{max}\) as \(P\) and \(n=T\) in Eq. 8. For the dynamic drop rate assignment, we passed \(P_{t}\) in each epoch; \(n\) was automatically adjusted to the number of instances in each bag, obtained from the attention map, so the interpolation methods required only the single parameter \(P\) in Eq. 8. As in the WSI experiment setting, we employed three fully connected layers in the middle layer, comprising 256, 128, and 64 hidden units; each layer was equipped with a ReLU activation function and subsequently followed by a PDL layer. The input size of ABMIL was adjusted to a 64-dimensional feature. All experiments employed AdamW with a learning rate of \(1e^{-4}\) and a weight decay of \(1e^{-4}\), and adopted the Lookahead optimizer scheduler to adjust the learning rate. It is worth noting that all three interpolation methods satisfy the progressive-function definition, as shown in Figure 7: each is a progressively increasing function from 0 to \(P\).

### Experimental results

\begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c}{DADR:LOG} & \multicolumn{2}{c}{DADR:COS} & \multicolumn{2}{c}{DADR:EXP} \\ \cline{2-7} & Accuracy & AUC & Accuracy & AUC & Accuracy & AUC \\ \hline PLS:LOG & 88.84 \(\pm\) 0.43 & 91.09 \(\pm\) 0.89 & 88.99 \(\pm\) 1.00 & 90.85 \(\pm\) 0.50 & 87.90 \(\pm\) 1.60 & 90.62 \(\pm\) 1.21 \\ PLS:COS & 87.75 \(\pm\) 0.84 & 89.42 \(\pm\) 0.54 & 88.21 \(\pm\) 0.64 & 89.49 \(\pm\) 1.36 & 88.06 \(\pm\) 0.69 & 88.64 \(\pm\) 0.77 \\ PLS:EXP & 87.28 \(\pm\) 0.88 & 88.70 \(\pm\) 0.86 & 88.06 \(\pm\) 0.42 & 89.37 \(\pm\) 1.49 & 87.13 \(\pm\) 0.69 & 88.47 \(\pm\) 0.53 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison results between the three interpolation methods applied to Dynamic Drop Rate Assignment (DADR) and the Progressive Learning Scheduler (PLS) on the CAMELYON16 dataset.

Figure 7: Three interpolation methods.

To comprehensively evaluate the three interpolation methods, DADR and PLS each applied the three interpolation methods (LOG, COS, EXP). We conducted five runs for each experiment and calculated the mean and variance. As depicted in Table 5, the three methods correspond to distinct interpolation approaches, each of which positions the generated points near 0 or \(P\), or concentrates them within an intermediate region. For datasets like WSI, the typical instance count ranges from 8000 to 10000, resulting in a relatively substantial number of positive instances. This substantial presence of positive instances was the underlying reason for the favorable performance of LOG with DADR. In comparison with the LOG approach, however, EXP with PLS constrained the maximum global drop rate close to 0, so that only a small fraction of instances was dropped during most training epochs; consequently, its ability to mitigate overfitting was significantly compromised. In relative terms, the performance of COS with PLS remained relatively stable, although it fell short of the efficacy of LOG with PLS. All interpolation methods improved the performance over the baseline without PDL (**86.20 accuracy and 87.52 AUC**). This performance variation was likely attributable to the specific datasets. We suggest selecting among these three interpolation methods based on three main factors: i) the instance count of the dataset, ii) the proportion of positive instances, and iii) the number of training epochs.
2301.13110
Dynamical galactic effects induced by stable vortex structure in bosonic dark matter
The nature of dark matter (DM) remains one of the unsolved mysteries of modern physics. An intriguing possibility is to assume that DM consists of ultralight bosonic particles in the Bose-Einstein condensate (BEC) state. We study stationary DM structures by using the system of the Gross-Pitaevskii and Poisson equations, including the effective temperature effect, with parameters chosen to describe the Milky Way galaxy. We have investigated a DM structure with a BEC core and an isothermal envelope. We compare the spherically symmetric and vortex core states, which allows us to analyze the impact of the core vorticity on the halo density, velocity distribution, and, therefore, its gravitational field. The gravitational field calculation is done in the gravitoelectromagnetism approach to include the impact of the core rotation, which induces a gravimagnetic field. As a result, the halo with a vortex core is characterized by a smaller orbital velocity in the galactic disk region in comparison with the non-rotating halo. It is found that the core vorticity produces a gravimagnetic perturbation of celestial body dynamics, which can modify the circular trajectories.
K. Korshynska, Y. M. Bidasyuk, E. V. Gorbar, Junji Jia, A. I. Yakimenko
2023-01-30T17:46:36Z
http://arxiv.org/abs/2301.13110v1
# Dynamical galactic effects induced by stable vortex structure in bosonic dark matter

###### Abstract

The nature of dark matter (DM) remains one of the unsolved mysteries of modern physics. An intriguing possibility is to assume that DM consists of ultralight bosonic particles in the Bose-Einstein condensate (BEC) state. We study stationary DM structures by using the system of the Gross-Pitaevskii and Poisson equations, including the effective temperature effect, with parameters chosen to describe the Milky Way galaxy. We have investigated a DM structure with a BEC core and an isothermal envelope. We compare the spherically symmetric and vortex core states, which allows us to analyze the impact of the core vorticity on the halo density, velocity distribution, and, therefore, its gravitational field. The gravitational field calculation is done in the gravitoelectromagnetism approach to include the impact of the core rotation, which induces a gravimagnetic field. As a result, the halo with a vortex core is characterized by a smaller orbital velocity in the galactic disk region in comparison with the non-rotating halo. It is found that the core vorticity produces a gravimagnetic perturbation of celestial body dynamics, which can modify the circular trajectories.

## I Introduction

The nature of DM particles remains one of the most fascinating puzzles of modern physics. The DM large-scale properties consistent with astrophysical observations are successfully explained by the cold dark matter (CDM) model, which describes DM as a collisionless, sufficiently cold perfect fluid. However, at smaller scales, the CDM encounters the cusp-core, missing-satellites, and too-big-to-fail problems. One possibility to solve them is to assume that DM particles are ultra-light bosons, as in ultra-light dark matter (ULDM) models [1]. Generically, these models are characterized by the suppression of small-scale structures, the presence of cores, and dynamic effects which arise from the BEC formed in the central regions of galaxies. Such DM halo proposals were investigated in [2; 3; 4; 5; 6]. The ULDM model is supported indirectly by observations. For example, in cosmological simulations [7] it was found that bosonic DM can indeed reproduce the observed distribution of matter at very large scales [8; 9], though the mass of such bosons should be extremely small. There have also been studies of some tensions of the ULDM model with observational data from the rotation curves of galaxies, including the Milky Way, which could probe the particle mass in the range \(m=10^{-22}-10^{-21}\) eV [10; 11]. Furthermore, the viability of the ULDM model was studied with stellar kinematics measurements in dwarf galaxies [12]. Another important piece of evidence is the DM nongravitational self-interaction, which has recently been reported for collisions of galaxy clusters [13; 14]. In addition, the DM halo model must ensure the stability of the predicted halo. The stability of compact astrophysical objects which may be formed due to the Bose-Einstein condensation of ULDM was shown numerically [15]. In the present paper, we discuss DM which consists of ultra-light bosons with repulsive self-interaction. Such models make use of two macroscopic quantum phenomena: Bose-Einstein condensation and superfluidity. A Bose-Einstein condensate in the mean-field approximation is described by the Gross-Pitaevskii equation.
By adding dissipation to the Gross-Pitaevskii equation, one obtains a more general model, which includes the effective temperature effect and predicts that the ULDM halo consists of a BEC core and an isothermal envelope [16]. Such a core-envelope structure in the ULDM model was also discussed in [17; 18; 19; 20]. Another important property, superfluidity, allows the quantization of the circulation and thus the possibility of the formation of vortices in the core of the halo. The central object of our study, the vortex, has a vanishing wavefunction at the vortex line, with a quantized circular flow around the vortex line [1]. According to recent numerical studies [21; 22], only the non-rotating soliton and the single-charged vortex are stable, even when strongly perturbed. In the present work, we consider a DM halo which consists of two regions - core and isothermal envelope, while the core could be either a soliton or a single-charged vortex. Most of our knowledge about DM is based on its gravitational interaction with baryonic matter. Thus, testing the validity of the ULDM theory requires a detailed investigation of the DM gravitational field. The DM density distribution predicted by ULDM models has been extensively studied in numerical simulations and applied in studies aimed at reconstructing the gravitational potential of DM halos for the Milky Way [23] and dwarf galaxies [24]. In general, one can determine the gravitational field of the ULDM by solving the Einstein equations with the DM density and rotation flow as sources of the gravitational field, where the rotation flow is induced by the BEC superfluidity. Thus, in the ULDM model, we should be able to deduce the impact of the superfluid DM rotation on observations. The dominant effect of the vortex existence is due to the different core density distributions. Moreover, rotation flows produce \(v/c\) and higher-order effects, which can be taken into account in the gravitoelectromagnetism approach discussed in [25; 26; 27; 28; 29] and used in our calculations below. The gravitoelectromagnetic formulation of a slowly rotating, self-gravitating, and dilute BEC intended for astrophysical applications in the context of DM halos was discussed in [30]. As a rule, the gravimagnetic force is quite weak and does not significantly affect the dynamics of astrophysical systems. However, in the central region of the BEC core, the DM density vanishes while the vortex flow velocity dramatically increases, which can affect the dynamics of luminous matter in the central region of galaxies. In the present work, we calculate the DM gravitational field, which is needed for the analysis of the observable predictions of the DM model, namely, to study how DM affects the motion of luminous matter. In our study, DM is the only source of the gravitational field, while luminous matter moves along the geodesics induced by DM. A more precise description of galactic kinematics is given by modeling the baryonic contribution to the gravitational potential, which can distort the BEC soliton structures [31; 32]. Such a contribution was found to be significant for the Milky Way (MW) but not essential for the SPARC LSB galaxies [33]. In this paper, we will limit ourselves to some simple consequences of the ULDM model for galactic kinematics, namely, rotation curves and the deviation of circular trajectories induced by the gravimagnetic force.
A more detailed study in this direction is beyond the scope of the current paper, though it is an interesting direction for further work. The paper is organized as follows. In Sec. II, we introduce the key parameters of our model, define the equations for the halo structure, and formulate the gravitoelectromagnetism ansatz. In Sec. III, we discuss the halo density profile for two stable core configurations and define the corresponding hydrodynamical velocity. In Sec. IV, the gravielectric (Newtonian) field of the halo is calculated and the rotation curves are obtained. Sec. V provides the gravimagnetic field calculations and our estimates of the gravimagnetic effect on circular trajectories. The results are summarized in Sec. VI.

## II Model

### Ultra-light dark matter model and halo structure

In this section, we briefly discuss the model suggested in [16]. The structure of the DM halo is described by the Gross-Pitaevskii-Poisson (GPP) equations, which define the dynamical evolution of the self-gravitating BEC field \(\psi\): \[i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\Delta\psi+m\Phi_{\rm g}\psi+\frac{K\gamma m }{\gamma-1}|\psi|^{2(\gamma-1)}\psi \tag{1}\] \[+\frac{m}{2}\left(\frac{3}{4\pi\eta_{0}}\right)^{2/3}|\psi|^{4/3 }\psi+2k_{\rm B}T\ln|\psi|\psi\] \[-i\frac{\hbar}{2}\xi\left[\ln\left(\frac{\psi}{\psi^{*}}\right)- \left\langle\ln\left(\frac{\psi}{\psi^{*}}\right)\right\rangle\right]\psi,\] \[\Delta\Phi_{\rm g}=4\pi G|\psi|^{2}, \tag{2}\] where \(\langle X\rangle=\frac{1}{M}\int|\psi|^{2}Xd{\bf r}\) is the spatial average over the halo, \(m\) is the bosonic particle mass, \(\hbar\) denotes the reduced Planck constant, and \(k_{\rm B}\) is the Boltzmann constant. The first equation can be obtained by incorporating dissipative effects into the Schrödinger equation by means of the theory of scale relativity. This generalization of the Schrödinger equation basically means taking into account the interaction of the system with the external environment. The model Eqs. (1), (2) were derived in [34], and we follow this approach in our current work. We consider the BEC model with parameters \(\gamma=2\) and \(K=\frac{2\pi a_{\rm s}\hbar^{2}}{m^{3}}\), where \(a_{\rm s}\) denotes the _s_-wave scattering length of the self-interaction. The parameter \(\eta_{0}\) determines the equation of state of DM [35]. The first term on the right-hand side of Eq. (1) is the kinetic term, and the second describes the interaction with the condensate gravitational potential \(\Phi_{\rm g}\). The third term takes into account the bosonic self-interaction (we will consider only the case \(\gamma=2\), which corresponds to binary collisions). The fourth term accounts for the core, and the fifth term describes an isothermal envelope with effective temperature \(T\) which surrounds the core. These terms can be derived from the Lynden-Bell theory of violent relaxation [16]. The last term, with \(\xi<0\), is a damping term and ensures that the system relaxes towards the equilibrium state. An important feature of the Gross-Pitaevskii (GP) equation is that it satisfies the H-theorem, i.e., the free energy \(F\) of the system decreases: \[\dot{F}=-\xi\int\rho{\bf u}^{2}d{\bf r}\leq 0,\] where \(\rho=|\psi|^{2}\) denotes the BEC density and \({\bf u}=\nabla S({\bf r},t)/m\) is the velocity field. These quantities are obtained by applying the Madelung transformation \(\psi({\bf r},t)=\sqrt{\rho({\bf r},t)}e^{iS({\bf r},t)/\hbar}\), where \(S({\bf r},t)\) is the action.
The negative sign of \(\xi\) implies that the system relaxes towards the state with zero hydrodynamical velocity \({\bf u}=0\). Therefore, a stationary vortex solution with nonzero \({\bf u}\) can be found only if we set \(\xi=0\). The free energy \(F=E-TS_{\rm B}\) is expressed through the total energy \(E\), the effective temperature \(T\), and the Boltzmann entropy \(S_{\rm B}=-k_{\rm B}\int(\rho/m)(\ln\rho-1)d{\bf r}\). The total energy consists of the classical kinetic energy \(\Theta_{\rm c}=1/2\int\rho{\bf u}^{2}d{\bf r}\), the quantum kinetic energy \(\Theta_{\rm Q}=1/m\int\rho Qd{\bf r}\), the gravitational potential energy \(W=1/2\int\rho\Phi_{\rm g}d{\bf r}\), and the internal energy of the self-interaction \(U=K\int\rho^{2}d{\bf r}\): \(E=\Theta_{\rm c}+\Theta_{\rm Q}+W+U\). Here \(Q=-\frac{\hbar^{2}}{2m}\frac{\Delta\sqrt{\rho}}{\sqrt{\rho}}\) is the quantum potential. A stable equilibrium state corresponds to the minimum of the free energy \(F\) at fixed total mass \(M\) of the BEC. This gives the following condition of quantum hydrostatic equilibrium [16]: \[\frac{\rho}{m}\nabla Q+\nabla P+\rho\nabla\Phi_{\rm g}+\frac{\rho}{2}\nabla{ \bf u}^{2}=0,\] where \(P=K\rho^{2}+\rho\frac{k_{B}T}{m}\) is the pressure due to the self-interaction and the effective temperature. Taking into account the Poisson equation (2) and neglecting the quantum pressure term \(Q\), we obtain the following equation of state: \[-2K\Delta\rho-\frac{k_{\rm B}T}{m}\Delta\ln\rho=4\pi G\rho+\frac{1}{2}\Delta{\bf u}^{2}, \tag{3}\] where \(G\) is the gravitational constant. The solution of this equation is discussed in Sec. III.

### Gravitoelectromagnetic approach

To determine the gravitational field of the DM halo, we employ the well-known gravitoelectromagnetism (GEM) approach [29], which was previously applied to galactic structures in [25; 27; 28]. According to the GEM formalism, in the case of a test particle (which is luminous matter in our case) moving much slower than the speed of light \(c\), it is convenient to represent the spacetime metric in the form \[dS^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=\left(1-\frac{2\Phi_{\rm g}}{c ^{2}}\right)(dx^{0})^{2}\\ +\frac{4}{c^{2}}\left(\mathbf{A}_{\rm g}\mathbf{d}\mathbf{x} \right)dx^{0}+\left(-1-\frac{2\Phi_{\rm g}}{c^{2}}\right)\delta_{ij}dx^{i}dx^{j}, \tag{4}\] where \(\Phi_{\rm g}\) and \(\mathbf{A}_{\rm g}\) are the GEM scalar (gravielectric) and vector (gravimagnetic) potentials. For the gravitoelectromagnetic fields \(\mathbf{E}_{\rm g}\) and \(\mathbf{B}_{\rm g}\), \[\mathbf{E}_{\rm g}=-\nabla\Phi_{\rm g}-\frac{1}{2c}\partial_{t}\mathbf{A}_{\rm g}, \tag{5}\] \[\mathbf{B}_{\rm g}=\nabla\times\mathbf{A}_{\rm g}, \tag{6}\] the Einstein equations imply the following relations: \[\nabla\mathbf{E}_{\rm g}=4\pi G\rho,\] \[\nabla\times\mathbf{B}_{\rm g}=\frac{2}{c}\partial_{t}\mathbf{E}_{\rm g}+ \frac{8\pi G}{c}\mathbf{j}.\] Here the sources of the gravitational field are the mass density \(\rho\) and the matter current \(\mathbf{j}=\rho\mathbf{u}\) (\(\mathbf{u}\) is the matter velocity).
Since these equations are clearly analogous to those of electromagnetic theory, their solutions have a form similar to Maxwell's theory: \[\Phi_{\rm g}(\mathbf{r})=G\int_{\Omega}\frac{\rho(\mathbf{r}^{\prime})d^{3}r^{ \prime}}{|\mathbf{r}-\mathbf{r}^{\prime}|}, \tag{7}\] \[\mathbf{A}_{\rm g}(\mathbf{r})=\frac{2G}{c}\int_{\Omega}\frac{\rho(\mathbf{r} ^{\prime})\mathbf{u}(\mathbf{r}^{\prime})d^{3}r^{\prime}}{|\mathbf{r}-\mathbf{ r}^{\prime}|}, \tag{8}\] where the integration proceeds over the region \(\Omega\) occupied by DM particles, \(\rho(\mathbf{r}^{\prime})\) is the condensate density, and \(\mathbf{u}(\mathbf{r}^{\prime})\) is the BEC velocity at \(\mathbf{r}^{\prime}\); \(\mathbf{r}\) denotes the coordinates of the test particle, which moves along geodesics in the BEC gravitational field. Finally, the geodesic motion of a test particle, corresponding to the spacetime metric in the GEM form, \[\frac{d^{2}x^{i}}{dt^{2}}=\frac{\partial\Phi_{\rm g}}{\partial x_{i}}+\frac{2 }{c}\frac{dA_{\rm g}^{i}}{dt}-\frac{2}{c}\left(\frac{\partial\mathbf{A}_{\rm g }}{\partial x_{i}}\frac{d\mathbf{x}}{dt}\right)\] can be equivalently described as the classical motion \(m\ddot{\mathbf{x}}=\mathbf{F}_{\rm g}\) in the gravitoelectromagnetic analog of the Lorentz force \[\mathbf{F}_{\rm g}=-m\left(\mathbf{E}_{\rm g}+\frac{2}{c}\mathbf{v}\times \mathbf{B}_{\rm g}\right)=m(\mathbf{a}_{\rm E}+\mathbf{a}_{\rm B}), \tag{9}\] where \(\mathbf{v}\) is the particle velocity and \(m\) is its mass. Here we introduced the gravielectric \(\mathbf{a}_{\rm E}=-\mathbf{E}_{\rm g}\) and gravimagnetic \(\mathbf{a}_{\rm B}=-\frac{2}{c}\mathbf{v}\times\mathbf{B}_{\rm g}\) components of the acceleration.

## III Halo density profile

The model based on the generalized GPP equations (see Eqs. (1), (2)) describes the core-envelope structure of a DM halo with a dense core and a diffuse isothermal envelope. The model yields the following equation of state for the ULDM: \(P=K\rho^{2}+\rho\frac{k_{B}T}{m}\) (see Sec. II). Thus, one can conclude that in the core region the equation of state is approximately \(P=K\rho^{2}\), because the weak self-interaction dominates over the effective temperature impact due to the large density. That is why the latter will be neglected in the discussion of the core states. On the contrary, in the isothermal envelope region we have the equation of state \(P=\rho\frac{k_{B}T}{m}\), which means that the effective temperature term plays a crucial role there. Based on these considerations, we calculate the halo density in two steps. Firstly, we reproduce the numerical result for the total density of the non-rotating halo (see the original result in [16]), which defines the density distribution in the isothermal envelope region. This step is needed as a starting point to define the isothermal envelope density distribution and to compare the \(s=0\) solitonic core discussed in [16] with the new case of the vortex core \(s=1\). Secondly, under the assumption that the core and the envelope do not interact, we discuss the core density profile separately by means of a variational ansatz [36]. In this way we will study the spherically symmetric (\(s=0\)) and the single-charged vortex (\(s=1\)) solutions for the core density distribution.

### Isothermal envelope

In the first case of a non-rotating core, we can set \(\mathbf{u}=0\), and then Eq. (3) simplifies to \[-\frac{4\pi a_{\mathrm{s}}\hbar^{2}}{m^{3}}\Delta\rho-\frac{k_{\mathrm{B}}T}{ m}\Delta\ln\rho=4\pi G\rho,\] where we took into account that \(2K=\frac{4\pi a_{\mathrm{s}}\hbar^{2}}{m^{3}}\).
It is convenient to introduce the density function and the radial coordinate \(\rho=\rho_{\mathrm{c}}e^{-f}\), \(y=r/r_{0}\), where \[r_{0}=\sqrt{\frac{k_{\mathrm{B}}T}{4\pi G\rho_{\mathrm{c}}m}} \tag{10}\] and \(\rho_{\mathrm{c}}\) defines the density at the center. The equilibrium equation can then be rewritten in the following form: \[\frac{d^{2}f}{dy^{2}}+\frac{2}{y}\frac{df}{dy}=\frac{\chi\left(\frac{df}{dy}\right)^{2}+1}{\chi+e^{f}}, \tag{11}\] where \(\chi=4\pi a_{\mathrm{s}}\hbar^{2}\rho_{\mathrm{c}}/(m^{2}k_{\mathrm{B}}T)\). The boundary conditions are \(f(0)=0\) and \(\frac{df}{dy}(0)=0\), which correspond to the boundary conditions \(\rho(0)=\rho_{\mathrm{c}}\) and \(\frac{d\rho}{dy}(0)=0\) for the density. We solve Eq. (11) numerically for different values of \(\chi\) and present the solutions in Fig. 1 (a). The isothermal envelope density distribution is defined as \(\rho=\rho_{0}e^{-f}=\rho_{0}f_{\mathrm{N}}(r)\), where \(f\) is a numerical solution of Eq. (11). The profile has a solitonic core and an isothermal envelope whose density decreases as \(\rho(r)\sim k_{\mathrm{B}}T/(2\pi Gmr^{2})=v_{\infty}^{2}/(4\pi Gr^{2})\) [16] in agreement with observations (here \(v_{\infty}\) is the constant rotational velocity in the large-distance limit). The existence of a BEC core in the ULDM model was also discussed in [17; 18; 19; 20]. A possible physical origin of the core-envelope structure could be the merger of two-state configurations: when the total system tends to a virialized state, the resulting averaged profile has a core and a tail structure [37]. Halo formation usually proceeds through gravitational cooling, which is discussed in [38; 39]. For initially quite arbitrary density profiles, gravitational cooling leads to relaxation and virialization through the emission of scalar field particles [40]. The resulting profile has the same dense core and diffuse envelope structure. In the case \(s=1\), the hydrodynamical velocity \(\mathbf{u}\) does not vanish in the inner region due to the existence of the vortex. The definition of the velocity profile in the isothermal halo region is a complicated task. One would expect that there is an intermediate region between the core and the isothermal envelope, where the hydrodynamical velocity is small but nonzero, while at large enough distances we should have \(\mathbf{u}=0\). This is due to the divergent mass of the isothermal envelope, which cannot rotate if the kinetic energy is to remain finite. For an estimate, we simply put \(\mathbf{u}=0\) in the whole isothermal envelope region. This approximation can be justified by the negligibly small density of the isothermal envelope in comparison with the core density, so its rotation would have no significant impact on the system. Hence the density profile in the envelope region remains unchanged. Thus, to define the isothermal envelope density distribution we use the numerical solution for \(\rho=\rho_{0}f_{\mathrm{N}}(r)\), obtained earlier in the case of a non-rotating core. The density profile in the core region will be discussed in detail in the next section.
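For reference, Eq. (11) is straightforward to integrate with standard ODE tools. The following minimal Python sketch (our illustration, not the authors' code; the \(\chi\) values are merely examples) reproduces the qualitative behavior of the profiles in Fig. 1 (a), including the \(\rho\propto r^{-2}\) envelope at large \(y\):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(y, u, chi):
    """Eq. (11) rewritten as a first-order system for (f, df/dy)."""
    f, df = u
    d2f = (chi * df**2 + 1.0) / (chi + np.exp(f)) - 2.0 * df / y
    return [df, d2f]

y0, y_max = 1e-6, 1e3           # start slightly off y = 0 to avoid the 2/y term
for chi in (2.0, 20.0, 200.0):  # illustrative values of chi
    sol = solve_ivp(rhs, (y0, y_max), [0.0, 0.0], args=(chi,),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    y = np.logspace(-1, 3, 400)
    rho_over_rho_c = np.exp(-sol.sol(y)[0])  # rho / rho_c = exp(-f)
```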
To reproduce the Milky Way halo mass \(M=1.3\times 10^{12}M_{\odot}\) and radius \(R_{\mathrm{halo}}=287\,kpc\) [41], taking into account the model described in [16], we choose the following values of the particle mass \(m=2.92\times 10^{-22}eV/c^{2}=0.52\times 10^{-57}kg\), scattering length \(a_{\mathrm{s}}=8.17\times 10^{-77}m\), effective DM temperature \(T=5.09\times 10^{-25}K\), central density in the spherical case \(\rho_{\mathrm{c}}=0.34\times 10^{-17}\frac{kg}{m^{3}}\), and distance scaling parameter \(r_{0}=0.071\,kpc\). Then \(\chi=20\), and the critical temperature of the BEC of such ultralight bosons is much larger than the effective temperature. For the spherically symmetric case, this yields a core with mass \(M_{\rm c}=6.39\times 10^{10}M_{\odot}\) and radius \(R_{\rm c}=1\,kpc\).
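As a consistency check (our own, using only the values quoted above), the scaling parameters follow directly from Eq. (10) and the definition of \(\chi\):

```python
import numpy as np

# Quoted model parameters, SI units
kB, G, hbar = 1.381e-23, 6.674e-11, 1.055e-34
m, a_s = 0.52e-57, 8.17e-77     # boson mass (kg), scattering length (m)
T, rho_c = 5.09e-25, 0.34e-17   # effective temperature (K), central density (kg/m^3)
kpc = 3.086e19                  # m

r0 = np.sqrt(kB * T / (4 * np.pi * G * rho_c * m))        # Eq. (10)
chi = 4 * np.pi * a_s * hbar**2 * rho_c / (m**2 * kB * T)
print(r0 / kpc, chi)   # ~0.071 kpc and ~20, as quoted in the text
```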
### Core stationary states

The dynamics of a self-gravitating BEC of \(N\) weakly interacting bosons with mass \(m\) is described by the GPP system of equations with the term corresponding to the effective temperature impact: \[i\hbar\frac{\partial\psi}{\partial t}=\left(-\frac{\hbar^{2}}{2m}\nabla^{2}+gN|\psi|^{2}+m\Phi_{g}\right.\\ \left.+2k_{B}T\ln\left|\frac{\psi}{\psi_{0}}\right|\right)\psi \tag{12}\] \[\nabla^{2}\Phi_{g}=4\pi GmN|\psi|^{2} \tag{13}\] where \(g=\frac{4\pi\hbar^{2}a_{\rm s}}{m}\) is the coupling strength that corresponds to the two-particle interaction, \(a_{\rm s}\) is the s-wave scattering length, \(\Phi_{g}\) is the gravitational potential, and \(G\) is the gravitational constant. The GPP system of Eqs. (12) and (13) includes three crucial physical parameters: the particle mass \(m\), the total number of particles \(N\) (or, equivalently, the total mass \(M\)), and the coupling strength \(g\) (or, equivalently, the self-interaction constant \(\frac{\lambda}{8\pi}=\frac{a_{\rm s}}{\lambda_{\rm c}}\), where \(\lambda_{\rm c}=\frac{\hbar}{mc}\) is the Compton wavelength of the bosons) [36]. The GPP system of equations is invariant under the transformation \(t=\lambda_{*}^{2}t^{\prime}\), \({\bf r}=\lambda_{*}{\bf r}^{\prime}\), \(\psi=\lambda_{*}^{-2}\psi^{\prime}\), \(\Phi_{\rm g}=\lambda_{*}^{-2}\Phi_{\rm g}^{\prime}\), \(g=\lambda_{*}^{2}g^{\prime}\), where \(\lambda_{*}>0\), which allows us to scale out the coupling constant to \(g=1\). In order to simplify calculations, it is convenient to introduce dimensionless variables and wave function \[i\frac{\partial\psi}{\partial t}=\left(-\frac{1}{2}\nabla^{2}+|\psi|^{2}+\Phi_{\rm g}+T_{\rm eff}\ln|\psi|\right)\psi, \tag{14}\] \[\nabla^{2}\Phi_{\rm g}=|\psi|^{2}, \tag{15}\] where the dimensional variables are related to the dimensionless ones as follows: \({\bf r}={\bf r}_{\rm ph}/L\), \(t=\omega_{*}t_{\rm ph}\), \(\Phi_{\rm g}=\left(\frac{L}{\lambda_{\rm c}}\right)^{2}\frac{\Phi_{\rm ph}}{c^{2}}\), and \(\psi=\frac{\lambda}{8\pi}\left(\frac{m_{\rm Pl}}{m}\right)^{2}\sqrt{4\pi GM}\frac{\hbar}{mc^{2}}\psi_{\rm ph}\). Here the distance and time scaling parameters are \(L=\lambda_{\rm c}\frac{m_{\rm Pl}}{m}\sqrt{\frac{\lambda}{8\pi}}=\frac{m_{\rm Pl}\hbar}{m^{2}c}\sqrt{\frac{\lambda}{8\pi}}=0.99\times 10^{19}m=0.32\,kpc\) and \(\omega_{*}=\frac{c\lambda_{\rm c}}{L^{2}}=2.08\times 10^{-15}s^{-1}\). The dimensionless effective temperature parameter is \(T_{\rm eff}=\frac{2k_{\rm B}T}{\omega_{*}\hbar}\); it will be neglected in the following discussion because the corresponding term \(T_{\rm eff}\ln|\psi|\) is negligibly small in the core region. Therefore, we neglect the temperature effects in the analysis of the BEC core density distribution. For the BEC core mass \(M_{\rm c}=6.39\times 10^{10}M_{\odot}\) and radius \(R_{\rm c}=1\,kpc\), we solve the GPP equations (14) and (15) by using the variational ansatz in cylindrical coordinates \(r,\phi,z\): \[\psi(r,\phi,z)=A\left(\frac{r}{R}\right)^{s}e^{-\frac{r^{2}}{2R^{2}}-\frac{z^{2}}{2(R\eta)^{2}}+is\phi}. \tag{16}\] Here \(R\) and \(\eta\) are variational parameters, which will be fixed later. The constant \(A\) is fixed by the normalization condition \[A=\sqrt{\frac{N_{0}}{\pi^{3/2}\eta R^{3}s!}}, \tag{17}\] the cases \(s=0,1\) are considered, and \(N_{0}\) is defined by the core mass \[N_{0}=4\pi\frac{M_{\rm c}}{m_{\rm Pl}}\sqrt{\frac{\lambda}{8\pi}}=2.55\cdot 10^{4},\] where \(m_{\rm Pl}=\sqrt{\frac{\hbar c}{G}}\) is the Planck mass and \(\lambda/(8\pi)=1.21\times 10^{-91}\) is the self-interaction coupling constant. The dimensionless quantities and the physically observed ones are related as follows: \[R_{\rm c}=R_{99}L=\frac{m_{\rm Pl}\hbar}{m^{2}c}\sqrt{\frac{\lambda}{8\pi}}R_{99}, \tag{18}\] \[\rho=M|\psi_{\rm ph}|^{2}=\frac{M}{L^{3}N_{0}}|\psi|^{2}=\rho_{0}\left(\frac{r}{R}\right)^{2s}e^{-\frac{r^{2}}{R^{2}}-\frac{z^{2}}{(R\eta)^{2}}}, \tag{19}\] where \(R_{99}\) is the dimensionless radius which contains 99 percent of the mass of the core (the variational analysis gives \(R_{99}\approx 2.38R\) in the case of the solitonic core and \(R_{99}\approx 2.58R\) in the case of the vortex core), \(\rho\) is the condensate density, and \(\rho_{0}=MA^{2}/(L^{3}N_{0})\) is the density scaling parameter. \(R_{\rm c}\) denotes the total radius of the core in physical units. Using the variational ansatz for the BEC wave function (16), we obtain the energy [36]: \[\begin{split} E&=\int d^{3}\mathbf{r}\,\psi^{*}(\mathbf{r},t)\left(-\frac{1}{2}\nabla^{2}+|\psi|^{2}+\Phi_{\rm g}\right)\psi(\mathbf{r},t)\\ &=\epsilon\left(\frac{N_{0}(1+2\eta^{2}(1+s))}{4R^{2}\eta^{2}}+\frac{N_{0}^{2}\Gamma(s+1/2)}{4\sqrt{2}\pi^{2}R^{3}\eta\Gamma(s+1)}\right.\\ &-\left.\frac{N_{0}^{2}}{8\pi R}\int_{0}^{\infty}{\rm Erfc}\left(\frac{k_{*}\eta}{\sqrt{2}}\right)L_{s}^{2}\left(\frac{k_{*}^{2}}{4}\right)e^{-\frac{k_{*}^{2}(1-\eta^{2})}{2}}dk_{*}\right),\end{split} \tag{20}\] where \(\Gamma(x)\) denotes the Gamma function, \({\rm Erfc}(x)\) is the complementary error function, and \(L_{s}(x)\) denotes the Laguerre polynomial. Here \(\epsilon=(\hbar^{2}/4\pi m_{\rm Pl}\lambda_{\rm c}^{2})(8\pi/\lambda)^{3/2}\) is the characteristic energy, which does not depend on the variational parameters. In what follows, we will use \(r_{0}=2.18\times 10^{18}m=0.071\,kpc=0.22L\) as the distance scaling parameter. In the subsection below, we investigate the case \(s=0\).

#### ii.2.1 Non-rotating spherically-symmetric core

In this case, the BEC wave function in Eq. (16) depends only on the radial distance \(r\) in spherical coordinates, \[\psi(r)=Ae^{-\frac{r^{2}}{2R^{2}}} \tag{21}\] and the density function (see Eq. (19)) equals \[\rho(r)=\rho_{0}e^{-\frac{r^{2}}{R^{2}}}. \tag{22}\] In what follows, \(r\) will denote the spherical distance when the \(s=0\) case is discussed. We should relate \(R\) and the BEC core radius \(R_{\rm c}\), which is defined through \(M_{\rm c}=\frac{4}{\pi}\rho_{0}R_{\rm c}^{3}\) [36]. Since \(\rho_{0}=M_{\rm c}A^{2}/(L^{3}N_{0})\), the numerical result for the halo density (see Fig. 1 a) gives \(\frac{R_{\rm c}}{LR}=1.64\), or \(R=8.66\) in the \(r_{0}\) scale. It is interesting to compare the obtained \(R\) with its value in the variational analysis method used in [36]. Substituting \(\eta=1\) and \(s=0\) in the energy functional in Eq.
(20), we get \[\frac{E}{\epsilon}=\frac{3N_{0}}{4R^{2}}+\frac{N_{0}^{2}}{4\sqrt{2}\pi^{3/2}R^{3}}-\frac{N_{0}^{2}}{8\pi R}\int_{0}^{\infty}{\rm Erfc}\left(\frac{k_{*}}{\sqrt{2}}\right)dk_{*}.\] Its extremum is defined by the equation \[R^{2}-\frac{6\sqrt{2}\pi^{3/2}}{N_{0}}R-3=0 \tag{23}\] that gives \(R=1.73\), or \(R=7.86\) in the \(r_{0}\) scale. Thus, \(R_{\rm c}=0.9\,kpc\) (see Eq. (18)) and, therefore, the variational analysis method and the numerical calculation (see Fig. 1 (a)) are in good agreement.

#### ii.2.2 Rotating axially-symmetric core

In the case \(s=1\) (see Eq. (16)), we have a wave function which depends on the cylindrical coordinates \(r,z,\phi\), \[\psi(r,\phi,z)=A\frac{r}{R}e^{-\frac{r^{2}}{2R^{2}}-\frac{z^{2}}{2(R\eta)^{2}}+i\phi} \tag{24}\] and the density function equals \[\rho(r,z)=\rho_{0}\frac{r^{2}}{R^{2}}e^{-\frac{r^{2}}{R^{2}}-\frac{z^{2}}{(R\eta)^{2}}}, \tag{25}\] where \(A\) is given by Eq. (17). The dimensionless total energy in Eq. (20) for \(s=1\) reads \[\frac{E}{\epsilon}=\frac{N_{0}(1+4\eta^{2})}{4R^{2}\eta^{2}}+\frac{N_{0}^{2}}{8\sqrt{2}\pi^{3/2}R^{3}\eta}\\ -\frac{N_{0}^{2}}{8\pi R}\int_{0}^{\infty}{\rm Erfc}\left(\frac{k_{*}\eta}{\sqrt{2}}\right)\left(1-\frac{k_{*}^{2}}{4}\right)^{2}e^{-\frac{k_{*}^{2}(1-\eta^{2})}{2}}dk_{*}.\] The extremum equations for the total energy with respect to \(\eta\) and \(R\) yield the solution \(\eta=1.464\) and \(R=1.226\) in the \(L\) scale. In the \(r_{0}\) scale, we have \(R=5.57\). To determine the core density distribution, we use the variational analysis result. We assume that the core interacts only negligibly with the isothermal envelope. Therefore, for the isothermal envelope region, we use the numerical distribution \(f_{\rm N}(r_{\rm sph})=f_{\rm N}(\sqrt{r^{2}+z^{2}})\) (see Fig. 1 (a)), derived under the \({\bf u}=0\) condition. Thus, we obtain (see Fig. 1 (b)) \[\rho(r,z)=\rho_{0}\begin{cases}1.92\frac{r^{2}}{R^{2}}e^{-\frac{r^{2}}{R^{2}}-\frac{z^{2}}{(R\eta)^{2}}},&\frac{r_{\rm sph}}{r_{0}}\leq\frac{R_{\rm c}}{r_{0}}\\ f_{\rm N}\left(\frac{r_{\rm sph}}{r_{0}}\right),&\frac{r_{\rm sph}}{r_{0}}>\frac{R_{\rm c}}{r_{0}},\end{cases} \tag{26}\] where \(r=\sqrt{x^{2}+y^{2}}\) and \(z\) are cylindrical coordinates and \(r_{\rm sph}=\sqrt{x^{2}+y^{2}+z^{2}}\). Here \(\rho_{0}\) is the spherical halo central density. The spherically symmetric isothermal envelope density \(\rho(r,z)=\rho_{0}f_{\rm N}\left(r_{\rm sph}/r_{0}\right)\) is found numerically by solving Eq. (11). The total core radius \(R_{\rm c}\) is defined by Eq. (18). By using \({\bf u}={\bf j}_{\rm ph}/|\psi_{\rm ph}|^{2}\) and the particle current \[{\bf j}_{\rm ph}=-\frac{i\hbar}{2m}(\psi_{\rm ph}^{*}\nabla\psi_{\rm ph}-\psi_{\rm ph}\nabla\psi_{\rm ph}^{*})=\frac{\hbar}{m}\frac{|\psi_{\rm ph}|^{2}}{r}{\bf e}_{\phi},\] we find the velocity distribution \({\bf u}({\bf r})\) of the DM particles: \[{\bf u}=\frac{\hbar}{m}\frac{1}{r}{\bf e}_{\phi}=\alpha\frac{cr_{0}}{r}{\bf e}_{\phi}, \tag{27}\] where \(\alpha=\hbar/(mr_{0}c)=0.31\cdot 10^{-3}\). Obviously, the velocity of the condensate particles increases while approaching the center of the vortex. Note that there is an inner region where the velocity becomes of the order of \(c\); therefore, this region cannot be described by making use of the gravitoelectromagnetism ansatz (see Appendix A for an explanation). This region is limited by the radial distance \(r=\alpha r_{0}=2.2\times 10^{-5}\,kpc\).
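The quoted extremum (\(\eta=1.464\), \(R=1.226\)) can be recovered by minimizing the dimensionless \(s=1\) energy functional above directly. A sketch using SciPy (our illustration, not the authors' code; the \(k_{*}\) integral is cut off at 20, beyond which the integrand is negligible):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.special import erfc

N0 = 2.55e4   # from the normalization condition quoted above

def energy(p):
    """Dimensionless s = 1 energy E/epsilon as a function of (R, eta)."""
    R, eta = p
    kinetic = N0 * (1 + 4 * eta**2) / (4 * R**2 * eta**2)
    self_int = N0**2 / (8 * np.sqrt(2) * np.pi**1.5 * R**3 * eta)
    grav = quad(lambda k: erfc(k * eta / np.sqrt(2)) * (1 - k**2 / 4)**2
                * np.exp(-k**2 * (1 - eta**2) / 2), 0.0, 20.0)[0]
    return kinetic + self_int - N0**2 / (8 * np.pi * R) * grav

res = minimize(energy, x0=[1.2, 1.5], method="Nelder-Mead")
R_opt, eta_opt = res.x   # should land near R ~ 1.23, eta ~ 1.46 (L scale)
```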
In the two following sections, by using the formalism of GEM, we describe particle movement in the gravitational field of DM in the \(s=0,1\) states, aiming to understand how baryonic matter particles interact with the proposed DM.

Figure 1: Halo density profile \(\rho/\rho_{0}\) as a function of the dimensionless \(r/r_{0}\) coordinate in the plane \(z=0\); both \(x\) and \(y\) axes have log scale. The cyan insets in both plots show 3D density isosurfaces of the corresponding BEC cores. Left panel (a) shows the halo with the BEC core in a soliton state (\(s=0\)). The three curves correspond to different values of the parameter \(\chi=4\pi a_{s}\hbar^{2}\rho_{\rm c}/(m^{2}k_{\rm B}T)\), so that increasing \(\chi\) decreases the effective temperature \(T\) and vice versa. Right panel (b) shows the halo with the core in a vortex state (\(s=1\), \(\chi=20\)). Note, we investigate in detail the isothermal envelope for \(\chi=20\), which is consistent with observations for the Milky Way. The black dashed line divides the distribution into two parts: the inner region with a rotating core and the outer region composed of an isothermal envelope.

## IV Gravielectric field and rotation curves

In this section, we obtain numerical results for the gravielectric (Newtonian) component of the DM halo gravitational field. Having calculated the field, we analyze the rotation curves predicted by the model in the cases of a soliton and a vortex core. To determine the gravielectric potential in the case of a non-rotating halo we use the numerically obtained density distribution (see Fig. 1 (a)). In the case of a rotating axially symmetric halo, the mass density distribution is shown in Fig. 1 (b). In the spherically symmetric case of a non-rotating halo (\(s=0\)), only the radial component of the gravielectric field is nonzero (see Eq. (7)), and the corresponding gravielectric acceleration \(\mathbf{a}_{\mathrm{E}}=-\mathbf{E}_{\mathrm{g}}=a_{\mathrm{E}}\mathbf{e}_{r}\) (see Eq. (9)) is presented in Fig. 2. The acceleration at large distances behaves like \(a_{\mathrm{E}}/a_{0}=82.66r_{0}/r\), i.e., \(a_{\mathrm{E}}=9.3\times 10^{-29}\frac{kpc^{2}}{s^{2}}\times 1/r\). Here \(a_{0}=G\rho_{0}r_{0}=5.38\times 10^{-13}km/s^{2}\). In the core region, where the density distribution is described by the variational ansatz (22), the gravielectric potential and the corresponding acceleration can be found analytically: \[\frac{1}{r^{2}}\frac{\partial}{\partial r}r^{2}\frac{\partial}{\partial r}\Phi_{\mathrm{g}}=-4\pi G\rho_{0}e^{-\frac{r^{2}}{R^{2}}}.\] The general solution is given by \[\Phi(r)=-4\pi G\rho_{0}R^{2}\left(\frac{c_{1}}{r}+c_{2}-\frac{R\sqrt{\pi}\,\mathrm{Erf}(r/R)}{4r}\right),\] where \(\mathrm{Erf}(x)\) denotes the error function and \(c_{1}\), \(c_{2}\) are constants. We can set \(c_{2}=0\). At a large distance, the gravielectric potential of the halo must be equal to the potential of a body with the same mass \(M=\pi^{3/2}\rho_{0}R^{3}\). This implies that \(c_{1}=0\).
Thus, \(\Phi(r)\) is completely determined and we have the radial acceleration \[\mathbf{a}_{\mathrm{E}}(r)=\nabla\Phi_{\mathrm{g}}(r)=\pi G\rho_{0}R^{3}\left(\frac{2e^{-r^{2}/R^{2}}}{Rr}-\frac{\sqrt{\pi}\,\mathrm{Erf}\left(\frac{r}{R}\right)}{r^{2}}\right)\mathbf{e}_{r}.\]

Figure 2: The radial component of the gravielectric field \(a_{\mathrm{E}}/a_{0}\) (blue dashed line) and density (red solid line) of the non-rotating halo (\(s=0\) core) as functions of the dimensionless \(r/r_{0}\) coordinate; both \(x\) and \(y\) axes have log scale. Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\).

Figure 3: The radial component of the gravielectric acceleration \(a_{\mathrm{Er}}/a_{0}\) (blue dashed line) and density (red line) of the rotating halo (\(s=1\) core) as functions of the dimensionless \(r/r_{0}\) coordinate in the \(z=0\) plane; both \(x\) and \(y\) axes have log scale. Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\).

Clearly, \(a_{\rm E}\) has a maximum at \(r=R=8.66\) in the \(r_{0}\) scale, in agreement with the radial gravielectric acceleration shown in Fig. 2. The gravielectric field in the case of a vortex core has radial and \(z\) components in cylindrical coordinates, namely \({\bf a}_{\rm E}=a_{\rm Er}{\bf e}_{r}+a_{\rm Ez}{\bf e}_{z}\). They are illustrated in Figs. 4 and 5, respectively. The radial dependence of the gravielectric radial acceleration in the \(z=0\) plane is shown in Fig. 3. Notice that at \(r\approx 0.81r_{0}=0.058\,kpc\) the acceleration projection changes sign; hence test particles are repelled in the interior region and attracted in the exterior region. This result stems from the geometry of the considered doughnut-shaped halo with a hole. Now we aim to determine the impact of the gravitational field of the DM halo on the movement of celestial bodies in the Milky Way galaxy.

Figure 4: The radial component of the gravielectric acceleration \(a_{\rm Er}/a_{0}\) induced by the rotating halo (\(s=1\) core) as a function of the dimensionless \(r/r_{0}\) and \(z/r_{0}\) coordinates. Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\). The left panel shows the isothermal envelope region with the three axes in log scale, and the right panel is a zoom-in of the core region.

Figure 5: The \(z\)-component of the gravielectric acceleration \(a_{\rm Ez}/a_{0}\) induced by the rotating halo (\(s=1\) core) as a function of the dimensionless \(r/r_{0}\) and \(z/r_{0}\) coordinates. Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\). The left panel shows the isothermal envelope region with the three axes in log scale, and the right panel is a zoom-in of the core region.

According to our model (see Sec. III), the density distribution depends on the state of the core, which must lead to a difference between the rotation curves they induce. To demonstrate how the gravielectric acceleration induces rotation in the \(s=0\) and \(s=1\) cases, we present the rotation velocity \(v\) in the \(z=0\) plane as a function of the radial distance \(r\) in Fig. 6. The new result here is the curve in the case \(s=1\), while the \(s=0\) case was discussed earlier in [16]. The two halos with \(s=0\) and \(s=1\) cores have equal mass, which is the observed mass of the DM halo in the Milky Way, according to the model discussed in Sec. III. The numerical results indeed show that at large distances the corresponding rotational curves have the same asymptotics.
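The closed-form core acceleration, together with \(v(r)=\sqrt{r\,|a_{\rm E}(r)|}\), gives the solitonic contribution to the rotation curve directly. A minimal sketch (our illustration, in units where \(G=\rho_{0}=R=1\)):

```python
import numpy as np
from scipy.special import erf

def a_E(r, G=1.0, rho0=1.0, R=1.0):
    """Radial gravielectric acceleration of the Gaussian (s = 0) core."""
    return np.pi * G * rho0 * R**3 * (2.0 * np.exp(-(r / R)**2) / (R * r)
                                      - np.sqrt(np.pi) * erf(r / R) / r**2)

r = np.linspace(0.1, 20.0, 500)
v_rot = np.sqrt(r * np.abs(a_E(r)))   # circular (Kepler) velocity

# Large-r check: a_E -> -G M / r^2 with M = pi^{3/2} rho0 R^3
M = np.pi**1.5
assert np.allclose(a_E(r[-1]), -M / r[-1]**2, rtol=1e-6)
```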
Note that the gravielectric force in the \(s=1\) case changes its sign at \(r=0.81r_{0}=0.058\,kpc\). Hence, at distances less than \(0.058\,kpc\) there are no stable rotation orbits in the rotating halo model. However, stable orbits are possible if one includes not only DM but also the other sources of the gravitational field, namely, the baryonic galactic bulge and the supermassive black hole in the central region of the galaxy.

## V Gravimagnetic field in the BEC core

In this section, we obtain numerical results for the gravimagnetic (first post-Newtonian) component of the DM halo gravitational field (see Subsec. II.2 of Sec. II). This component is induced by a moving source; hence, it is nonzero only in the second case of the DM halo with a vortex core. To determine the gravimagnetic potential in the case of a rotating axially symmetric halo we use the mass density and velocity distributions given by Eqs. (26) and (27). The calculation is based on Eqs. (8) and (6). The results of numerical integration for the radial and \(z\)-components of the gravimagnetic field, \({\bf B}_{\rm g}=B_{r}{\bf e}_{r}+B_{z}{\bf e}_{z}\), are shown in Figs. 7 and 8, respectively. Fig. 9 displays the \(z\)-component of the gravimagnetic field \(B_{z}\) in the \(z=0\) plane (the radial component of the gravimagnetic field equals zero in this plane). Having determined the gravimagnetic field, we can calculate the corresponding acceleration of the test particle. Using Eq. (9), we have \[{\bf a}_{\rm B}({\bf r}=(a,b,k))=-\frac{2}{c}v{\bf e}_{\phi}\times{\bf B}_{\rm g}\] \[=-1.38\alpha G\rho_{0}r_{0}\frac{v}{c}(B_{r}(a,b,k){\bf e}_{r}+B_{z}(a,b,k){\bf e}_{z})\] \[=a_{\rm Br}(a,b,k){\bf e}_{r}+a_{\rm Bz}(a,b,k){\bf e}_{z},\] where \(a=r/r_{0}\), \(b=\phi\), \(k=z/r_{0}\) are rescaled cylindrical coordinates.

Figure 6: The rotation (Kepler) velocity \(v\) in the \(z=0\) plane as a function of the radial distance \(r\). The pink dashed line corresponds to the non-rotating spherical halo (\(s=0\) core) and the cyan solid line to the rotating halo (\(s=1\) core). The background represents a gradient plot of the density distribution in the \(s=1\) case.

Figure 7: The radial component of the gravimagnetic field \(B_{r}/a_{0}\) induced by the rotating core as a function of the dimensionless \(r/r_{0}\) and \(z/r_{0}\) coordinates. Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\).

This allows us to estimate the impact of the gravimagnetic field on the stars' motion. In the case of the Milky Way galaxy, \(v=v_{0}+\gamma_{0}r_{0}a\) if \(a<a_{\rm break}\) and \(v=v_{1}+\gamma_{1}r_{0}a\) for \(a\geq a_{\rm break}\) [42]. The constants \(\gamma_{0}\), \(\gamma_{1}\), \(a_{\rm break}\), and \(v_{1}\) are different for the thick and thin galactic disks' velocity profiles. Setting \(R_{\rm break}=r_{0}a_{\rm break}=5\,kpc\) and \(v_{0}=0\) in both cases gives the values of the parameters presented in Table 1. This approximation is valid up to \(13\,kpc=180r_{0}\) [42]. We should emphasize here that \(v\) includes only the component of the velocity directed along \({\bf e}_{\phi}\) and does not include the component along \({\bf e}_{r}\). It is important to distinguish the \(\phi\)-component and the absolute value of the whole velocity when dealing with sufficiently non-circular elliptic orbits. According to Eq. (9), the gravimagnetic acceleration in the galactic plane \(k=0\) can be estimated as \[{\bf a}_{\rm B}=-1.38\alpha G\rho_{0}r_{0}\frac{v_{i}+\gamma_{i}r_{0}a}{c}B_{r}(a,b,0){\bf e}_{r},\] where \(i=0\) for \(a<a_{\rm break}\) and \(i=1\) for \(a\geq a_{\rm break}\). The corresponding plot is shown in Fig. 10.
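For reference, the piecewise rotational velocity profile entering \({\bf a}_{\rm B}\) can be written compactly; with the thin-disk parameters of Table 1, the two branches match at \(R_{\rm break}=5\,kpc\). A sketch (our illustration):

```python
def v_phi(r_kpc, v1=236.71, g0=45.41, g1=-1.93, r_break=5.0):
    """Milky Way rotation speed in km/s (thin disk, v0 = 0), valid to ~13 kpc."""
    return g0 * r_kpc if r_kpc < r_break else v1 + g1 * r_kpc

# Continuity check at the break radius: ~227 km/s on both sides
assert abs(v_phi(5.0 - 1e-9) - v_phi(5.0)) < 0.1
```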
The spike on the red curve, which shows the modulus of the ratio of the gravimagnetic acceleration to the gravielectric one, \(|a_{\rm Br}/a_{\rm Er}|\), appears because the gravielectric acceleration changes sign at \(r=0.81r_{0}=0.058\,kpc\). It is interesting that \(a_{\rm Br}(a)\) tends to a constant in the \(a\ll 1\) limit (see Fig. 11). This directly follows from the analytical expression. In the interior region \(r<5\,kpc\), we have \[\frac{a_{\rm Br}}{a_{0}}=-1.38\alpha\frac{\gamma_{0}r_{0}a}{c}B_{r}(a,b,0).\]

Figure 8: The \(z\)-component of the gravimagnetic field \(B_{z}/a_{0}\) induced by the rotating core as a function of the dimensionless \(r/r_{0}\) and \(z/r_{0}\) coordinates. Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\).

Figure 9: The \(z\)-component of the gravimagnetic field \(B_{z}/a_{0}\) (blue dashed line) and density (red line) of the rotating core as functions of dimensionless \(r/r_{0}\) in the \(z=0\) plane. Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\).

Figure 10: The radial component of the gravimagnetic acceleration \(a_{\rm Br}/a_{0}\) for thin and thick disks (solid and dashed blue lines, respectively), and the absolute value of the ratio of the gravimagnetic to gravielectric acceleration \(|a_{\rm Br}/a_{\rm Er}|\) (both for thin and for thick disks) as functions of \(r/r_{0}\). Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\).

In the \(a\ll 1\) limit, we find \[B_{r}(a,b,0)\approx\frac{2\pi}{a}\int_{0}^{\infty}dx\int_{-\infty}^{\infty}dz\,\frac{x^{2}}{\sqrt{x^{2}+z^{2}}}\,e^{-\frac{x^{2}}{R^{2}}-\frac{z^{2}}{(R\eta)^{2}}}.\] The last integral can be calculated numerically, which yields \[\left|\frac{a_{\rm Br}}{a_{0}}\right|\;\approx\;1.38\alpha\frac{\gamma_{0}r_{0}}{c}\,\times 247\;=\;9.8\,\times 10^{-7}.\] We see that in the case under consideration the gravimagnetic acceleration indeed tends to a constant in the \(a\ll 1\) limit. The gravimagnetic field calculations performed in this section allow us to obtain some testable predictions of the model. According to the numerical results for \({\bf B}_{\rm g}\) and \({\bf E}_{\rm g}\), the gravielectric force changes its sign at \(r=0.81r_{0}=0.058\,kpc\), and the gravimagnetic force component is attractive or repulsive, depending on the direction of the motion. The acceleration in the polar coordinates \((r,\phi)\) is given by \({\bf a}=(\ddot{r}-r\dot{\phi}^{2}){\bf e}_{r}+(r\ddot{\phi}+2\dot{r}\dot{\phi}){\bf e}_{\phi}\). Then the equations of motion for a star take the form \[\frac{d^{2}r}{dt^{2}}=r\left(\frac{d\phi}{dt}\right)^{2}-E_{r}-\frac{2B_{z}r}{c}\frac{d\phi}{dt}, \tag{28}\] \[r\frac{d^{2}\phi}{dt^{2}}=\frac{2B_{z}}{c}\frac{dr}{dt}-2\frac{dr}{dt}\frac{d\phi}{dt}. \tag{29}\] Since the gravielectric acceleration dominates over the gravimagnetic one, it suffices to take the latter into account as a perturbation. Therefore, we treat \(B_{\rm g}\) as a first-order perturbation and expand \(\phi(t)\) and \(r(t)\) around the solution \(r_{\rm c}\) and \(\phi_{\rm c}\) determined by the gravielectric acceleration. For \(r=r_{\rm c}+\delta r\) and \(\phi=\phi_{\rm c}+\delta\phi\), in the zeroth order, we have the Kepler problem equations with \(E_{r}(r_{\rm c})\) calculated numerically in Sec. IV. The corresponding solutions are elliptic orbits. For simplicity, we will consider only the case of circular orbits \(r_{\rm c}(\phi)=r_{\rm c}=const\). By substituting \(E_{r}(r_{\rm c}+\delta r)\approx E_{r}(r_{\rm c})+\frac{dE_{r}}{dr}(r_{\rm c})\delta r\) in Eqs.
(28) and (29), we obtain \[\frac{d^{2}\delta r}{dt^{2}}=w_{0}^{2}\delta r+2r_{\rm c}w_{0}\frac{d\delta\phi}{dt}-\frac{dE_{r}}{dr}\Big{|}_{r_{\rm c}}\delta r-\frac{2B_{z}}{c}r_{\rm c}w_{0},\] \[r_{\rm c}\frac{d^{2}\delta\phi}{dt^{2}}=-2w_{0}\frac{d\delta r}{dt},\] where \(w_{0}=\frac{d\phi_{\rm c}}{dt}\) is the angular frequency induced by the gravielectric field. It can be explicitly written as \(w_{0}^{2}=\frac{E_{r}(r_{\rm c})}{r_{\rm c}}\).

Table 1: Parameters of the Milky Way's rotational velocity profiles [42].

| Galactic disk | \(v_{1}\) [\(km\,s^{-1}\)] | \(\gamma_{0}\) [\(km\,s^{-1}kpc^{-1}\)] | \(\gamma_{1}\) [\(km\,s^{-1}kpc^{-1}\)] |
|---|---|---|---|
| thin disk | 236.71 | 45.41 | -1.93 |
| thick disk | 206.93 | 39.086 | -2.30 |

Figure 11: The radial component of the gravimagnetic acceleration \(a_{\rm Br}/a_{0}\) (dashed blue line) and the gravielectric acceleration \(a_{\rm Er}/a_{0}\) (red line) in the inner region of the halo. The grey region corresponds to \(r<0.1r_{0}\), where the gravimagnetic approximation is not valid. Here \(a_{0}=5.38\times 10^{-13}\frac{km}{s^{2}}\), \(r_{0}=71pc\).

Integrating the second equation, we get \[\frac{d\delta\phi}{dt}=-\frac{2w_{0}}{r_{\rm c}}\delta r,\] where we set the integration constant to zero. Substituting this relation in the first equation, we find \[\frac{d^{2}\delta r}{dt^{2}}=-\left(3w_{0}^{2}+\frac{dE_{r}}{dr}\Big{|}_{r_{\rm c}}\right)\delta r-\frac{2B_{z}}{c}r_{\rm c}w_{0}.\] From the numerical result, we see that \(E_{r}(r)\) is positive and tends to zero at large distances. Then, for \(3\frac{E_{r}(r_{\rm c})}{r_{\rm c}}+\frac{dE_{r}}{dr}\Big{|}_{r_{\rm c}}=\Omega^{2}>0\), we find the solutions \[\delta r=-\frac{2B_{z}r_{\rm c}w_{0}}{c\Omega^{2}}+J\sin(\Omega(t-t_{0})),\] \[\delta\phi=\delta\phi_{\rm c}+\frac{4B_{z}w_{0}^{2}}{c\Omega^{2}}t+\frac{2w_{0}J}{\Omega r_{\rm c}}\cos(\Omega(t-t_{0})),\] where \(\delta\phi_{\rm c}\), \(J\), and \(t_{0}\) are defined by the corresponding initial conditions. It is interesting to estimate \(\Omega\) at some distance \(r_{\rm c}\), e.g., \(r_{\rm c}=8\,kpc=113r_{0}\), which is the distance of the Sun from the center of the galaxy. Then we have \(\Omega=\sqrt{0.0017\times a_{0}/r_{0}}=6.48\times 10^{-16}s^{-1}\) (the corresponding period is \(T=3.1\times 10^{8}\,y\)), \(B_{z}=2.76\times 10^{-8}a_{0}\), and \(\delta r=-2B_{z}r_{\rm c}w_{0}/(c\Omega^{2})=-2.7\times 10^{-8}r_{0}=-0.38\,a.u.\) The latter distance is approximately equal to 80 solar radii. The angular frequency is shifted by the value \(4B_{z}w_{0}^{2}/(c\Omega^{2})=4.8\times 10^{-25}s^{-1}\) (the corresponding period is \(T=4.2\times 10^{17}y\)).
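These orbital estimates can be checked from the quoted inputs. In the sketch below (our own check, not the authors' code), \(w_{0}\) is approximated from the thin-disk rotation speed at \(8\,kpc\) rather than read off Fig. 6, so the quoted \(\delta r\) and frequency shift are recovered only to within roughly 20 percent:

```python
import numpy as np

kpc, c = 3.086e19, 2.998e8          # m, m/s
a0 = 5.38e-13 * 1e3                 # km/s^2 -> m/s^2
r0 = 0.071 * kpc

Omega = np.sqrt(0.0017 * a0 / r0)   # ~6.5e-16 1/s (period ~3e8 yr)
Bz = 2.76e-8 * a0                   # quoted gravimagnetic field at r_c
rc = 8.0 * kpc
v = (236.71 - 1.93 * 8.0) * 1e3     # thin-disk speed at 8 kpc, m/s (assumption)
w0 = v / rc                         # ~9e-16 1/s

dr = -2 * Bz * rc * w0 / (c * Omega**2)   # radial shift, m (~ -0.3 a.u.)
dw = 4 * Bz * w0**2 / (c * Omega**2)      # frequency shift, 1/s (~4e-25)
```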
## VI Conclusions

We investigated a model of a DM halo with a BEC core composed of ultralight bosonic particles. Solving the generalized GPP equations for the self-gravitating BEC, we obtained the density profile of the DM halo and analyzed its core and envelope structure. The density and velocity profiles were found for two types of stable structures with topological charges (\(s=0\) and \(s=1\)) of the BEC core. Using this DM halo description, we investigated its gravitational field and the impact of this field on the baryonic matter. The key result of our paper is that the observable effects predicted by the ULDM halo model depend on the state of the core. In particular, solitonic and vortex cores yield different density and velocity distributions and thus different gravitational fields. The doughnut-like density distribution (vanishing on the vortex axis) and the vortex flows (rapidly increasing toward the vortex axis) of the BEC core can significantly modify both the gravielectric and gravimagnetic components of the gravitational field. We described the gravitational fields of these two core configurations by using the gravitoelectromagnetism approach. The dominant component of the gravitational field is the gravielectric (Newtonian) one, which generates the rotation of celestial bodies in the galaxy. The rotational velocity induced by the halo with a vortex is smaller close to the core region but has the same asymptotics at large distances in comparison with the non-rotating halo. The first post-Newtonian component of the gravitational field, which is called gravimagnetic, is induced by the rotation of the BEC vortex core and appears only in the model of a rotating halo. Although, as expected, the gravimagnetic acceleration is much weaker than the gravielectric one, it can affect the dynamics of baryonic matter in the halo, especially in its inner region. In our simplified perturbation approach for circular orbits, the gravimagnetic field yields a radius and frequency shift and can also induce trajectory oscillations, depending on the initial conditions. There are several possible directions in which the present study could be extended. An analysis of gravitational fields beyond the gravimagnetic approach is required in the central region of the galaxy, due to the high rotational velocity of the BEC there. Furthermore, according to astrophysical observations, there is a supermassive black hole in the center of our galaxy whose presence should be taken into account. Finally, the gravitational effects of baryonic matter should be included in further studies.

## VII Acknowledgments

The authors are grateful to Yelyzaveta Nikolaeva, Sebastian Ulbricht, Stanislav Vilchinskii, and Luca Salasnich for useful discussions and comments. A.Y. acknowledges support from the BIRD Project "Ultracold atoms in curved geometries" of the University of Padova.

## Appendix A

Let us discuss the self-consistency of our model, which makes use of the GEM approach to describe the first post-Newtonian contribution to the gravitational field potential. We assumed that a test particle (a celestial body acted upon by the gravitational field) propagates with a non-relativistic speed \(v\), so that all terms of higher than linear order in \(v/c\) can be neglected in the equations of motion. As to DM, we describe it by using the nonlinear Schrödinger equation with the gravitational potential \(\Phi_{\rm g}\). Since the hydrodynamical velocity in the vortex (the state with \(s=1\)) is \(u(r)=\alpha cr_{0}/r\), it increases at small \(r\) and attains values of the order of \(c\) at \(r\sim\alpha r_{0}\). Obviously, the Newtonian treatment is not applicable in this region. Therefore, we use the Klein-Gordon equation in order to describe the relativistic equation of motion of the bosons, as follows: \[\nabla_{\alpha}\nabla^{\alpha}\phi+\left[\left(\frac{mc}{\hbar}\right)^{2}-\frac{2m}{\hbar^{2}}U(|\phi|^{2})\right]\phi=0, \tag{30}\] where \(U(|\phi|^{2})=gN|\phi|^{2}\) and \(\phi\) is the scalar field. We neglect the effective temperature because only the core region is investigated (the hydrodynamical velocity \(u(r)\) is nonzero only in the core region), and \(\nabla_{\alpha}\) denotes the covariant derivative in curved spacetime. The metric in the GEM approach reads (here all notations are the same as in Subsec.
II.2) \[dS^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=\left(1-\frac{2\Phi_{g}}{c^{2}}\right)(dx^{0})^{2}\\ +\frac{4}{c^{2}}\left(\mathbf{A}_{g}\mathbf{d}\mathbf{x}\right)dx^{0}+\left(-1-\frac{2\Phi_{g}}{c^{2}}\right)\delta_{ij}dx^{i}dx^{j}\] and the Laplace operator is given by \[\nabla_{\alpha}\nabla^{\alpha}\phi=\frac{1}{\sqrt{-g}}\partial_{\alpha}(\sqrt{-g}g^{\alpha\beta}\partial_{\beta}\phi),\] where \(g=\det(g_{\mu\nu})\approx-1\). Then we have \[\nabla_{\alpha}\nabla^{\alpha}\phi=\frac{1}{c^{2}}\left(1-\frac{2\Phi_{g}}{c^{2}}\right)\partial_{t}^{2}\phi-\frac{2A_{g}^{i}}{c^{3}}\partial_{t}\partial_{i}\phi\\ -\frac{2}{c^{3}}\partial_{i}(A_{g}^{i}\partial_{t}\phi)-\partial_{i}\left[\left(1+\frac{2\Phi_{g}}{c^{2}}\right)\delta^{ij}\partial_{j}\phi\right],\] where the fields \(\Phi_{g}\) and \(\mathbf{A}_{g}\) are time-independent. Taking into account the gauge condition \(\partial_{i}A_{g}^{i}=0\), we find \[\nabla_{\alpha}\nabla^{\alpha}\phi=\frac{1}{c^{2}}\left(1-\frac{2\Phi_{g}}{c^{2}}\right)\partial_{t}^{2}\phi\\ -\frac{4A_{g}^{i}}{c^{3}}\partial_{t}\partial^{i}\phi+\partial_{i}\left[\left(1+\frac{2\Phi_{g}}{c^{2}}\right)\partial_{i}\phi\right].\] To obtain a nonrelativistic approximation of the Klein-Gordon equation, we represent the scalar field in the form \(\phi=e^{imc^{2}t/\hbar}\psi\). Substituting this expression in the Klein-Gordon equation and multiplying by \(e^{-imc^{2}t/\hbar}\), we get \[\frac{1}{c^{2}}\left(1-\frac{2\Phi_{g}}{c^{2}}\right)\left[\partial_{t}^{2}\psi+\frac{2imc^{2}}{\hbar}\partial_{t}\psi-\left(\frac{mc^{2}}{\hbar}\right)^{2}\psi\right]\\ -\frac{4A_{g}^{i}}{c^{3}}\left[\partial_{t}\partial^{i}\psi+\frac{imc^{2}}{\hbar}\partial^{i}\psi\right]+\left[1+\frac{2\Phi_{g}}{c^{2}}\right]\partial^{j}\partial_{j}\psi\\ +\frac{2}{c^{2}}\partial^{j}\Phi_{g}\partial_{j}\psi+\left[\left(\frac{mc}{\hbar}\right)^{2}-\frac{2m}{\hbar^{2}}U(|\psi|^{2})\right]\psi=0.\] Neglecting terms of order \((u/c)^{2}\) and higher (\(A_{g}\sim u/c\)), we obtain \[\frac{2im}{\hbar}\partial_{t}\psi-\left(\frac{mc}{\hbar}\right)^{2}\psi+2\Phi_{g}\left(\frac{m}{\hbar}\right)^{2}\psi+\partial^{j}\partial_{j}\psi\\ +\left[\left(\frac{mc}{\hbar}\right)^{2}-\frac{2m}{\hbar^{2}}U(|\psi|^{2})\right]\psi=0.\] Finally, after some straightforward simplifications, we derive the Schrödinger equation in the form \[i\hbar\partial_{t}\psi=\left(-\frac{\hbar^{2}}{2m}\partial^{j}\partial_{j}+m\Phi_{g}+U(|\psi|^{2})\right)\psi.\] Thus, we conclude that the model is self-consistent if we take into account only terms up to \(u/c\) or, equivalently, in the region where the hydrodynamical velocity of the vortex is nonrelativistic (\(u\ll c\)).
2306.11978
Designing Pr-based Advanced Photoluminescent Materials using Machine Learning and Density Functional Theory
This work presents a machine learning approach to predict novel perovskite oxide materials in the Pr-Al-O and Pr-Sc-O compound families with the potential for photoluminescence applications. The predicted materials exhibit a large bandgap and high Debye temperature, and have remained unexplored thus far. The predicted compounds (Pr$_3$AlO$_6$, Pr$_4$Al$_2$O$_9$, Pr$_3$ScO$_6$ and Pr$_3$Sc$_5$O$_{12}$) are screened using a machine learning approach and are then confirmed by density functional theory calculations. The study includes the calculation of the bandgap and density of states to determine electronic properties, and the optical absorption and emission spectra to determine optical properties. The mechanical stability of the predicted compounds is demonstrated by satisfying the Born-Huang criterion. By combining machine learning and density functional theory, this work offers a more efficient and comprehensive approach to materials discovery and design.
Upendra Kumar, Hyeon Woo Kim, Sobhit Singh, Hyunseok Ko, Sung Beom Cho
2023-06-21T02:10:31Z
http://arxiv.org/abs/2306.11978v1
Designing Pr-based Advanced Photoluminescent Materials using Machine Learning and Density Functional Theory

###### Abstract

This work presents a machine learning approach to predict novel perovskite oxide materials in the Pr-Al-O and Pr-Sc-O compound families with the potential for photoluminescence applications. The predicted materials exhibit a large bandgap and high Debye temperature, and have remained unexplored thus far. The predicted compounds (Pr\({}_{3}\)AlO\({}_{6}\), Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), Pr\({}_{3}\)ScO\({}_{6}\) and Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\)) are screened using a machine learning approach and are then confirmed by density functional theory calculations. The study includes the calculation of the bandgap and density of states to determine electronic properties, and the optical absorption and emission spectra to determine optical properties. The mechanical stability of the predicted compounds is demonstrated by satisfying the Born-Huang criterion. By combining machine learning and density functional theory, this work offers a more efficient and comprehensive approach to materials discovery and design.

Perovskite Oxide Materials; Machine Learning; Density Functional Theory; High Debye Temperature; Larger Bandgap Semiconductor

## I Introduction

Luminescent materials based on perovskite halides, such as methylammonium lead iodide (MAPbI\({}_{3}\)), have shown great promise for use in photovoltaic and optoelectronic applications [1; 2]. However, the stability of these materials remains a significant challenge, particularly due to their sensitivity to moisture and oxygen at ambient conditions [3]. To address this issue, a range of strategies have been explored, including the use of encapsulation [4] and protective coatings [5]. However, such approaches can be complex and costly, thereby limiting the practical applicability of perovskite halides. In addition to exploring encapsulation and protective coatings, researchers have also investigated a variety of other materials in an effort to develop more stable luminescent materials. Despite these efforts, the stability of perovskite halides remains a key challenge in their practical implementation. Furthermore, while the majority of perovskite oxides exhibit robust stability against humidity, they are not considered common photovoltaic materials due to their wide band-gap energies. Nonetheless, some perovskite oxides showcase distinct optical properties. These properties are utilized in the development of advanced optoelectronic devices, such as nonlinear optical crystals [6; 7], scintillators [8], photoluminescent (PL) and electroluminescent materials [9; 10], as well as solar cells [11]. Novel PL bands have been observed in SrTiO\({}_{3}\), which is a typical example of a perovskite semiconductor [12; 13]. Perovskite oxide derivatives offer an attractive alternative to perovskite halides for luminescent applications [14]. These materials are often more thermally stable [15] and less toxic [16] than their halide counterparts, and can exhibit desirable electronic and optical properties. Furthermore, perovskite oxide derivatives provide a diverse design space that allows for the incorporation of traditional luminescent elements such as Cr, Yb, Pr, Eu, Tb, and others. The chemical space of perovskite oxide derivatives is not fully explored yet, offering a promising avenue for the development of new luminescent materials.
Perovskite oxides are promising scintillators due to their high light yield and fast response time [17]. They emit more photons per unit of absorbed radiation than other materials, making them useful for detecting high-energy particles and reducing the risk of radiation damage to sensitive equipment [18]. Additionally, perovskite oxide scintillators have the potential to overcome the stability issues of perovskite halide scintillators. While perovskite halide scintillators have high light yield and fast response times, they are known to be unstable under certain conditions, such as exposure to moisture and high temperatures. Several types of perovskite oxide scintillators have been studied, including strontium titanate (SrTiO\({}_{3}\)) [19], barium titanate (BaTiO\({}_{3}\)) [20], and lanthanum aluminate (LaAlO\({}_{3}\)) [21], which have shown promising results in terms of their light yield and response time. Therefore, ongoing research is focused on improving their performance and understanding their underlying physics. In this study, we have identified under-explored Pr-based perovskite oxides and predicted new promising compounds for photoluminescence applications. By combining machine learning and data mining, we found that Pr-based perovskites are relatively under-explored compared to other types. The focus of this work is on predicting perovskite oxide materials in the Pr-Al-O and Pr-Sc-O families, known for their larger bandgap and Debye temperature, as potential candidates for photoluminescence applications. Machine learning is employed to screen a vast number of materials and predict their electronic and optical properties. To validate the predictions, density functional theory (DFT) calculations are performed to study the band structure, density of states, optical absorption, emission spectra, as well as elastic and mechanical stability. This integrated approach of machine learning and DFT offers a more efficient and comprehensive method for materials discovery and design, facilitating the identification of perovskite oxide materials with desirable electronic and optical properties for photoluminescence applications.

## II Methods

To construct the ML models, we employed the crystal graph convolutional neural network (CGCNN) method [22]. The core concept behind the CGCNN method [22] is to first represent a crystal structure by a crystal graph that encodes both atomic information and the bonding interactions between atoms. A convolutional neural network is then designed on top of the graph in order to automatically extract representations that are optimal for predicting target properties, with the ML model trained on DFT-calculated data. The Vienna ab initio simulation package (VASP), which uses the projector augmented wave (PAW) method, is used to perform all the reported DFT calculations in this work [23; 24]. Exchange and correlation energies are computed using the generalized gradient approximation as parameterized by Perdew-Burke-Ernzerhof (PBE) [25]. The Brillouin zone is sampled using \(\Gamma\)-centered (3\(\times\)3\(\times\)5 for Pr\({}_{3}\)AlO\({}_{6}\) and 2\(\times\)3\(\times\)2 for Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\)) and Monkhorst-Pack (4\(\times\)4\(\times\)4 for Pr\({}_{3}\)ScO\({}_{6}\) and 2\(\times\)2\(\times\)2 for Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\)) k-meshes [26]. In all DFT calculations, we employed a plane-wave cut-off energy of 400 eV. We used \(10^{-4}\) eV/Å as the force convergence criterion to relax the inner atomic coordinates, and \(10^{-6}\) eV as the energy convergence criterion for self-consistent DFT calculations.
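As a concrete illustration of the crystal-graph encoding at the heart of CGCNN, the sketch below (our illustration, not the authors' implementation) builds node features and Gaussian-expanded edge features with pymatgen; the file name, cutoff, and basis width are placeholders:

```python
import numpy as np
from pymatgen.core import Structure

structure = Structure.from_file("Pr3AlO6.cif")   # hypothetical input file
cutoff, n_basis = 6.0, 40
centers = np.linspace(0.0, cutoff, n_basis)

# Node features: here simply the atomic number of each site
node_feats = np.array([site.specie.Z for site in structure])

# Edges: all neighbor pairs within the cutoff; each bond distance is
# expanded in a Gaussian basis, as in CGCNN-style edge features
edges, edge_feats = [], []
for i, neighbors in enumerate(structure.get_all_neighbors(cutoff)):
    for nb in neighbors:
        edges.append((i, nb.index))
        edge_feats.append(np.exp(-((nb.nn_distance - centers) / 0.5)**2))
```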
## III Result and Discussion

The Debye temperature (\(\Theta_{\rm D}\)) is the temperature associated with a crystal's highest normal mode of vibration, i.e., the maximum temperature that can be attained as a result of a single normal-mode vibration. It is a good indicator of structural stiffness, which makes it suitable for evaluating the photoluminescence quantum yield [27]. However, computing \(\Theta_{\rm D}\) with DFT is computationally expensive. Instead, it is also possible to predict \(\Theta_{\rm D}\) for many compounds using machine learning [28], which is computationally far less expensive than DFT. Still, knowing only the \(\Theta_{\rm D}\) of a crystal structure is insufficient to achieve a high photoluminescence quantum efficiency: _a wide bandgap is also required_. Plotting \(\Theta_{\rm D}\) as a function of the DFT-determined bandgap, which serves as a sorting diagram, allows for the final optimization of these two features. There is a dependency of \(\Theta_{\text{D}}\) on the energy band gap \((E_{g})\) in semiconducting materials [29]. Therefore, it is necessary to know both the bandgap and \(\Theta_{\text{D}}\) of a good photoluminescence material. Advanced ML-based methods have revolutionized the field of materials science by providing an alternative to traditional experimental trial-and-error and computationally expensive DFT calculation techniques [30]. Machine learning has been used to predict \(\Theta_{\text{D}}\) for a majority of the compounds in Pearson's crystal database (PCD) [31]. Therefore, we performed data mining for O, Se, S and Te based chalcogen ternary compounds by using the Materials Project database [32], as shown in Fig. 1**(a)**. It is found that, among the studied chalcogen families, oxide-based materials have the largest bandgaps and highest Debye temperatures. Since data availability for the Debye temperature is very limited in the Materials Project database [32], we employed CGCNN [22] to calculate the Debye temperature, with the crystallographic information files as input features. The CGCNN is able to predict the Debye temperature with an accuracy of 93%, as shown in Fig. 1**(b)**. We started our data mining process using the Materials Project database [32] to explore the relatively under-explored perovskite oxide family containing Cr, Eu, Yb, and Pr elements, as shown in Fig. (2), with further details provided in supplementary section (I). After screening several promising photoluminescence materials using data mining, we found that praseodymium (Pr) is the least explored element in this family. Therefore, we focused on the ternary compounds of the Pr-based perovskite oxide family, which have a large bandgap and high Debye temperature, as depicted in Fig. 2**(d)**. Among the Pr-based perovskite oxide family, the Pr-Sc-O and Pr-Al-O subfamilies are the least explored. We found only two well-known materials having zero energy above the convex hull (E\({}_{\text{hull}}\)) in the Materials Project database [32], as described in supplementary section (I:D). Hence, we explored these families to predict new candidate structures for potential photoluminescence applications.
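The data-mining step can be reproduced through the Materials Project API. A hedged sketch (the exact mp_api call signatures vary between client versions, and the API key and gap threshold are placeholders):

```python
from mp_api.client import MPRester

with MPRester("YOUR_API_KEY") as mpr:
    docs = mpr.summary.search(
        chemsys="Pr-Al-O",             # repeat for "Pr-Sc-O" and other systems
        energy_above_hull=(0.0, 0.0),  # keep only on-hull (stable) entries
        fields=["material_id", "formula_pretty", "band_gap"],
    )

wide_gap = [d for d in docs if d.band_gap and d.band_gap > 3.0]
```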
We have used the substitution method to search for more crystal structures. A machine learning model for ionic substitution based on experimental data has already been proposed by Hautier et al. [33]. In this work, we utilize the same machine learning model for the prediction of new structures in the Pr-Al-O and Pr-Sc-O ternary compound families. In the case of the Pr-Al-O family, there are only three obtained compositions, i.e., **(i)** Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), **(ii)** PrAlO\({}_{3}\), and **(iii)** Pr\({}_{3}\)AlO\({}_{6}\), having zero energy above the convex hull. We note that PrAlO\({}_{3}\) already belongs to the category of well-known compounds [32]. Therefore, there are only two remaining candidates, i.e., **(i)** Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\) and **(ii)** Pr\({}_{3}\)AlO\({}_{6}\), which are considered as novel candidates in this work. Similarly, in the case of the Pr-Sc-O family, the compounds Pr\({}_{3}\)ScO\({}_{6}\) and Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\) are considered as novel candidates in this work. All the details of the newly predicted compounds are given in Table (1), and their structures are shown in Fig. (3).

Figure 1: **(a)** Plot between Debye temperature and bandgap of the chalcogenide perovskite family. **(b)** Plot between DFT (PBE) predicted and CGCNN predicted values of the Debye temperature.

Obtaining the formation energy and building the related convex hull is a crucial step in determining whether a compound is energetically stable [34]. From the thermodynamical point of view, the convex hull corresponds to the Gibbs free energy of the compounds at zero temperature. Our calculations reveal that the newly predicted compounds are energetically stable, as confirmed by the convex hull plot shown in Fig. (4). Further details, including E\({}_{\text{hull}}\) and parent atom details, are provided in supplementary section (II). All these predicted compounds are mechanically stable, as discussed in more detail in supplementary section (III).
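For the substitution step described above, pymatgen ships an implementation of the Hautier et al. ionic-substitution model [33]. A hedged sketch (module path, constructor arguments, and return format may differ across pymatgen versions):

```python
from pymatgen.core import Species
from pymatgen.analysis.structure_prediction.substitution_probability import (
    SubstitutionPredictor,
)

# Rank probable substitutions into a Pr(3+)-Al(3+)-O(2-) framework
predictor = SubstitutionPredictor(threshold=1e-4)
predictions = predictor.list_prediction(
    [Species("Pr", 3), Species("Al", 3), Species("O", -2)]
)
for p in predictions[:5]:
    print(p["probability"], p["substitutions"])
```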
Table 1: Lattice parameters, lattice angles, space group type, crystal structure, and space group number of the newly predicted compounds.

| Compound | Lattice parameters (Å) | Lattice angles | Space group type | Crystal structure | Space group number |
|---|---|---|---|---|---|
| Pr\({}_{3}\)AlO\({}_{6}\) | \(a=7.47\), \(b=7.47\), \(c=5.62\) | \(\alpha=90^{\circ}\), \(\beta=90^{\circ}\), \(\gamma=102.03^{\circ}\) | \(P2_{1}/c\) | Monoclinic | 14 |
| Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\) | \(b=11.50\) | – | \(P2_{1}/c\) | Monoclinic | 14 |
| Pr\({}_{3}\)ScO\({}_{6}\) | \(a=b=c=6.89\) | \(\alpha=92.38^{\circ}\), \(\beta=92.88^{\circ}\), \(\gamma=92.38^{\circ}\) | – | Trigonal | 148 |
| Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\) | \(a=b=c=11.39\) | \(\alpha=\beta=\gamma=109.47^{\circ}\) | – | – | – |

Figure 4: The convex hull plot of **(a)** the Pr-Al-O and **(b)** the Pr-Sc-O family. PrAlO\({}_{3}\) and PrScO\({}_{3}\) represent known compounds of the Materials Project database [32], and the others are compounds newly predicted by our ML method combined with DFT.

### Bandgap

We calculated the electronic bandgap of the newly predicted compounds using the PBE-DFT [25], CGCNN, and modified Becke-Johnson (mBJ) [35] methods. To further improve the accuracy of the bandgap, we utilized the computationally more expensive Heyd-Scuseria-Ernzerhof (HSE06) hybrid functional [36]. All the calculated bandgap values are reported in Table (2). We have also calculated the Debye temperature of the newly predicted compounds using CGCNN [22] and validated it with DFT calculations, as listed in Table (2). All the predicted compounds have a large bandgap and a high Debye temperature. Direct band gap semiconductors allow for efficient production of photons without assistance from phonons due to the aligned valence and conduction band extrema.
This makes them highly desirable for optical devices, i.e., for photoluminescence applications. Hence, the newly predicted compounds Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\) and Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\) are better suited for photoluminescence applications, as shown in Fig. (5).

Table 2: Bandgap and Debye temperature of the newly predicted compounds. The DFT-PBE data is used for training the CGCNN model.

| | Bandgap DFT-PBE (eV) | Bandgap CGCNN (eV) | Bandgap DFT-mBJ (eV) | Bandgap DFT-HSE (eV) | Debye Temp. CGCNN (K) | Debye Temp. DFT-PBE (K) | Bandgap Type |
|---|---|---|---|---|---|---|---|
| Pr\({}_{3}\)AlO\({}_{6}\) | 4.18 | 4.18 | 5.69 | 5.67 | 393.98 | 398.78 | Indirect |
| Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\) | 4.07 | 3.84 | 6.07 | 5.56 | 428.87 | 431.33 | Direct |
| Pr\({}_{3}\)ScO\({}_{6}\) | 4.29 | 4.15 | 5.46 | 5.80 | 410.60 | 390.24 | Indirect |
| Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\) | 3.83 | 3.44 | 5.12 | 5.32 | 488.55 | 486.34 | Direct |

### Density of States (DOS)

The DOS plays a crucial role in defining the characteristics of materials [37]. In order to learn more about the electronic structure of the predicted compounds, we have calculated the total and atomic-orbital-resolved electronic DOS, as shown in Fig. (6). In the predicted compounds, the major contributions come from the \(p\) orbital of oxygen and the \(d\) orbitals of Pr and Sc. In the DOS of Pr\({}_{3}\)AlO\({}_{6}\), shown in Fig. 6**(a)**, the peak around -4.3 eV is due to the hybridization of all elements. At the valence band maximum, the major contribution to the total DOS comes from the \(p\) orbital of oxygen, whereas at the conduction band minimum the \(d\) orbital of Pr dominates. The \(s\) orbital of Al also contributes to the large peak around -4.3 eV. Similar behaviour can be seen for Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), depicted in Fig. 6**(b)**, where the Al \(s\) orbital contribution appears around -4 eV. Hence, electronic transitions from O\(-p\) orbitals to Pr\(-d\) orbitals are possible. A similar DOS pattern can be seen for the Pr\({}_{3}\)ScO\({}_{6}\) and Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\) compounds, depicted in Fig. 6**(c)** and Fig. 6**(d)**.

Figure 6: Total HSE06 electronic density of states and orbital projected density of states for **(a)** Pr\({}_{3}\)AlO\({}_{6}\), **(b)** Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), **(c)** Pr\({}_{3}\)ScO\({}_{6}\), and **(d)** Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\).

### Optical Spectra

It is possible to derive the linear optical properties from the frequency-dependent complex dielectric function \(\varepsilon(\omega)\): \[\varepsilon(\omega)=\varepsilon_{1}(\omega)+i\varepsilon_{2}(\omega). \tag{1}\]
With the help of \(\varepsilon_{1}(\omega)\) and \(\varepsilon_{2}(\omega)\), the refractive index \(n(\omega)\) and the absorption coefficient \(\alpha(\omega)\) can be calculated using the formulas:

\[n(\omega)=\left[\frac{\sqrt{\varepsilon_{1}^{2}+\varepsilon_{2}^{2}}+\varepsilon_{1}}{2}\right]^{\frac{1}{2}},\text{ and} \tag{2}\]

\[\alpha(\omega)=\sqrt{2}\omega\left[\frac{\sqrt{\varepsilon_{1}^{2}+\varepsilon_{2}^{2}}-\varepsilon_{1}}{2}\right]^{\frac{1}{2}}. \tag{3}\]

The computed \(\varepsilon_{1}(\omega)\) and \(\varepsilon_{2}(\omega)\) of the predicted compounds are shown as a function of \(\omega\) in Fig. 7. The maximum of the real part occurs at 6.73 eV (\(\varepsilon_{1}(\omega)=3.17\)) for Pr\({}_{3}\)AlO\({}_{6}\), 6.84 eV (\(\varepsilon_{1}(\omega)=2.84\)) for Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), 6.81 eV (\(\varepsilon_{1}(\omega)=3.43\)) for Pr\({}_{3}\)ScO\({}_{6}\), and 7.13 eV (\(\varepsilon_{1}(\omega)=3.51\)) for Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\). The zero-frequency limit (\(\omega\to 0\)) of \(\varepsilon_{1}(\omega)\) gives the static dielectric constant, determined to be 1.75 for Pr\({}_{3}\)AlO\({}_{6}\), 1.60 for Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), 3.43 for Pr\({}_{3}\)ScO\({}_{6}\), and 1.69 for Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\). The \(\varepsilon_{2}(\omega)\) spectra in Fig. 7**(b)** show that the threshold energy of the dielectric function is at about 5.1 eV. This corresponds to the fundamental absorption edge, i.e. the optical transition between the valence band maximum (VBM) and the conduction band minimum (CBM). As the energy increases, the absorptive part \(\varepsilon_{2}(\omega)\) displays two dominant peaks at nearly 10 eV and 25 eV. The first peak is caused by O\(-2p\) electrons transitioning into the \(s\) states of the cations, while the subsequent peak may correspond to O\(-2p\) electrons transitioning into the \(p\) states of the cations [40].

Figure 7: HSE06 calculated **(a)** real part \(\varepsilon_{1}(\omega)\) and **(b)** imaginary part \(\varepsilon_{2}(\omega)\) of the complex dielectric function.

The absorption coefficient \(\alpha(\omega)\) describes the decay of the intensity of light propagating over a unit distance in a material. According to Fig. 8**(a)**, the absorption edge begins to appear at around 5 eV. This is caused by excited electrons transitioning from the O\(-2p\) states at the top of the valence band to the empty cation \(2s\) states. Note that this absorption onset lies in the ultraviolet range. Moreover, these compounds display pronounced absorption, since the absorption coefficient increases rapidly once the photon energy exceeds the absorption edge, a property typical of semiconductors and insulators. In Fig. 8**(b)**, the calculated curve of \(n(\omega)\) as a function of photon energy is shown. The static refractive index \(n(0)\) values for incoming light are 0.94 for Pr\({}_{3}\)AlO\({}_{6}\), 0.89 for Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), 0.95 for Pr\({}_{3}\)ScO\({}_{6}\), and 0.92 for Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\). The refractive index \(n(\omega)\) reaches its maximum value at a photon energy of around 7 eV; thereafter it gradually decreases to its lowest point, beyond which it hardly changes at all in the high-energy region (\(\geq\)50 eV).
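Because the mapping from \((\varepsilon_{1},\varepsilon_{2})\) to \(n(\omega)\) and \(\alpha(\omega)\) in Eqs. (2) and (3) is purely algebraic, it is straightforward to reproduce from tabulated dielectric-function data. The following is a minimal numpy sketch, assuming \(\varepsilon_{1}(\omega)\) and \(\varepsilon_{2}(\omega)\) are available on a common photon-energy grid (e.g. parsed from the DFT output); the function name and the toy input arrays are our illustrations, and \(\alpha(\omega)\) inherits whatever units of \(\omega\) are used in Eq. (3).

```python
import numpy as np

def optical_constants(omega, eps1, eps2):
    """Refractive index n(w), Eq. (2), and absorption coefficient
    alpha(w), Eq. (3), from the complex dielectric function."""
    mod = np.sqrt(eps1**2 + eps2**2)                            # |eps(w)|
    n = np.sqrt((mod + eps1) / 2.0)                             # Eq. (2)
    alpha = np.sqrt(2.0) * omega * np.sqrt((mod - eps1) / 2.0)  # Eq. (3)
    return n, alpha

# toy stand-in for a DFT dielectric function on a photon-energy grid (eV)
omega = np.linspace(0.01, 50.0, 2000)
eps1 = 1.0 + 2.0 * np.exp(-((omega - 6.8) / 2.0) ** 2)
eps2 = 2.5 * np.exp(-((omega - 10.0) / 4.0) ** 2)

n, alpha = optical_constants(omega, eps1, eps2)
print(n[0])  # static refractive index: n(0) = sqrt(eps1(0)) when eps2(0) ~ 0
```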
## IV Conclusion

In conclusion, this work presents a novel approach to discovering new Pr-based perovskite oxide materials with desirable electronic and optical properties for photoluminescence applications. Using an ML approach to screen a large number of candidate materials, followed by DFT calculations to confirm their potential, allowed for a more efficient and comprehensive approach to materials discovery and design. The predicted compounds (Pr\({}_{3}\)AlO\({}_{6}\), Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), Pr\({}_{3}\)ScO\({}_{6}\) and Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\)) in the Pr-Al-O and Pr-Sc-O compound families were evaluated for their band structure, mechanical stability, density of states, and optical absorption and emission spectra, confirming their potential for photoluminescence applications. Compared to their halide counterparts, perovskite oxide derivatives are often more thermally stable and less toxic, which makes them more suitable for practical applications. Perovskite oxide derivatives also offer a diverse design space, which allows for the incorporation of a wide range of luminescent species and enables the tuning of their electronic and optical properties. In addition, the chemical space of perovskite oxide derivatives is not yet fully explored, offering a promising avenue for the development of new luminescent materials. This work therefore provides insights for future experimental investigations and can lead to the development of new materials for a variety of technological applications.

## Acknowledgement

This research was supported by the National R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (and RS-2023-00209910). Upendra Kumar expresses sincere gratitude to Dr. Sanjay Nayak, a Postdoctoral Fellow at Linkoping University, for providing valuable motivation and inspiration to pursue work in the field of machine learning.

## Author contributions

Upendra Kumar and Hyeon Woo Kim conceived the idea and contributed equally to this project. Sobhit Singh provided key suggestions for manuscript modifications. Upendra Kumar wrote the manuscript and all authors read and reviewed it. Hyunseok Ko and Sung Beom Cho supervised the project.

Figure 8: HSE06 calculated **(a)** absorption \(\alpha(\omega)\) and **(b)** refractive index \(n(\omega)\) spectra.

## Supplementary Material

### I. Data Mining

The Sc and Al families are very underexplored among the Pr-based ternary oxide compounds, as recorded in the Materials Project2. The known Pr-based ternary perovskite oxides are listed in Table S4.

Footnote 2: A. Jain et al., “Commentary: The materials project: A materials genome approach to accelerating materials innovation,” APL Mater. **1**, 011002 (2013).
[https://doi.org/10.1063/1.4812323](https://doi.org/10.1063/1.4812323)

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **No.** & **Compound** & **Space Group** & **Crystal System** & **E\({}_{\rm hull}\) (meV/atom)** & **Band Gap (eV)** & **Direct** \\ \hline 1 & Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\) & (P2\({}_{1}\)/c, 14) & monoclinic & 0 & 4.07 & True \\ 2 & PrAlO\({}_{3}\) & (Pnma, 62) & orthorhombic & 0 & 3.72 & False \\ 3 & Pr\({}_{3}\)AlO\({}_{6}\) & (Cmc2\({}_{1}\), 36) & orthorhombic & 0 & 4.17 & False \\ 4 & Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\) & (P2\({}_{1}\)/c, 14) & monoclinic & 1.96 & 4.18 & False \\ 5 & Pr\({}_{3}\)Al\({}_{5}\)O\({}_{12}\) & (Ia\(\bar{3}\)d, 230) & cubic & 10.80 & 3.76 & True \\ 6 & PrAlO\({}_{3}\) & (Cmcm, 63) & orthorhombic & 16.74 & 4.25 & True \\ 7 & PrAlO\({}_{3}\) & (P\(\bar{1}\), 2) & triclinic & 42.40 & 4.29 & False \\ 8 & PrAlO\({}_{3}\) & (P2\({}_{1}\)/c, 14) & monoclinic & 43.39 & 4.25 & False \\ 9 & PrAlO\({}_{6}\) & (C2/c, 15) & monoclinic & 48.50 & 4.19 & False \\ 10 & PrAlO\({}_{3}\) & (P\(\bar{1}\), 2) & triclinic & 48.91 & 4.23 & False \\ 11 & PrAlO\({}_{3}\) & (P2\({}_{1}\)/c, 14) & monoclinic & 61.80 & 4.45 & False \\ 12 & PrAl\({}_{11}\)O\({}_{18}\) & (P\(\bar{1}\), 2) & triclinic & 67.48 & 2.63 & False \\ 13 & Pr\({}_{4}\)Al\({}_{6}\)O\({}_{15}\) & (C2/c, 15) & monoclinic & 79.38 & 4.21 & False \\ 14 & Pr\({}_{2}\)Al\({}_{4}\)O\({}_{9}\) & (Pbam, 55) & orthorhombic & 81.72 & 3.35 & False \\ 15 & PrAlO\({}_{3}\) & (P2\({}_{1}\)/c, 14) & monoclinic & 87.84 & 4.33 & True \\ 16 & PrAlO\({}_{3}\) & (P\(\bar{1}\), 2) & triclinic & 94.27 & 4.26 & True \\ 17 & PrAlO\({}_{6}\) & (R\(\bar{3}\), 148) & trigonal & 110.70 & 4.01 & False \\ 18 & PrAlO\({}_{3}\) & (Pnma, 62) & orthorhombic & 122.13 & 3.51 & True \\ 19 & PrAlO\({}_{3}\) & (P\(\bar{1}\), 2) & triclinic & 128.87 & 3.02 & False \\ 20 & PrAlO\({}_{3}\) & (P6\({}_{3}\)/mmc, 194) & hexagonal & 209.10 & 3.22 & False \\ \hline \end{tabular} \end{table} Table S6: Predicted structures in the Pr-Al-O ternary compound family.

## III Mechanical properties

To ensure that a material can be properly incorporated into developing technologies, it is essential to analyse its mechanical characteristics thoroughly. We assess the mechanical stability by applying the Born criteria3. The elastic tensor matrix is obtained using finite lattice distortions and the strain-stress relationship. The elastic tensor matrix \((\mathrm{C}_{ij})\) has been computed including the relaxation of the ions, with the coefficients measured in gigapascals (GPa). The MechElastic Python module4 is used to obtain the mechanical characteristics of the newly predicted compounds from the elastic coefficient matrix (\(\mathrm{C}_{ij}\)) generated with any ab-initio density-functional theory (DFT) method.

Footnote 3: F. Mouhat and F.-X. Coudert, “Necessary and sufficient elastic stability conditions in various crystal systems,” Phys. Rev. B **90**, 224104 (2014). [https://doi.org/10.1103/PhysRevB.90.224104](https://doi.org/10.1103/PhysRevB.90.224104)

The elastic tensor matrix for \(\mathrm{Pr_{3}AlO_{6}}\) has the form:

\[\mathrm{C}_{ij}(\mathrm{GPa})=\begin{bmatrix}204.85&67.26&84.16&0&0&0\\ 67.26&185.02&85.07&0&0&0\\ 84.16&85.07&237.75&0&0&0\\ 0&0&0&52.50&0&0\\ 0&0&0&0&45.87&0\\ 0&0&0&0&0&38.66\end{bmatrix}\] (S.4)

In the case of Pr\({}_{3}\)AlO\({}_{6}\), which belongs to the orthorhombic crystal system, the necessary conditions are as follows:

(i) C\({}_{11}>0\)

(ii) C\({}_{11}\times\) C\({}_{22}\)\(>\) C\({}_{12}^{2}\)

(iii)
C\({}_{11}\times\) C\({}_{22}\times\) C\({}_{33}\)\(+\) 2C\({}_{12}\times\) C\({}_{13}\times\) C\({}_{23}\)\(-\) C\({}_{11}\times\) C\({}_{23}^{2}\)\(-\) C\({}_{22}\times\) C\({}_{13}^{2}\)\(-\) C\({}_{33}\times\) C\({}_{12}^{2}\)\(>0\)

(iv) C\({}_{44}>0\)

(v) C\({}_{55}>0\)

(vi) C\({}_{66}>0\)

The elastic tensor matrix for Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\) has the form:

\[\text{C}_{ij}(\text{GPa})=\begin{bmatrix}205.77&83.24&82.13&2.68&0&0\\ 83.24&195.80&81.60&-2.54&0&0\\ 82.13&81.60&169.47&-1.73&0&0\\ 2.68&-2.54&-1.73&54.80&0&0\\ 0&0&0&0&61.25&-1.83\\ 0&0&0&0&-1.83&61.43\end{bmatrix}\] (S.5)

For Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\), which belongs to the monoclinic crystal system, the necessary criteria are given as:

(i) C\({}_{11}>0\)

(ii) C\({}_{22}>0\)

(iii) C\({}_{33}>0\)

(iv) C\({}_{44}>0\)

(v) C\({}_{55}>0\)

(vi) C\({}_{66}>0\)

(vii) \(\left[\text{C}_{11}+\text{C}_{22}+\text{C}_{33}+2\times(\text{C}_{12}+\text{C}_{13}+\text{C}_{23})\right]>0\)

(viii) \(\text{C}_{33}\times\text{C}_{55}-\text{C}_{35}^{2}>0\)

(ix) \(\text{C}_{44}\times\text{C}_{66}-\text{C}_{46}^{2}>0\)

(x) \(\text{C}_{22}+\text{C}_{33}-2\times\text{C}_{23}>0\)

(xi) \(\text{C}_{22}\times(\text{C}_{33}\times\text{C}_{55}-\text{C}_{35}^{2})+2\times\text{C}_{23}\times\text{C}_{25}\times\text{C}_{35}-(\text{C}_{23})^{2}\times\text{C}_{55}-(\text{C}_{25})^{2}\times\text{C}_{33}>0\)

(xii) \(2\times[\text{C}_{15}\times\text{C}_{25}\times(\text{C}_{33}\times\text{C}_{12}-\text{C}_{13}\times\text{C}_{23})+\text{C}_{15}\times\text{C}_{35}\times(\text{C}_{22}\times\text{C}_{13}-\text{C}_{12}\times\text{C}_{23})+\text{C}_{25}\times\text{C}_{35}\times(\text{C}_{11}\times\text{C}_{23}-\text{C}_{12}\times\text{C}_{13})]-[\text{C}_{15}\times\text{C}_{15}\times(\text{C}_{22}\times\text{C}_{33}-\text{C}_{23}^{2})+\text{C}_{25}\times\text{C}_{25}\times(\text{C}_{11}\times\text{C}_{33}-\text{C}_{13}^{2})+\text{C}_{35}\times\text{C}_{35}\times(\text{C}_{11}\times\text{C}_{22}-\text{C}_{12}^{2})]+\text{C}_{55}\times g>0\), where \(g=\text{C}_{11}\times\text{C}_{22}\times\text{C}_{33}-\text{C}_{11}\times\text{C}_{23}\times\text{C}_{23}-\text{C}_{22}\times\text{C}_{13}\times\text{C}_{13}-\text{C}_{33}\times\text{C}_{12}\times\text{C}_{12}+2\times\text{C}_{12}\times\text{C}_{13}\times\text{C}_{23}\).

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Parameters** & **Pr\({}_{3}\)AlO\({}_{6}\)** & **Pr\({}_{4}\)Al\({}_{2}\)O\({}_{9}\)** & **Pr\({}_{3}\)ScO\({}_{6}\)** & **Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\)** \\ \hline Bulk modulus (GPa) & 121.19 & 117.97 & 128.86 & 128.84 \\ \hline Shear modulus (GPa) & 52.37 & 56.60 & 50.45 & 61.48 \\ \hline Young’s modulus (GPa) & 137.33 & 146.39 & 133.86 & 159.14 \\ \hline Poisson’s ratio & 0.31 & 0.29 & 0.33 & 0.29 \\ \hline \end{tabular} \end{table} Table S8: Calculated mechanical properties of the predicted structures.

The elastic tensor matrix for \(\text{Pr}_{3}\text{ScO}_{6}\) has the form:

\[\text{C}_{ij}(\text{GPa})=\begin{bmatrix}185.28&92.99&98.50&-4.21&-14.20&6.47\\ 92.99&214.09&93.92&-4.96&7.98&-5.98\\ 98.50&93.92&195.43&8.53&-11.13&-0.95\\ -4.21&-4.96&8.53&60.74&-0.29&-8.01\\ -14.20&7.98&-11.13&-0.29&45.79&-4.37\\ 6.47&-5.98&-0.95&-8.01&-4.37&50.59\end{bmatrix}\] (S.6)

For \(\text{Pr}_{3}\text{ScO}_{6}\), which belongs to the rhombohedral-2 crystal system, the necessary criteria are given as:

(i) \(\text{C}_{11}-\text{C}_{12}>0\)

(ii)
\(\text{C}_{13}{}^{2}<(1/2)\times\text{C}_{33}\times(\text{C}_{11}+\text{C}_{12})\)

(iii) \(\text{C}_{14}{}^{2}+\text{C}_{15}{}^{2}<(1/2)\times\text{C}_{44}\times(\text{C}_{11}-\text{C}_{12})=\text{C}_{44}\times\text{C}_{66}\)

(iv) \(\text{C}_{44}>0\)

The elastic tensor matrix for \(\text{Pr}_{3}\text{Sc}_{5}\text{O}_{12}\) has the form:

\[\text{C}_{ij}(\text{GPa})=\begin{bmatrix}222.24&82.08&82.08&0&0&0\\ 82.08&222.24&82.08&0&0&0\\ 82.08&82.08&222.24&0&0&0\\ 0&0&0&56.54&0&0\\ 0&0&0&0&56.54&0\\ 0&0&0&0&0&56.54\end{bmatrix}\] (S.7)

For \(\text{Pr}_{3}\text{Sc}_{5}\text{O}_{12}\), which belongs to the cubic crystal system, the necessary criteria are given as:

(i) \(\text{C}_{11}-\text{C}_{12}>0\)

(ii) \(\text{C}_{11}+2\text{C}_{12}>0\)

(iii) \(\text{C}_{44}>0\)

All of the Born criteria are satisfied by the coefficients obtained with DFT-PBE, indicating that all predicted structures are mechanically stable. Other parameters related to the mechanical properties are listed in Table S8.
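As a numerical cross-check of these criteria, the Born conditions can be verified directly from the \(\mathrm{C}_{ij}\) matrices quoted above. The following is a minimal numpy sketch, not the MechElastic implementation, using the Pr\({}_{3}\)Sc\({}_{5}\)O\({}_{12}\) tensor of Eq. (S.7); the generic positive-definiteness test is the necessary and sufficient formulation of the Born criteria given in the reference of footnote 3.

```python
import numpy as np

# Elastic tensor of Pr3Sc5O12 in Voigt notation (GPa), Eq. (S.7)
C = np.zeros((6, 6))
C[:3, :3] = 82.08                     # C12 block
np.fill_diagonal(C, 56.54)            # shear block: C44 = C55 = C66
C[0, 0] = C[1, 1] = C[2, 2] = 222.24  # C11 = C22 = C33

def born_stable_cubic(C):
    """Cubic Born criteria: C11 - C12 > 0, C11 + 2*C12 > 0, C44 > 0."""
    c11, c12, c44 = C[0, 0], C[0, 1], C[3, 3]
    return c11 - c12 > 0 and c11 + 2 * c12 > 0 and c44 > 0

def born_stable_generic(C):
    """Necessary and sufficient condition for any crystal system:
    the (symmetrized) stiffness matrix is positive definite."""
    return bool(np.all(np.linalg.eigvalsh((C + C.T) / 2) > 0))

print(born_stable_cubic(C), born_stable_generic(C))  # True True
```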
2305.08979
Is infrared-collinear safe information all you need for jet classification?
Machine learning-based jet classifiers are able to achieve impressive tagging performance in a variety of applications in high-energy and nuclear physics. However, it remains unclear in many cases which aspects of jets give rise to this discriminating power, and whether jet observables that are tractable in perturbative QCD such as those obeying infrared-collinear (IRC) safety serve as sufficient inputs. In this article, we introduce a new classifier, Jet Flow Networks (JFNs), in an effort to address the question of whether IRC unsafe information provides additional discriminating power in jet classification. JFNs are permutation-invariant neural networks (deep sets) that take as input the kinematic information of reconstructed subjets. The subjet radius and a cut on the subjet's transverse momenta serve as tunable hyperparameters enabling a controllable sensitivity to soft emissions and nonperturbative effects. We demonstrate the performance of JFNs for quark vs. gluon and Z vs. QCD jet tagging. For small subjet radii and transverse momentum cuts, the performance of JFNs is equivalent to the IRC-unsafe Particle Flow Networks (PFNs), demonstrating that infrared-collinear unsafe information is not necessary to achieve strong discrimination for both cases. As the subjet radius is increased, the performance of the JFNs remains essentially unchanged until physical thresholds that we identify are crossed. For relatively large subjet radii, we show that the JFNs may offer an increased model independence with a modest tradeoff in performance compared to classifiers that use the full particle information of the jet. These results shed new light on how machines learn patterns in high-energy physics data
Dimitrios Athanasakos, Andrew J. Larkoski, James Mulligan, Mateusz Ploskon, Felix Ringer
2023-05-15T19:42:54Z
http://arxiv.org/abs/2305.08979v2
# Is infrared-collinear safe information all you need for jet classification? ###### Abstract Machine learning-based jet classifiers are able to achieve impressive tagging performance in a variety of applications in high energy and nuclear physics. However, it remains unclear in many cases which aspects of jets give rise to this discriminating power, and whether jet observables that are tractable in perturbative QCD such as those obeying infrared-collinear (IRC) safety serve as sufficient inputs. In this article, we introduce a new classifier, Jet Flow Networks (JFNs), in an effort to address the question of whether IRC unsafe information provides additional discriminating power in jet classification. JFNs are permutation-invariant neural networks (deep sets) that take as input the kinematic information of reconstructed subjets. The subjet radius serves as a tunable hyperparameter, enabling the sensitivity to soft emissions and nonperturbative effects to be gradually increased as the subjet radius is decreased. We demonstrate the performance of JFNs for quark vs. gluon and QCD vs. \(Z\) jet tagging. For small subjet radii, the performance of JFNs is equivalent to the IRC-unsafe Particle Flow Networks (PFNs), demonstrating that infrared-collinear unsafe information is not necessary to achieve strong discrimination for both cases. As the subjet radius is increased, the performance of the JFNs remains essentially unchanged until physical thresholds that we identify are crossed. For relatively large subjet radii, we show that the JFNs may offer an increased model independence with a modest tradeoff in performance compared to classifiers that use the full particle information of the jet. These results shed new light onto how machines learn patterns in high-energy physics data. ## 1 Introduction Jets are highly energetic and collimated groups of particles observed in the detectors of high-energy scattering experiments such as the Large Hadron Collider (LHC) [1; 2; 3]. Jets arise from the fragmentation of highly energetic quarks and gluons, which themselves can arise from the decay of unstable particles such as the Higgs boson. Classifying the origins of jets, such as quark vs. gluon initiated jets, QCD vs. boosted \(Z/W\) jets, and QCD vs. boosted top jets [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22] is crucial to disentangle the various processes occurring at collider experiments and perform searches for physics beyond the Standard Model. Jet classification algorithms have been developed based on multivariate combinations of jet substructure observables as well as using machine learning methods. Machine learning based jet classifiers significantly outperform traditional multivariate jet taggers that utilize a limited number of observables, since they are able to leverage the full information in the jet. However, machine learning based classifiers often have the drawback that they are not calculable by analytical methods. Efforts to address this have been an active area of research, such as enforcing Infrared-Collinear (IRC) safety [23] in the network architecture [24; 25; 26; 27; 28] or by finding optimal ways to reduce the amount of information provided as input to neural networks [29; 30; 31; 32; 33]. In order to increase the interpretability of machine learning based classifiers, a complete IRC-safe basis of jet substructure observables was introduced in Refs. [34; 10; 35] based on \(N\)-subjettiness observables [36; 37]. 
These observables capture the momentum and relative angles of emissions inside the jet. The set of \(N\)-subjettiness observables is then used as input to a machine learning algorithm for jet classification. While the complete basis of IRC-safe observables is large (\(3M-4\) for \(M\) particles in the jet), it was found that the performance of classifiers saturates quickly with a relatively small number of observables. Another set of observables, Energy Flow Polynomials (EFPs), was developed as a linear and IRC-safe basis of jet substructure observables in Ref. [24]. Interestingly, it was found that while the performance of classifiers based on complete sets of observables saturates, in most cases there remains a performance gap between classifiers with IRC-safe inputs (Sudakov safe classifiers) and IRC-unsafe classifiers that make use of the full information content of the particles inside the jet. Examples of such IRC-unsafe classifiers include architectures based on deep sets [11], point clouds [12] and transformers [38]. This performance gap has been observed for a variety of jet classification tasks, including QCD vs. \(W/Z\) and \(H\) jets [33], and \(pp\) vs. \(AA\) jets [29]. For quark vs. gluon tagging, it was found that the IRC-safe EFPs can match the performance of PFNs when only the momentum information of the particles in the jet is considered [11]. Several efforts have been made to quantify the gap, with the aim of gaining new insights into fundamental QCD dynamics. There are several possible explanations for the observed performance gap:

* IRC-unsafe classifiers may be able to make use of the very soft information content of jets, which is difficult to access with IRC-safe observables.
* IRC-unsafe classifiers such as PFNs take as input the exact position information of the particles inside the jet, whereas IRC-safe observables can only capture the information of relative distances. It is possible that existing machine learning algorithms can make more efficient use of position information.
* The specific form of the IRC-safe observables may not be optimal for classification tasks, and there may be other sets of observables that could perform better.

With this question in mind, we introduce in this work a new machine learning-based jet classifier, Jet Flow Networks (JFNs)1, which take as input the energy and position of reclustered subjets instead of individual particles. JFNs allow for soft and collinear emissions to be clustered into subjets, making the input IRC safe and the resulting classifier generally Sudakov safe [39; 40]. However, different from the \(N\)-subjettiness or EFP bases of observables, position information is used directly instead of having only (indirect) access to relative distances between emissions (or subjets) inside the jet. We note that we do not consider quark flavor tagging in this work, which requires nonperturbative information, see e.g. Refs. [41; 42; 43]. JFNs are closely related to Particle Flow Networks (PFNs) [11] and Energy Flow Networks (EFNs) [11], which will be elaborated on in section 3. In the limit of a vanishing subjet radius, where every subjet contains only a single hadron, JFNs are identical to PFNs. The radius of the reclustered subjets in JFNs can be used to dial in nonperturbative information, allowing for a smooth transition to IRC-unsafe classifiers. As such, JFNs complement the existing family of permutation-invariant networks in particle physics.
As for PFNs and EFNs, we will utilize machine learning algorithms for JFNs based on a permutation-invariant deep set architecture [11; 44; 45; 46].

Footnote 1: In analogy to Particle Flow Networks (PFNs) [11].

In this paper, we explicitly study the classification tasks of quark vs. gluon jets and of QCD jets vs. jets from boosted hadronic decays of \(Z\) bosons. The particular IRC (un)safety of the likelihood ratios for these tasks has been studied previously. It is expected that the likelihood ratio for quark vs. gluon jet discrimination is IRC safe [27], which has been validated in previous machine learning studies [25; 26; 47]. The argument for IRC safety is that the \(N\)-body phase space can be spanned by additive, IRC safe observables [48], and so the probability distributions for quarks and gluons will in general take the form of a Sudakov factor near the infrared regions of phase space. The rate of suppression of emissions is controlled by the corresponding color Casimirs, and, because gluons carry more color than quarks, the likelihood ratio itself vanishes at all infrared boundaries. Therefore, all fixed-order divergences are mapped to the same value of the likelihood ratio, namely 0, and so the classifier is IRC safe. This explains why EFPs, which are an IRC-safe classifier, can match the performance of the IRC-unsafe PFNs. By contrast, the likelihood ratio for QCD jets vs. \(Z\) jets is expected to be only Sudakov safe, as optimal observables for general one- vs. two-prong discrimination take the form of ratios of IRC safe observables [49; 50; 36; 51]. Ratios of IRC safe observables are in general themselves not IRC safe [52].

The main result of our work will be to show that the JFNs based on IRC-safe subjets achieve the same classification performance as PFNs for a finite, non-zero subjet radius for both discrimination tasks considered here. The JFNs represent the first example of a classifier based on IRC-safe inputs that achieves performance equivalent to its IRC-unsafe counterpart on several classification tasks. In addition, the machine learning architecture is equivalent to that of the PFNs, which allows for one-to-one comparisons. The exact value of the subjet radius where the PFN performance is matched depends on the classification task at hand. Therefore, different from classifiers based on complete IRC-safe sets of observables, JFNs constitute a “gapless” classifier, indicating that the very soft aspects of jets are in fact not relevant for typical classification tasks at collider experiments. This answers in part the question of which features are relevant for the performance of classifiers in high-energy physics. Throughout this work, PFNs are taken as a reference, but other permutation-invariant classifiers such as GNNs, transformers and point clouds could equally well be trained on particles or reclustered subjets.

In addition to shedding light on the role of IRC-safe information, JFNs allow for new insights into the physics of jet tagging and may lead to various future applications at high-energy collider experiments. By studying the performance of the JFNs as a function of the subjet radius and the jet transverse momentum, we are able to identify the relevant physical scales of different classification tasks. For example, for QCD vs. \(Z\)-jet tagging, we find that the subjet scale \(p_{T}r\) is sensitive to the opening angle between the boosted hadronic decay products of the \(Z\)-boson.
Second, we explore the generalization capability of JFNs to unseen data, which is crucial when deploying a classifier trained on simulations to experimental data. Due to the clustering of collinear and soft emissions into subjets, the resulting JFNs are relatively insensitive to the detailed modeling of the infrared (IR) physics that is often poorly understood. This raises the possibility of using JFNs to trade performance for generalizability by adjusting the number of reconstructed subjets. Lastly, we expect that subjets can be measured well in heavy-ion collisions despite the large fluctuating background. See Ref. [53] for recent measurements of the energy spectrum of inclusive and leading subjets by the ALICE Collaboration. See also Ref. [54].

The remainder of this paper is organized as follows. In section 2, we introduce the subjet basis and discuss differences between inclusive and exclusive subjet reconstruction algorithms. In section 3, we introduce the permutation-invariant machine learning algorithms that take the kinematic information of the reconstructed subjets as input, and in section 4, we briefly discuss the data sets used for the different classification tasks in this work. In section 5, we present numerical results for the classification performance of JFNs for quark vs. gluon and QCD vs. \(Z\) jets. In particular, we show that JFNs match the PFN performance for a finite subjet radius in both test cases. Based on these results, we describe in section 6 how the machine learning algorithm is sensitive to different physical scales, which it can effectively learn. In section 7, we investigate the tradeoff between performance and generalizability of the JFNs. In section 8, we draw conclusions and present an outlook.

## 2 The subjet basis

In this section, we describe the reconstruction of subjets that will serve as the input to the machine learning classifier. The initial jet is identified using the anti-\(k_{T}\) algorithm [55] with jet radius parameter \(R\). In order to utilize the substructure of jets, we then recluster the jet constituents into subjets. We consider two approaches for the subjet reconstruction: inclusive anti-\(\mathrm{k}_{T}\) subjets and exclusive \(\mathrm{k}_{T}\) subjets [56; 57]. In both cases soft and collinear emissions are first clustered into subjets, making the input to the classifier IRC safe.

Figure 1: Illustration of a QCD jet with \(p_{T}=100\) GeV and radius parameter \(R=0.4\) reclustered into subjets for subjet radii \(r=0.1\) (left), \(r=0.2\) (middle), and \(r=0.3\) (right). We use the inclusive anti-\(\mathrm{k}_{T}\) algorithm to identify the initial jet and the subjets. Particles are represented by small filled circles with radii proportional to the particle transverse momentum in the \(\Delta y\) vs. \(\Delta\varphi\) plane, where \(\Delta\varphi=\varphi^{\mathrm{particle|subjet}}-\varphi^{\mathrm{jet}}\) is the azimuthal angle with respect to the jet axis and \(\Delta y=y^{\mathrm{particle|subjet}}-y^{\mathrm{jet}}\) is the rapidity distance to the jet axis. Subjets are shown with larger colored areas, where red marks the leading subjet, green marks the second leading subjet, blue marks the third leading subjet, and shades of gray represent subjets with lower longitudinal momentum fraction \(z=p_{T}^{\mathrm{subjet}}/p_{T}\), with intensity proportional to \(z\).
In this sense, subjets serve as a useful tool for throttling or controlling the input data to the machine in a way that is theoretically interpretable in perturbative QCD.

First, we consider inclusive subjets reconstructed with the anti-k\({}_{T}\) algorithm and a fixed subjet radius \(r<R\). This approach fixes the maximally allowed size of the reconstructed subjets, but the number of subjets varies for each jet. We illustrate the distribution of subjets in the \(\eta\)-\(\phi\) plane for three different subjet radii in Fig. 1. As \(r\) is increased, the central subjet contains an increasingly large fraction of the particles. Second, we consider subjets reconstructed with the exclusive \(k_{T}\) algorithm. Particles are clustered with the \(k_{T}\) algorithm until a fixed number of subjets \(N\) is obtained. Different than in the case of inclusive subjets, the number of identified subjets is fixed, but their size varies jet-by-jet. The \(N\) subjets span the full information content of the \(N\) most resolved emissions inside the jet, analogous to the \(N\)-subjettiness basis developed in Refs. [10; 34; 35]. An alternative approach to the exclusive \(k_{T}\) algorithm is to identify subjets with the XCone algorithm [58]. We leave the exploration of this algorithm for future work. By taking the small-\(r\) (inclusive subjets) or large-\(N\) (exclusive subjets) limit, we can study the transition to the nonperturbative regime where, eventually, every subjet contains only a single hadron.

To illustrate the qualitative differences between the two reconstruction methods discussed above, we show as an example the longitudinal momentum distributions of subjets, \(z=p_{T}^{\rm subjet}/p_{T}\), in Fig. 2, separately for quark and gluon jets. Here \(p_{T}\) denotes the initial jet transverse momentum and \(p_{T}^{\rm subjet}\) the longitudinal subjet momentum using either the inclusive or the exclusive reconstruction method. As an example, we choose \(N=30\) for the exclusive reconstruction of subjets and \(r=0.02\) for inclusive subjets, which yields a comparable average number of subjets. We observe that the two methods lead to qualitatively different spectra. The inclusive subjet spectrum exhibits a peak (quarks) or plateau (gluons) at intermediate to large values of \(z\). In contrast, the spectrum for exclusive subjets only peaks at small values of \(z\) and falls off steeply for \(z\to 1\). This is due to the fact that for exclusive clustering the \(k_{T}\) algorithm is used, where soft hadrons are clustered first. Only at the end are hard emissions combined, making it unlikely to find a subjet with \(z\to 1\) for a fixed value of \(N\). We note that for inclusive subjets, the \(z\)-distributions are qualitatively the same for both the anti-\(k_{T}\) and \(k_{T}\) algorithms. The longitudinal momentum spectrum for inclusive subjets was calculated within perturbative QCD up to next-to-leading logarithmic (NLL) accuracy. See Refs. [60; 61; 62; 63]. This close connection to first-principles calculations may allow for an increased understanding of machine learning algorithms in QCD.

Figure 2: The longitudinal momentum distribution \(z=p_{T}^{\rm subjet}/p_{T}\) of subjets originating from either a quark (blue) or a gluon (orange) jet simulated with Pythia [59]. We show the distributions for inclusive subjet clustering with \(r=0.02\) (left) and for exclusive clustering with a fixed number of \(N=30\) subjets (right), which yields a comparable average number of subjets.
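Both reconstruction modes are straightforward to set up with standard jet-finding tools. The sketch below uses the classic FastJet Python bindings; the function name and argument conventions are ours, and the exclusive branch assumes the jet contains at least \(N\) constituents (otherwise `exclusive_jets` cannot return \(N\) subjets).

```python
import fastjet as fj  # classic FastJet Python bindings

def jfn_inputs(constituents, jet_pt, r=0.1, n_exclusive=None):
    """Recluster the constituents of one jet into subjets and return the
    (z_i, eta_i, phi_i) tuples used as input to the classifier.

    constituents: list of fj.PseudoJet for the particles inside the jet
    jet_pt:       transverse momentum of the full jet
    r:            radius for inclusive anti-kT subjets
    n_exclusive:  if set, use exclusive kT clustering into exactly N subjets
    """
    if n_exclusive is None:
        # inclusive anti-kT subjets with fixed radius r < R
        cs = fj.ClusterSequence(constituents, fj.JetDefinition(fj.antikt_algorithm, r))
        subjets = fj.sorted_by_pt(cs.inclusive_jets(0.0))
    else:
        # exclusive kT: cluster until exactly N subjets remain; the radius is set
        # larger than the jet radius so that it does not restrict the clustering
        cs = fj.ClusterSequence(constituents, fj.JetDefinition(fj.kt_algorithm, 1.5))
        subjets = fj.sorted_by_pt(cs.exclusive_jets(int(n_exclusive)))
    # extract the kinematics before the ClusterSequence goes out of scope
    return [(sj.pt() / jet_pt, sj.eta(), sj.phi()) for sj in subjets]
```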
From the identified subjets, the kinematic information \((z_{i},\eta_{i},\phi_{i})\) of each subjet is used as input to the classifiers discussed below. In the limit \(r\to 0\) (inclusive subjets) or \(N\to\infty\) (exclusive subjets), the subjet basis becomes equivalent to the set of particle four-vectors of the jet, and the classifier can make use of the full information content of the jet. The subjet basis therefore provides a means to limit the information supplied to the classifier, by using \(r>0\) or \(N<\infty\).

## 3 Jet Flow Networks (JFNs): Deep sets of subjets

In this section, we describe the permutation-invariant neural networks that use the kinematic information of subjets as input to perform binary classification tasks. As introduced above, we refer to the machine learning architecture together with the pre-processing step of clustering particles into subjets as JFNs.

The reconstructed subjets discussed in the previous section do not have an inherent ordering. Therefore, permutation-invariant neural networks are a natural choice for classification tasks that take as input the kinematic information of subjets. In Refs. [44; 45; 46], deep sets were introduced as permutation-invariant neural networks. In the context of particle physics, deep sets were first discussed in Ref. [11] as Particle Flow Networks (PFNs), which take as input the information of individual particles. A permutation-invariant classifier \(f\), which takes as input the subjet four-momenta \(p_{i}\), satisfies \(f(p_{1},\ldots,p_{N})=f(p_{\pi(1)},\ldots,p_{\pi(N)})\). Here \(\pi\) denotes the permutation operator. Following Ref. [44], we can write the classifier \(f\) as

\[f(p_{1},\ldots,p_{N})=F\bigg{(}\sum_{i=1}^{N}\Phi(p_{i})\bigg{)}\,, \tag{3.1}\]

where \(F,\;\Phi\) are neural networks and, as an intermediate step, we sum over all \(N\) reconstructed subjets. The first neural network \(\Phi:\mathbb{R}^{4}\to\mathbb{R}^{l}\) takes as input the individual subjet four-momenta and maps them to an \(l\)-dimensional latent space. For massless subjets, we can write the individual four-vectors in terms of \((z_{i},\eta_{i},\phi_{i})\). Here \(z_{i}\) is the subjet's longitudinal momentum fraction, see Fig. 2, and \((\eta_{i},\phi_{i})\) denote its coordinates in the rapidity-azimuth plane. We note that further information can be included in the per-subjet mapping, such as the jet mass or the jet charge [64; 65], analogous to e.g. particle identification (PID) for PFNs. We leave quantitative studies of the impact of these additional features for future work. The summation in Eq. (3.1) ensures that the classifier \(f\) is invariant under permutations of the input variables. The second neural network \(F:\mathbb{R}^{l}\to\mathbb{R}\) maps from the latent space, where the summation operation is performed, to the final classification score. Note that the classifier architecture in Eq. (3.1) can accommodate both a fixed number \(N\) of subjets (exclusive subjets) and input of variable length (inclusive subjets). We refer to the deep set classifier based on subjets in Eq. (3.1) as JFNs. The JFNs are a family of classifiers due to the dependence on the continuous parameter \(r\) in the case of inclusive clustering, or on \(N\) in the case of exclusive clustering, in which case the clustering is performed until \(N\) subjets remain. Since the JFN takes subjet information as input, the resulting classifier is generally Sudakov safe [40].
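Eq. (3.1) translates almost line-by-line into code. The following is a minimal tf.keras sketch of such a deep set, written by us for illustration rather than taken from the EnergyFlow implementation used for the results below; the zero-padding convention for variable-length subjet lists and the layer sizes (which follow the choices quoted in section 5) are assumptions of the sketch.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_jfn(latent_dim=256, n_features=3):
    # zero-padded input: (batch, max_subjets, 3) arrays of (z_i, eta_i, phi_i)
    x_in = keras.Input(shape=(None, n_features))

    # per-subjet network Phi, shared across subjets (Dense acts on the last axis)
    h = layers.Dense(100, activation="relu")(x_in)
    h = layers.Dense(100, activation="relu")(h)
    h = layers.Dense(latent_dim)(h)

    # zero out padded (all-zero) subjets, then sum over i: the permutation-invariant step
    def masked_sum(args):
        x, phi = args
        mask = tf.cast(tf.reduce_any(x != 0.0, axis=-1, keepdims=True), phi.dtype)
        return tf.reduce_sum(phi * mask, axis=1)

    latent = layers.Lambda(masked_sum)([x_in, h])

    # event-level network F acting on the summed latent representation
    f = latent
    for _ in range(3):
        f = layers.Dense(100, activation="relu")(f)
    out = layers.Dense(2, activation="softmax")(f)

    model = keras.Model(x_in, out)
    model.compile(optimizer=keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy",
                  metrics=[keras.metrics.AUC()])
    return model
```

The same architecture applies unchanged to PFNs: only the preprocessing differs, with the particle kinematics replacing the subjet kinematics.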
We summarize the different aspects of permutation-invariant network architectures based on deep sets in Table 1. Since the JFNs are Sudakov safe, they constitute an intermediate point between the IRC-unsafe PFNs and the IRC-safe EFNs. In the limit \(r\to 0\) (inclusive subjets) or the large-\(N\) limit (exclusive subjets), we recover the PFN classifier.

## 4 Data sets

In this work, we consider JFNs for two exemplary binary classification tasks in high-energy physics. First, we consider quark vs. gluon jet classification and, second, QCD vs. \(Z\) jet classification. For the quark vs. gluon case, we make use of the data set in Ref. [66], which consists of 2M jets with transverse momentum \(p_{T}=[500,550]\) GeV, rapidity \(|\eta|<1.7\), jet radius parameter \(R=0.4\), and center-of-mass energy \(\sqrt{s}=14\) TeV. We will make use of both the data set generated with Pythia [59] and the one generated with Herwig [67]. In order to explore the dependence on the jet transverse momentum, we also generate two additional data sets consisting of 500k jets each with transverse momentum \(p_{T}=[300,350]\) GeV and \([1000,1050]\) GeV, respectively. The underlying processes are \(q\bar{q}\to Z(\to\nu\bar{\nu})+g\) and \(q\bar{q}\to Z(\to\nu\bar{\nu})+(uds)\), analogous to Ref. [66]. For the QCD vs. \(Z\)-jet case, we generate 500k jets for three different bins of jet transverse momentum, [300, 350] GeV, [500, 550] GeV and [1000, 1100] GeV, with a jet mass \(m_{j}=[45,135]\) GeV. The radius parameter is \(R=0.8\), the rapidity cut is \(|\eta|<1.7\), and the samples are generated using Pythia at \(\sqrt{s}=14\) TeV. Jets arising from \(Z\) bosons are identified by requiring that the leading \(Z\) boson is in the catchment area of the jet, as extracted from the kinematics of the events at the particle level before hadronization, with a \(Z\)-jet distance from the jet axis of less than \(R/2\). A similar tagging procedure is performed to differentiate between quark and gluon jets in the QCD sample. The tag is based on the leading parton within the catchment area of the jet. However, to strengthen the parton-jet association, we use parton-level kinematics injected into the hadron-level event using so-called ghost particles (\(p_{T}=10^{-5}\) GeV) that do not affect the jet reconstruction but allow for efficient tagging after the jet finding step.

\begin{table} \begin{tabular}{||c|c|c|c||} \hline & PFN [11] & JFN & EFN [11] \\ \hline \hline Input & particle 4-momenta & subjet 4-momenta & particle 4-momenta \\ \hline Classifier & IRC unsafe & Sudakov safe & IRC safe \\ \hline \end{tabular} \end{table} Table 1: Overview of different classifiers based on permutation-invariant neural networks.

The substructure of QCD jets is generally single-pronged, whereas the decay products of a \(Z\) cause the corresponding jets to have two prongs. The ratio of \(N\)-subjettiness observables [36; 37; 68; 69] is sensitive to the number of prongs inside a jet. In order to define the \(N\)-subjettiness, a given number \(N\) of axes are identified inside the jet using the exclusive \(k_{T}\) algorithm. The \(N\)-subjettiness variables \(\tau_{N}^{(\beta)}\) measure the radiation along these axes and are defined as

\[\tau_{N}^{(\beta)}=\frac{1}{p_{T}}\sum_{i\in\mathrm{jet}}p_{Ti}\min\left\{R_{1i}^{\beta},R_{2i}^{\beta},\ldots,R_{Ni}^{\beta}\right\}\,. \tag{4.1}\]

Here the \(p_{Ti}\) of each particle \(i\) is weighted by its distance \(R_{ji}\) to the closest axis \(j\), raised to the power \(\beta>0\), which is a tunable parameter.
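For concreteness, Eq. (4.1) can be transcribed directly into a few lines of numpy. The sketch below is ours: it assumes the \(N\) axes have already been determined (e.g. with the exclusive \(k_{T}\) algorithm) and that particles and axes are given as \((p_{T},\eta,\phi)\) arrays, with \(\Delta\phi\) wrapped into \((-\pi,\pi]\).

```python
import numpy as np

def tau_N(particles, axes, jet_pt, beta=1.0):
    """N-subjettiness of Eq. (4.1).

    particles: array of shape (M, 3) with columns (pT, eta, phi)
    axes:      array of shape (N, 3) with the pre-determined subjet axes
    """
    pt = particles[:, 0]
    deta = particles[:, None, 1] - axes[None, :, 1]
    dphi = np.mod(particles[:, None, 2] - axes[None, :, 2] + np.pi, 2 * np.pi) - np.pi
    R = np.sqrt(deta**2 + dphi**2)            # distances R_{ji}, shape (M, N)
    return np.sum(pt * np.min(R, axis=1)**beta) / jet_pt
```

The ratio \(\tau_{2}^{(1)}/\tau_{1}^{(1)}\) shown in Fig. 3 below is then obtained from two such evaluations, with \(N=1\) and \(N=2\) axes.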
For jets with a two-prong structure, the variable \(\tau_{2}\) will peak at smaller values compared to \(\tau_{1}\), whereas for single-prong jets the variable \(\tau_{2}\) takes values similar to \(\tau_{1}\) (by construction, \(\tau_{N+1}\leq\tau_{N}\)). In the left panel of Fig. 3, we show the result for the ratio \(\tau_{2}^{(1)}/\tau_{1}^{(1)}\), which shows the expected separation of the two jet samples, and in the right panel the jet mass \(m_{j}\) distribution for QCD and \(Z\) jets. For all classification tasks, the training/validation/test split is 80%/10%/10%.

## 5 JFN performance: gapless jet classification

In this section, we explore the performance of the JFNs and compare the results to PFNs, which are recovered as the limit of the JFNs where for \(r\to 0\) every subjet contains only a single hadron. We consider two exemplary binary classification tasks in high-energy physics: quark vs. gluon and QCD vs. boosted \(Z\) boson jet identification. In order to implement the permutation-invariant neural networks, we parametrize the functions \(\Phi\) and \(F\) in Eq. (3.1) in terms of DNNs, using the EnergyFlow package [11] with Keras [70]/TensorFlow [71]. For \(\Phi\) we use two hidden layers with 100 nodes each and a latent space dimension of \(l=256\). For \(F\) we include three layers with 100 nodes each.

Figure 3: \(N\)-subjettiness ratio \(\tau_{2}^{(1)}/\tau_{1}^{(1)}\) (left) and the jet mass distribution (right) for QCD and \(Z\) jets with \(p_{T}=[500,550]\) GeV. The \(N\)-subjettiness axes were identified using the one-pass \(k_{T}\) clustering algorithm.
As expected, we find that for the smallest subjet radii \(r\) the performance of the PFN is recovered. Strikingly, however, the performance of the JFN does not significantly diminish as \(r\) is increased for values of the subjet radius \(r\lesssim 0.01\). At this critical \(r\) value, we have on average \(n_{\rm subjets}/n_{\rm hadrons}\approx 0.75\). This observation is corroborated by the ROC curve in the lower panel, which shows that there is no performance loss in the JFN (\(r=0.01\)) as compared to the PFN. This demonstrates that there is little-to-no information encoded in the very soft/collinear emissions relevant for discriminating \(q\) vs. \(g\) jets, and suggests that IRC safe inputs are sufficient for the purpose of \(q\) vs. \(g\) classification. In section 6, we will further discuss the physical interpretation of this critical \(r\) value. Fig. 4 (right) shows the analogous JFN results for QCD vs. \(Z\) jet classification using inclusive subjet clustering and 500k jets for training, validation and testing. We observe again that the JFNs smoothly converge to the result of the PFN. Different than for quark vs. gluon jet tagging, we can now choose a significantly larger subjet radius \(r\lesssim 0.1\) without compromising the performance of the classifier. This is related to the fact that in this case, the boosted \(Z\)-boson decay products generally lead to a two-pronged jet substructure, whereas QCD (quark and gluon) jets exhibit a single-pronged jet substructure (see Fig. 3 as well as Refs. [1; 3] for different observables and perturbative calculations that characterize the radiation patterns of QCD and boosted \(Z\) jets). In general, machine-learned classifiers can make use of more information than the one- vs. two-pronged structure inside these jets; for \(r\sim 0.1\), a significant fraction of the hadrons inside the jets are clustered into subjets, \(n_{\rm subjets}/n_{\rm hadrons}\approx 0.4\). However, due to the observed saturation up to \(r\sim 0.1\), we conclude that the information contained in soft and collinear emissions is significantly less relevant for this classification task compared to quark vs. gluon jet tagging. This is due to the physical scales that are relevant for the different jet classification tasks, which we will explore in more detail in section 6. For completeness, we list the numerical values for the AUC including uncertainties in table 2. For both quark vs. gluon jet classification and QCD vs. \(Z\) jet classification, we have shown that for a range of subjet radii \(r\), the JFN exhibits no significant difference in performance compared to the PFN. That is, the JFN classifier here is "gapless" in the sense that we smoothly approximate the PFN performance (for finite values of \(r\)). The clustering of soft and collinear emissions into subjets does not affect the performance as long as \(r\) is sufficiently small. This is in contrast to previous studies based on observables such as \(N\)-subjettiness variables, which exhibit a small but persistent performance gap to PFNs [7; 10; 11]. The JFN provides the first example of a classifier with IRC-safe inputs that achieves equivalent performance to the IRC-unsafe PFNs for several classification tasks. Our results are consistent with the intuitive expectation that very soft particles are essentially uncorrelated with the hard process and therefore do not provide relevant information about the performance of the jet classification task. Figure 4: Top panel: AUC for quark vs. 
gluon jet (left) and QCD vs. \(Z\) jet tagging (right) using JFNs with different values of the (inclusive) subjet radius \(r\). The PFN classifier is shown for reference at the leftmost value of \(r\). Bottom panel: ROC curves for quark vs. gluon (left) and QCD vs. \(Z\) jet tagging using JFNs and the PFN with different values of the (inclusive) subjet radius \(r\) for the same datasets as the upper panel.

The main question of our paper has thus been answered by these observations. At least for the two classification tasks considered here, we have found that IRC-safe information is sufficient to close the gap to IRC-unsafe classifiers. This was achieved by using the same machine learning architecture and input type (momentum and position information) in both cases, and by including subjet reclustering as a preprocessing step in the IRC-safe case. We note that our conclusions come with the following caveat: while we identify relevant physical scales with the performance of the classifiers, it is possible that future advances in machine learning will lead to more powerful algorithms that require us to reduce the subjet radius \(r\) further to match the performance of IRC-unsafe classifiers.

## 6 Learning physical scales

As discussed in the previous section, the performance of the JFNs based on IRC-safe input matches that of the IRC-unsafe PFNs for finite values of the subjet radius \(r\), where a significant fraction of hadrons is clustered into subjets. In this section, we quantify in more detail the onset of the drop in performance when the subjet radius crosses certain physical scales. Here we focus only on inclusive instead of exclusive subjet reconstruction, since we are primarily interested in the physical scale associated with a fixed value of the subjet radius \(r\). In order to identify the physical scale associated with the classification tasks and study its scaling behavior, we analyze the AUC for different bins of the jet transverse momentum \(p_{T}\).

In the upper left panel of Fig. 5, we show the AUC for \(q\) vs. \(g\) discrimination for three different bins of the jet transverse momentum. For comparison, analogous to Fig. 4, we show the PFN result as the left-most point (\(r=0.001\)). For all three classification tasks we used 500k jets for training, validation and testing. First, we notice that at sufficiently large \(r\) all three AUC curves merge and the classification performance is (approximately) independent of the jet transverse momentum. This can be traced back to the approximate scale invariance of the QCD parton shower cascade.

\begin{table} \begin{tabular}{|c|c|} \hline **Model** & **AUC q vs. g** \\ \hline PFN & 0.8912 \(\pm\) 0.0005 \\ \hline JFN (\(r=0.005\)) & 0.8911 \(\pm\) 0.0002 \\ \hline JFN (\(r=0.01\)) & 0.8904 \(\pm\) 0.0009 \\ \hline JFN (\(r=0.015\)) & 0.8865 \(\pm\) 0.0011 \\ \hline JFN (\(r=0.02\)) & 0.8812 \(\pm\) 0.0008 \\ \hline JFN (\(r=0.05\)) & 0.8550 \(\pm\) 0.0004 \\ \hline JFN (\(r=0.1\)) & 0.8207 \(\pm\) 0.0009 \\ \hline JFN (\(r=0.2\)) & 0.7629 \(\pm\) 0.0013 \\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline **Model** & **AUC QCD vs. \(Z\)** \\ \hline PFN & 0.9235 \(\pm\) 0.0015 \\ \hline JFN (\(r=0.01\)) & 0.9237 \(\pm\) 0.0018 \\ \hline JFN (\(r=0.05\)) & 0.9236 \(\pm\) 0.0010 \\ \hline JFN (\(r=0.1\)) & 0.9227 \(\pm\) 0.0017 \\ \hline JFN (\(r=0.15\)) & 0.9189 \(\pm\) 0.0009 \\ \hline JFN (\(r=0.2\)) & 0.9150 \(\pm\) 0.0012 \\ \hline JFN (\(r=0.3\)) & 0.7755 \(\pm\) 0.0025 \\ \hline JFN (\(r=0.4\)) & 0.7115 \(\pm\) 0.0015 \\ \hline \end{tabular} \end{table} Table 2: Numerical values for the AUC for both classification tasks considered in this work.
Second, we observe that in all three cases the AUC reaches a plateau at finite values of the subjet radius as \(r\) is decreased. In the lower left panel of Fig. 5, we show the results in the transition region in more detail. The onset of the plateau shifts to the left as the jet \(p_{T}\) is increased. For higher jet \(p_{T}\), the jet constituents are more collimated, leading to a smaller critical value of \(r\) where the JFNs match the PFN performance. We can identify the following approximate scale where the AUC reaches a plateau and agrees with the PFN results:

\[p_{T}\cdot r\sim 5\ \text{GeV}\,. \tag{6.1}\]

This corresponds to critical subjet radii of \(r\approx 0.016\), \(0.01\), and \(0.005\) for the three \(p_{T}\) bins considered here. Since there is no additional physical scale in the quark vs. gluon jet classification task besides the jet \(p_{T}\), the agreement with the PFN result is achieved for relatively low energy scales. However, we would like to stress that the identified scale is still in the perturbative regime. This suggests that, because our classifier uses only particle momentum information, nonperturbative symmetries such as isospin forbid additional useful information at the hadron level for discrimination. This would be broken, and the discrimination could improve, if information sensitive to flavor were measured, as has been established in some recent studies of the jet charge [42; 43].

Since the (IRC-unsafe) particle multiplicity is known to be a powerful discriminant for quark vs. gluon jet tagging [75; 76; 27], we study the relation of our results to the average subjet multiplicities \(n_{q,g}\) as a function of the subjet radius \(r\), shown in the upper right panel of Fig. 5. As expected, the subjet multiplicities for both quark and gluon jets increase as the subjet radius \(r\) is decreased. In the limit \(r\to 0\), the subjet multiplicity smoothly asymptotes to the particle multiplicity. The expected value for the ratio of the particle multiplicities at leading order is \(n_{g}/n_{q}=C_{A}/C_{F}=9/4\) [77; 78]. In agreement with the discussion in the previous section, we notice that the subjet radius where the quark vs. gluon jet AUC reaches a plateau is larger, by an order of magnitude, than the \(r\) value where the subjet multiplicity reaches the particle multiplicity. This confirms that matching the PFN performance with JFNs is a non-trivial result. As shown in the lower right panel of Fig. 5, we observe that the ratio \(n_{g}/n_{q}\) peaks at intermediate values of \(r\), which is in the region of the better modeled perturbative physics [76; 79; 80; 81]. Interestingly, the location of the peaks is approximately the same as where the AUC for quark vs. gluon jet tagging reaches the plateau and agrees with the PFN result.

Figure 5: Left: AUC for quark vs. gluon jet classification for three different jet \(p_{T}\) intervals as a function of the subjet radius \(r\). Upper right: Average subjet multiplicity \(n_{q,g}\) for quark (solid) and gluon (dashed) jets. Lower right: Ratio of the average subjet multiplicities \(n_{g}/n_{q}\) for the three jet \(p_{T}\) intervals.
This interesting correlation indicates that we can increase the subjet radius \(r\) without affecting the classification performance until the subjet multiplicity ratio \(n_{g}/n_{q}\) starts to decrease.

Next, we consider QCD vs. \(Z\) jet classification. The AUC for three jet transverse momentum intervals is shown in Fig. 6 as a function of the subjet radius \(r\). In all three cases we used 500k jets for training, validation and testing. As the jet \(p_{T}\) is increased, the value of the subjet radius \(r\sim 0.1-0.2\) at which JFNs match the PFN performance is shifted to the left. This observation is generally consistent with the quark vs. gluon jet tagging discussed above. We note that for the AUC curve with \(p_{T}\in[300,350]\) GeV, the choice of the jet radius \(R\) might start to play a role. The value of \(r\) where the performance reaches the plateau is roughly a factor of 10 higher compared to quark vs. gluon jet tagging. As already hinted at above, this is due to the presence of different physical scales. QCD jets do not have any additional intrinsic scales except for the hadronization scale. Instead, jets that contain the decay products of the boosted \(Z\) boson are sensitive to the \(Z\)-boson mass \(M_{Z}\).

In order to gain further insights into the underlying physics, we study the distribution of the opening angle \(\theta_{12}\) of the two leading subjets. This variable is closely related to the two-pronged structure of \(Z\) jets and serves as a useful discriminant, since at leading order the \(Z\) boson decays into a quark and an anti-quark, which correspond to the two leading subjets at this order. The boosted \(Z\) decay products have an opening angle \(\theta_{Z}\), which is determined by \(M_{Z}\) and the jet transverse momentum \(p_{T}\) as

\[\theta_{Z}\sim\frac{2M_{Z}}{p_{T}}\,. \tag{6.2}\]

For example, for \(p_{T}\approx 500\) GeV, Eq. (6.2) gives \(\theta_{Z}\approx 2\times 91\ \text{GeV}/500\ \text{GeV}\approx 0.36\), so the decay products are resolved into separate subjets for \(r\lesssim\theta_{Z}/2\approx 0.18\), consistent with the plateau extending up to \(r\sim 0.1-0.2\) in Fig. 6. For higher \(p_{T}\), the decay products are more boosted and \(\theta_{Z}\) is smaller. See Fig. 7 for an illustration of the boosted \(Z\)-boson decay products clustered into subjets. If the subjet radius parameter is sufficiently small, \(r<\theta_{Z}/2\), the \(Z\)-boson decay products are clustered into separate subjets. Instead, for \(r>\theta_{Z}\), they are merged into a single subjet. In the intermediate region, \(r<\theta_{Z}<2r\), they are identified as two separate subjets but the subjet catchment areas overlap.

Figure 6: Left: AUC for QCD vs. \(Z\) jets for three different jet \(p_{T}\) intervals as a function of the subjet radius \(r\).

In Fig. 8, we show the distributions of the opening angle \(\theta_{12}\) between the first two leading subjets for both QCD and \(Z\) jets for different values of the subjet radius \(r\). Here, \(\theta_{12}\) corresponds to the geometric distance in the \(\eta\)-\(\phi\) plane, i.e. without rescaling by the jet radius \(R\). The left column of Fig. 8 shows the opening angles \(\theta_{12}\) between the two leading hadrons, which corresponds to the subjet radius \(r=0\). The middle column shows the \(\theta_{12}\) distributions for \(r\) values in the plateau region where the AUC of the JFNs matches the PFN result. The right column corresponds to higher \(r\) values where the AUC has dropped significantly, see Fig. 6 above. The top and bottom rows correspond to two different jet \(p_{T}\) intervals, as indicated in the figure.
We observe that for both QCD and \(Z\) jets, the distributions are bounded from below by the chosen subjet radius, \(\theta_{12}>r\), and they vanish when the angle between the leading subjets reaches the jet radius, \(\theta_{12}\lesssim R=0.8\). Due to collinear QCD emissions, the distribution of the angle between the two leading hadrons peaks at \(\theta_{12}\sim 0\). As the subjet radius is increased, it peaks close to the lower bound \(\theta_{12}\sim r\). Eventually, the \(\theta_{12}\) distribution becomes broader for large values of \(r\). Instead, both at hadron level and for \(r\) values in the plateau region of the AUC, the \(\theta_{12}\) distribution of \(Z\)-jets has a two-peak structure. The left peak is due to QCD emissions and it occurs at the same \(\theta_{12}\) value as the single peak of QCD jets. The second peak occurs around the opening angle of the \(Z\) decay products, \(\theta_{12}\sim\theta_{Z}\). The width of the peaks scales as \(\sim 1/p_{T}\). When \(r\) is chosen in the plateau region of the AUC in Fig. 6, the JFN performance agrees with the PFN result. In this region, the two-peak structure of the \(Z\)-jet \(\theta_{12}\) distribution can be clearly identified. The two-prong structure of \(Z\) jets is the most prominent feature that distinguishes the two jet samples, and it is clearly resolved as long as \(r\) is sufficiently small. Figure 7: Reclustering of the \(Z\)-boson decay products into subjets with different radii. In this region, we found that the JFN performance is the same as the IRC-unsafe PFN result. While the location of the \(\theta_{12}\sim\theta_{Z}\) peak is fixed, the peak due to QCD emissions moves to larger \(\theta_{12}\) as \(r\) is increased. Eventually, the two peaks start to merge. This is illustrated in the right-most column of Fig. 8, which shows the \(\theta_{12}\) distributions for \(r\) values where the JFN performance is significantly below its maximal value. The two peaks of the \(Z\) jets have merged into one peak and the distribution is very similar to that of QCD jets. In this case, the \(Z\)-decay products cannot be clearly resolved and the performance of the classifier deteriorates. At this scale, the classifier does not have access to the UV physics anymore, and as such the performance for the \(p_{T}=[1000,1100]\) GeV jets matches the performance for the \(p_{T}=[500,550]\) GeV jets. By comparing the upper and lower row of Fig. 8, we observe that the location of the peaks is shifted to lower values for higher jet \(p_{T}\). In addition, the width of the peaks is narrower, \(\sim 1/p_{T}\). This agrees with the observation that for higher jet \(p_{T}\), the end of the AUC plateau in Fig. 6 is reached for smaller \(r\) values. Figure 8: Distributions of the opening angle \(\theta_{12}\) between the two leading subjets for both QCD and \(Z\) jets. We show the results for the two leading hadrons (\(r=0\), left column) and two representative \(r\) values (middle and right column). The upper and lower row correspond to two intervals of the jet transverse momentum \(p_{T}\). Another way of illustrating the importance of resolving the \(Z\)-boson decay products is by training deep sets using only the information of the first few leading subjets. In Fig. 9, we show the AUC for QCD vs. \(Z\) jets as a function of the subjet radius \(r\) for three classifiers. We compare the JFNs to deep sets trained on the kinematic information of only the first two or three leading subjets. As an example, we use the jet transverse momentum interval of \(p_{T}=[500,550]\) GeV. We observe that for large and intermediate values of the subjet radius \(r\), the JFN performance is close to that of the deep sets trained on only two or three leading subjets. For small values of \(r\), the leading two or three subjets do not contain enough information to match the JFN result. In particular, using the information of the three leading subjets closely approximates the JFN performance down to a subjet radius of \(r\sim 0.2\). The relevance of the third leading subjet corroborates the results of Ref. [34], where the leading emission off the color dipole was identified as an important component for QCD vs. \(Z\) jet classification. Figure 9: AUC of the JFNs for QCD vs. \(Z\) jets trained on the full information (inclusive subjets) compared to deep sets trained only on the two or three leading subjets. Analogous classification tasks where physical scales can likely be identified are light QCD vs. \(c\) or \(b\)-jets [82; 83; 84], QCD vs. Higgs [85] or QCD vs. top quark jets [14]. We leave the exploration of these topics for future work. ## 7 Performance vs. generalizability Machine learning-based classifiers are often deployed in experimental analyses to tag jet topologies. A typical method is to train the classifier using fully supervised learning on precise theoretical simulations and apply it to experimental data [86; 84; 87]. However, this approach introduces model dependence as simulations do not perfectly match the actual data. In this section, we will explore some of the systematic uncertainties associated with this method. Other options that have been proposed include semi- or weakly-supervised techniques [88; 89], as well as data-driven methods [90]. When using fully supervised learning to develop classifiers, it is crucial to ensure that the model can generalize well to the unseen experimental data. For JFNs, soft and collinear particles are clustered into subjets, making them less sensitive to the modeling of IR physics. Since it is generally challenging to model the very soft physics of collider events in Monte Carlo event generators, JFNs may have an advantage compared to PFNs in terms of generalizability. On the other hand, if too many particles are clustered into few subjets, the overall performance deteriorates. In order to assess whether a classifier performs well on unseen data, we train PFNs and JFNs with different parameters on Pythia[59] (training + validation data set) and test on Herwig[67] simulations. Here, Herwig can be considered as a surrogate for experimental data. We note that while the final results of both event generators are quite similar, the underlying physics of both the perturbative parton shower and the hadronization model can differ significantly. One generally expects that quark jets are quite similar in Pythia and Herwig but the results for gluon jets tend to differ more significantly [91; 92; 93]. See also Ref. [6], where Pythia and Herwig studies were presented using Convolutional Neural Networks (CNNs). Moreover, in Ref. [94] mixed Herwig/Pythia samples were used together with a Bayesian Network in order to increase model robustness. We consider quark vs. gluon jet tagging for \(p_{T}=[500,550]\) GeV using exclusive \(k_{T}\) clustering of the subjets that are taken as input to the machine learning algorithm; a minimal sketch of this clustering step is given below. Fig. 10 shows the AUC as a function of the number of subjets \(N\). Analogous to the previous figures, we show the PFN result as the left-most point (\(N=140\)).
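For concreteness, here is a self-contained sketch of the exclusive \(k_{T}\) reclustering step referenced above. A real analysis would use FastJet; this naive \(O(n^{3})\) version with a simplified \(p_{T}\)-weighted recombination (instead of the full E-scheme) is only meant to illustrate the algorithm, not the pipeline used in the paper.

```python
import math

def exclusive_kt_subjets(particles, n_subjets, R=0.8):
    """Naive exclusive kT clustering: merge the pair with the smallest
    d_ij = min(pt_i^2, pt_j^2) * dR_ij^2 / R^2 until n_subjets remain.
    particles: list of (pt, rapidity, phi) tuples (jet constituents).
    Beam distances are omitted, a reasonable shortcut when reclustering
    the constituents of a single jet."""
    jets = [list(p) for p in particles]

    def kt_distance(a, b):
        dy = a[1] - b[1]
        dphi = math.pi - abs(abs(a[2] - b[2]) % (2 * math.pi) - math.pi)
        return min(a[0] ** 2, b[0] ** 2) * (dy * dy + dphi * dphi) / R ** 2

    while len(jets) > n_subjets:
        i, j = min(
            ((i, j) for i in range(len(jets)) for j in range(i + 1, len(jets))),
            key=lambda ij: kt_distance(jets[ij[0]], jets[ij[1]]),
        )
        a, b = jets[i], jets[j]
        pt = a[0] + b[0]
        # pT-weighted recombination (a simplification of the E-scheme;
        # naive for phi near the wrap-around point).
        jets[j] = [pt, (a[0]*a[1] + b[0]*b[1]) / pt, (a[0]*a[2] + b[0]*b[2]) / pt]
        del jets[i]
    return jets

# Toy usage: six constituents (pt, rapidity, phi) clustered into 3 subjets.
constituents = [(50, 0.1, 0.2), (30, 0.15, 0.25), (20, -0.2, 1.0),
                (10, -0.25, 1.1), (5, 0.4, 2.0), (2, 0.45, 2.1)]
print(exclusive_kt_subjets(constituents, 3))
```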
The upper panel shows the result for JFNs as a function of \(N\), trained on Pythia and tested on Pythia (blue) or Herwig (orange). In both cases, we observe a plateau in classifier performance as the number of (exclusive) subjets is increased. Within the shown errors, we observe that the AUC in both cases reaches its maximum value for \(N\sim 30\). As expected, there is a performance gap when testing the Pythia-trained classifier on quark vs. gluon jets generated with Herwig compared to testing it on Pythia simulations. This observation is consistent with the results of Refs. [6; 94]. However, we observe that the performance gap decreases as \(N\) decreases. To better visualize this aspect, we show in the lower panel the difference between the two AUC curves shown in the upper panel. The difference becomes smaller as \(N\) is decreased, indicating improved generalizability of the model. Figure 10: Classification performance for quark vs. gluon jets using JFNs and exclusive \(k_{T}\) clustered subjets plotted as a function of the number of subjets \(N\). Upper panel: JFNs trained and tested on Pythia[59] (blue), JFNs trained on Pythia and tested on Herwig[67] (orange). Lower panel: The difference in the performance of the two results. Our findings suggest that clustering particles into subjets can reduce the overall performance, but it also masks modeling uncertainties of the IR physics, leading to more robust classifiers. Interestingly, we find that the difference between Pythia and Herwig does not decrease for small \(N\) for QCD vs. \(Z\) jet classification (not shown). We expect that the generalizability or robustness of machine learning-based classifiers will be useful for certain experimental applications where the trade-off between performance and generalizability needs to be considered. To illustrate this aspect, we introduce the objective function \(f(a,N)\), defined as: \[f(a,N)=\mathrm{AUC_{Pythia}}(N)-a\cdot(\mathrm{AUC_{Pythia}}(N)-\mathrm{AUC_{Herwig}}(N))\,, \tag{10}\] where \(N\) is the number of exclusive subjets. Here the performance and generalizability are combined additively, and a weighting factor \(a>0\) is introduced that allows us to increase/decrease the relevance of the two metrics. An optimization problem to find the optimal balance between performance (first term in Eq. (10)) and generalizability (second term \(\sim a\) in Eq. (10)) can now be formulated as follows: for a given choice of the weighting factor \(a\), find the maximal value of the objective function \(f(a,N)\). The optimal number of exclusive subjets is then given by \(N_{\rm opt}=\arg\max_{N}f(a,N)\). Figure 11: Upper panel: AUC of the JFNs trained on Pythia and tested on either Pythia or Herwig plotted as a function of the number of (exclusive) subjets \(N\), see also Fig. 10. Middle and lower panels: The objective function \(f(a,N)\) defined in Eq. (10), where \(a\) is a weighting factor between optimal performance and generalizability. We plot \(f(a,N)\) for two different values of \(a\) in Fig. 11 (middle and lower panels). We observe that as \(a\) is increased (the generalizability is weighted higher), the objective function peaks at an intermediate value of \(N\). For example, for \(a=4\) we find \(N_{\rm opt}=3\). While our objective function is constructed for illustration purposes, this result indicates that for certain experimental analyses that employ machine learning-based classifiers, it can be advantageous to use JFNs with a finite number of subjets to achieve the desired goals.
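A minimal numerical sketch of the optimization over Eq. (10) is given below. The AUC values are made up to mimic the qualitative shapes of Fig. 10; only the form of \(f(a,N)\) and the \(\arg\max\) rule are taken from the text.

```python
import numpy as np

def optimal_n_subjets(auc_pythia, auc_herwig, a):
    """Evaluate f(a, N) of Eq. (10) on a grid and return (index of N_opt, f)."""
    f = auc_pythia - a * (auc_pythia - auc_herwig)
    return int(np.argmax(f)), f

# Illustrative (made-up) AUC values versus the number of exclusive subjets.
n_grid = np.array([1, 2, 3, 5, 10, 30, 140])
auc_p = np.array([0.60, 0.72, 0.78, 0.82, 0.85, 0.86, 0.86])
auc_h = np.array([0.60, 0.71, 0.76, 0.79, 0.81, 0.81, 0.81])

for a in (1.0, 4.0):
    idx, f = optimal_n_subjets(auc_p, auc_h, a)
    print(f"a={a}: N_opt = {n_grid[idx]}, f = {np.round(f, 3)}")
```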
## 8 Conclusions The classification of jets at collider experiments is relevant for a wide range of tasks in high-energy particle and nuclear physics. Over the past years, machine learning-based classifiers have been developed that can achieve impressive tagging performance. While machine learning generally outperforms traditional methods by efficiently making use of the full information content, it is often unclear where the performance difference comes from. In particular, it had been unclear if classifiers based on infrared-collinear safe information can match the performance of IRC-unsafe classifiers. IRC safety is primarily motivated by theoretical considerations ensuring that observables are tractable in perturbative QCD. In addition, it is expected that the very soft physics is uncorrelated with the hard partonic process, making it unlikely to be the reason for the performance gap that has been observed between IRC-unsafe machine learning results and traditional IRC-safe observables. In order to address these questions, we introduced in this work a new family of classifiers, the Jet Flow Networks (JFNs). Here, particles inside a jet are first clustered into subjets, and their position and momentum are taken as input to a permutation-invariant neural network (deep set). The clustering into subjets allows us to control the sensitivity to soft and collinear emissions, making the input to the classifier IRC safe. As the subjet radius vanishes, we recover the IRC-unsafe Particle Flow Networks (PFNs). We investigated both inclusive and exclusive subjet clustering, which can lead to important differences depending on the application. As representative examples, we considered two classification tasks: quark vs. gluon and QCD vs. \(Z\) jet tagging. Interestingly, we observed that the JFN performance matches the IRC-unsafe PFN result for finite values of the subjet radius. This makes JFNs the first classifiers based on IRC-safe input without a performance gap relative to their IRC-unsafe counterparts for several jet classification tasks. This observation answers the main question we aimed to address in this work: IRC-safe information is indeed sufficient for the jet classification tasks considered here. As the subjet radius is increased, the performance of the JFNs remains unchanged (and in agreement with the PFNs) until physical thresholds are crossed. For example, for quark vs. gluon jets this threshold is around 5 GeV, whereas for QCD vs. \(Z\) jets it is determined by the kinematics of the boosted hadronic decay products of the \(Z\)-boson. In addition, we found that JFNs may offer a decreased model dependence for certain classification tasks with only a modest tradeoff in performance. This observation may lead to interesting applications of JFNs in collider phenomenology. Our results shed new light on the information that machines learn in high-energy physics applications. As more powerful algorithms are developed, it will be interesting to revisit the question of the potential gap between classifiers based on IRC-safe and IRC-unsafe information. While more work is needed in this direction, our work represents an important step toward increasing the interpretability of machine learning methods in high-energy physics. In addition, we anticipate various applications of JFNs in heavy-ion collisions and at the future Electron-Ion Collider [95]. ###### Acknowledgements. We thank Giacinto Piacquadio for helpful discussions. DA is supported by the NSF Grant PHY2210533 and the Onassis Foundation.
JM, MP are supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under the contract DE-AC02-05CH11231. FR was supported by the Simons Foundation under the Simons Bridge program for Postdoctoral Fellowships at SCGP and YITP award number 815892; the NSF, award number 1915093; the DOE Contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab, and Old Dominion University. AL is supported in part by the UC Southern California Hub, with funding from the UC National Laboratories division of the University of California Office of the President. This research used resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
2310.02984
Scaling Laws for Associative Memories
Learning arguably involves the discovery and memorization of abstract rules. The aim of this paper is to study associative memory mechanisms. Our model is based on high-dimensional matrices consisting of outer products of embeddings, which relates to the inner layers of transformer language models. We derive precise scaling laws with respect to sample size and parameter size, and discuss the statistical efficiency of different estimators, including optimization-based algorithms. We provide extensive numerical experiments to validate and interpret theoretical results, including fine-grained visualizations of the stored memory associations.
Vivien Cabannes, Elvis Dohmatob, Alberto Bietti
2023-10-04T17:20:34Z
http://arxiv.org/abs/2310.02984v2
# Scaling Laws for Associative Memories ###### Abstract Learning arguably involves the discovery and memorization of abstract rules. The aim of this paper is to study associative memory mechanisms. Our model is based on high-dimensional matrices consisting of outer products of embeddings, which relates to the inner layers of transformer language models. We derive precise scaling laws with respect to sample size and parameter size, and discuss the statistical efficiency of different estimators, including optimization-based algorithms. We provide extensive numerical experiments to validate and interpret theoretical results, including fine-grained visualizations of the stored memory associations. ## 1 Introduction As the scale of large language models (LLMs) keeps increasing, scaling laws have become a crucial tool to empirically assess and predict the behavior of these models when varying the number of parameters and training data (Kaplan et al., 2020; Hoffmann et al., 2022). Despite their practical impact, the underlying phenomena leading to such scaling laws remain poorly understood. A better understanding of such phenomena could guide researchers towards improved models, algorithms, and datasets which may lead to improved scaling laws. Our study focuses on a simple model that aims to be representative of LLMs in two ways. First, we focus on heavy-tailed data distributions over discrete tokens, a natural assumption for text data (Piantadosi, 2014). Second, we consider associative memory models that store input-output pairs through outer-products of finite-dimensional embeddings, and can be seen as a proxy of the intermediate layers of transformers. Indeed, some transformer layers have been found to behave as key-value memories (Geva et al., 2021; Meng et al., 2022), and more generally outer-product associative memory matrices arise naturally from training dynamics on intermediate weights (Bietti et al., 2023). Beyond simple associative recall, the combination of multiple such associative rules at different layers may lead to certain circuits with rich "reasoning" behaviors based on context (Elhage et al., 2021; Bietti et al., 2023; Michaud et al., 2023). For example, an intermediate layer input token may encode for the topic "linux", leading to an output token that will trigger a specific behavior in the transformer's following layers when processing the token "terminal". Our contributions are as follows: * We provide precise statistical rates for outer-product memories with random embeddings, and compare different memory storage schemes in the context of Zipf-distributed data. * We compare theoretical schemes to the weights learned by various optimization algorithms used in practice, and illustrate the role of different design choices with numerical experiments. Related work.Associative memory models have a long history in the literature on neural computation (Steinbuch, 1961; Willshaw et al., 1969; Longuet-Higgins et al., 1970; Kohonen, 1972; Amari, 1972; Little, 1974; Hopfield, 1982; Smolensky, 1990; Schlag et al., 2021; Valle-Lisboa et al., 2023), though the statistical insights we provide based on specific data distributions are new, to the best of our knowledge. Memorization behaviors have drawn a lot of attention recently, and are believed to be an important notion to understand the learning happening in deep neural network (e.g., Sukhbaatar et al., 2019; Feldman, 2020; Feldman and Zhang, 2020; Geva et al., 2021; Wu et al., 2022). 
Building on memorization and heavy-tailed discrete data, our model bears similarities to the ones of Hutter (2021), Michaud et al. (2023) or Debowski (2023), although we focus on practical models with finite capacity. The discrete nature of tokens contrasts with other recent works on scaling laws that have focused on continuous Gaussian inputs (e.g., Bahri et al., 2021; Maloney et al., 2022; Sorscher et al., 2022). ## 2 Model for Associative Memory The data. In the following, we consider a joint distribution \(p\in\Delta_{[N]\times[M]}\) on inputs \(x\in[N]\) and outputs \(y\in[M]\). The inputs and outputs are assumed to take only \(N\) and \(M\) discrete values, respectively. For example, \(N\) could be the number of potential sequences of fixed word length in the English language, while \(M\) would be all the potential words to complete the sequence. Abstractly, \(x\) and \(y\) will be referred to as tokens. To simplify the study, we assume for now that \(y\) is a deterministic function of \(x\), i.e., there is no noise in the labels. Consistent with language modeling, we equally assume that \(p(x)\) follows a Zipf law. Formally, there exists a parameter \(\alpha>0\), a normalizing constant \(C_{\alpha}\), a permutation \(\sigma\in\mathfrak{S}_{n}\) and a function \(f_{*}:[N]\rightarrow[M]\) such that \[\forall\:x,y\in[N]\times[M],\qquad\qquad p(\sigma(x))=C_{\alpha}x^{-\alpha},\qquad p(y|x)=\mathbf{1}_{y=f_{*}(x)}. \tag{1}\] The distribution \(p\) is not known, but has generated \(T\) known independent samples \((x_{t},y_{t})_{t\in[T]}\sim p^{\otimes T}\). For readability's sake, we will assume without restriction that \(\sigma\) is the identity (so that \(p\) is decreasing). The model, and the loss. The input tokens are embedded into a space \(\mathbb{R}^{d}\) of dimension \(d\) through an embedding map \(e:[N]\rightarrow\mathbb{R}^{d}\). This space is used for computation purposes. In particular, we focus on the linear transformation parameterized by a matrix \(W\in\mathbb{R}^{d\times d}\) mapping \(x\) to \(We(x)\). This latter vector is mapped back to the output space through an unembedding map \(u:[M]\rightarrow\mathbb{R}^{d}\) and the decoding rule \[f_{W}(x)=\operatorname*{arg\,max}_{y\in[M]}u_{y}^{\top}We_{x},\qquad\qquad W\in\mathbb{R}^{d\times d}, \tag{2}\] where \(e_{x}\) and \(u_{y}\) are abbreviations for \(e(x)\) and \(u(y)\). The model (2) can be seen as analogous to an attention layer where keys \(e_{x}\) are tested against queries \(u_{y}\) through a matrix \(W\) before going through a softmax layer, which, when the attention is peaky, identifies to an argmax. It also resembles next-token prediction from an intermediate representation \(We_{x}\), which may itself be the output of an attention block that attends to a token \(x\). The matrices \(W\) will be expressed as associative memories. Memory of an observed pair \((x,y)\) is represented as an outer product \(u_{y}e_{x}^{\top}\). Figure 1: Scaling laws with respect to the memory capacity \(d\) (left), respectively the number of data seen \(T\) (right), for various dataset sizes \(T\), respectively various memory capacities \(d\). This plot empirically validates the theory developed in the paper, which proves scaling laws \(\mathcal{E}(f_{q})\asymp d^{-\alpha+1}+T^{-1+1/\alpha}\) (dashed lines) under our setting (1), (2), (5) with \(\alpha=2\), and the association scheme (12) with \(\rho=0\) and \(P=d/8\). The experiments are averaged over \(100\) runs; standard deviations are shown with solid color.
Remembering those associations with respect to a probability \(q\in\Delta_{[N]\times[M]}\) leads to the matrix \[W_{q}=\sum_{(x,y)\in[N]\times[M]}q(x,y)u_{y}e_{x}^{\top},\qquad\qquad q\in\Delta_{[N]\times[M]}. \tag{3}\] This representation (3) is justified as the predictions (2) are insensitive to modifications of \(W\) outside the span of \((u_{y}e_{x}^{\top})_{x,y}\). In our deterministic setting (1) where one only observes pairs \((x,f_{*}(x))\), we shall consider the simpler model where1 Footnote 1: It should be noted that the proof techniques behind Theorem 1 do not break when considering \(q=q(x,y)\): both models would lead to similar results, with the case \(q=q(x,y)\) being simpler to comprehend. \[W_{q}=\sum_{x\in[N]}q(x)u_{f_{*}(x)}e_{x}^{\top},\qquad\qquad q\in\Delta_{[N]}. \tag{4}\] To simplify notations, we will write \(f_{q}\) for \(f_{W_{q}}\) (2). The model \(f_{q}\) is seen as superposing memories, since all associations are mixed together in a single matrix. The quality of a mapping \(f\) is quantified through the generalization error \[\mathcal{E}(f)=\mathbb{E}_{(X,Y)\sim p}[\mathbf{1}_{f(X)\neq Y}],\qquad\qquad f:[N]\to[M]. \tag{5}\] Which questions are we interested in? Several questions naturally arise from our model. The first ones are related to scaling laws: how does the error depend on \(T\), the number of data? How does it scale with \(d\), which encodes the memory capacity? The second ones relate to the model itself: how does the error behave for different \(q\)? What about optimization-based algorithms? Arguably, the model (2) lays out a simple model to study memorization, which could easily be extended to model more intricate memorization and training behaviors inside a transformer language model. Indeed, memories of the form (4) were found to accurately model the behavior of weight matrices in multi-layer transformers trained by gradient methods on certain tasks (Bietti et al., 2023). Hence, we expect our study to shed light on more complex mechanisms in transformers, which may involve additional aspects such as attention layers, feed-forward layers, and noisy superpositions of embeddings representing multiple tokens from an input sequence. ## 3 Scaling laws with random embeddings Why do we make errors? With such a simple deterministic model, one may wonder how we could fail to learn the mapping \(f_{*}\) perfectly. There are two sources of error. One is due to not having enough data to see all the potential associations \((x,f_{*}(x))\), and has already been studied by Hutter (2021). The other one is due to the limited memory capacity of our model, which we illustrate in Figure 2. **Proposition 1** (Finite data, infinite memory).: _Consider an infinite memory model \(\hat{f}\), which at time \(T\) predicts correctly all \(x\) that were seen during past training, i.e., \(x\in\{X_{t}\}_{t\in[T]}\), where the \((X_{t},Y_{t})\) were drawn independently at random from a distribution \(p\in\Delta_{[N]\times[M]}\). Under the data model (1), the generalization error reads, with respect to the random dataset \(\mathcal{D}_{T}=(X_{t},Y_{t})_{t\in[T]}\),_ \[\mathbb{E}_{\mathcal{D}_{T}}[\mathcal{E}(\hat{f})]\asymp T^{-1+1/\alpha}. \tag{6}\] _Here, the notation \(a\asymp b\) means that there exist two constants \(c_{1}\) and \(c_{2}\) such that \(c_{1}b\leq a\leq c_{2}b\)._ Proof.: Our proof follows directly from the characterization \(\mathbb{E}_{\mathcal{D}_{T}}[\mathcal{E}(\hat{f})]\asymp\int_{1}^{\infty}p(x)e^{-Tp(x)}\,\mathrm{d}x\), which actually allows us to generalize the results of Hutter (2021). Details are provided in the Appendix.
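As a sanity check of Proposition 1, the scaling of Eq. (6) is easy to reproduce numerically. The sketch below estimates the error of a memorize-everything predictor under a Zipf law; truncating the law at a finite \(N\) is an assumption made only so the simulation is tractable.

```python
import numpy as np

rng = np.random.default_rng(0)

def zipf_error_infinite_memory(alpha, N, T, n_runs=20):
    """Monte Carlo estimate of Eq. (6): the generalization error of a
    memorize-everything predictor equals the probability mass of inputs
    never seen among T samples from a (truncated) Zipf(alpha) law."""
    p = np.arange(1, N + 1) ** (-float(alpha))
    p /= p.sum()
    errs = []
    for _ in range(n_runs):
        seen = np.zeros(N, dtype=bool)
        seen[np.unique(rng.choice(N, size=T, p=p))] = True
        errs.append(p[~seen].sum())
    return float(np.mean(errs))

alpha, N = 2.0, 10_000
for T in (100, 1_000, 10_000):
    print(f"T={T:6d}: error ~ {zipf_error_infinite_memory(alpha, N, T):.4f}, "
          f"T^(-1+1/alpha) = {T ** (-1 + 1 / alpha):.4f}")
```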
### 3.1 Tight error characterization The case where one has infinite data but finite memory is intrinsically a deterministic problem. However, characterizing interferences between embeddings and the corresponding generalization error is combinatorial in nature, and is hard to study without specific assumptions on the embeddings \(e\) and \(u\). A natural choice is to consider them to be random, as is the case at initialization. **Theorem 1** (Infinite data, finite memory).: _Let \(M\geq 4\) and \(d>8\log(M)\). For any memory weight scheme \(q:[N]\to\mathbb{R}\), when the embeddings \(e_{x}\) are independent random variables \(e_{x}\sim\mathcal{N}(0,I)\), and the unembeddings are taken uniformly at random on the sphere,_ \[\mathbb{E}_{e,u}[\mathcal{E}(f_{q})]\leq\inf_{\gamma}2d^{-\gamma}+p\Big(\Big\{x\in[N]\,\big|\,dq(x)^{2}\leq 16c_{\gamma}\Big(Q_{\infty}+\frac{8c_{\gamma}\|q\|_{2}^{2}}{d}\Big)\Big\}\Big), \tag{7}\] _where \(Q_{\infty}:=\max_{y}\sum_{x:f_{*}(x)=y}q(x)^{2}\), \(c_{\gamma}=\log(M)+\gamma\log(d)\), and \(p(\mathcal{X})=\sum_{x\in\mathcal{X}}p(x)\) denotes the probability of \(x\) to belong to \(\mathcal{X}\subset[N]\). In terms of lower bound,_ \[\mathbb{E}_{e,u}[\mathcal{E}(f_{q})]\geq\frac{1}{20}p(\{x\in[N]\,|\,3(d+1)q(x)^{2}\leq Q_{\infty}\}). \tag{8}\] Theorem 1 illustrates how the error made by a scheme \(q\) at the input \(x\) relates to the ratio between the signal \(dq(x)\), provided by the associative memory \(u_{f_{*}(x)}e_{x}^{\top}\), and the noise \(Q_{\infty}\), which corresponds to the signal provided by the most competitive class for \(y\in[M]\). This is true up to a higher-order term in \(\|q\|^{2}/d\), which corresponds to a class \(y=f_{*}(x)\) competing against itself when the random embeddings \(e_{x^{\prime}}\), for \(x^{\prime}\) such that \(f_{*}(x^{\prime})=y\), point in the opposite direction of \(e_{x}\). When \(d\) is large and \(p\) is regular, \(c_{\gamma}\|q\|_{2}^{2}/d\) will be dominated by \(Q_{\infty}\), and the cut-off of \(q(x)^{2}/Q_{\infty}\) at \(32c_{\gamma}/d\) will behave similarly to a cut-off at \(1/d\) up to logarithmic terms. Moreover, when \(q\) is chosen independently of \(p(y|x)\),2 one can expect \(Q_{\infty}\approx p_{*}\|q\|^{2}\) where \(p_{*}=\max_{y\in[M]}p(y)\). As a consequence, up to constants and logarithmic terms, we get Footnote 2: To be more precise, one should actually choose \(q(x)\) to be class dependent so as to cram into memory as many \(x\) as possible for each different class \(y=f_{*}(x)\), ensuring that \(y\mapsto\sum_{x:f_{*}(x)=y}q(x)^{2}\) is constant with respect to \(y\). For simplicity, we will not discuss this behavior, which does not change the big picture of our exposition. \[\mathcal{E}(f_{q})\asymp p(\{x\in[N]\,|\,dq(x)^{2}\leq p_{*}\|q\|^{2}\}). \tag{9}\] ### 3.2 Memory schemes Let us now discuss several natural choices for \(q\) and compare their corresponding performance. The first naive choice consists in storing all the data seen at time \(T\) in memory. It reads \[\hat{q}_{0}(x)=\mathbf{1}_{x\in\{X_{t}\}_{t\in[T]}},\qquad q_{0}(x)=1.
\tag{10}\] Figure 2: Error due to finite memory capacity: the stacking of associative memories in a matrix \(W\) may exhibit a pattern \(W=\sum_{x}u_{f_{*}(x)}e_{x}^{\top}\) where three inputs mapped to three different outputs interact in such a way that \(u_{2}^{\top}We_{1}=e_{2}^{\top}e_{1}+u_{2}^{\top}u_{3}e_{3}^{\top}e_{1}\geq 1+u_{1}^{\top}u_{3}e_{3}^{\top}e_{1}=u_{1}^{\top}We_{1}\), so that \(f_{W}(x=1)=2\neq 1=f_{*}(x=1)\). In other terms, memory interference may lead to wrong predictions, illustrating the finite capacity of the model \(f_{W}\) (2) to store all data associations. Here, \(\hat{q}_{0}\) corresponds to the learned weighting scheme based on the \(T\) data, while \(q_{0}\) denotes an idealized limit when one has infinite data. In the idealized setting, \(Q_{\infty}(q_{0})=Np_{*}\) where \(p_{*}:=\max_{y\in[M]}p(y)\). From Theorem 1, we deduce that \(\mathcal{E}(f_{W_{q_{0}}})\) will follow two regimes: an overflow regime, where \(3(d+1)\leq Np_{*}\), in which the memory \(W_{q_{0}}\) is in essence too full to recover any signal in it, and \(\mathbb{E}_{e,u}\mathcal{E}(f_{W_{q_{0}}})>1/20\) (8); and an infinite memory regime, where \(d\geq N\), in which all associations \(u_{f_{*}(x)}e_{x}^{\top}\) can be stored orthogonally to one another, and the error \(\mathbb{E}_{e,u}\mathcal{E}(f_{W_{q_{0}}})\) quantifies the tiny probability that some random input embeddings appear to be too correlated. Equipped with the knowledge that our associative memory model (2) has finite capacity, one may weight memories according to their frequencies, leading to the scheme, for \(\rho\geq 0\), \[\hat{q}_{\rho}(x)=\Big(\frac{1}{T}\sum_{t\in[T]}\mathbf{1}_{x=X_{t}}\Big)^{\rho},\qquad q_{\rho}(x)=p(x)^{\rho}. \tag{11}\] A better option consists in explicitly limiting the storage of our model with a simple thresholding algorithm \[\hat{q}_{\rho,[P]}(x)=\hat{p}(x)^{\rho}\mathbf{1}_{x\in\mathrm{top}_{P}((x_{t})_{t\in[T]})},\qquad q_{\rho,[P]}(x)=p(x)^{\rho}\mathbf{1}_{x\in[P]}, \tag{12}\] where \(\mathrm{top}_{P}((x_{t}))\) denotes the set made of the \(P\) most frequent inputs in the data \((x_{t})\). **Proposition 2** (Without thresholding).: _Let \(p\) be an \(\alpha\)-Zipf distribution (1). For \(\rho>0\), the performance of \(f_{\rho}:=f_{q_{\rho}}\) (11) is, up to poly-logarithmic factors and constants that depend on both \(\rho\) and \(\alpha\),_ \[\mathbb{E}_{e,u}\mathcal{E}(f_{\rho})\stackrel{{\mathrm{(log)}}}{{\asymp}}\left(\frac{d}{\varphi(N)}\right)^{-(\alpha-1)/2\rho\alpha},\quad\text{where}\quad\varphi(N)=\left\{\begin{array}{cl}1&\text{if }2\rho\alpha>1\\ \log(N)&\text{if }2\rho\alpha=1\\ N^{1-2\rho\alpha}&\text{if }2\rho\alpha<1\end{array}\right.. \tag{13}\] _In particular, when \(\rho=1\), \(\mathbb{E}_{e,u}\mathcal{E}(f_{1})\) scales as \(d^{-(\alpha-1)/2\alpha}\). In the limit where \(\rho=0\), \(\mathbb{E}_{e,u}\mathcal{E}(f_{0})\) can be understood as \((d/N)^{-\infty}\), which will go to zero if and only if \(d\) is bigger than \(N\)._ **Proposition 3** (With thresholding).: _Assume that \(p(x)\) follows an \(\alpha\)-Zipf law (1) with \(N=+\infty\). For \(\rho\geq 0\), setting \(P\simeq d^{1/(2\alpha\rho+1)}\), the error made by the memory scheme (12) scales as_ \[\mathbb{E}_{e,u}\mathcal{E}(f_{\rho})\stackrel{{\mathrm{(log)}}}{{\asymp}}d^{-(\alpha-1)/(2\rho\alpha+1)}. \tag{14}\] In particular, when \(\rho=0\) and \(P\simeq d\), one gets a scaling in \(d^{-\alpha+1}\), which is actually optimal.
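The schemes above can be compared directly by simulating the model of Eqs. (2), (4) and (5). The following sketch does so with Gaussian input embeddings and spherical unembeddings; the small values of \(N\), \(M\) and \(d\) are assumptions chosen for speed.

```python
import numpy as np

rng = np.random.default_rng(0)

def scheme_error(q, p, f_star, d, M, n_trials=5):
    """Monte Carlo sketch: build W = sum_x q(x) u_{f*(x)} e_x^T, Eq. (4),
    decode with the argmax rule of Eq. (2), and measure the error of Eq. (5)."""
    N = len(p)
    errs = []
    for _ in range(n_trials):
        E = rng.standard_normal((N, d))                  # e_x ~ N(0, I)
        U = rng.standard_normal((M, d))
        U /= np.linalg.norm(U, axis=1, keepdims=True)    # u_y uniform on sphere
        W = (q[:, None] * U[f_star]).T @ E               # d x d memory matrix
        pred = np.argmax(U @ (W @ E.T), axis=0)          # f_W(x)
        errs.append(p[pred != f_star].sum())
    return float(np.mean(errs))

N, M, d, alpha = 100, 5, 30, 2.0
p = np.arange(1, N + 1) ** (-alpha); p /= p.sum()
f_star = np.arange(N) % M
schemes = {
    "q0 = 1": np.ones(N),                                # Eq. (10)
    "q1 = p": p.copy(),                                  # Eq. (11), rho = 1
    "q0,[P], P=d": (np.arange(N) < d).astype(float),     # Eq. (12), rho = 0
}
for name, q in schemes.items():
    print(f"{name:14s} error ~ {scheme_error(q, p, f_star, d, M):.3f}")
```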
Figure 3: Generalization error (5) as a function of \(d\) and \(T\) for the model (4), averaged over \(100\) runs. The data follows a Zipf law with \(\alpha=0.5\), \(N=100\), \(M=5\) and \(f_{*}(x)=x\,\mathrm{mod.}\,M\). Left: error for \(q_{0}\) (10); either \(d\) is too small and memory overflow leads to a large error, or it is big enough and, with enough data, the error vanishes. Middle: error for \(q_{1}\) (11); for small \(d\) and big \(T\), it avoids memory overflow, allowing a smaller error than \(q_{0}\); however, for big \(d\) it does not allocate enough memory to rare associations, leading to a bigger error. These results can be interpreted mechanistically by looking at the corresponding memory matrices (see Figure 11). Right: Generalization error when \(T=+\infty\), \(N=100\) and \(\alpha=2\): the scheme \(q_{0}\) leads to a zero-one type of plot where if \(d<N\) the error is high, and if \(d>N\) the error decreases fast to zero (in blue); the scheme \(q_{1}\) leads to an error decreasing in \(d^{-(\alpha-1)/2\alpha}=d^{-1/4}\) as predicted by theory (in orange); the scheme \(q_{0,P}\) (12) with \(P=d/8\) decreases in \(d^{-(\alpha-1)}=d^{-1}\) until reaching the tipping point when \(d/8>N\) (in green). **Theorem 2** (Minimax performance).: _Assume that \(p(x)\) follows an \(\alpha\)-Zipf law (1) with \(N=+\infty\). For any weighting scheme \(q\), and \(p_{*}\in(0,1)\), there exists a conditional distribution \(p(y|x)\) with \(p_{*}=\max_{y}p(y)\) such that the error made for the distribution \(p\) is lower bounded by_ \[\mathbb{E}_{e,u}\mathcal{E}(f_{q})\geq c_{\alpha}(d+1)^{-\alpha+1}\qquad\text{where}\qquad c_{\alpha}=\frac{C_{\alpha}p_{*}^{\alpha-1}}{20(\alpha+1)\cdot 3^{\alpha-1}}.\] _Moreover, this performance is reached (up to logarithmic factors) by the thresholding algorithm (12) with \(P\simeq d/\log(d)\) and \(\rho=0\)._ Finally, we prove that the scaling laws established for \(d\) when \(T=+\infty\) and for \(T\) when \(d=+\infty\) appear jointly when both \(d\) and \(T\) are finite. **Proposition 4** (Finite data and finite memory).: _For the previous bounds with respect to \(d\), Proposition 2 and Proposition 3, considering finite data simply adds a term \(T^{-1+1/\alpha}\) (up to constants and logarithmic terms), matching the optimal bound of Proposition 1. In particular, (12) with \(\rho=0\) and \(P\simeq d/\log(d)\) reaches the optimal scaling in_ \[\mathbb{E}_{e,u,(x_{t},y_{t})_{t\in[T]}}\mathcal{E}(f_{\hat{q}})\asymp T^{-1+1/\alpha}+d^{-\alpha+1}. \tag{15}\] The optimal scaling (15) recovers the law of Hutter (2021) with respect to \(T\), and the one of Michaud et al. (2023) with respect to \(d\). This is intuitive, since Hutter (2021) assumes memorizing exactly all previously seen data, while each memory could be seen as specifying a "quantum of knowledge" as modeled in Michaud et al. (2023), with \(d^{-\alpha+1}\) corresponding to the risk (5) of only storing the most frequent \(d\) tokens. ## 4 Optimization-based memorization This section studies memory schemes privileged by optimization-based algorithms, digging into the training dynamics behind memorization. In terms of relevance, we argue that our model (2) is a proxy for the inner layers of a transformer that memorize patterns before matching them against new data at inference time. As such, we want to understand how different key elements in the training of a transformer influence storage in our memory model.
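Before turning to optimizers, the target scaling of Eq. (15) is worth keeping in mind. The small sketch below simply evaluates its two terms, data-limited and capacity-limited, up to the constants and logarithmic factors that the theory leaves unspecified.

```python
def optimal_error_scaling(T: int, d: int, alpha: float) -> float:
    """Two-term scaling of Eq. (15): T^(-1+1/alpha) + d^(-alpha+1),
    ignoring the unspecified constants and logarithmic factors."""
    return T ** (-1 + 1 / alpha) + d ** (-(alpha - 1))

for T, d in [(10**3, 32), (10**5, 32), (10**5, 512)]:
    print(f"T={T:>6}, d={d:>3}: error scale ~ {optimal_error_scaling(T, d, 2.0):.4f}")
```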
\begin{table} \begin{tabular}{|c|c|c|} \hline Model & Error scaling & Comment \\ \hline \(q(x)=p(x)\) & \(d^{-(\alpha-1)/2\alpha}+T^{-1+1/\alpha}\) & Found with large batches in one step \\ \(q(x)=\mathbf{1}_{x\leq d}\) & \(d^{-\alpha+1}+T^{-1+1/\alpha}\) & Optimal scaling with random embeddings \\ \hline \end{tabular} \end{table} Table 1: Some insightful provable scaling laws with respect to the memory capacity \(d\), and the number of data \(T\), for two schemes that store associations as (4) and random embeddings. Figure 4: Comparison between the error found by optimizing \(W\) (2) with SGD on the cross-entropy loss, and its approximation with \(q(x)\) (4) and the approximate update rule (20). We consider \(N=100\), \(M=5\), \(f_{*}(x)=x\operatorname{mod}.M\), \(\alpha=2\), and batch size equals one. Left: One run with \(d=N=100\) with \(\gamma=10\). Middle: Average over 100 runs with \(d=N=100\) with \(\gamma=1\). Right: Average when \(d=N/10=10\) with \(\gamma=1\), which implies that our approximation is not valid anymore. The same results can be obtained for bigger batch sizes as shown in Figure 13. Gradient updates.We consider the cross entropy loss as a surrogate objective to minimize, and study the form of gradient updates on batches of data. Formally, the matrix \(W\in\mathbb{R}^{d\times d}\) in (2) is optimized to minimize the loss \[\mathcal{L}(W)=\mathbb{E}_{(X,Y)\sim p}[\ell(x,y;W)],\qquad\ell(x,y;W)=-u_{y}^{ \top}We_{x}+\log(\sum_{z\in[M]}\exp(u_{z}^{\top}We_{x})). \tag{16}\] The gradient of this loss with respect to \(W\) takes the following form, as detailed in Appendix A.10: \[\nabla_{W}\ell(x,y;W)=-(1-p_{W}(y|x))(u_{y}-\varepsilon)e_{x}^{\top},\quad \text{with}\quad\varepsilon=\sum_{z\in[M]}p_{W}(z|x,z\neq y)u_{z}. \tag{17}\] where \(p_{W}(y|x)\propto\exp(u_{y}^{\top}We_{x})\) are model predictions for the current \(W\). For a batch of \(n\) data \(B=[x_{1},\cdots,x_{n}]\), a gradient update with step size \(\gamma_{t}\) updates \(W_{t}\) as \[W_{t+1}=W_{t}-\gamma_{t}\sum_{x\in B}\nabla_{W}\ell(x,f_{*}(x);W_{t}). \tag{18}\] Approximation of the updates.When \(p_{W}(z|x)\) does not change much for all \(z\neq f_{*}(x)\), since \(u_{z}\) were sampled at random in \(\mathcal{S}^{d}\), we expect \(\varepsilon\) (17) to concentrate around zero with \(\|\varepsilon\|^{2}\approx 1/M\), hence to be negligible in front of \(u_{f_{*}(x)}\). As a consequence, \[\nabla_{W}\ell(x,f_{*}(x);W)\approx-(1-p_{W}(f_{*}(x)|x))u_{y}e_{x}^{\top}. \tag{19}\] This is notably the case for \(W=0\), random \(W\), or if \(W\) only stores pairs \((x,f_{*}(x))\) with \(d\gg N\). With the update model above (19), \(T\) steps of SGD with batch size one lead to an association scheme of the form (4) with (see Appendix A.11) \[q_{\gamma}(x)\approx f^{Tp(x)}(0)=\underbrace{f\circ f\circ\cdots\circ f}_{ Tp(x)\text{ times}}(0),\qquad\text{where}\qquad f:x\mapsto x+\frac{\gamma}{1+M^{-1}\exp(x)}. \tag{20}\] This equation tells us what form to expect for \(q\) for optimization schemes with different hyperparameters. This approximation is shown in Figure 5, and is validated empirically in Figure 4. Step size effect.When \(d>N\), the updates approximation (20) and the resulting \(q_{\gamma}\) show how a large learning rate \(\gamma\) is beneficial for our problem, in particular when using SGD with batch size one. Interestingly, the same behavior holds in the presence of limited capacity, i.e., \(d<N\), although interferences between embeddings (Figure 2) break our approximation (19). 
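The closed-form iteration of Eq. (20) is straightforward to evaluate numerically. The sketch below iterates the update map \(f\) to obtain \(q_{\gamma}(x)\) in the toy setting of Figure 5 (\(n_{x}\propto p(x)\propto(x+3)^{-\alpha}\) with \(n_{N}=1\), an assumption taken from that figure's caption).

```python
import numpy as np

def q_gamma(n_updates, gamma, M):
    """Iterate the approximate SGD update map of Eq. (20),
    f(x) = x + gamma / (1 + exp(x)/M), starting from 0;
    q_gamma(x) is obtained by applying f roughly T*p(x) times."""
    x = 0.0
    for _ in range(n_updates):
        x += gamma / (1.0 + np.exp(x) / M)
    return x

# Toy setting mimicking Figure 5: n_x proportional to (x+3)^(-2), n_N = 1.
M, N = 5, 100
counts = (np.arange(1, N + 1) + 3.0) ** -2.0
counts = np.maximum(1, np.round(counts / counts[-1])).astype(int)
for gamma in (10.0, 0.1, 1e-3):
    q = np.array([q_gamma(n, gamma, M) for n in counts])
    print(f"gamma={gamma:g}: q(1)={q[0]:.2f}, q(N)={q[-1]:.2f}")
```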
In those settings, we resort to numerical simulation to study how optimization manages to rearrange memories. Figure 6 showcases two types of behaviors depending on the size of \(\gamma\). _(i)_ When the learning rate \(\gamma\) is large, associations will be stored easily in memory, but will tend to overwrite previous storage. _(ii)_ When the learning rate \(\gamma\) is small, associations need to be seen often to build up in the matrix \(W\) (4), which will take more time, but will not erase memory. This provides another intuitive explanation for why a bigger step size leads to better results on the left of Figure 8. Figure 5: Theoretical approximation of the association scheme found with stochastic gradient descent with batch size one and fixed learning rates. Left: Plot of \(f^{n}(0)\) as a function of \(n\), where \(f\) is the effect of one gradient update on \(q(x)\) (20). Right: Plot of the resulting \(q_{\gamma}(x)\) when \(n_{x}\propto p(x)\propto(x+3)^{-\alpha}\) with \(\alpha=2\) and \(n_{N}=1\). In dashed, we represent \(q_{\rho}\) (11) for \(\rho=0.05\), \(\rho=0.35\) and \(\rho=1\). Those curves match \(q_{\gamma}\) well for \(\gamma=10\), \(\gamma=10^{-1}\) and \(\gamma=10^{-3}\), respectively. The previous considerations also explain the usefulness of **scheduling** in our simple model, which we illustrate in Figure 7: using a large learning rate enables us to store associations while there is still memory space, while reducing it later in training avoids overwriting previous storage unless an association is highly frequent. Batch size effect. Table 1 recalls how storing associations with \(q=1\) under the model (4) is better than storing them with \(q=p\). As such, it suggests that, when processing a finite number of data \(T\), a smaller batch size is preferable. Intuitively, processing an input \(x\) in a batch will reweight it by its frequency \(p(x)\), while processing it by itself will update \(W\) similarly to setting \(q_{\gamma}(x)=1\) if \(x\) has not already been seen (Figure 5). Indeed, in the large batch limit where \(|B|\to+\infty\), one batch update corresponds to a population gradient update, which when \(p_{W}\ll 1\) assimilates to \(\nabla_{W}\mathcal{L}(W)\approx-\sum_{x}p(x)u_{f_{*}(x)}e_{x}^{\top}\). This contrasts with many small batch updates that rather lead to an association scheme akin to (4) with \(q=1\). In support of this line of reasoning, Figure 8 (middle) illustrates the benefits of splitting the descent into many steps with small batch size and large step size, even when \(d<N\). Figure 6: Gradient descent dynamics from the perspective of the matrix \((u_{y}^{\top}W_{t}e_{x})_{y,x}\in\mathbb{R}^{M\times N}\) with \(N=10\), \(M=5\), \(\alpha=1.5\), \(f_{*}(x)=x\bmod.5\), and \(d=5<N\). A lighter color in the square \((y,x)\) means a higher value of \(u_{y}^{\top}We_{x}\). The optimal \(W\) corresponds to two diagonal strips of yellow boxes (see Figure 15). The matrix \(W_{t}\) is updated with stochastic gradient descent with batch size equal to one. From time to time, stochastic gradient descent will hit an association that is not properly stored in memory yet (the red boxes). It will consequently update the weight matrix \(W_{t}\to W_{t+1}\) (side by side pairs) to store it (18). Left pair: update with a big learning rate \(\gamma=10\), whose risk is to erase previous memories (the light colored boxes), similarly to \(q_{0}\) (10). Right pair: update with a small learning rate \(\gamma=10^{-1}\), which will not store rare memories, similarly to \(q_{\rho}\) (11) with large \(\rho\).
Figure 7: Learning curve of the generalization error \(\mathcal{E}\) (5) with respect to the number of data processed by stochastic gradient descent in the setting of Figure 6. Left: comparison on a single run. A big step size allows storing more memories at the risk of overwriting past associations, which explains the higher variance of the blue curve but its overall better performance. A small step size will avoid loss spikes due to memory overwriting, but will take more time to store rare associations, leading to worse performance. By decreasing the learning rates along training, e.g., with the “StepLR” scheduler (Paszke et al., 2019), one can get the best of both worlds, i.e., store memories fast at the beginning of training when storage capacity is underused, while being more cautious at the end of training when there is no more “free” memory space. Right: Similar plot with \(N=30\), averaged over one hundred runs. ### Practical considerations In order to optimize our simple model the fastest, we have seen the usefulness of large step sizes and small batch sizes. However, for large transformers such design choices are impractical. First, large step sizes may lead to instability in realistic models (Gilmer et al., 2021). Second, in order to reduce training time and improve hardware efficiency, one should process large batches (Smith et al., 2018). Adam. We have seen before how the update of SGD with a large batch can be approximated with \[\gamma_{t}^{-1}(W_{t+1}-W_{t})=\sum_{x\in B}(1-p_{W}(f_{*}(x)|x))u_{f_{*}(x)}e_{x}^{\top}\approx\sum_{x\in[N]}|B|(1-p_{W}(f_{*}(x)|x))p(x)u_{f_{*}(x)}e_{x}^{\top}.\] Those naive updates would lead to a model that resembles (4) with \(q=p^{\rho}\) for \(\rho\approx 1\) (11). In concordance with previous research on the matter (Zhang et al., 2020; Kunstner et al., 2023), we found Adam to be helpful in our setup as well, see Figure 8 (right). To first order, Adam can be approximated as signSGD (Balles and Hennig, 2018). Arguably, this introduces a normalization effect on the gradient, helping to reach the saturation phase of \(n\mapsto f^{n}\) (20) shown in Figure 5, homogenizing the resulting matrix \(W\) to behave similarly to the scheme \(q=1\), therefore optimizing memory capacity. Experiments to underpin this intuition are reported in Figures 15 and 16 in Appendix B. Layer normalization. Minimizing the cross-entropy loss implies setting \(p_{W}(y|x)=1\), which will lead to \(W\) diverging to infinity and unstable loss gradients. In order to ensure numerical stability, it is natural to rescale the vector \(We_{x}\in\mathbb{R}^{d}\), especially since what matters for the final prediction \(f_{W}\) is only its direction. This is precisely what layer-norm does, introducing the logit score \[g_{y}^{\text{LN}}(x)=\langle u_{y},\frac{We_{x}}{\|We_{x}\|}\rangle,\qquad\text{instead of}\qquad g_{y}(x)=u_{y}^{\top}We_{x}.\] This leads to an added projection on the gradients in (17); as detailed in Appendix A.12, denoting \(\bar{W}=W/\|We_{x}\|\), \[\nabla_{W}\ell^{\text{LN}}(x,y;W)=\nabla_{W}\ell(x,y;\bar{W})=\frac{1}{\|We_{x}\|}\left(I-(\bar{W}e_{x})(\bar{W}e_{x})^{\top}\right)\nabla_{\bar{W}}\ell(x,y;\bar{W}). \tag{21}\] We recognize a projection that kills the signal that already aligns with \(We_{x}\).
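The projection property of Eq. (21) can be verified numerically. The sketch below builds the cross-entropy gradient of Eqs. (16)-(17) for a random instance, applies the layer-norm projection, and checks that the resulting update carries no component along the \(We_{x}\) direction; all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 16, 5

# Random instance: unembeddings on the sphere, one input embedding, a target y.
U = rng.standard_normal((M, d)); U /= np.linalg.norm(U, axis=1, keepdims=True)
e = rng.standard_normal(d)
W = rng.standard_normal((d, d)); y = 2

def grad_ce(W, e, y):
    """Gradient of the cross-entropy loss (16)-(17) with respect to W:
    U^T (p_W - onehot(y)) e^T."""
    logits = U @ (W @ e)
    pw = np.exp(logits - logits.max()); pw /= pw.sum()
    return np.outer(U.T @ (pw - np.eye(M)[y]), e)

# Layer-norm gradient, Eq. (21): rescale W and project out the We direction.
We = W @ e
Wbar = W / np.linalg.norm(We)
v = Wbar @ e                                    # unit vector along We
g_ln = (np.eye(d) - np.outer(v, v)) @ grad_ce(Wbar, e, y) / np.linalg.norm(We)

# The projected gradient carries no component along We (up to float error):
print("overlap with We direction:", float(v @ g_ln @ e))
```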
We conjecture that this introduces a clipping effect on the corresponding \(q(x)\), optimizing for memory storage, and explaining the good performance observed in the right of Figure 8. ### The benefits of learning the embeddings Taking a step back, Theorem 1 implies that our model with \(d^{2}\) parameters, the matrix \(W\in\mathbb{R}^{d\times d}\)(4), only memorize about \(d/\log(d)\) associations \((e_{x},u_{y})\in(\mathbb{R}^{d})^{2}\) of size \(2d\). Intriguingly, Lemma 1 below states that an exponential number of quasi-orthogonal elements can be put in \(\mathbb{R}^{d}\), an event that actually holds with high probability when embeddings are random, showcasing intrinsic limitations of our "linear" model (2). **Definition 1** (Quasi-orthogonality).: _The family \((u_{z})_{z\in[P]}\) with \(u_{z}\in\mathbb{R}^{d}\) is \(\eta\)-quasi orthogonal if_ \[\forall\left\{z,z^{\prime}\right\}\subset[P],\qquad|\langle u_{z},u_{z^{ \prime}}\rangle|\leq\eta,\qquad\text{and}\qquad\|u_{z}\|=1. \tag{22}\] **Lemma 1**.: _For any \(d\in\mathbb{N}\) and \(P\geq 3\), there exists an embedding \(u:[P]\to\mathbb{R}^{d}\) such that the family \((u_{z})_{z\in[P]}\) is \(\eta=2\sqrt{d^{-1}\log(P)}\)-quasi orthogonal._ As a consequence of Lemma 1, the following model \[f_{1}(x)=\operatorname*{arg\,max}_{y}u_{y}^{\top}\sum_{x^{\prime}\in[P]}u_{f_{*}(x ^{\prime})}\sigma(e_{x^{\prime}}^{\top}e_{x}-\eta), \tag{23}\] where \(\sigma(x)=x_{+}\) is the ReLU function, can fit \(P=\exp(\eta^{2}d/4)\) elements in memory, leading to a scaling in \(\mathcal{E}(f_{1})\asymp\exp(-(\alpha-1)\eta^{2}d/4)\) when \(p(x)\) follows a \(\alpha\)-Zipf law.3 Similarly, one could consider higher moments of \(e_{x^{\top}}^{\top}e_{x}\) which has been the basis for modern Hopfield networks (Krotov and Hopfield, 2016; Ramsauer et al., 2021). However, implementing the model (23) requires to keep track of each of the \(P\) vectors \(e_{x}\in\mathbb{R}^{d}\), leading to \(Pd\) parameters, in order to only store \(P\) associations of size \(d\), needing compute that scales with \(Pd\) at inference time, rather than just \(d^{2}\), Footnote 3: This result follows directly from two facts. When input embeddings are chosen at random, the probability that they are not \(\eta\)-quasi orthogonal is bounded by \(P^{2}\exp(-d\eta^{2}/2)\). When input embeddings are \(\eta\)-quasi orthogonal, \(f_{1}(x)=f_{*}(x)\) for any \(x\in[P]\). We also note that when embeddings are learned, it is actually possible to store as many memories as desired, which can be seen from the fact that \[W=I,\forall\,y\in[M]\,u_{y}\in\mathcal{S}^{d},e_{x}=u_{f_{*}(x)}\qquad \Rightarrow\qquad f_{*}(x)=\operatorname*{arg\,max}_{y}u_{y}^{\top}We_{x}, \tag{24}\] In particular, Figure 9 illustrates the solution found when \(d=2\) by optimization-based algorithm in order to get a zero generalization error on the task of Figure 3 where \(m=5\). Optimizing token embeddings is probably an important element to increase memorization capacity in transformers, although enforcing \(e_{x}=u_{f_{*}(x)}\) is unrealistic when embeddings are shared over different heads, and the input/output relationships to be learned differs across heads. Figure 8: Effect of step size, batch size, layer-norm and Adam (with \(\beta_{1}=\beta_{2}=0\), which corresponds to SignGD). All the experiments are conducted with \(N=100\), \(M=5\), \(\alpha=2\), \(f_{*}(x)=x\operatorname{mod}M\), averaged over ten runs. 
We initialized parameters and rescaled learning rates to ensure maximal feature updates, as explained in Appendix B.1. To avoid confounders, we scale \(\gamma\) on the middle plot so that the variance of the gradient updates is independent of the batch size. Figure 9: Experiments with learned embeddings when \(\alpha=2\), \(N=100\) and \(M=5\) with \(y=f_{*}(x)=x\operatorname{mod}M\) and \(d=2\). Left: level lines of the function \(\mathbb{R}^{2}\to[5];u\mapsto\operatorname*{arg\,max}_{y\in[5]}u_{y}^{\top}u\) with \(u_{y}\) the learned unembeddings. Middle: scatter plot of the learned input embeddings \(e_{x}\in\mathbb{R}^{2}\) for \(x\in[N]\), colored according to \(f_{*}(x)\). It illustrates how the input embeddings match the output ones, similarly to (24) and Proposition 5. Right: learned input embeddings obtained with \(M=10\), again allowing a zero generalization error. Reaching a zero error with \(d=2\) greatly contrasts with the condition \(d\geq N\) needed to get to a zero generalization error when the embeddings are random. ## 5 Conclusion This work considers a simple model to study memorization in transformers. Here, memorization is seen as a valuable behavior, the network memorizing useful patterns and association rules. We derive precise scaling laws with respect to both the number of data and the model size, which plays the role of a memory capacity. We quantify the effect of different memorization schemes, illustrating the benefits of uniformly weighted outer products. We leverage these theoretical results to study how different optimization algorithms commonly used for transformers may lead to more efficient memorization. In particular, we showcase the efficacy of small batches and large learning rates, and, under the design constraints resulting from efficient hardware utilization and training stability, the usefulness of Adam and layer normalization. While our study focuses on simple memorization schemes, it opens up many possible new directions. This includes extending our study to richer models that are closer to transformers, where embeddings, attention and feed-forward layers are trained. We would equally like to leverage our framework for assessing memorization and generalization through clear metrics, and eventually automatically adapt the learning rates as a function of the “free” memory capacity left in a layer. Acknowledgements. The authors would like to thank Leon Bottou as well as Herve Jegou for many fruitful discussions on memorization mechanisms in transformer language models.
2302.13440
New GO-based Measures and Their Statistical Significance in Multiple Network Alignment
Protein-protein interaction (PPI) networks provide valuable insights into the function of biological systems, and aligning multiple PPI networks can reveal important functional relationships between different species. However, assessing the quality of multiple network alignments is a challenging problem. In this paper, we propose two new measures, the Squared GO Score (SGS) and the Exposed G Score, to evaluate the quality of multiple network alignments while using functional information from Gene Ontology (GO) terms. We also introduce a $p$-value measure, the Statistical Exposed G Score, to compute the exact significance of a multiple network alignment based on its revealed GO terms. We also show that our measures are highly correlated with the recovered Ortholog count, providing further evidence for their effectiveness. Our work contributes to the development of more reliable and accurate measures for evaluating multiple network alignments and has potential applications in predicting gene function and identifying evolutionary relationships between different species using multiple network alignment.
Reza Mousapour, Kimia Yazdani, Wayne B. Hayes
2023-02-26T23:48:13Z
http://arxiv.org/abs/2302.13440v1
# New GO-based Measures and Their Statistical Significance in Multiple Network Alignment ###### Abstract Protein-protein interaction (PPI) networks provide valuable insights into the function of biological systems, and aligning multiple PPI networks can reveal important functional relationships between different species. However, assessing the quality of multiple network alignments is a challenging problem. In this paper, we propose two new measures, the Squared GO Score (SGS) and the Exposed G Score, to evaluate the quality of multiple network alignments while using functional information from Gene Ontology (GO) terms. We also introduce a \(p\)-value measure, the Statistical Exposed G Score, to compute the exact significance of a multiple network alignment based on its revealed GO terms. We also show that our measures are highly correlated with the recovered Ortholog count, providing further evidence for their effectiveness. Our work contributes to the development of more reliable and accurate measures for evaluating multiple network alignments and has potential applications in predicting gene function and identifying evolutionary relationships between different species using multiple network alignment. **Keywords**: Multiple network alignment Gene Ontology GO terms PPI networks Quality Measures ## 1 Introduction Protein-Protein Interaction (PPI) networks have become a popular tool for investigating biological systems, as they show the interactions between proteins and the functions they perform. PPI networks can be used to predict gene ontology (GO) terms and protein function, and to identify potential drug targets. However, these networks are often noisy and incomplete, which can limit their utility [1]. Network alignment methods have the potential to overcome these limitations. The alignment process seeks to identify conserved regions across networks, and to maximize topological similarity between the networks. However, evaluating the quality of multiple network alignments remains a challenge. Several measures have been proposed for evaluating the quality of multiple network alignment. One commonly used measure is the Maximal Common Subgraph (MCS), which quantifies the size of the largest common subgraph between the aligned networks [2]. Another measure is the Normalized Compression Distance (NCD), which evaluates the similarity between the compressed representations of the aligned networks [3]. Both these measures are topology-based, and they provide information only about the structural similarity between the aligned networks, without considering the functional aspects of the networks. Although it is widely _assumed_ that common topology relates closely to common function, this has yet to be observed or objectively measured. As a result, in biological network alignment, it is crucial to consider the functional information encoded in GO terms, which offers important information about the biological roles and relationships of the aligned proteins [4]. The use of GO terms has been proposed as a way to assess the quality of pairwise and multiple network alignment methods beyond topology-based measures. One such method is the Functional Overlap (FO) score, which computes the ratio of shared GO terms between aligned nodes in the multiple network alignment [5]. FO only uses the ratio of shared GO terms and does not take into account the quality of the alignment for each individual GO term.
Measures like FO assume equal importance for different GO terms, whereas some GO terms can provide more useful information, for instance due to their lower frequency. Therefore, it is important to look at each GO term in its own context and evaluate them separately. Our work aims to address these challenges by proposing measures to compute the exact \(p\)-value of a multiple network alignment with respect to each GO term separately, and then combining their results to produce a single holistic \(p\)-value across all GO terms. To our knowledge, this is the first work that evaluates the statistical significance of multiple network alignments by calculating a \(p\)-value for each alignment. Additionally, we introduce two function-based quality measures for assessing the quality of a multiple network alignment based on a single GO term. Our measures provide a comprehensive framework for evaluating the quality of multiple network alignments, which can help researchers identify the most reliably aligned regions, improve the accuracy of their predictions, and facilitate the identification of new functional relationships between proteins. Moreover, by determining the statistical significance of a multiple network alignment, we can assess the reliability of the alignment and determine the likelihood of obtaining the same alignment by chance. The significance of obtaining an alignment, or a better one, can be used as a rigorous indicator of quality. A key feature of this measure is that it can be used to compare two multiple network alignments that have been created between different sets of species. By providing measures to evaluate the quality and statistical significance of these alignments, researchers can make more informed decisions and drive new discoveries in the field. Although the measures presented in this study were applied to biological PPI networks, they can be extended to any type of multiple network alignment that is annotated with an ontology, including social networks and other complex systems. ## 2 Materials and Methods ### Definitions We consider multiple network alignment in a 1-to-1 sense, where each node is allowed to be in only one cluster, but every node must be in a cluster. For \(k\) undirected graphs \(G_{i},i=1,\ldots,k\), let \(G_{i}\) have \(n_{i}\) nodes and \(\lambda_{i}\) nodes annotated with GO term \(g\). We write \(c(n,x)\) for the binomial coefficient and \(p(n,x)\) for the number of permutations, where \(n\) is the total number of objects and \(x\) is the number of selected objects. Without loss of generality, we assume two conventions for sorting the networks (sketched in code below). For **node-sorted** components, where the networks are sorted by node count, we assume \(n_{1}\geq n_{2}\geq\ldots\geq n_{k}\). For **lambda-sorted** components, where the networks are sorted by the density of \(g\), we assume \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{k}\). We also employ a _Shadow Network_ [6], which schematically depicts the state of a multiple network alignment. Each node in the shadow network represents a cluster of aligned nodes, and edges between shadow nodes carry an integer weight representing the number of aligned edges between two clusters. The \(k\) networks being aligned are considered peg-networks, and the shadow network is considered a hole-network that can be larger than all of the networks.
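A minimal sketch of these conventions and helpers in Python (illustrative only; the pair format used to describe a network is ours, not the paper's data structure):

```python
from math import comb, perm

def c(n, x):
    """Binomial coefficient: ways to choose x of n objects."""
    return comb(n, x)

def p(n, x):
    """Number of permutations: ordered selections of x of n objects."""
    return perm(n, x)

# Hypothetical network descriptors: (n_i, lambda_i) pairs, where n_i is the
# node count and lambda_i the number of nodes annotated with GO term g.
networks = [(1657, 40), (4370, 95), (13276, 120)]

# Node-sorted convention: n_1 >= n_2 >= ... >= n_k
node_sorted = sorted(networks, key=lambda t: t[0], reverse=True)

# Lambda-sorted convention: lambda_1 >= lambda_2 >= ... >= lambda_k
lambda_sorted = sorted(networks, key=lambda t: t[1], reverse=True)
```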
Although the equations allow for extra shadow nodes, for practical cases, the maximum size of the hole-network considered for the quality measures is assumed to be no more than the size of the largest peg-network. We note that the existence of shadow nodes in the hole-network that do not correspond to any node in a peg-network should not affect the quality measures. Overall, our proposed conventions and measures aim to provide a consistent and comparable framework for evaluating multiple network alignment using a single GO annotation term \(g\). ### Equations Preliminaries: We represent a cluster of mutually aligned nodes as a _tower_ of pegs. Recall that, for now, we are working with a single GO term, called \(g\). Thus, each protein in the cluster (tower) is either annotated by \(g\) or not. Given tower \(T\), let \(a_{T}\) be the number of proteins in \(T\) that are annotated by \(g\). We propose two measures for evaluating a multiple network alignment: SGS and EG. Note that these measures are never used to _guide_ the alignment, but are only used after-the-fact to evaluate an alignment generated without their use. The general principle is that an alignment that "concentrates" more GO terms into fewer towers is better than one that spreads GO terms equally between towers. This concentration can be measured in two ways: we can either reward towers that have more GO terms than average (the SGS score below), or we can reward the _entire alignment_ if all the GO terms are concentrated into fewer towers (the "exposed GO" score below). Thus, better alignments have a larger SGS score, but a _smaller_ EG value. Since EG appears in the denominator of the final EG score, the EG score is likewise higher for a better alignment. #### 2.2.1 Squared G Score (SGS) The proposed SGS score is a measure for evaluating the quality of multiple network alignments. The components are assumed to be lambda-sorted for this part. This measure computes the sum of the squared number of \(g\)-annotated nodes in each tower; the higher the SGS score, the better the alignment. We define the SGS score for tower \(T\) as \(|a_{T}|^{2}\), the square of the number of annotated proteins. Then, the global score of an alignment \(F\) is \[SG(F)=\sum_{T}|a_{T}|^{2}. \tag{1}\] To normalize the SGS score between 0 and 1, we divide it by its maximum possible value, which is computed as follows: \[Max(SG)=k^{2}\lambda_{k}+\Sigma_{i=1}^{k-1}i^{2}(\lambda_{i}-\lambda_{i+1}) \tag{2}\] \[SGS=\frac{SG}{Max(SG)} \tag{3}\] where \(k\) is the number of networks in the alignment. #### 2.2.2 Exposed G Score Exposed \(G\) is another measure of the quality of a multiple network alignment. Here, we want to reward alignments where all the GO terms are concentrated into a smaller number of towers. We imagine that GO annotations in a tower are aligned vertically: the uppermost GO term in the tower is "exposed", while the ones "below" it are not. Thus, a better alignment is one that has a smaller number of exposed GO terms. A lower exposed \(G\) value indicates that more \(g\)-annotated nodes are aligned together and fewer nodes are "exposed," resulting in a better alignment. For this score too, the components are lambda-sorted. \[\text{Exposed g}=\text{Number of the towers with at least one $g$ annotated node} \tag{4}\] The calculation of exposed \(G\) involves sorting the components by \(\lambda\) and counting the number of towers with at least one \(g\)-annotated node.
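For illustration, a minimal sketch of Eqs. (1)-(4), assuming an alignment is summarized simply by the list of per-tower annotated counts \(a_T\) (an input format of our choosing):

```python
def sgs_score(tower_counts, lambdas):
    """Normalized Squared G Score (Eqs. 1-3) for a single GO term g.
    tower_counts: the a_T values; lambdas: lambda-sorted counts."""
    k = len(lambdas)
    sg = sum(a * a for a in tower_counts)                      # Eq. (1)
    max_sg = k * k * lambdas[-1] + sum(                        # Eq. (2)
        (j + 1) ** 2 * (lambdas[j] - lambdas[j + 1]) for j in range(k - 1))
    return sg / max_sg                                         # Eq. (3)

def exposed_g(tower_counts):
    """Number of towers with at least one g-annotated node (Eq. 4)."""
    return sum(1 for a in tower_counts if a > 0)
```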
The minimum value of exposed \(G\) is \(\lambda_{1}\) and the maximum value is \(\Sigma_{i=1}^{k}\lambda_{i}\), where \(i\) is the index of the network and \(k\) is the total number of networks. The exposed \(G\) score is then calculated as the minimum value of exposed \(G\) divided by the actual exposed \(G\) value. \[\text{Min(Exposed g)}=\lambda_{1} \tag{5}\] \[\text{Exposed g Score}=\frac{\text{Min(Exposed g)}}{\text{Exposed g}} \tag{6}\] To incorporate exposed \(G\) into the final alignment score, it is placed in the denominator. This transforms it from a cost measure into a score measure between 0 and 1. The simplicity and effectiveness of exposed \(G\) make it a valuable addition to the set of measures used to evaluate multiple network alignments. #### 2.2.3 Statistical Exposed G, the \(p\)-value of a multiple network alignment Scores like SGS and EG are heuristic; they are not rigorous in the sense that we have no idea how "good" a particular score is compared to a random alignment. Here, we wish to transform the Exposed G score into a rigorous \(p\)-value. In order to evaluate the statistical significance of a multiple network alignment, we utilize exposed \(g\) as a measure to calculate the \(p\)-value of a specific alignment. The denominator of the \(p\)-value calculation represents the total number of possible ways a multiple network alignment can be created, while the numerator counts the number of multiple network alignments with a particular exposed \(g\) value. By determining the exposed \(g\) value of a given multiple network alignment and utilizing the equations for the numerator and denominator, we can calculate the statistical significance of obtaining that specific exposed \(g\) value. This allows for a more comprehensive evaluation of the quality of a multiple network alignment, as the statistical significance of obtaining a specific exposed \(g\) value or a better (lower) one can serve as a reflection of its quality: by calculating the \(p\)-value of getting a certain exposed \(G\) value or lower, we can assess how likely it is to observe such an alignment by chance, which reflects the quality of the alignment in a more rigorous and objective manner. It should be noted that in this context, exposed \(g\) refers to the actual number of exposed GO terms, rather than the normalized exposed \(g\) score. Furthermore, the calculation of the numerator and denominator is described in two separate sections to provide a clear understanding of the statistical significance calculation for a single GO term. **2.2.3.a The denominator** The denominator in this context refers to the combinatorial number of ways that a multiple network alignment can be created between a set of networks, taking into consideration the existence of shadow nodes with no biological meaning. The ordering of these shadow nodes does not matter and their number is typically set to zero, but we consider the general case, where the number of extra shadow nodes is bounded between \(n_{1}\) and the sum of the network sizes. This is the whole possible range, but usually the user defines an upper bound that is much closer to \(n_{1}\); we refer to this defined upper bound as \(n_{0}\). To calculate the denominator, we assume that the networks are node-sorted and each network is independently matched with the shadow network. \[LE(n)=\Pi_{i=1}^{k}p(n,n_{i}) \tag{7}\] where \(n=n_{0}\) is the number of shadow nodes allowed for the alignment.
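A sketch of Eq. (7) in the zero-extra-shadow-node case (\(n=n_1\)), together with a brute-force check on toy sizes (our illustration, not the authors' code):

```python
from math import perm, prod
from itertools import permutations

def le(n, sizes):
    """LE(n) = prod_i p(n, n_i): each of the k networks is independently
    and injectively placed into a shadow network with n holes (Eq. 7)."""
    return prod(perm(n, n_i) for n_i in sizes)

# Brute-force check for two tiny networks with n = n_1 = 3 and n_2 = 2:
sizes = [3, 2]
n = max(sizes)
brute = prod(sum(1 for _ in permutations(range(n), n_i)) for n_i in sizes)
assert brute == le(n, sizes)  # 6 * 6 = 36 distinct 1-to-1 alignments
```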
Equation (7) is also the final equation that can be used when we set the extra number of shadow nodes to zero. However, given that the number of shadow nodes is defined as \(n_{0}>n_{1}\), there are a number of empty shadow nodes in each alignment, and different orderings of filled shadow nodes are incorrectly accounted for. Therefore, \(LE(n)\) counts the number of ways that \(n\) or fewer shadow holes are filled, allowing repetitive counts for each case. The exact number of ways that exactly \(n\) shadow holes are filled is represented by \(E(n)\), which is defined through \[LE(n)=\Sigma_{i=n_{1}}^{n}a_{i}E(i) \tag{8}\] where \(a_{i}\) represents the factor with which \(E(i)\) is counted in \(LE(n)\); generally, \(a_{i}=p(n,i)\), since for the shadow network, \(LE\) chooses \(i\) shadow holes and permutes them. \[LE(n)=\Sigma_{i=n_{1}}^{n}\frac{n!\,E(i)}{(n-i)!}\implies E(n)=\Sigma_{i=n_{1}}^{n}\frac{LE(i)\times(-1)^{n-i}}{(n-i)!\,i!} \tag{9}\] Finally, the count of different multiple network alignments is \[\text{Denominator}=\Sigma_{n=n_{1}}^{n_{0}}E(n)=\Sigma_{n=n_{1}}^{n_{0}}\left(\Sigma_{i=0}^{n_{0}-n}\frac{(-1)^{i}}{i!}\right)\frac{LE(n)}{n!} \tag{10}\] \[=\Sigma_{n=n_{1}}^{n_{0}}\left(\Sigma_{i=0}^{n_{0}-n}\frac{(-1)^{i}}{i!\,n!}\right)\Pi_{j=1}^{k}p(n,n_{j}) \tag{11}\] This expression accounts for the number of shadow holes filled and the factorials of each combination. The denominator essentially represents the total number of ways that multiple network alignments can be formed between the given set of networks. **2.2.3.b The numerator** Here we want to calculate the exact number of alignments that have a given number \(x\) of exposed \(g\)s. We define \(LEG(x)\) to be the number of ways that we have \(x\) or fewer exposed \(g\)s; \(LEG(x)\) contains duplicates, which we eliminate by defining an exact version, \(EG(x)\), the number of alignments that have exactly \(x\) exposed \(g\)s. Since we know that in the \(\lambda\)-sorted networks the lower bound for exposed \(g\) is \(\lambda_{1}\), we can write \(LEG(x)\) as: \[LEG(n,x)=c(n,x)\Pi_{i=1}^{k}p(x,\lambda_{i})p(n-\lambda_{i},n_{i}-\lambda_{i}) \tag{12}\] In this equation, we first choose \(x\) of the shadow nodes, and then, for each of the networks, we place all of the annotated nodes among these \(x\) nodes; we then align the rest of the network nodes with the remaining shadow nodes. In cases where all \(x\) chosen shadow nodes are aligned with at least one annotated node, there is no duplicate counting. In the other cases, however, there are duplicates, because the shadow nodes that are not aligned with any annotated node are counted identically whether or not they are chosen among the \(x\). The annotated nodes in \(x\) are not counted in duplicate, because they can only be in \(x\) and it matters to which shadow nodes they are aligned; the unannotated ones, however, could just as well be left out of \(x\). This is why we need the exact count \(EG(x)\).
Since we know that in the \(\lambda\)-sorted networks the lower bound for exposed \(g\) is \(\lambda_{1}\), we rewrite \(LEG(x)\) as \[LEG(x)=\Sigma_{i=0}^{x-\lambda_{1}}a_{i}EG(x-i) \tag{13}\] Here, \(a_{i}\) is the factor with which \(EG(x-i)\) is counted in \(LEG(x)\): an alignment with \(x-i\) exposed \(g\)s is counted once for every choice of the remaining \(i\) slack shadow nodes, so \(a_{i}=c(n-(x-i),i)\), where \(c(n,k)\) is the number of ways to choose \(k\) items from a set of \(n\) items. The equation for \(LEG(x)\) involves choosing \(x\) nodes from the shadow nodes and, for each network, placing all of the annotated nodes among these \(x\) nodes; we then align the rest of the network nodes with the remaining shadow nodes. Since \[LEG(n,x)=\Sigma_{i=0}^{x-\lambda_{1}}c(n-(x-i),i)EG(x-i) \tag{14}\] we can calculate \(EG(n,x)\) as \[EG(n,x)=\Sigma_{i=0}^{x-\lambda_{1}}LEG(n,x-i)c(n-(x-i),i)(-1)^{i} \tag{15}\] Here, we also need to resolve the shadow-hole discrepancy. For this part, we can use the denominator equations, since they undercount the different cases by the same factors. \[E(x)=\Sigma_{n=n_{1}}^{n_{0}}\Sigma_{i=max(n_{1},x)}^{n_{0}-n}\frac{EG(n,i)\times(-1)^{n-i}}{(n-i)!i!} \tag{16}\] To calculate the statistical significance, we must compute \(E(i)\) for all exposed \(g\) values equal to or better (lower) than the observed exposed \(g\). So, while the significance of obtaining a given exposed \(G\) exactly is given by \(E(x)\), the significance of obtaining that exposed \(G\) or a lower one is \[\text{Numerator}=\Sigma_{i=\lambda_{1}}^{x}E(i),\qquad\lambda_{1}\leq x\leq\min\left(\Sigma_{i}\lambda_{i},\,n_{0}\right) \tag{17}\] This approach allows for a more comprehensive evaluation of the quality of a multiple network alignment, as the \(p\)-value of obtaining a specific exposed \(g\) value or a better (lower) one can serve as a reflection of its quality. ### Validation and Logarithmic Implementation of Statistical Exposed G The statistical exposed G measure has been proposed as a means to evaluate the degree of conservation of a set of genes across multiple organisms. Although the equations for the statistical exposed G measure can be difficult to follow, we have validated the measure through numerical and empirical methods. To validate the measure numerically, we generated random network states and compared the sum of the numerator over all possible values of exposed G with the denominator. We then implemented the functions with logarithmic calculations to allow computing results for larger values. We tested the output for different network states, including real-sized and smaller networks, and observed that the results for the numerator and the denominator were equal. We only needed to generate lambda values below the network sizes and an arbitrary allowed shadow-network size within the possible range. For the logarithmic calculations, we computed \(\log(a+b)\) from \(\log(a)\) and \(\log(b)\) as \[\log(a+b)=\log\left(1+e^{\log(a)-\log(b)}\right)+\log(b) \tag{18}\] assuming \(b\) is larger than \(a\); for values of \(e^{\log(a)-\log(b)}\) close to zero, we used the Taylor series. Additionally, we had two sets of series that were multiplied by different powers of \(-1\); consequently, we also needed to calculate \(\log(a-b)\), which was done by factoring out \(a\), as in \[\log(a-b)=\log\left(1-e^{\log(b)-\log(a)}\right)+\log(a) \tag{19}\] Again, the Taylor series was used for small values of \(e^{\log(b)-\log(a)}\).
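A minimal sketch of Eqs. (18)-(19) in Python; `math.log1p` plays the role of the Taylor-series fallback for small arguments:

```python
import math

def log_add(log_a, log_b):
    """log(a + b) from log(a) and log(b) (Eq. 18)."""
    if log_a > log_b:                      # ensure b >= a as assumed above
        log_a, log_b = log_b, log_a
    return math.log1p(math.exp(log_a - log_b)) + log_b

def log_sub(log_a, log_b):
    """log(a - b) from log(a) and log(b), requiring a > b (Eq. 19)."""
    if log_b >= log_a:
        raise ValueError("log_sub requires a > b")
    return math.log1p(-math.exp(log_b - log_a)) + log_a
```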
To further validate the statistical exposed G measure, we used empirical tests. We generated numerous random alignments, calculated the exposed G, and tracked the number of times each exposed G was observed for a network state (which comprises \(n_{1}\) to \(n_{k}\) and \(\lambda_{1}\) to \(\lambda_{k}\)). Dividing this number by the number of times that this network state was produced gave us the "empirical \(p\)-value" of the specific exposed G for the specific network state. We then used our equation to calculate the "theoretical \(p\)-value" for each of these values. We used IID mammalian networks for this purpose, with the Rat, Cow, and Dog networks used for the case with three networks, for which 100 million random alignments were generated. For the case with 5 networks, 9 billion random alignments were generated. It is important to note that if the theoretical \(p\)-values are much lower than \(10^{-9}\), the comparisons would be invalid, since the sample size for the empirical results would not be enough to represent them. In multiple network alignment, due to the numerous alignment states, the \(p\)-values generally decrease very fast. Therefore, we chose a subset of 100 nodes for each of these networks to get theoretical results within this range. Our method for choosing a hundred nodes involved going through the orthologs between the species and choosing the 100 sets of orthologs that had the highest minimum number of GO terms, selecting nodes with the highest shared GO terms. The empirical tests show excellent agreement with the theoretical results, as shown in Figure 1. The divergence at the lower \(p\)-values is due to the relatively small sample size, which is not representative enough for these groups. A Monte-Carlo sketch of this empirical validation is given below.
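A Monte-Carlo sketch of the empirical validation, assuming uniformly random 1-to-1 alignments into a shadow network of size \(n_1\) with no extra shadow nodes (our own illustration, not the authors' code):

```python
import random
from collections import Counter

def empirical_exposed_g_pvalues(sizes, lambdas, n_trials=100_000, seed=0):
    """Estimate P(exposed g <= x) under uniformly random 1-to-1 alignments.

    sizes: node counts n_1 >= ... >= n_k; lambdas: per-network counts of
    g-annotated nodes. Returns a dict mapping each observed exposed-g
    value x to its cumulative empirical p-value."""
    rng = random.Random(seed)
    n = max(sizes)
    counts = Counter()
    for _ in range(n_trials):
        exposed = set()
        for n_i, lam_i in zip(sizes, lambdas):
            holes = rng.sample(range(n), n_i)  # random injective placement
            exposed.update(holes[:lam_i])      # annotated nodes occupy the
                                               # first lam_i sampled holes
        counts[len(exposed)] += 1
    cumulative, pvals = 0, {}
    for x in sorted(counts):
        cumulative += counts[x]
        pvals[x] = cumulative / n_trials
    return pvals
```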
### Experiments Two main sets of experiments were used to assess the measures. These experiments compare the measures against objective evidence to test their quality. #### 2.4.1 Perfect self-alignment with controlled error rate We used BioGrid networks for the first experiment [7]. Our goal was to produce controlled noise using self-alignment. To achieve this, we selected the human, mouse, rat, and fruit fly networks. In each set, we began with a perfect self-alignment of \(k\) copies of a single-species network. Next, we permuted a specific fraction of nodes in the networks to introduce a predetermined error rate. We generated 250 alignments for each combination of \(k\), error rate, and species, resulting in a total of 30,000 self-alignments. A summary of the utilized BioGrid networks is given in Table 1. #### 2.4.2 Using SANA multiple network alignment on IID networks In the next experiment, the SANA network aligner is used to produce real multiple network alignments [6]. SANA is an iterative aligner, and we expected the quality of the alignments to increase with the number of iterations. IID mammalian networks were used for this experiment [8]. We specifically used IID mammalian networks, which have a higher edge density compared to BioGRID networks, as the latter were found to have fewer edges than necessary based on information theory [9]. To ensure meaningful results from iteration, we selected all the GO-term-annotated species within the IID mammalian networks, which included cow, dog, mouse, human, and rat. Within these networks, we created 30 real alignments for each combination of \(k=3\) and \(k=5\) species. A summary of the utilized IID mammalian networks is shown in Table 2. The statistical significance of the exposed G measure was further validated by comparing it to the number of recovered orthologs within species. A higher number of recovered orthologs indicates a better alignment, providing additional objective evidence for the quality of the alignment. Therefore, the trend of the measures was compared to the trend of the recovered ortholog counts. Generally, for the statistical exposed G, in addition to running it on all the GO terms with \(\lambda_{1}\) below 100, we used sampling of 100, 1000, and 10000 GO terms. The sampling options allow the program to run in a very short amount of time. It is important to note that averaging was used to combine the \(p\)-values of different GO terms, because here we are using the statistical exposed G as a quality measure, and we did not want it to be affected by the number of GO terms in the networks; whereas, if we wanted to assess the alignment only statistically, multiplying them would be closer to the actual value, since each GO term provides additional information on the statistical significance of the alignment. \begin{table} \begin{tabular}{c c c c c} \hline Nodes & Common Name & Official Name & Abbr & TaxID \\ \hline 13276 & Human & H. Sapiens & HS & 9606 \\ 7937 & Fruit fly & D. Melanogaster & DM & 7227 \\ 4370 & Mouse & M. Musculus & MM & 10090 \\ 1657 & Rat & R. Norvegicus & RN & 10116 \\ \hline \end{tabular} \end{table} Table 1: A summary of the experimented BioGrid networks. Figure 1: Scatterplots of empirical \(p\)-values for statistical exposed G against the theoretical values. The left plot shows the results for three networks and 100 million randomly generated alignments. The right plot shows the validation results for five IID mammalian networks and 9 billion randomly generated alignments. ## 3 Results ### Exposed G and SGS score #### 3.1.1 Perfect self-alignment with controlled error rate Figure 2 displays the results of the perfect self-alignment with a controlled error rate for both the exposed G and SGS scores. Each circle in the right-hand figures represents a single data point. As depicted in the figures, the data points are concentrated on a single point for every \(k\) and error rate, indicating that the measures are reliable in distinguishing the quality of alignments. Additionally, both the SGS and exposed G measures are monotonically decreasing with error rate. The reason that higher values of \(k\) cause the scores to decrease is that introducing the same error rate for a larger number of networks damages a higher percentage of perfect towers. #### 3.1.2 Using SANA multiple network alignment on IID networks Figure 3 shows the results of SGS and exposed G for the SANA multiple network aligner on IID networks. The curves for both the SGS score and the exposed G score show a monotonically increasing pattern with iteration, as illustrated in the figures. The curves for the SGS score and the exposed G score produce very close absolute values, as shown in the first figure. The exposed G score is capable of distinguishing alignments with different sets of species for \(k=3\), while for \(k=5\), all alignments for both measures reach close final results. To combine the \(p\)-values of the GO terms into a holistic \(p\)-value, we used averaging. ### Statistical Exposed G #### 3.2.1 Perfect self-alignment with controlled error rate Figure 4 shows the results of statistical exposed G on self-alignments. We tested the statistical \(p\)-value measure on self-alignments with a fixed error rate.
The fruit fly (TaxID 7227) network is smaller than those of the other species, and therefore, even when it reaches a zero error rate, the \(p\)-value is less significant than for the other species. The other measures (exposed G and SGS) were unable to detect these differences, but statistical exposed G offers a tool that can be used for more global comparisons. The second plot shows the results of the \(p\)-value for self-alignments with a fixed error rate between 5 networks. This plot shows that reaching a given error rate with \(k=5\) is more significant than reaching the same error rate with \(k=3\). The last plot shows the results for 7 networks. The same trend is visible here, and again, the \(p\)-values are lower. #### 3.2.2 Using SANA multiple network alignment on IID networks For these experiments, we observed that the trend in Figure 5, which shows the results for 3 networks, matches exactly with the recovered ortholog count, which provides further evidence for the legitimacy of the measures. Additionally, in Figure 6 we observed that for species like mouse and human, where the PPI network is more complete, the final score is more statistically significant. The more difficult alignments with lower \(p\)-values improved more slowly around iteration 500. \begin{table} \begin{tabular}{c c c c c} \hline Nodes & Common Name & Official Name & Abbr & TaxID \\ \hline 18079 & Human & H. Sapiens & HS & 9606 \\ 17529 & Mouse & M. Musculus & MM & 10090 \\ 15740 & Rat & R. Norvegicus & RN & 10116 \\ 14512 & Dog & C. Familiaris & CF & 9615 \\ 14783 & Cow & B. Taurus & BT & 9913 \\ \hline \end{tabular} \end{table} Table 2: A summary of the experimented IID mammalian networks. Figure 2: The figures show the results of the self-alignment experiment for BioGrid networks. Each row corresponds to one species; from top to bottom: human, rat, mouse, and fruit fly. The right column shows every single data point on the plot, separated for exposed G and SGS, while the left column compares the trends of SGS and exposed G. Figure 3: The plots show the results of exposed G and SGS after running SANA for 1000 iterations on IID mammalian networks. The pink curves correspond to 5 networks and the blue lines to 3 networks. The first figure shows the results for exposed G and the second for SGS. Figure 4: The plots show the results of statistical exposed G for self-alignments between, from top to bottom, 3, 5, and 7 IID networks. The \(p\)-values start lower as the number of networks increases, as it is more significant to generate high-quality alignments for a larger number of networks. The scale uses natural logarithms. Figure 5: This figure shows the results of statistical exposed G for different iterations of the SANA multiple network aligner for 3 networks. The second plot shows the recovered ortholog count for the same alignments. Both curves match, which shows that statistical exposed G is a promising indicator of quality. Figure 6: This figure shows the differences between species among the data points within the results of statistical exposed G for different iterations of the SANA multiple network aligner for 3 networks. More complicated sets of species land on lower \(p\)-values. In Figure 7, we observed that for \(k=5\), the recovered ortholog count exactly matches the log \(p\)-value curve. Additionally, reaching good alignments is more statistically significant than for \(k=3\). Another noteworthy difference is that the data points become very close as the iterations go by.
This shows that for similar combinations of species, almost the same \(p\)-value is reached, although the process of the alignment is different for different alignments. ## 4 Discussion Our evaluation of SGS and exposed G yielded promising results. Firstly, we observed a perfect match between the expected trend and our measures when we used a perfect self-alignment with a controlled error rate. Secondly, we demonstrated that our measures matched the recovered ortholog count and each other very well when we used the SANA multiple network aligner and allowed it to run for 1000 iterations on IID mammalian networks. Furthermore, our \(p\)-value measure was fully supported by our proposed measures and the recovered ortholog counts. The \(p\)-value measure was validated by testing it against empirically generated random alignments and matching the empirical \(p\)-value against our calculated \(p\)-value; the calculated curve was very close to the empirical one. We also validated statistical exposed G by observing that it sums to one when combined over all the possibilities. ### Limitations Despite these promising results, there are several limitations to our study that must be acknowledged. Firstly, we combined scores for different GO terms using averaging, which may have overlooked their inter-dependencies. Secondly, to resolve the adverse effect on precision of the large number of log-summations required, we restricted our analysis to GO terms with a \(\lambda_{1}\) value of 100 or less. These GO terms accounted for the majority of the available GO terms in the networks. Additionally, we set the size of the shadow network to be equal to the largest network and did not allow for outer matches, as setting a maximum number of extra shadow nodes would have had a significant impact on the \(p\)-value, which should reflect the quality of the alignment as indicated by the exposed G measure. Figure 7: This figure shows the results of statistical exposed G for different iterations of the SANA multiple network aligner for 5 networks. The second plot shows the recovered ortholog count for the same alignments. Both curves match, which shows that statistical exposed G is a promising indicator of quality. ### Future Work In the future, we plan to further enhance our approach by computing the \(p\)-values for the Squared GO Score (SGS) and a full combinatorial \(p\)-value. We also plan to use the Empirical Brown's Method to account for the inter-dependencies between the GO terms. Additionally, we want to further test our measures on BioGrid, to learn how they perform on unbalanced data. ### Conclusion In conclusion, our proposed measures have shown promise in assessing the quality of multiple network alignment in PPI networks. The ability to evaluate the statistical significance of a multiple network alignment is particularly important for the analysis of large-scale biological networks. Our work has potential applications in predicting gene function and understanding the functional relationships between proteins. ## 5 Appendix ### Optimization of statistical Exposed G The numerator of the statistical measure is computationally intensive, with a computational complexity of \(O((n_{0}-n_{1})^{2}\,(\text{exposed G}-\lambda_{1})^{2}\,k)\), where \(n_{0}-n_{1}\) represents the number of extra shadow nodes used. To address this, we memoized the log-factorials and the intermediate functions of the numerator in a hashmap.
We also observed that different parts of the summations converged before the completion of the summation. However, the computation of one \(E(x)\) took approximately one second, which is time-consuming, as \(E(x)\) needs to be calculated for all values below the exposed G. Furthermore, as this calculation needs to be performed for all GO terms, which number around 20,000, the process is computationally expensive, although easily parallelized. The following optimization is therefore mainly useful when parallelism is not available and the \(p\)-value must be produced for all of the GO terms. To reduce the need for calculating the measure for all \(x\) values between \(\lambda_{1}\) and the exposed G, we plotted \(E(x)\) and the cumulative \(E(x)\), as shown in Figure 8. Our analysis showed that the \(E(x)\) curve is generally increasing, with the maximum points and the cumulative \(E(x)\) points being very close. To accurately reflect the quality of statistical exposed G, we only required a correctly calculated curve that increases with exposed G. Therefore, instead of calculating everything, we started at the exposed G and worked backward until we reached the previous peak, and then returned the current value. This returns the result within a few evaluations of \(E(x)\). We used the derivative of the function as an approximation of the error, and found the error to be negligible. We tested this approach on both randomly synthesized network states and real alignments, and found that it reduced the time required from 5 minutes and 41 seconds to 18 seconds for real mammalian networks with an exposed G in the middle of the allowed range for a single GO term. Figure 8: This figure shows that the general trend of \(E(x)\) for a point estimate of an exposed G is generally increasing and is very close to the cumulative calculation if we use the peaks. A sketch of this shortcut is given below.
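A sketch of the backward-walk shortcut, assuming \(E\) is available as a callable; this is our reading of the optimization, not the authors' exact code:

```python
def cumulative_E_shortcut(E, exposed_g, lambda_1):
    """Approximate the cumulative sum of E(i) up to exposed_g by walking
    backward from exposed_g and stopping at the previous peak, exploiting
    the observation (Fig. 8) that E(x) is generally increasing, so earlier
    terms are negligible relative to the accumulated total."""
    total = 0.0
    prev = float("inf")
    x = exposed_g
    while x >= lambda_1:
        val = E(x)
        if val > prev:   # we have passed the previous local peak; stop
            break
        total += val
        prev = val
        x -= 1
    return total
```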
We experimented with different numbers of shadow holes and observed that using extra shadow nodes reduced the \(p\)-value by a significant factor, but did not indicate a better alignment. This means that if two people used different numbers of maximum extra shadow nodes for the same multiple network alignment, they would receive different results. Consequently, we used the biggest network as the alignment base to ensure that the \(p\)-value was only affected by the exposed G as the quality indicator. The zero-extra-shadow-hole setting could only be problematic for GO terms that are aligned poorly enough to exceed the number of nodes in the biggest network. However, this was only observed in a small number of GO terms with, on average, more than \(N/k\) instances per network, which were common and uninformative GO terms. For these GO terms, we safely assumed a value of 1 for the \(p\)-value, as the holistic \(p\)-value is a combination of a large number of GO-term \(p\)-values. Furthermore, using logarithmic summation and subtraction can significantly affect the precision. By removing the shadow holes, the time complexity dropped to \(O((\text{exposed G}-\lambda_{1})^{2})\), and only two series of summations were required. Nonetheless, we used fixed-precision libraries with at least 60 decimal digits of precision to obtain valid results. It is important to note that setting extra shadow nodes is accounted for in the equations and the code and can be used, but it is not recommended. For smaller exposed \(g\) values, a shorter summation and consequently a lower precision was required; for bigger exposed \(g\), which was the case for networks with big lambdas, higher precision was needed. Using a precision higher than 300 decimal digits slowed the process down too much. Fortunately, almost all of the GO terms had lambdas smaller than 100, as demonstrated in Figure 9. It is important to note that the smaller the \(\lambda_{1}\) of a GO term, the more likely the GO term is to be informative and specific to a certain function; this means that GO terms with smaller lambdas are representative and should contribute to the final result. Using GO terms with lambdas above one hundred, in most cases, incurred big errors due to the summations. As a result, we use GO terms with \(\lambda_{1}\) below one hundred for obtaining the final result. It is also possible to use the sampling options and only look at a specified number of GO terms, but the result for all the GO terms with \(\lambda_{1}\) below 100 is usually calculated within an hour. In order to see how logarithmic summation and subtraction are affected by a fixed precision, we designed an experiment. We created a list by starting from a value close to our denominator and incrementing the value by 10 or 100 to get the next element. We used logarithmic summation on this list and then logarithmic subtraction to remove all the elements except for the first one. By increasing the length of this list, more precision was required to get correct results. We noticed that the precision was either close to perfect or completely off, which was close to what we were observing in the actual runs. Therefore, we wanted to see how many elements could be handled before we reached the precision drop for different fixed precisions. The precision was not affected by the first value that we chose; it was only affected by the fixed precision and the value by which we incremented. As shown in Figure 10, the closer the values, the more elements could be handled with high precision. However, even with 300 decimal digits, fewer than 70 summations could be done for a difference of 10. As the plot was linear, we were able to derive that \(2/\text{incrementValue}\) was a good approximation of the slope. We validated this by checking it with an increment value of 1 and by shuffling the lists. We calculated the mean of the differences that we encountered in the real alignments, which had an increment value of around 3, so we used 0.66 as the slope to calculate the fixed precision that we need. Using this, the precision is allocated dynamically between a minimum value of 60 and a maximum value of 300 decimal digits, using this relation to obtain the required number of digits. The calculations are even more precise and faster for better alignments, which ensures that good alignments are easily distinguished from each other even at very low \(p\)-values. The figure also shows why we cannot reach the required precision for GO terms with a big \(\lambda_{1}\): the number of logarithmic summations needed is \(\text{exposed g}-\lambda_{1}+1\) for each GO term, which in most cases is too big for these GO terms. Figure 10: This figure plots the number of log-summations done before the precision falls. The blue line shows the result when the numbers are distanced by 10, and the orange one shows it for 100. Figure 9: This figure plots the frequency histogram of \(\lambda_{1}\) values for all GO terms. It shows that the majority of the GO terms have \(\lambda_{1}\) below 100. Overall, the optimizations allowed us to calculate the \(p\)-values for the majority of the GO terms within an alignment with high precision and in a reasonable time. A sketch of the fixed-precision stress test described above closes this appendix.
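As a closing illustration, a sketch of the fixed-precision stress test, assuming the `mpmath` library for arbitrary-precision arithmetic (our own illustration):

```python
from mpmath import mp, mpf, exp, log

def max_safe_summations(increment, digits, start=mpf("1e50"), limit=200):
    """Return how many log-additions followed by log-subtractions survive
    before the recovered first element diverges from its true value."""
    mp.dps = digits                          # decimal digits of precision
    for length in range(2, limit):
        values = [start + i * increment for i in range(length)]
        acc = log(values[0])
        for v in values[1:]:                 # log-add every element
            acc = log(1 + exp(log(v) - acc)) + acc
        for v in values[1:]:                 # log-subtract them back out
            acc = log(1 - exp(log(v) - acc)) + acc
        if abs(exp(acc) - start) / start > mpf("1e-6"):
            return length - 1                # precision failed here
    return limit

# e.g. compare max_safe_summations(10, 300) with max_safe_summations(100, 300)
```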
2305.07492
Consistency and Reproducibility of Grades in Higher Education: A Case Study in Deep Learning
Evaluating the performance of students in higher education is essential for gauging the effectiveness of teaching methods and achieving greater equality of opportunities for all. In this study, we investigate the correlation between two teachers' grading practices in a deep learning course at the master's level, offered at CentraleSupélec. The two teachers, who have distinct teaching styles, were responsible for marking the final project oral presentation. Our results indicate a significant positive correlation (0.76) between the two teachers' grading practices, suggesting that their assessments of students' performance are consistent. Although consistent with each other, grades do not seem to be fully reproducible from one examiner to the other, suggesting serious drawbacks of only using one examiner for oral projects. Furthermore, we observed that the maximum difference between the grades assigned by the two examiners was 12.5%, with a mean of 6.3% (and median of 5.0%), highlighting the potential impact of inter-examiner variability on students' final grades.
Paul Dubois, Romain Lhotte
2023-05-12T14:06:16Z
http://arxiv.org/abs/2305.07492v3
# Marking Correlation ###### Abstract Evaluating the performance of students in higher education is essential for gauging the effectiveness of teaching methods and achieving greater equality of opportunities for all. In this study, we investigate the correlation between two teachers' grading practices in a deep learning course at the master's level, offered at CentraleSupelec. The two teachers, who have distinct teaching styles, were responsible for marking the final project oral presentation. Our results indicate a significant positive correlation (0.76) between the two teachers' grading practices, suggesting that their assessments of students' performance are consistent. Although consistent with each other, grades do not seem to be fully reproducible from one examiner to the other, suggesting serious drawbacks of only using one examiner for oral projects. Furthermore, we observed that the maximum difference between the grades assigned by the two examiners was 12.5%, with a mean of 6.3% (and median of 5.0%), highlighting the potential impact of inter-examiner variability on students' final grades. ## 1 Introduction In recent years, there has been a surge of interest in the field of deep learning, with applications ranging from computer vision[10][3] and natural language processing[9] to bio-informatics[8][2] and medical applications[11]. As a result, there is an increasing demand for high-quality education and training programs in deep learning[1]. In response to this demand, many universities and engineering schools have started offering courses and programs in deep learning at various levels, including undergraduate and graduate levels. The evaluation of students' performance in a course is a critical aspect of teaching [7][5]. It can be used to measure the effectiveness of the teaching methods and the quality of the learning outcomes. In this study, we investigate the correlation between the grading practices of the two instructors who taught the course and the Kaggle assessments. The Kaggle assessments are regarded as impartial marking and considered a "ground truth". Our goal is to evaluate the reliability and consistency of grading practices in higher education and to provide insights that can inform efforts to improve the quality of teaching and learning outcomes. The findings of this study have implications for educators, administrators, and policymakers who are interested in improving the quality of education in deep learning and related fields. The results can also be used to inform the development of more effective evaluation frameworks and grading practices in higher education. Overall, this study contributes to the growing body of knowledge on the pedagogy of teaching scientific courses at an advanced university level. ## 2 Methods ### Participants The participants in this study were 28 students enrolled in a deep learning course at the master's level offered at CentraleSupelec during the academic year 2022/2023. The class was taught by two instructors with distinct teaching styles. The course was designed to provide students with a comprehensive understanding of the fundamental concepts, theories, and applications of deep learning. The course was structured into 10 teaching sessions, each spanning three hours. Each session comprised one hour of theoretical instruction followed by two hours of practical exercises.
In addition to the classroom instruction, five Kaggle challenges were assigned to the students, who were expected to work on them independently and outside of class time. The final component of the course consisted of a group project, which required all 28 students to form 10 groups of 2-4 individuals. The project served as a comprehensive assessment of the students' proficiency in deep learning and required the application of the concepts and techniques covered in the course. Students were expected to present their project findings orally and respond to questions from the instructors. ### Evaluation Framework The evaluation of students' performance in the course was based on six components, which accounted for the final grade. The first five components were Kaggle challenges, each accounting for 6% of the total mark, adding up to 30% of the final mark. For each challenge, a marking scheme was established and publicly shared with the students:

**0/6**: **no attempt** at the challenge, or a score _below_ that of a _basic_ attempt at the challenge

**3/6**: achieved a better score than one typically obtained after a **basic** attempt at the challenge

**4/6**: achieved a better score than one typically obtained after a **fair** attempt at the challenge

**5/6**: achieved a better score than one typically obtained after an **advanced** attempt at the challenge

**6/6**: given to the **top 10** students of the class; to obtain 6/6, students also needed to meet the requirement for 5/6 (the class has 28 students, so this score is reachable for roughly a third of the class)

Details of the exact score thresholds for each Kaggle challenge are given on the challenges' official pages:

**Kaggle 1**: [https://www.kaggle.com/competitions/fitting-a-1d-1d-function-with-deep-learning](https://www.kaggle.com/competitions/fitting-a-1d-1d-function-with-deep-learning)

**Kaggle 2**: [https://www.kaggle.com/competitions/fitting-a-5d-5d-function-with-deep-learning](https://www.kaggle.com/competitions/fitting-a-5d-5d-function-with-deep-learning)

**Kaggle 3**: [https://www.kaggle.com/competitions/binary-classification-glasses-no-glasses](https://www.kaggle.com/competitions/binary-classification-glasses-no-glasses)

**Kaggle 4**: [https://www.kaggle.com/competitions/decimal-classification-mics-mnist](https://www.kaggle.com/competitions/decimal-classification-mics-mnist)

**Kaggle 5**: [https://www.kaggle.com/competitions/sword-video-classification](https://www.kaggle.com/competitions/sword-video-classification)

The final component was a project, which accounted for the remaining 70% of the final mark. Students completed the project in groups of 2-4, and the project was assessed based on an oral presentation of approximately 10-15 minutes, followed by approximately 10 minutes of questions. The final grade for the project was determined by averaging the independent scores provided by the two instructors, who evaluated the projects separately and without knowledge of each other's assigned scores. A sketch of this weighting is given below.
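A sketch of the grade weighting (the function and argument names are ours, purely illustrative):

```python
def final_grade(kaggle_marks, examiner1_mark, examiner2_mark):
    """Final course grade as a percentage: five Kaggle marks out of 6
    (6% weight each, 30% total) plus the project mark out of 20,
    averaged over the two examiners (70% weight)."""
    assert len(kaggle_marks) == 5
    # Each mark out of 6 is worth 6%, so a mark m contributes m points.
    kaggle_part = sum(kaggle_marks)
    project_avg = (examiner1_mark + examiner2_mark) / 2   # out of 20
    return kaggle_part + project_avg / 20 * 70

# Example: final_grade([5, 4, 6, 5, 3], 16.5, 17.0) == 81.625
```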
### Unbiased versus Biased Evaluation The Kaggle challenges were employed as an impartial means of evaluating the students. The five Kaggle challenges were deemed to constitute a representative measure of the students' abilities. Teachers may harbor preferences for certain projects, or be subject to the halo effect or other types of biases[4][6]. As a result, the average score attained by the students in the Kaggle competitions was regarded as the "ground truth" for grading. It should be noted that certain students may have invested more time and effort in the project than in the Kaggle challenges, therefore making the Kaggle metric imperfect. Such factors, however, lie beyond the scope of this article. ### Ethical Considerations This study was conducted in accordance with the ethical guidelines of CentraleSupelec, and all participants were anonymized for confidentiality purposes. ### Normalizing The Kaggles were originally graded out of 6, and the project out of 20. We normalized all grades to be out of 100 (%), to be able to compare means and standard deviations. ## 3 Results We rounded all numerical values to \(10^{-2}\) for readability purposes. ### General results We obtained the following results (\(mean\pm standard\_deviation\)): * Kaggle 1: \(88.89\pm 9.25\) * Kaggle 2: \(80.25\pm 26.16\) * Kaggle 3: \(89.51\pm 8.20\) * Kaggle 4: \(75.31\pm 24.62\) * Kaggle 5: \(70.06\pm 31.37\) Kaggle Average: \(80.80\pm 13.22\) * Project-Teacher 1: \(82.13\pm 8.20\) * Project-Teacher 2: \(80.83\pm 11.22\) Project: \(81.48\pm 9.13\) The Kaggles and the project have similar averages. The standard deviation is slightly lower for the project, but both are still comparable. ### Dependence of assessments Suppose that the scores of students on each Kaggle were independent. Then probability theory tells us that the standard deviation of the average Kaggle score should be \(9.85\) (using \(\text{Var}\left(\frac{\sum_{i=1}^{5}K_{i}}{5}\right)=\frac{\sum_{i=1}^{5}\text{Var}(K_{i})}{5^{2}}\)). However, the standard deviation found is \(13.22\). This means that the scores on the Kaggle challenges are not independent of each other. This makes sense, since we expect good students to perform well in all assessments. Similarly, the standard deviation of the project mark (the average of the marks given by teachers 1 and 2) should be \(6.95\) (using \(\text{Var}\left(\frac{P_{1}+P_{2}}{2}\right)=\frac{\text{Var}(P_{1})+\text{Var}(P_{2})}{2^{2}}\)) if the two teachers' grades could be treated as independent random variables. However, we observe a standard deviation of \(9.13\), leading to the conclusion that the two teachers' marks cannot be treated as independent random variables. Again, this is expected, since teachers do not mark randomly. A numerical sketch of this check is given below.
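A numerical sketch of the independence check (illustrative Python using only the published standard deviations):

```python
import math

def sd_of_mean_if_independent(sds):
    """Standard deviation of the average of independent scores:
    sqrt(sum of variances) / k, from Var(mean) = sum(Var) / k^2."""
    return math.sqrt(sum(s * s for s in sds)) / len(sds)

# Five Kaggles: expected ~9.85 under independence vs. 13.22 observed.
print(sd_of_mean_if_independent([9.25, 26.16, 8.20, 24.62, 31.37]))
# Two examiners: expected ~6.95 under independence vs. 9.13 observed.
print(sd_of_mean_if_independent([8.20, 11.22]))
```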
### Distribution of the marks We can plot the Kernel Density Estimate (KDE) of the distribution of the marks, both for the Kaggle challenges and for the project. Figure 1: KDE of the mark distributions. We observe that the class splits into two groups: one achieving a nearly perfect score, and one getting about 75%. This trend can be observed in the project scores given by teachers 1 and 2, but also in some of the Kaggle challenges (Kaggles 1 and 3, where it is obvious, and Kaggles 4 and 5, where it is less sharp). ### Quantification of the inter-assessment correlations A complete table of correlations between all scores can be found in the supplementary material. Here are the most interesting/interpretable ones: * Kaggles-Project correlation: 0.64; maximum absolute difference: 24.16% * Inter-examiner correlation: 0.76; maximum absolute difference: 12.50% * Examiner-Kaggles correlation: Examiner 1: 0.49; Examiner 2: 0.69 * Kaggle 1 - Kaggle Avg: 0.64; Kaggle 2 - Kaggle Avg: 0.53; Kaggle 3 - Kaggle Avg: 0.33; Kaggle 4 - Kaggle Avg: 0.71; Kaggle 5 - Kaggle Avg: 0.83 ## 4 Discussion The observed higher inter-examiner correlation as compared to the examiner-Kaggles correlation could potentially suggest that there are inherent differences in the oral presentation skills of the students being evaluated. While the examiners demonstrate a higher degree of agreement in their evaluations of the oral presentations, their scores do not align as closely with those assigned by the impartial non-oral assessment provided by the Kaggles. These findings highlight the potential limitations of using oral assessments for evaluating students, and suggest that there may be individual differences in the oral presentation proficiency of the students under evaluation. The observed low correlation between some Kaggles and the Kaggle average suggests that students may have directed their attention towards particular Kaggles. This can be explained by two facts. First, students might have found some Kaggles more interesting than others, and therefore performed better on those. Second, since there was an extra point for being in the top 10, students may have focused their attention on one or two Kaggles to get the extra point. This may have resulted in an uneven distribution of effort across the different Kaggles. The average of the Kaggles, however, is still a good unbiased metric. While the study provides valuable insights about how oral presentations may be appreciated differently by different examiners, it also has several limitations. Firstly, the study only included one class with 28 students, which means that the findings may not be generalizable to other settings or larger sample sizes. Secondly, the study only used one project assessment to evaluate student learning outcomes, which may not provide a comprehensive picture of student performance or achievement. Thirdly, the study only had two examiners, which may limit the reliability and validity of the assessment process. Additionally, it is possible that students may have paid more attention to one aspect of the course (either the Kaggle competitions or the class project) than the other. This could create a decorrelation between the two marks that is due not to the examiners' subjective appreciation, but to an objectively better or worse performance than in the Kaggle competitions. Further research with larger sample sizes and more diverse student populations is needed to confirm and extend these findings.
2309.01336
Learning for Interval Prediction of Electricity Demand: A Cluster-based Bootstrapping Approach
Accurate predictions of electricity demands are necessary for managing operations in a small aggregation load setting like a microgrid. Due to low aggregation, the electricity demands can be highly stochastic, and point estimates would lead to inflated errors. Interval estimation in this scenario would provide a range of values within which the future values might lie and helps quantify the errors around the point estimates. This paper introduces a residual bootstrap algorithm to generate interval estimates of day-ahead electricity demand. A machine learning algorithm is used to obtain the point estimates of electricity demand and the respective residuals on the training set. The obtained residuals are stored in memory and the memory is further partitioned. Days with similar demand patterns are grouped in clusters using an unsupervised learning algorithm and these clusters are used to partition the memory. The point estimates for the test day are used to find the closest cluster of similar days and the residuals are bootstrapped from the chosen cluster. This algorithm is evaluated on real electricity demand data from EULR (End Use Load Research) and is compared to other bootstrapping methods for varying confidence intervals.
Rohit Dube, Natarajan Gautam, Amarnath Banerjee, Harsha Nagarajan
2023-09-04T03:45:28Z
http://arxiv.org/abs/2309.01336v1
# Learning for Interval Prediction of Electricity Demand: A Cluster-based Bootstrapping Approach ###### Abstract Accurate predictions of electricity demands are necessary for managing operations in a small aggregation load setting like a microgrid. Due to low aggregation, the electricity demands can be highly stochastic, and point estimates would lead to inflated errors. Interval estimation in this scenario would provide a range of values within which the future values might lie and helps quantify the errors around the point estimates. This paper introduces a residual bootstrap algorithm to generate interval estimates of day-ahead electricity demand. A machine learning algorithm is used to obtain the point estimates of electricity demand and the respective residuals on the training set. The obtained residuals are stored in memory and the memory is further partitioned. Days with similar demand patterns are grouped in clusters using an unsupervised learning algorithm and these clusters are used to partition the memory. The point estimates for the test day are used to find the closest cluster of similar days and the residuals are bootstrapped from the chosen cluster. This algorithm is evaluated on real electricity demand data from EULR (End Use Load Research) and is compared to other bootstrapping methods for varying confidence intervals. keywords: Time Series, Load Forecasting, Confidence Intervals, Machine Learning, Residual Errors. + Footnote †: journal: IJF ## 1 Introduction The past few decades have led to the emergence of deregulated electricity markets, resulting in Independent System Operators (ISO) allowing participants to buy and sell electricity in the market. The ISOs, such as the New England ISO and the Electric Reliability Council of Texas (ERCOT), conduct market settlement of prices for electricity supply and demand mainly at two time levels, called the day-ahead and real-time markets. Consumers of electricity such as local residential microgrids, defined as a group of interconnected loads with distributed energy generation, are envisioned to participate in the day-ahead markets to avoid the volatility of electricity prices in real-time markets. In order to do so, the microgrids require accurate day-ahead forecasts of electricity demands for the next 24-32 hours. However, there are difficulties associated with multi-horizon load forecasting, especially in residential microgrids, due to lower load aggregation. The effect of low aggregation is represented in Figure 1, where the average electricity demand of 10 houses is compared with that of 150 houses. Additionally, the pattern of electricity demand in the residential sector is non-stationary and periodic over daily, weekly, and annual cycles, which makes the forecasting of electricity demand a difficult problem. This, along with locally distributed energy generation from solar photovoltaics and wind turbines, low aggregation of consumption units, and newer loads like electric vehicles, leads to high volatility, as the random noise from these sources is superimposed on the daily demand curve. The microgrids, however, offer advantages such as improved reliability and resiliency by providing backup during grid outages, increased renewable energy use, cost savings, and consumer-side control.
Accurate prediction of energy demand and generation is necessary to reduce demand costs and meet the energy demands in microgrids. The forecasting requirements of microgrids can broadly range from days, hours, or real-time (short-term) for optimizing energy generation and distribution to several months (long-term) for scheduling maintenance, capacity planning, and policy formulation. Short-term prediction of one-day-ahead demand is defined as the process of forecasting the expected electricity demand of the microgrid over a short time horizon, in the case of this research, the next day. Short-term point forecasts of electricity demands have been traditionally used and extensively researched (Fildes et al. (1997), Lago et al. (2021)) and are an essential tool to plan generation schedules for electricity in the day-ahead energy markets. The application of short-term forecasting is thus to identify the demand cycle for the following day, accounting for the random noise while capturing consistent deviations.

Figure 1: Stochasticity of electricity demand in the EULR dataset (Northwest Energy Efficiency Alliance (2020)) due to varying levels of aggregation

Traditionally, point forecasting of the expected electricity demand has dominated the literature (Harvey et al. (1993); Espinoza et al. (2005); Clements et al. (2016); Hippert et al. (2001)). But in the case of microgrids, electricity demand is extremely volatile, largely due to renewable energy generation and the low aggregation of demand loads. The point estimates cannot represent the entire information accurately as a result of the noise and stochastic nature of demands. Thus forecasting prediction intervals is preferred over point forecasts, as intervals provide the range of possible outcomes for demand (Li et al. (2017)) rather than a single prediction of the volatile demand. The estimate of prediction intervals along with point forecasts becomes an important tool for cost-saving policies and decision-making for power system operations (Hong and Fan (2016)), as decisions can be made for many future scenarios. In the context of robust optimization, accurate demand interval predictions can be useful to form uncertainty sets. Robust reconfiguration of microgrids with an accurate prediction of intervals can lead to better solution strategies (Lee et al. (2015)). Machine Learning (ML) models have been growing in popularity for forecasting point and interval predictions of electric load (Mori and Kobayashi (1996); Ahmad et al. (2014); Kong et al. (2019); Zhang et al. (2013); Mocanu et al. (2016)) with a major emphasis on Linear Regression (LR) and Artificial Neural Networks (ANN). We will use the residuals obtained by ML models to form prediction intervals by bootstrapping residuals, but before doing so we will consider the problems associated with direct bootstrapping. An important assumption in ML models is that the model errors are independent and identically distributed (IID), implying that the residuals have no trends and are not connected to each other in any way. Bootstrap re-sampling, developed by Efron (1979), is a procedure to generate a sampling distribution by repeatedly taking random samples from the known sample, with replacement. The methods available for bootstrapping depend on whether the input data for the method is an independent random variable or a time series (Härdle et al. (2003)).
As shown later in Section 4, there is auto-correlation present in the residual series of electricity demand, so block bootstrap is used instead of the ordinary bootstrap. Instead of single sample points, contiguous sections of the time series are selected at random and joined together, maintaining the structural dependence or auto-correlations of the residuals required by time-series bootstrap methods. A similar approach is shown in Bergmeir et al. (2016), where the moving block bootstrap developed by Mignani and Rosa (1995) is used to sample from the residual series. LOESS (Locally Estimated Scatterplot Smoothing), developed by Cleveland et al. (1990), is used to obtain the residuals by applying seasonal-trend decomposition to the observations. Recently, tree-based ensemble models were among the top performers in the M5 and Global Energy Forecasting (GEFCom) competitions and are one of the leading models used on Kaggle (Bojer and Meldgaard (2021)). The performance of ensemble models on time-series data in the mentioned competitions is a major motivation to test these models for forecasting in this research. While the recent results of ensemble models show potential, forecasting based on Linear Regression and Artificial Neural Networks has been used vastly. We train the tree ensemble point forecast models of Light Gradient Boosting Machine (LGBM) and Gradient Boosting Regression (GBR) as well as Linear Regression (LR) with exogenous variables.

Assuming that the future residuals of a time series will be like the past, i.e., stationary residuals, the future residual errors can be sampled multiple times from the ones seen in the past to simulate an entire set of future values for the time series (Stine (1985), Clements and Kim (2007), Pan and Politis (2016)). However, we shall see that the residuals of the electricity demand obtained by ML models are non-stationary, as the ML model has higher residual errors during the months of high electricity demand and lower residual values otherwise. We observe that the residuals of days with similar electricity demand patterns are similar; thus the paper proposes a cluster-based method to tackle the problem of non-stationarity. The proposed procedure is to cluster the residuals of days with similar demand patterns together. The residual errors of the days within each cluster in the memory appear to have a near-constant variance. A similarity score can be assigned between the point estimates of the day-ahead demand and all the clusters, and the residuals can be bootstrapped from the cluster with the best similarity score. The proposed method, called cluster-based block bootstrap, solves the two problems associated with the non-IID structure and non-stationarity with block bootstrapping and clustering, respectively. With that motivation, the objective of this paper is two-fold: first, to develop a non-parametric bootstrap algorithm to generate prediction intervals of day-ahead electricity demand with high confidence based on a point estimate ML model, and second, to reduce the computation time required by the proposed algorithm compared to the bootstrap aggregating models. The paper proceeds with the introduction of the electricity demand data in Section 2; the problem definition and the results of the point estimates are described in Section 3 and Section 4. We propose the Cluster Block Bootstrap algorithm in Section 5. The results of the proposed algorithm are compared to other bootstrapping algorithms in Section 6.
We show that the proposed algorithm achieves the performance of the baseline algorithm with a considerable decrease in computation time.

## 2 Data

ML models have the limitation of needing large amounts of data for accurate prediction. Large-scale electric load data collection projects like residential End Use Load Research (EULR) by the Northwest Energy Efficiency Alliance (Northwest Energy Efficiency Alliance (2020)) and Pecan Street Austin (Pecan Street Inc. Dataport) have provided the spatial and temporal granularity to work with ML models requiring vast amounts of training data. EULR data on electricity demand was collected at 1-minute intervals from 400 homes across the Northwestern region of the states of Washington, Oregon, Montana, and Idaho. Information about temperature, humidity, and other atmospheric conditions is also provided in the EULR data. It is worthwhile to describe the EULR project before we go into the details of the model. The EULR project is a regional study designed to gather accurate electricity demand profiles that could help in understanding contemporary electricity end-use patterns. While the project collects data at every minute interval, it has provided public access to the 15-minute interval data of electricity demand in residential and commercial units for research purposes. Each of these units is called a site, and each site is recognized by its _unique id_. Since the inception of the project in 2020, data has been collected from around 400 such sites, including solar-powered sites. The data provided in EULR consists of electricity drawn at the residential site's main supply line as well as at some of the major electrical appliances. The sites with solar generation are labeled as such and can be filtered out from the dataset. For such sites, data on the net electricity consumed by the main supply line is provided; thus the time series of electricity demand and solar generation cannot be separated for sites with solar generation. As a result, in this paper, we train our models on the electricity demand registered at the site's main supply line without any solar power generation. Compared to all the states mentioned earlier, the data for the highest number of residential sites were recorded in Washington state. The number of units from Washington for which data were continuously collected from the year 2020 to 2022 is 50. This is still considerably low and thus creates a scenario where prediction for a few households is needed, as in a small Microgrid. Figure 2 shows the one-day moving average (96 intervals of 15 minutes) for the aggregate electricity demand of these 50 sites. The effects of annual seasonality can be seen as there is a downward trend in demand from the month of March to May and an upward trend from October to January. We describe the ML models in Section 3, for which data from the year 2021 is used as a training sample and the data from the first quarter of 2022 is used for testing. The train-test split will remain the same in all of the following sections. We begin by defining the problem setup and show the results of the ML point estimates in the following section.

## 3 Problem Definition

The objective of this study is to accurately forecast the prediction interval of the one-day-ahead aggregate electricity demand of the 50 residential sites. The interval prediction model in this research is based on residuals obtained from point estimates of the ML model's forecast.
This section explains the inputs to the ML model and compares the results of the point estimates of the implemented ML models. Furthermore, Section 4 formalizes the results of the point estimates discussed here and presents the necessary elements required for interval prediction. Recall from Section 2 that the data from the year 2021 is used as the training data. Each day in the training set is represented by \(j\), where \(j\in J=\{1,\ 2,\dots,\ 365\}\). Further, the daily aggregated demand can be divided into 96 intervals represented by \(i\) such that \(i\in I=\{1,\ 2,\ \dots,96\}\), with \(i=1\) representing time \(00:00:00\), sequentially increasing in intervals of 15 minutes until \(23:45:00\). The training data for the time series can be considered as labeled data of the form \((\mathbf{X}_{i}^{j},y_{i}^{j})\), where \(\mathbf{X}_{i}^{j}\) is the input vector comprising the lags and exogenous variables and \(y_{i}^{j}\) is the observed demand for the \(i_{th}\) interval on the \(j_{th}\) day. The input lag and exogenous variables for the ML model are selected as follows.

Figure 2: One-day moving average of aggregate electricity demand for 50 sites in Washington

### Input Variable Selection

The plot of the Partial Auto-Correlation Function (PACF) is used by auto-regressive models to measure the correlation between the observed values of a time series (Elsaraiti et al. (2021)), in our case, the electricity demand \(y_{i}^{j}\) to \(y_{i-k}^{j}\) for different values of \(k\). The PACF for the electricity demand data on the training set is plotted on the right-hand side of Figure 3, which shows the dependence of the demand \(y_{i}^{j}\) on the \(y_{i-1}^{j}\) and \(y_{i-2}^{j}\) values. It should be noted that since we are making a multi-horizon prediction for a one-day-ahead period, the lag values, i.e., the observed demand during intervals \(i-1\) and \(i-2\) of the same day for \(i>1\), would not be available when predicting the \(i_{th}\) interval. However, the Auto-correlation function (ACF) on the left-hand side of Figure 3 suggests that the electricity demand during the interval \(i\) is correlated with the demand seen during the same interval of the previous day. Thus, using these observations from the PACF and ACF plots, the observed values of \(y_{i-1}^{j-1}\) and \(y_{i-2}^{j-1}\) can serve as naive estimates for the two lag input variables for the prediction of demand in interval \(i\). We shall now look at the input exogenous variables used by the ML model. The calendar effects of a quarterly period of a year and holidays, including weekends and national holidays, are shown to affect electricity demand (Son et al. (2022), U.S. Energy Information Administration). Also, the dependence of the electricity demand on temperature is seen in Figure 4, where more electricity is required at lower temperatures, indicating the use of space heating units, and at higher temperatures, as a result of using space cooling units in residential sites. The temperature for any interval on a given day is the day-ahead predicted temperature from the nearest NOAA (National Oceanic and Atmospheric Administration) station. Thus, quarterly effects, holidays, and temperature predictions are considered as the input exogenous variables to the ML model. Considering lag and exogenous variables, the input vector \(\mathbf{X}_{i}^{j}\) for the \(j_{th}\) day and \(i_{th}\) interval is thus defined as follows.
\[\mathbf{X}_{i}^{j}=(x_{i1}^{j},\ x_{i2}^{j},\ x_{i3}^{j},\ x_{i4}^{j},\ x_{i5}^{j})\] where, \[x_{i1}^{j}=y_{i-1}^{j-1}\quad\text{naive estimate for input lag variable of }y_{i-1}^{j},\] \[x_{i2}^{j}=y_{i-2}^{j-1}\quad\text{naive estimate for input lag variable of }y_{i-2}^{j},\] \[x_{i3}^{j}=\text{predicted temperature in Fahrenheit},\] \[x_{i4}^{j}=\begin{cases}0&\text{Jan-Mar}\\ 1&\text{Apr-Jun}\\ 2&\text{Jul-Sep}\\ 3&\text{Oct-Dec},\end{cases}\] \[x_{i5}^{j}=\begin{cases}1&\text{Holidays and Weekends (Saturday and Sunday)}\\ 0&\text{other days}.\end{cases}\]

We consider the ML model of the form \(\hat{y}_{i}^{j}=\hat{f}(\mathbf{X}_{i}^{j})\), where \(\hat{f}\) is a real-valued function approximated by ML models. The usual assumption on the residual errors of such a model, here denoted by \(z_{i}^{j}=y_{i}^{j}-\hat{y}_{i}^{j}\), is that they are IID. As can be seen in Figure 5, the residuals are centered around 0, and the variance of the residuals is higher in the months of January to March, decreases until July, and again increases from August to December. The residuals are thus higher during the days of higher electricity demand and vice versa, representing non-stationarity.

Figure 3: ACF plot (left) and PACF plot (right) of electricity demand

### Point Estimate Metrics

The point estimates on the testing set are generated by an expanding window technique on the training set. The current test day observations are added to the training set and a new training model is obtained for the next test day predictions. The expanding window proceeds by first predicting the day-ahead demand and then adding the labeled data \((\mathbf{X}_{i}^{j^{\prime}},y_{i}^{j^{\prime}})\) of the day \(j^{\prime}\) to the training set, where \(j^{\prime}\in J^{\prime}=\{1,\ 2,\ldots,\ 90\}\) denotes the label of test days. The model errors on the training and testing data are shown in Table 1. The absolute deviations from the observed demand are highest for GBR on the test data compared to LR and LGBM. The lower error metrics for the LGBM model denote better point estimates on the test set. ML models are susceptible to over-fitting on the training set, resulting in lower errors on the training set and higher errors on the test set. If the training errors are directly bootstrapped for the interval estimation of the test day, the intervals would be narrow due to the over-fitting problem. We overcome this problem by sequentially replacing the errors of the training set with the errors on the test set, which is further described in Section 5.

Figure 4: Temperature vs Electricity Demand

## 4 Interval Estimation

The proposed model for the construction of prediction intervals or interval estimation of the electricity demand involves the use of residual errors obtained by the ML models seen in the previous section. We define and formalize the need for the residual blocks and the memory clusters in this section, which will be used for bootstrapping.
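As an illustration, here is a minimal sketch of how \((\mathbf{X}_{i}^{j},y_{i}^{j})\) rows could be assembled from a 15-minute series with pandas. The column names, the `holidays` set, and the helper itself are our own assumptions, not part of the original pipeline.

```python
import pandas as pd

def build_features(demand: pd.Series, temp_forecast: pd.Series,
                   holidays: set) -> pd.DataFrame:
    """Assemble (X_i^j, y_i^j) rows from 15-minute series.

    `demand` and `temp_forecast` share a DatetimeIndex at 15-minute
    resolution; `holidays` is a set of datetime.date objects.
    """
    df = pd.DataFrame({"y": demand, "x3_temp": temp_forecast})
    # Naive lag estimates from the previous day: y_{i-1}^{j-1} is one
    # full day (96 intervals) plus one slot behind y_i^j.
    df["x1_lag1"] = df["y"].shift(96 + 1)
    df["x2_lag2"] = df["y"].shift(96 + 2)
    # Calendar features, encoded as in the paper.
    ts = df.index
    df["x4_quarter"] = ts.quarter - 1                       # 0..3
    is_weekend = ts.dayofweek >= 5                          # Sat/Sun
    is_holiday = pd.Index(ts.date).isin(list(holidays))
    df["x5_holiday"] = (is_weekend | is_holiday).astype(int)
    return df.dropna()
```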
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Scores**} & \multicolumn{2}{c|}{**LR**} & \multicolumn{2}{c|}{**GBR**} & \multicolumn{2}{c|}{**LGBM**} \\ \cline{2-7} & **Train** & **Test** & **Train** & **Test** & **Train** & **Test** \\ \hline **MAE** & 1.3458 & 1.6712 & 1.6169 & 1.7674 & 1.2116 & 1.5947 \\ \hline **MSE** & 3.0945 & 4.5004 & 4.1095 & 5.1656 & 2.5399 & 4.0726 \\ \hline **RMSE** & 1.7591 & 2.1214 & 2.0272 & 2.2728 & 1.5937 & 2.0181 \\ \hline **MAPE** & 15.27\% & 14.39\% & 18.05\% & 15.71\% & 13.69\% & 13.64\% \\ \hline **R2** & 0.7452 & 0.5301 & 0.3334 & 0.5231 & 0.7976 & 0.5521 \\ \hline **RMSLE** & 0.1709 & 0.1662 & 0.2114 & 0.1736 & 0.1541 & 0.1602 \\ \hline \end{tabular} \end{table} Table 1: Model performance for point forecasts

Figure 5: Residual errors of GBR on the training set with moving average of observed demand

### Residual Block

There are several methods to obtain prediction intervals so that the future value of electricity demand could lie within the interval with a relatively high probability. We adopt a non-parametric approach where the residual errors are re-sampled in order to build the prediction intervals. We begin by building up the notation for the residual errors. The observed forecast error on the training data for the ML model is given as follows \[z_{i}^{j}=y_{i}^{j}-\hat{y}_{i}^{j}\quad\forall i\in I\,\ j\in J \tag{1}\] where \(y_{i}^{j}\) is the observed demand and \(\hat{y}_{i}^{j}\) is the demand predicted by the ML models. We define a memory set \(E\) such that its elements are tuples of the \(j_{th}\) day errors; thus for the training set we can define \(E\) as \[E=\{(z_{1}^{1},z_{2}^{1},....,z_{96}^{1}),\ldots,(z_{1}^{j},z_{2}^{j},....,z_{96}^{j}),\ldots,(z_{1}^{365},z_{2}^{365},....,z_{96}^{365})\}. \tag{2}\] Then the residual errors for the test data are given by \[z_{i}^{j^{\prime}}=y_{i}^{j^{\prime}}-\hat{y}_{i}^{j^{\prime}}\quad\forall i\in I\,\ j^{\prime}\in J^{\prime}\] \[y_{i}^{j^{\prime}}=\hat{y}_{i}^{j^{\prime}}+z_{i}^{j^{\prime}}. \tag{3}\] The prediction interval for \(y_{i}^{j^{\prime}}\) can be built by bootstrapping for \(z_{i}^{j^{\prime}}\) from the residual error set \(E\), such that \(y_{i}^{j^{\prime}}=\hat{y}_{i}^{j^{\prime}}+\hat{z}_{i}^{j^{\prime}}\), if the errors are identically distributed. Thus, we shall first discuss the case of the traditional IID bootstrap method. This method considers the future errors of the test set to be similar to the past errors, so that \(\hat{z}_{i}^{j^{\prime}}\) can be approximated with the bootstrapped values of the residual errors \(z_{i}^{j}\) from the training set. Thus the residuals could be randomly selected with replacement from the memory set of the training residual errors \(E\), \(N\) times, where \(N\) is some large-valued integer. Suppose \(N=1000\), and \((z_{(1)}^{*},z_{(2)}^{*},\ldots,z_{(1000)}^{*})_{i}^{j^{\prime}}\) is the ordered set of the bootstrapped residuals for day \(j^{\prime}\) and interval \(i\), randomly selected from memory \(E\) with replacement; then the \(5th\) and the \(95th\) percentile values of the prediction interval are represented by \(z_{(50)}^{*}\) and \(z_{(950)}^{*}\), respectively. However, the ACF and PACF plots of the residual series \(z_{i}^{j}\) presented in Figure 6 indicate the existence of correlation among the residuals. As a result of this, the IID bootstrap cannot be applied to the dependent data of residual electricity demand.
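To make the percentile rule above concrete, and to anticipate the block partition formalized in Eqs. (4)-(5) in the next paragraph, here is a minimal NumPy sketch; the function names are ours, and the snippet assumes the block length divides the 96 daily intervals evenly.

```python
import numpy as np

def split_into_blocks(z_day: np.ndarray, l: int) -> np.ndarray:
    """Partition one day's residual vector z^j (length n) into
    b = n // l contiguous, non-overlapping blocks of length l."""
    n = z_day.size
    assert n % l == 0, "block length must divide the day length"
    return z_day.reshape(n // l, l)          # row k-1 holds block B_k

def percentile_interval(point: float, boot_res: np.ndarray,
                        alpha: float = 0.10):
    """Turn N bootstrapped residuals for one interval into a symmetric
    100*(1-alpha)% prediction interval around the point estimate, i.e.
    the ordered-set construction with z*_(50) and z*_(950) when
    N = 1000 and alpha = 0.10."""
    lo = point + np.percentile(boot_res, 100 * alpha / 2)
    hi = point + np.percentile(boot_res, 100 * (1 - alpha / 2))
    return lo, hi
```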
Figure 6: ACF plot (left) and PACF plot (right) of residuals of the ML model

Also, there are variations in the magnitude of the residuals on the training set, as seen in Figure 5, indicating that the errors are not identical. The inadequacy of the IID bootstrap method for dependent series is described in Singh (1981). Instead of re-sampling a single observation of residuals at a time, non-overlapping contiguous blocks of residuals can be re-sampled. As a result, the structural dependence of the residuals can be preserved. Thus the residuals of the electricity demand aren't randomly selected from the memory \(E\); in order to account for the correlations among the errors, non-overlapping blocks of fixed length are drawn from the observed residual set and then joined. A day is divided into 96 intervals of 15 minutes, so that \(n=96\), which can be split into \(b\) consecutive blocks of equal length \(l\). We define the residual vector and the splitting rule as follows \[z^{j}=(z_{1}^{j},\ z_{2}^{j},\ z_{3}^{j},\ldots,z_{n}^{j})\quad\forall\ j\in J, \tag{4}\] \[z^{j}=(B_{1}^{j},\ldots,B_{b}^{j}),\quad\text{such that}\quad B_{k}=(z_{(k-1)l+1},\ldots,z_{kl}),\quad k=1,\ldots,b, \tag{5}\] where the residual errors on the training data for the \(j_{th}\) day are given as a vector \(z^{j}\), and the elements of this vector are calculated using Equation (1). The accuracy of the block bootstrap is sensitive to the size of the blocks. As suggested in Politis and White (2006), the empirical block length of \(n^{1/3}\) would be a good guess for the block length \(l\).

### Clustering

The demand levels affect the residual errors: the errors are higher for days with higher demand patterns and lower for days with lower demand. Thus, instead of randomly block bootstrapping from the set \(E\) on the \(j^{\prime}th\) day, we bootstrap from a cluster of similar days. The idea here is to group together the days from the training set with similar demand levels in a cluster and then block bootstrap from the cluster representative of the test day. Such clusters can be created by measuring the similarity between different days, which can be achieved by various unsupervised learning methods. A common unsupervised learning algorithm for creating labeled clusters is called k-means clustering (Hartigan and Wong (1979)). The k-means clustering algorithm takes the number of clusters (\(N_{c}\)) and the set of observed vectors to cluster. It then returns a set of centroids, one for each of the \(N_{c}\) clusters, where each observation vector is classified with the cluster number (\(C_{i}\)) or centroid index of the centroid closest to it. The k-means clustering algorithm tries to minimize the within-cluster sum-of-squares (WSS) between each observation vector and its dominating centroid. The minimization is achieved by iterative reclassification of the set of vectors into new clusters and recalculating the centroids. Since there is no prior knowledge about the value of \(N_{c}\), we heuristically choose \(N_{c}\) by using the elbow method as shown in Figure 7. Suppose the day's demand with \(n\) intervals of the electricity demand for the \(j_{th}\) day is represented by the vector \(y^{j}\), such that: \[y^{j}=(y^{j}_{1},\ y^{j}_{2},\ y^{j}_{3},\ldots,y^{j}_{n})\quad\forall j\in J \tag{6}\] then the WSS is given as follows: \[WSS=\sum_{k=1}^{N_{c}}\sum_{y^{j}\in C_{k}}\ d(y^{j},\bar{y}_{C_{k}}) \tag{7}\] where, \[N_{c}=\text{number of clusters},\] \[C_{k}=\text{index of a cluster},\] \[d(\cdot)=\text{distance metric between two vectors},\] \[\bar{y}_{C_{k}}=\text{center of the centroid }C_{k}.\] The k-means clustering algorithm can be run for multiple values of \(N_{c}\) and the minimized WSS calculated for each, so as to determine the smallest \(N_{c}\) beyond which the WSS doesn't decrease much with the increase in \(N_{c}\). The elbow method suggests that the value of \(WSS\) doesn't decrease much after \(N_{c}=4\). While block bootstrapping considers the temporal correlations between the observed errors, dividing each day's demand vectors into clusters of similar days results in an almost constant variance of the model residual errors within the clusters.

Figure 7: Elbow Plot for the optimal number of clusters
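A minimal sketch of the elbow search with scikit-learn follows; it assumes squared Euclidean distance for \(d(\cdot)\), so that KMeans' `inertia_` coincides with the WSS of Equation (7), and the function name is our own.

```python
import numpy as np
from sklearn.cluster import KMeans

def elbow_wss(daily_demand: np.ndarray, max_clusters: int = 10):
    """Fit k-means on the (days x 96) matrix of daily demand vectors
    and return the within-cluster sum of squares for each candidate
    N_c, to be inspected with the elbow method."""
    wss = []
    for n_c in range(1, max_clusters + 1):
        km = KMeans(n_clusters=n_c, n_init=10, random_state=0)
        km.fit(daily_demand)
        wss.append(km.inertia_)   # inertia_ is the WSS for squared d
    return wss
```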
### Performance Metrics

The Cluster-based Block Bootstrapping (CBB) method, proposed in the next section, makes use of the residual blocks and clustering to output the estimated distribution of the forecast for a given time. Our interest is in finding the quantile values for the time interval \(i\) within which the values of electricity demand might lie with probability \(100(1-\alpha)\%\), which is the size of the confidence interval, where \(0\leq\alpha\leq 1\). We will assume a symmetric interval for simplicity, where the upper quantile of the demand value at \((1-\alpha/2)\) is defined by \(u^{j}_{\alpha,i}\) and the lower quantile at \((\alpha/2)\) is determined by \(l^{j}_{\alpha,i}\) for the time interval \(i\) and day \(j\). In order to compare different algorithms for interval estimation at confidence level \(100(1-\alpha)\%\), we use the _Winkler Score_ (\(WS(\alpha)\)) and _Coverage Probability_ (\(CP(\alpha)\)), which are defined as follows. Proposed by Winkler (1972), \(WS(\alpha)\) is used to evaluate the prediction interval for a time series. For observed data \(y^{j}_{i}\) during the \(i_{th}\) time interval, \(j_{th}\) day and \(\alpha\) confidence level, \(WS(\alpha)^{j}_{i}\) is described by Hyndman et al. (2021) as \[WS(\alpha)_{i}^{j}=\begin{cases}u_{\alpha,i}^{j}-l_{\alpha,i}^{j}+\frac{1}{\alpha}(l_{\alpha,i}^{j}-y_{i}^{j})&if\quad y_{i}^{j}<l_{\alpha,i}^{j},\\ u_{\alpha,i}^{j}-l_{\alpha,i}^{j}&if\quad l_{\alpha,i}^{j}\leq y_{i}^{j}\leq u_{\alpha,i}^{j},\\ u_{\alpha,i}^{j}-l_{\alpha,i}^{j}+\frac{1}{\alpha}(y_{i}^{j}-u_{\alpha,i}^{j})&if\quad y_{i}^{j}>u_{\alpha,i}^{j}.\end{cases}\] For the \(j_{th}\) day, the values of \(WS(\alpha)_{i}^{j}\) are averaged over all \(i\in I\), and the final \(WS(\alpha)\) for the test set is obtained by averaging over all the day indices \(j\in J\) as \[WS(\alpha)=\frac{1}{|J|}\sum_{j\in J}\sum_{i\in I}\frac{WS(\alpha)_{i}^{j}}{|I|}.\] The \(WS(\alpha)_{i}^{j}\) equals the length of the prediction interval when the observed value of electricity demand \(y_{i}^{j}\) during time \(i\) falls within the prediction interval. If the observed value falls outside, then a penalty proportional to how far \(y_{i}^{j}\) is outside the prediction interval is added to the length. The \(WS\) score for a one-day time series is thus the average of the interval scores over all 96 intervals.
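A vectorized sketch of the Winkler score, using the paper's \(1/\alpha\) penalty, together with the coverage probability defined in the next paragraph; both helpers are our own illustrations.

```python
import numpy as np

def winkler_score(y, lower, upper, alpha):
    """Mean Winkler score WS(alpha), using the case-wise definition
    above with a 1/alpha penalty for observations outside the interval."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    width = upper - lower
    below = np.where(y < lower, (lower - y) / alpha, 0.0)
    above = np.where(y > upper, (y - upper) / alpha, 0.0)
    return np.mean(width + below + above)

def coverage_probability(y, lower, upper):
    """CP(alpha): fraction of observations inside [lower, upper]."""
    y = np.asarray(y)
    return np.mean((y >= np.asarray(lower)) & (y <= np.asarray(upper)))
```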
\(CP(\alpha)\), explained in Hyndman et al. (2002), is a measure of the proportion of times the observed values of electricity demand \(y_{i}^{j}\) lie inside the prediction interval \([l_{\alpha,i}^{j},u_{\alpha,i}^{j}]\) and is defined as \[CP(\alpha)=\frac{1}{|J|}\sum_{j\in J}\sum_{i\in I}\frac{\mathds{1}_{[l_{\alpha,i}^{j}\leq y_{i}^{j}\leq u_{\alpha,i}^{j}]}}{|I|}\] \[\text{where}\quad\mathds{1}_{[l_{\alpha,i}^{j}\leq y_{i}^{j}\leq u_{\alpha,i}^{j}]}=\begin{cases}1&\text{when}\quad l_{\alpha,i}^{j}\leq y_{i}^{j}\leq u_{\alpha,i}^{j}\\ 0&\text{otherwise}.\end{cases}\] A wide prediction interval leads to high \(CP\) values and conservative predictions, while the \(WS\) penalizes wide intervals. Thus a balance has to be maintained between both scores to get better prediction results.

## 5 Cluster Block Bootstrap Algorithm (CBB)

In the previous section, we saw the methods to bootstrap residual blocks and to create clusters of similar days. In this section, we combine these two methods to generate the prediction intervals for one-day-ahead forecasts. In the first step, we form the clusters of the indices of similar days using the k-means clustering algorithm on the demand patterns \(y^{j}\) for \(j\in J\). As defined in Section 4.2, the number of clusters denoted by \(N_{c}\) has to be initialized for the k-means clustering algorithm to work, and through the elbow method, the total number of clusters \(N_{c}\) is set at 4. The label of each cluster, represented by \(C_{k}\), has a centroid at \(\bar{y}_{C_{k}}\), where \(k\in\{1,\ldots,4\}\). We represent the indices of days clustered together in the \(k_{th}\) cluster as \(\{(1),\ldots,(|C_{k}|)\}\), where \(|C_{k}|\) denotes the size of the \(k_{th}\) cluster and \(\{(1),\ldots,(|C_{k}|)\}\) are the clustered training days partitioned off the training set labeled \(J\), such that \(\{(1),\ldots,(|C_{k}|)\}\subset J\). The next step is to train the ML model \(\hat{f}\) using the training data \((\mathbf{X}_{i}^{j},y_{i}^{j})\) and get the residual errors \(z_{i}^{j}\) on the training set. These training errors are stored in the memory set \(E\) defined in Equation (2). The memory set of residuals \(E\) is then partitioned to form the cluster memory sets \(M_{k}\), where \(M_{k}\) is selected according to the days indexed in cluster \(C_{k}\). Thus for every cluster label \(C_{k}\) with days \(\{(1),\ldots,(|C_{k}|)\}\) we get \(M_{k}=\{z^{(1)},\ldots,z^{(|C_{k}|)}\}\). Using Equation (4), the set \(M_{k}\) can be denoted in terms of residual blocks \[M_{k}=\{(B_{1}^{(1)},\ldots,B_{b}^{(1)}),\ (B_{1}^{(2)},\ \ldots,B_{b}^{(2)}),\ldots,\ (B_{1}^{(|C_{k}|)},\ldots,B_{b}^{(|C_{k}|)})\}\] where the number of blocks is \(b=16\) and the length of the residual vector is \(n=96\), such that \(n=b\times l\) as defined in Section 4.1. The model \(\hat{f}\), the clustered residual sets \(M_{k}\), and the centroids of the clusters \(\bar{y}_{C_{k}}\) for \(k\in(1,\ldots,N_{c})\) are now ready to evaluate the point estimates and construct the prediction intervals. The ML model \(\hat{f}\) is used to get the point estimates \(\hat{y}^{j^{\prime}}\) for test day \(j^{\prime}\) for \(j^{\prime}\in J^{\prime}\), i.e., the first quarter of the year 2022. The closeness of \(\hat{y}^{j^{\prime}}\), which is a vector of size 96, is evaluated with every cluster's centroid using the distance metric \(d(\hat{y}^{j^{\prime}},\bar{y}_{C_{k}})\), and the closest \(k_{th}\) residual cluster memory \(M_{k}\) is selected to bootstrap the block residuals for the \(j^{\prime}th\) test day.
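Putting the pieces together, a minimal NumPy sketch of this selection-and-bootstrap step (anticipating the block drawing and joining detailed in the next paragraph) could look as follows; the function, its defaults, and the Euclidean choice of \(d(\cdot)\) are illustrative assumptions.

```python
import numpy as np

def cbb_interval(point_day, centroids, cluster_residuals,
                 l=6, n_boot=1000, alpha=0.10, rng=None):
    """Cluster-based block bootstrap for one test day.

    point_day:         (96,) ML point estimates for the test day
    centroids:         (N_c, 96) k-means centroids
    cluster_residuals: list of (|C_k|, 96) residual arrays M_k
    Returns per-interval lower/upper quantiles of the simulated demand.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: pick the cluster whose centroid is closest (Euclidean d).
    k = int(np.argmin(np.linalg.norm(centroids - point_day, axis=1)))
    M_k = cluster_residuals[k]
    blocks = M_k.reshape(M_k.shape[0], -1, l)      # (|C_k|, b, l)
    b = blocks.shape[1]
    # Step 2: for each of the b interval blocks, draw n_boot residual
    # blocks from M_k and join them sequentially.
    day_idx = rng.integers(0, M_k.shape[0], size=(n_boot, b))
    boot = np.concatenate([blocks[day_idx[:, i], i, :] for i in range(b)],
                          axis=1)                  # (n_boot, 96)
    sims = point_day + boot
    lower = np.percentile(sims, 100 * alpha / 2, axis=0)
    upper = np.percentile(sims, 100 * (1 - alpha / 2), axis=0)
    return lower, upper
```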
The test day is also divided into 16 non-overlapping blocks of size 6, and for the \(i_{th}\) interval block of the test day, we bootstrap, \(N=1000\) times, residual blocks \(B_{i}^{(n)}\) from the selected cluster memory \(M_{k}\), randomizing on \(n\) such that \(n\in(1,\ldots,|C_{k}|)\), and repeat this process for each \(i\in(1,\ldots,16)\). Then for the \(i_{th}\) time interval block we get \(N\) bootstrap residual block samples and build \(\mathbf{B}_{i}=(B_{1}^{*},\ldots,B_{N}^{*})_{i}\)1. The sets \(\mathbf{B}_{1},\ldots,\mathbf{B}_{16}\) are then joined sequentially to form the prediction interval for the test day. Footnote 1: The star notation on x* indicates that x* isn't the real data set but the randomized, re-sampled or bootstrapped version of x

Due to the over-fitting issues seen in Section 3.2, the residuals bootstrapped from the training set alone result in narrow prediction intervals. The memory of the clustering algorithm can be made adaptive by replacing the training errors with the errors of the ML model on the test set. Thus we don't use the static cluster residual set \(M_{k}\) from the training data but keep updating it with the newest residual errors of the observed days in the test data and use the updated \(M_{k}\) to bootstrap for the next day. The update scheme for an observed day \(j^{\prime}\) linked to the cluster is done by selecting the \(j_{th}\) day in the training set such that \(j^{\prime}=j\), then replacing and updating \(z^{j}=z^{j^{\prime}}\) in the cluster and recalculating the clusters and their centroids at a specific update frequency.

## 6 Results

In this section, we compare the results of the CBB algorithm for constructing the prediction intervals with other bootstrapping methods like the bootstrap aggregating algorithm and block bootstrap without clustering. The experiments are carried out using the ML models for point estimation mentioned in Section 3. The performance of various combinations of bootstrap methods and ML models is discussed as follows. For ease of notation, we will use \(WS\) and \(CP\) instead of \(WS(\alpha)\) and \(CP(\alpha)\), respectively, in the following sections.

### Bootstrap Aggregating

The performance of the CBB algorithm is compared against the bootstrap aggregating algorithm, also called bagging. For bootstrap aggregating, multiple replicates or simulated copies of the training data \((\mathbf{X}_{i}^{j},y_{i}^{j})\) are made and an ML model is fitted to each. The first step of this process is to fit the ML model on the original training data and then obtain the residual set \(E\) without clusters, following a procedure similar to CBB. Then blocks of residuals are bootstrapped from the training memory \(E\) and the original observed demand \(y_{i}^{j}\) is perturbed to make new copies \(y_{i}^{j*}\). Thus, a simulated version of the training set \((\mathbf{X}_{i}^{j},y_{i}^{j*})\) is obtained. This process is replicated \(N\) times and the ML model is trained on these \(N\) simulated training sets. For every trained model, a trajectory of future electricity demand forecasts is obtained on the test set, and thus \(N\) trajectories are obtained. In this paper, we use \(N=1000\), and the details of the performance of this algorithm for the LR, GBR, and LGBM ML models are discussed further. Table 2 presents the performance of the bootstrap aggregating model on the test data. \(WS\) for the GBR model is the lowest compared to LR and LGBM, which shows that the intervals for GBR are narrower.
Ideally, the value of \(CP\) should be as close as possible to the interval size \(100(1-\alpha)\%\). The best \(CP\) values are attained by the LGBM model, as the values are much closer to the size of the confidence intervals. A good model would be one for which the value of \(WS\) is low simultaneously with high \(CP\) values. A high \(WS\) with a relatively higher \(CP\) value is caused by conservative estimates of the upper and lower confidence levels. On the other hand, lower \(WS\) values with low \(CP\) scores suggest a higher magnitude of violations of the observed data beyond the confidence limits. Another important point to note is the computation time required for the interval prediction. Bootstrap aggregating models are computationally expensive due to model training on multiple simulated training sets: a new synthetic training set is simulated and an ML model is trained on this set to generate each new trajectory of forecasted values of electricity demand.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{**ML Model**} & \multirow{2}{*}{**Metrics**} & \multicolumn{4}{c|}{**(1-\(\alpha\))100\%**} \\ \cline{3-6} & & **85** & **90** & **95** & **99** \\ \hline \multirow{2}{*}{**LR**} & _WS_ & 7.52 & 8.608 & 10.266 & 13.394 \\ \cline{2-6} & _CP_ & 0.785 & 0.848 & 0.917 & 0.974 \\ \hline \multirow{2}{*}{**GBR**} & _WS_ & 7.549 & 8.431 & 9.711 & 12.018 \\ \cline{2-6} & _CP_ & 0.791 & 0.848 & 0.905 & 0.96 \\ \hline \multirow{2}{*}{**LGBM**} & _WS_ & 7.811 & 8.921 & 10.64 & 13.739 \\ \cline{2-6} & _CP_ & 0.815 & 0.887 & 0.951 & 0.986 \\ \hline \end{tabular} \end{table} Table 2: Model performance for Bootstrap Aggregating

### Block Bootstrap

The results of the Block Bootstrap algorithm without clustering are discussed in this section. We will simply refer to it as the Block Bootstrap algorithm. The prediction intervals are constructed similarly to the CBB algorithm, i.e., by bootstrapping the non-overlapping residual blocks, but without clustering the similar days. This will help us understand the effect of the clustering done by the CBB algorithm by only using the block bootstrapping scheme. The performance of Block Bootstrap is shown in Table 3. We see that the GBR model achieves better performance on both the \(WS\) and \(CP\) values compared to the LR and LGBM models (except for the \(WS\) value at the \(85\%\) confidence interval size). The GBR Block Bootstrap model also has better \(CP\) values compared to the GBR Bootstrap Aggregating algorithm, with a significant reduction in computation time.

### CBB algorithm

The performance of the CBB algorithm proposed in Section 5 is shown in Table 4. The \(WS\) values for the LGBM model are consistently lower compared to the LR and GBR models, but the \(CP\) values are not high enough. The best \(CP\) values are attained by the GBR model, followed by LR, with only a slight trade-off on the \(WS\) values compared to LGBM. The CBB algorithm incurs additional computation time over the block bootstrap method due to the clustering process but outputs better prediction intervals. The computation time, when compared to the bootstrap aggregating method, is still considerably lower. Figure 8 shows the one-day moving average of the demand prediction at the \(90\%\) confidence interval and the observed demand for the test set. Until now we have just compared the performance of the ML models within each of the bootstrap algorithms.
We will now compare the performance across all the bootstrapping algorithms based on the \(WS\) and \(CP\) scores they achieve with the ML models, which will help us to analyze the effect of clustering.

\begin{table} \begin{tabular}{|l|l|c|c|c|c|c|} \hline \multirow{2}{*}{**ML Model**} & \multirow{2}{*}{**Metrics**} & \multicolumn{4}{c|}{**(1-\(\alpha\))100\%**} & **Training** \\ \cline{3-7} & & **85** & **90** & **95** & **99** & **time (sec)** \\ \hline \multirow{2}{*}{**LR**} & _WS_ & 8.529 & 9.531 & 11.229 & 16.007 & \multirow{2}{*}{15.73} \\ \cline{2-6} & _CP_ & 0.741 & 0.807 & 0.885 & 0.959 & \\ \hline \multirow{2}{*}{**GBR**} & _WS_ & 7.973 & 8.735 & 10.005 & 12.453 & \multirow{2}{*}{34.95} \\ \cline{2-6} & _CP_ & 0.804 & 0.861 & 0.922 & 0.981 & \\ \hline \multirow{2}{*}{**LGBM**} & _WS_ & 7.887 & 8.754 & 10.113 & 13.2 & \multirow{2}{*}{23.42} \\ \cline{2-6} & _CP_ & 0.741 & 0.806 & 0.888 & 0.969 & \\ \hline \end{tabular} \end{table} Table 3: Model performance for Block Bootstrap

### Comparative analysis

The comparison of the bootstrapping algorithms with different combinations of the ML models according to the confidence interval sizes is shown in Figure 9. For each confidence interval size \((1-\alpha)100\%\), the values of \(CP(\alpha)\) are plotted against \(WS(\alpha)\). In the plot for the 85% CI, the rightmost point represents the highest \(CP\) value, attained by CBB built on the GBR model. The block bootstrap model based on LR has the second-best \(CP\) value but a better \(WS\) than the CBB GBR model. Similar results are seen for the 90% interval, where the LR block bootstrap model almost achieves the \(CP\) of the CBB GBR algorithm with a smaller \(WS\). The LR block bootstrap model slightly outperforms the GBR CBB algorithm for the 95% CI. The GBR CBB algorithm has the best \(CP\) for the 99% interval with a lower \(WS\) value. The higher \(CP\) values for the CBB algorithm based on GBR show the effect of bootstrapping from clusters of similar days. LGBM with bootstrap aggregating has the lowest \(CP\) and highest \(WS\) values for all the confidence intervals, indicating higher penalties due to the narrow prediction interval size. The analysis suggests that the CBB algorithm achieves higher coverage than the other algorithms, especially when GBR is used as the point estimate ML model.

## 7 Conclusion

The proposed CBB algorithm uses point forecasts of the ML model to build prediction intervals based on the residuals of the ML model. The interval prediction CBB algorithm based on the ML point estimates has better \(CP\) compared to bootstrap aggregating and block bootstrap methods with a relatively lower \(WS\) value, especially for GBR.

\begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline \multirow{2}{*}{**ML Model**} & \multirow{2}{*}{**Metrics**} & \multicolumn{4}{c|}{**(1-\(\alpha\))100\%**} \\ \cline{3-6} & & **85** & **90** & **95** & **99** \\ \hline \multirow{2}{*}{**LR**} & _WS_ & 8.259 & 9.127 & 10.518 & 14.92 \\ \cline{2-6} & _CP_ & 0.81 & 0.864 & 0.928 & 0.972 \\ \hline \multirow{2}{*}{**GBR**} & _WS_ & 8.387 & 9.266 & 10.73 & 12.657 \\ \cline{2-6} & _CP_ & 0.839 & 0.89 & 0.949 & 0.987 \\ \hline \multirow{2}{*}{**LGBM**} & _WS_ & 8.059 & 8.93 & 10.345 & 12.5 \\ \cline{2-6} & _CP_ & 0.777 & 0.838 & 0.91 & 0.983 \\ \hline \end{tabular} \end{table} Table 4: Model performance for CBB algorithm
The CBB algorithm uses the concept of similar days, according to which the pattern of electricity demand doesn't deviate much from historical usage, and the errors of the ML model for similar days are similar. These residual errors can then be clustered together, and the prediction interval of the test day is built by bootstrapping residuals from the cluster closest to the point forecast of the test day according to some distance metric. Introducing the clustering of similar days leads to better \(CP\) for every confidence interval with comparable \(WS\) when compared to block bootstrap without clustering. The CBB algorithm builds upon the lower computation time of plain block bootstrapping and competes with bootstrap aggregating. In comparison, the error metrics of the CBB algorithm are better than those of the bootstrap aggregating algorithm, beating it on the \(CP\) scores when GBR is used as the ML model. The major highlight is the reduction in computation time required by the CBB algorithm when compared to bootstrap aggregating. The experiment is carried out on residential sites of the EULR data in Washington state and shows the effectiveness of using bootstrapping methods to generate prediction intervals.

Figure 8: Moving Average of 90% Prediction interval on test data

Figure 9: \(CP\) vs \(WS\) plots of (a) 85%, (b) 90%, (c) 95% and (d) 99% confidence intervals for combinations of ML models and interval estimation algorithms

## 8 Future Work

In this work, we have used a k-means clustering algorithm to group together similar days of electricity demand. The residual estimates to construct the interval for the prediction day are bootstrapped from the cluster closest to the point estimates of the prediction day. While we see that this method leads to better coverage probabilities for most of the confidence levels, additional features could be used to generate the clusters on the basis of exogenous variables. Clustering algorithms like density-based clustering or fuzzy clustering could capture more complex patterns in the data. Furthermore, the method of measuring the distance from the clusters' centroids before prediction could lead to anomaly detection. The main motivation for using tree-based ensemble models in this research was the success of these models in time-series forecasting. Similarly, recent work on temporal fusion transformers by Lim et al. (2021) has been proven to work on a variety of real-world datasets and could be used to obtain better point estimates in multi-horizon forecasting.

## Acknowledgements

The authors gratefully acknowledge funding from Triad National Security LLC under the grant from the Department of Energy National Nuclear Security Administration (award no. 89233218CNA000001), titled "An Integrated Approach for Managing Microgrids with Uncertain Renewable Sources, Demand Response, and Energy Markets".
2303.14760
Extrapolation to complete basis-set limit in density-functional theory by quantile random-forest models
The numerical precision of density-functional-theory (DFT) calculations depends on a variety of computational parameters, one of the most critical being the basis-set size. The ultimate precision is reached with an infinitely large basis set, i.e., in the limit of a complete basis set (CBS). Our aim in this work is to find a machine-learning model that extrapolates finite basis-size calculations to the CBS limit. We start with a data set of 63 binary solids investigated with two all-electron DFT codes, exciting and FHI-aims, which employ very different types of basis sets. A quantile-random-forest model is used to estimate the total-energy correction with respect to a fully converged calculation as a function of the basis-set size. The random-forest model achieves a symmetric mean absolute percentage error of lower than 25% for both codes and outperforms previous approaches in the literature. Our approach also provides prediction intervals, which quantify the uncertainty of the models' predictions.
Daniel T. Speckhard, Christian Carbogno, Luca Ghiringhelli, Sven Lubeck, Matthias Scheffler, Claudia Draxl
2023-03-26T15:43:37Z
http://arxiv.org/abs/2303.14760v3
# Extrapolation to complete basis-set limit in density-functional theory by quantile random-forest models

###### Abstract

The numerical precision of density-functional-theory (DFT) calculations depends on a variety of computational parameters, one of the most critical being the basis-set size. The ultimate precision is reached with an infinitely large basis set, i.e., in the limit of a complete basis set (CBS). Our aim in this work is to find a machine-learning model that extrapolates finite basis-size calculations to the CBS limit. We start with a data set of 63 binary solids investigated with two all-electron DFT codes, exciting and FHI-aims, which employ very different types of basis sets. A quantile-random-forest model is used to estimate the total-energy correction with respect to a fully converged calculation as a function of the basis-set size. The random-forest model achieves a symmetric mean absolute percentage error of lower than 25% for both codes and outperforms previous approaches in the literature. Our approach also provides prediction intervals, which quantify the uncertainty of the models' predictions.

## Introduction

The assessment of the quality of density-functional-theory (DFT) calculations concerns the accuracy of the exchange-correlation functional and the numerical precision that depends on a variety of computational parameters. This paper deals with the latter, of which a most critical parameter is the size of the basis set. Only with an in-principle infinitely large basis-set size is the result of the calculation as precise as possible for the chosen exchange-correlation functional. This limit is known as the complete basis-set (CBS) limit [1]. However, a basis-set size approaching this limit would take infinite time to compute. Therefore, in practice, the basis set is truncated at a size that balances precision and computational cost. Extrapolation from low-precision settings to the CBS limit is commonplace in quantum chemistry [2]. In materials science, convergence tests with respect to the basis-set size are typically done, but extrapolation to the CBS limit is not often performed. Our aim in this work is to find a model that can extrapolate the result of a DFT calculation to the CBS limit. More specifically, we seek to predict the difference between the total energy per atom computed with an incomplete basis-set size and a computation performed in the CBS limit. We exemplify our approach with a data set of binary materials. Note that the extrapolation to the CBS limit depends on the chosen functional. Here, we only consider the PBE functional of the generalized-gradient approximation (GGA). Our motivation is two-fold. First, such a model opens up the possibility to perform less precise, and therefore computationally less demanding, _ab initio_ calculations to predict a more precise result. Here, we recall that typical DFT-GGA implementations scale with order \(\mathcal{O}(N^{3})\), where \(N\) is the basis-set size. Second, it allows us to assign uncertainty estimates to the huge amounts of _ab initio_ data contained in open-access databases. For instance, the Novel Materials Discovery (NOMAD) Repository [3] currently hosts about 140 million ground-state DFT calculations. These calculations were carried out for a variety of purposes, ranging from molecular-dynamics simulations of complex systems with less precise settings to ultra-high-precision calculations for elemental solids.
Uncertainty estimates for the total energies would provide users with useful information about the precision of these calculations and how the data can be re-used/re-purposed [4]. With CBS extrapolation, one could even extrapolate all these data to the CBS limit, which would be even more useful. In this work, we train a quantile-random-forest (QRF) model to predict the total-energy difference \(\Delta E^{AB}\) for binary materials containing the two elements \(A\) and \(B\). We train on a data set consisting of DFT results for 71 elemental and 63 binary solids, computed by the two full-potential all-electron codes FHI-aims and exciting with varying basis-set size. For details concerning the data set we refer to Ref. [5]. The linearized augmented planewave (LAPW) code exciting employs augmented planewaves (APW) plus local orbitals (LO) as its basis, while FHI-aims uses numeric atom-centered orbitals (NAOs). These two codes are representatives of all-electron, full-potential packages. This means that they can simulate the behavior of all electrons in a material on the same footing, and they are proven to be among the most precise DFT codes available [6]. Despite their significantly different concepts, algorithms, and numerical approaches, both codes are expected to give close-to-identical results [6]. Due to the very different basis sets and thus numerical implementations, we investigate the behavior of their convergence separately. We compare our modeling efforts to a stoichiometric model which was introduced in Ref. [5] and find that our models outperform the latter in terms of several important metrics.

## Methods

We formulate our task of extrapolation to the CBS limit as a \(\Delta\)-learning problem [7]. We are given the results of a single DFT calculation at a fixed basis-set size, \(N_{b}\), which is smaller than that of the converged case, \(N_{\infty}\), known as the CBS limit. The data set in Ref. [5] defines the total-energy convergence criterion with respect to the basis-set size as \(10^{-4}\) eV/atom. The data is fed into a statistical learning algorithm to estimate the difference between the imprecise DFT calculation and the CBS limit. Our task, in other words, is to estimate the total energy per atom of a binary material composed of elements \(A\) and \(B\) in the CBS limit, \(E^{AB}(N_{\infty})\), using the results of a DFT calculation with the fixed incomplete basis-set size, \(N_{b}\). Mathematically, we aim at finding the change (\(\Delta\)) in total energy from an incomplete basis-set size, \(E^{AB}(N_{b})\), to the CBS limit, \(E^{AB}(N_{\infty})\). As can be seen in eq. 1, we target \(\Delta E^{AB}(N_{b})\). \[E^{AB}(N_{\infty})=E^{AB}(N_{b})+\Delta E^{AB}(N_{b}) \tag{1}\] We employ QRF models for obtaining the total-energy differences \(\Delta E^{AB}\) per atom. Other DFT settings are kept constant. Physically, this means we aim to use an imprecise, computationally less intense calculation that gives us \(E^{AB}(N_{b})\) in tandem with a statistically learned model that predicts \(\Delta E^{AB}(N_{b})\). Together, these two sources, the imprecise calculation and the model, predict the total energy in the complete basis-set limit.

### Stoichiometric Model

Our baseline model, to compare our new approach with, is a stoichiometric model introduced in Ref. [5], \[\Delta E^{AB}(N_{b})=C^{A}*\Delta E^{A}(N_{b})+C^{B}*\Delta E^{B}(N_{b}). \tag{2}\] Each binary solid, represented as \(AB\), is composed of two chemical elements, labeled \(A\) and \(B\).
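As a minimal illustration of this baseline (our own helper, not code from Ref. [5]), eq. 2 can be evaluated directly; for a binary \(A_{x}B_{y}\), the stoichiometric fractions are \(C^{A}=x/(x+y)\) and \(C^{B}=y/(x+y)\).

```python
def stoichiometric_correction(c_a: float, delta_e_a: float,
                              c_b: float, delta_e_b: float) -> float:
    """Baseline CBS correction for a binary AB (eq. 2): the
    stoichiometry-weighted sum of the elemental-solid corrections
    Delta E^A(N_b) and Delta E^B(N_b), all in eV/atom."""
    return c_a * delta_e_a + c_b * delta_e_b

# Hypothetical A2B compound with made-up elemental corrections:
correction = stoichiometric_correction(2 / 3, -0.12, 1 / 3, -0.30)
```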
Here, the letter \(A\) (\(B\)) refers to the less (more) electronegative element in the binary. \(\Delta E^{A}(N_{b})\) refers to the CBS total-energy correction for the corresponding lowest-energy elemental solid of element \(A\) when using a basis-set size \(N_{b}\). \(C^{A}\) is the stoichiometric fraction with which the element \(A\) appears in the binary.

### Basis Set of exciting

The most important parameter determining the quality of augmented plane-wave basis sets is \(RK_{\max}\), which is the product of the radius of the smallest atomic (muffin-tin) sphere and the plane-wave cutoff. In Ref. [5], a _precision factor_, \((RK_{\max}/RK_{\max}^{\text{opt}})^{2}\), was introduced which captures the precision of the basis set of exciting quite well. The data set we use contains elemental solids and binaries at the same percentage value of the precision factor. However, \(RK_{\max}^{\text{opt}}\) may be different for a binary and its elemental solids. Note that in this data set, the number of APWs is varied but the number of LOs is kept constant. More information about the basis-set precision parameter for exciting is given in the supplementary information of Ref. [5].

### Basis Set of FHI-aims

FHI-aims offers tabulated species-specific suggestions for numerical settings and NAOs, named "light", "tight", or "really tight" defaults. In general, one does not need to use the tabulated settings; however, in this work we do. On top of the numerical settings defined in these defaults, we also consider different "basis-set size settings" (minimal, standard, tier1, or tier2). The combination of both ultimately dictates the number and type (\(s\), \(p\), \(d\), etc.) of basis functions included in a calculation. "Standard" refers to the default basis-set size suggested in the respective numerical setting. The difference in the number of NAOs per valence electron from the CBS limit, labeled \(\Delta SB_{PVE}^{AB}\), is used as a basis-set size metric. More information about the basis-set size in FHI-aims is given in the supplementary information of Ref. [5].

### Model Features

We feed the QRF model information about the two elements in terms of elemental solids by providing \(C^{A}*\Delta E^{A}(N_{b})\) and \(C^{B}*\Delta E^{B}(N_{b})\), which are the two terms in the stoichiometric model of eq. 2. Atomic information about the elements in terms of isolated atoms is also provided. Its use is motivated by other statistical learning models in materials science [8]. The electron affinity (\(EA^{A}\), \(EA^{B}\)), the ionization energy (\(EI^{A}\), \(EI^{B}\)), and the mean radius (\(r_{s}^{A}\), \(r_{s}^{B}\)) of the \(s\)-like pseudo orbital of element \(A\)/\(B\) computed with FHI-aims are fed into the QRF models.

### Quantile Random Forests

We choose to use random-forest (RF) based methods for CBS extrapolation since RFs are known to perform well on a wide range of tasks and require minimal tuning and no data scaling [9]. We provide here a brief summary of the quantile-random-forest method introduced by Meinshausen [10]. Random forests are a collection of decision-tree models. A decision tree is a piece-wise constant model. Decision trees work by partitioning the input feature space into discrete regions. The tree then assigns a constant estimate to each region depending on the training data points that fall in that region [11]. The decision tree chooses the splits (and therefore the discrete regions of the input data space) using a greedy algorithm.
This means each new split minimizes the optimization metric for that split and not for potential subsequent splits. In order to make a prediction on an input data point, the decision tree looks at the region in which the input data point falls. It then outputs the constant estimate for that region's leaf node. Random forests are built by sampling the training data with replacement (bootstrapping) and fitting separate decision trees that are forced to use only a random subset of the model features. This randomness forces the many decision trees in the random forest to have different splits. The random-forest model suffers less from overfitting than a single decision tree [12]. To make an inference, the random forest looks at what region the input data point falls into for each tree. Each tree therefore has an assigned constant for the input data point. Since there are several trees in the forest, a data point falls into a leaf node for each tree, and the random forest predicts the average of the constants from each tree. QRFs further examine the leaf nodes. When performing predictions, QRFs look at the leaf node into which the input data point falls, for each tree in the forest. The QRF creates quantiles (e.g., 2.5% and 97.5%) by sorting all of the inferences that the trees in the forest predict for that data point. These quantiles are used as statistically meaningful prediction intervals. The median quantile can be used for inference. In this work, we continue to use the mean of the constant estimates (which is typical for an RF) for inference and use the QRF for the prediction intervals. We apply the QRF method in a regression setting to minimize the root-mean-squared-logarithmic-error (RMSLE) metric. We derive, in the supplementary information section, how decision-tree parameters are learned when optimizing for the RMSLE metric. Our data is randomly split into training and test data using an 80/20 split. We perform ten-fold grid-search cross-validation (CV) to choose the number of decision trees that comprise the random forest, the minimum number of samples per leaf, and the fraction of features considered at each split. More details on the cross-validation can be found in the accompanying Jupyter notebook.
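A minimal sketch of this per-tree quantile construction on top of scikit-learn follows; the helper is our own illustration of the procedure described above, not the production model, and note that Meinshausen's original QRF aggregates the training observations stored in the leaves rather than the per-tree constants.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def qrf_predict(forest: RandomForestRegressor, X: np.ndarray,
                quantiles=(2.5, 97.5)):
    """Point estimate (forest mean) plus a prediction interval from
    the sorted per-tree predictions, as described in the text."""
    per_tree = np.stack([tree.predict(X) for tree in forest.estimators_])
    point = per_tree.mean(axis=0)            # ordinary RF prediction
    lo, hi = np.percentile(per_tree, quantiles, axis=0)
    return point, lo, hi

# Usage sketch: fit an ordinary RF on (features, -Delta E) targets,
# then query point estimates and intervals on held-out data.
# forest = RandomForestRegressor(n_estimators=500).fit(X_train, y_train)
# point, lo, hi = qrf_predict(forest, X_test)
```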
[14] and employed in the SciKit-Learn package that is widely used by machine-learning practitioners [15], reads:

\[\text{RMSLE+1}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\log(y_{i}(\vec{x}_{i})+1)-\log(h(\vec{x}_{i})+1)\right)^{2}}. \tag{3}\]

The \(\vec{x}_{i}\) are the feature vectors (combinations of feature values such as \(C^{A}*\Delta E^{A}(N_{b})\)). The \(y_{i}(\vec{x}_{i})\) values refer to the DFT calculated CBS energy corrections which our model tries to predict. Other authors do not add one inside the logarithm argument and instead add a small \(\epsilon\) value [16]. The addition of a small value in the logarithm argument is done to avoid undefined logarithmic arguments of zero. Note that due to the way we defined \(\Delta E^{AB}(N_{b})\) in eq. 1, the values of \(\Delta E^{AB}(N_{b})\) are always negative by the variational principle, which states that the total energy must stay the same or decrease when the basis-set size is increased [17]. To employ the RMSLE as a metric, the targets (what our model tries to predict) should be positive valued. We satisfy this constraint by setting \(y_{i}(\vec{x}_{i})\) equal to \(-\Delta E^{AB}(N_{b})\). This use of \(\epsilon=1\), however, as in the RMSLE+1, is not ideal for targets much less than one, since in the Taylor expansion for \(x\ll 1\) we have \(\log(x+1)\approx x\), and we arrive back at a metric similar to the MAE that gives more weight to larger targets. Since our CBS energy convergence criterion is \(10^{-4}\) eV/atom, we are motivated to use this value as our \(\epsilon\). We term this metric RMSLE+1E-4.

\[\text{RMSLE+1E-4}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\log(y_{i}(\vec{x}_{i})+10^{-4})-\log(h(\vec{x}_{i})+10^{-4})\right)^{2}}. \tag{4}\]

Recall that the QRF model in the combination model is trained on the stoichiometric residuals. The stoichiometric model may under- or overestimate the CBS corrections, giving the residuals positive or negative signs. As such, we cannot use the RMSLE to optimize the combination model. Instead we turn to a different metric, namely the symmetric mean absolute percentage error (sMAPE) [13], which is another popular metric for targets that vary over orders of magnitude and is defined as:

\[\text{sMAPE}=\frac{1}{N}\sum_{i=1}^{N}\frac{|h(\vec{x}_{i})-y_{i}(\vec{x}_{i})|}{\frac{1}{2}|y_{i}(\vec{x}_{i})|+\frac{1}{2}|h(\vec{x}_{i})|}\times 100. \tag{5}\]

Note, the closely related mean absolute percentage error (MAPE) is defined with a different denominator, containing only the target:

\[\text{MAPE}=\frac{1}{N}\sum_{i=1}^{N}\frac{|h(\vec{x}_{i})-y_{i}(\vec{x}_{i})|}{|y_{i}(\vec{x}_{i})|}\times 100. \tag{6}\]

Note that the MAPE is unbounded from above, while the sMAPE is bounded by 200%, a value attained when either the target or the prediction is zero. This means the sMAPE metric avoids issues where the target (\(y_{i}(\vec{x}_{i})\)) is close to zero and causes the value of the MAPE metric to explode [13]. For this reason we optimize our models for the sMAPE rather than the MAPE. The sMAPE is often employed to optimize machine learning models operating on time series data where the target can grow exponentially [18]. We also consider, however, the MAPE as a metric in our results since it is easier to comprehend. We experimented with training our QRF models by minimizing the sMAPE and saw slightly worse performance on the training data set in terms of the RMSLE+1E-4 and sMAPE, as compared to when minimizing for the RMSLE+1E-4.
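For concreteness, both metrics are straightforward to compute with NumPy. The following is a minimal sketch of eqs. (3)-(5), not the implementation from the accompanying notebook; the function names and the generic `eps` argument are our own:

```python
import numpy as np

def rmsle(y_true, y_pred, eps=1e-4):
    """RMSLE with an additive offset inside the logarithm.

    eps=1 reproduces the RMSLE+1 of eq. (3); eps=1e-4 reproduces the
    RMSLE+1E-4 of eq. (4). Targets must be positive, which is why the
    (negative) CBS corrections enter as y = -Delta E^AB(N_b).
    """
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((np.log(y_true + eps) - np.log(y_pred + eps)) ** 2))

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error of eq. (5), in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = 0.5 * (np.abs(y_true) + np.abs(y_pred))
    return np.mean(np.abs(y_pred - y_true) / denom) * 100.0
```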
Besides the RMSLE+1, RMSLE+1E-4, sMAPE and MAPE, we also include the MAE for completeness and the 95% quantile absolute error (95% quantile metric for short). This last metric represents the 95% quantile of the absolute errors (AE) for each model.

## Results

We analyze the performance of three models on the data, namely the stoichiometric model, the QRF model and the combination model. Table 1 summarizes the most relevant metrics obtained for the test and training data. Note that the models are trained separately on FHI-aims and exciting data. Recall also that the stoichiometric model has no free parameters to learn via training on the data.

\begin{table}
\begin{tabular}{|l|cc|cc|cc|cc|cc|cc|}
\hline
 & \multicolumn{6}{c|}{**exciting**} & \multicolumn{6}{c|}{FHI-aims} \\
\hline
Metric & \multicolumn{2}{c|}{Stoichiometric} & \multicolumn{2}{c|}{QRF} & \multicolumn{2}{c|}{Combination} & \multicolumn{2}{c|}{Stoichiometric} & \multicolumn{2}{c|}{QRF} & \multicolumn{2}{c|}{Combination} \\
\hline
sMAPE (\%) & 28.7 & (29.6) & 24.2 & (10.6) & 26.4 & (13.3) & 48.9 & (53.3) & 13.9 & (10.6) & 14.84 & (10.8) \\
\hline
MAPE (\%) & 27.9 & (30.6) & 27.3 & (11.6) & 27.2 & (27.8) & 58.2 & (70.1) & 14.9 & (29.0) & 21.49 & (33.6) \\
\hline
RMSLE+1 & 0.247 & (0.016) & 0.202 & (0.069) & 0.187 & (0.061) & 0.203 & (0.189) & 0.040 & (0.015) & 0.031 & (0.014) \\
\hline
RMSLE+1E-4 & 0.383 & (0.376) & 0.318 & (0.150) & \multicolumn{2}{c|}{NA} & 0.663 & (0.916) & 0.218 & (0.213) & \multicolumn{2}{c|}{NA} \\
\hline
MAE (eV/atom) & 1.556 & (1.250) & 1.437 & (0.313) & 1.360 & (0.260) & 0.257 & (0.212) & 0.036 & (0.013) & 0.030 & (0.014) \\
\hline
95\% Quantile (eV/atom) & 5.900 & (8.018) & 7.795 & (1.584) & 8.472 & (1.340) & 1.520 & (1.420) & 0.256 & (0.072) & 0.177 & (0.074) \\
\hline
\end{tabular}
\end{table}
Table 1: Metrics for the QRF and the combination models for exciting and FHI-aims on held-out test data. The stoichiometric model performance is shown for comparison. The 95% quantile metric refers to the 95% quantile of the absolute error between the model and the DFT CBS corrections. The corresponding training data metrics are shown in parentheses. Note that the exciting data contain larger calculated DFT \(\Delta E^{AB}\) targets since the basis-set size was controlled manually, whereas the FHI-aims basis variation stopped at the discrete _minimal_ option given by the code. This results in larger MAE and maximum error values for the exciting models.

The QRF models perform better than the stoichiometric models for all metrics except the 95% quantile of absolute errors, where they do slightly worse for exciting. In general, the CBS energy corrections are larger for exciting than for FHI-aims, since the LAPW nature allows for manually reducing the basis-set size to close to zero [5]. We see this fact in the standard deviation of the CBS energy corrections of the test data, which is 8.127 and 0.738 eV/atom for exciting and FHI-aims, respectively. The larger 95% quantile of absolute errors indicates that the stoichiometric model does slightly better than the QRF when targeting very large CBS energy corrections on the order of several eV/atom. The QRF models' predictions are plotted against the DFT targets in fig. 1. Note that logarithmic scales are used to visually capture several orders of magnitude in the total-energy corrections. Relevant metrics, the sMAPE and MAE, are shown to help the reader understand the quality of fit. The RMSLE+1E-4 is also shown (labeled RMSLE) as the metric that is optimized during training. For the MAPE metric in the FHI-aims case, surprisingly, the QRF model performs slightly better in the test data set than in the training data set.
This may be due to the fact that the QRF model is trained to fit the RMSLE+1E-4 metric and not the MAPE metric. We do not, in contrast, see a larger training error for the other five metrics of table 1.

Figure 1: Predictions of total-energy differences of the QRF model for FHI-aims (left) and exciting (right) data, plotted against the respective DFT results. Relevant error metrics are added to help interpret the quality of the fit. Note the logarithmic axes. The region of DFT calculated \(\Delta E^{AB}\) values between 1 meV/atom and 1 eV/atom is plotted with a lighter shade since these data are of particular interest to DFT practitioners. The RMSLE+1E-4 metric is labeled RMSLE.

We observe that the stoichiometric model for FHI-aims does quite poorly (test sMAPE of 48.9%), whereas the stoichiometric model performs better on the exciting data set with an sMAPE of 28.7%. This is understood as follows: the LAPW basis of exciting is gradually improved by increasing the basis-set size parameter \(RK_{\max}\). The atom-centered orbitals used in FHI-aims, on the other hand, give rise to a discrete basis-set size, specified as tiers (_minimal, tier1, tier2_), which correspond to more abrupt changes in basis-set quality. However, our non-linear QRF model provides a good model for the CBS limit of FHI-aims. The discrete and piece-wise nature of its basis sets may explain why the strongly non-linear and piece-wise nature of the RF models succeeds on the FHI-aims data set.

The combination models perform worse than the QRF models in terms of test sMAPE and MAPE for both DFT codes. (Recall that the combination models are optimized for the sMAPE.) The combination models do, however, perform better in terms of RMSLE+1 and MAE. As discussed earlier, these metrics (along with the 95% quantile AE) generally favor larger targets in our data set, closer to 1 eV/atom. The hypothesis that the combination model will outperform the QRF model appears false over the wide range of targets but true for very large targets. This may be due to the fact that the residuals of the stoichiometric model are in general quite large, e.g., the test sMAPE is 28.7% and 48.9% for exciting and FHI-aims respectively, which makes training a QRF on these residuals too difficult a task.

The violin plots in fig. 2 display the distributions of symmetric percentage errors (SPE) on the test data for all models. Violin plots [19] combine kernel-density plots with box plots, where the latter shows the model's SPE quantiles (5%, 25%, 50%, 75%, 95%). Outliers are plotted with dots above the respective 95% level. Note, the 95% quantile of SPE is not the same as the 95% quantile of AE metric in table 1. The kernel-density plots, underneath the box plots, provide estimates for the probability density of the SPE, e.g., they estimate the likelihood of the prediction errors in a given range when using the model. For FHI-aims, the QRF model concentrates the SPE around 10%, while the stoichiometric model shows a correspondingly thin distribution spanning a much larger range of SPE. The FHI-aims combination model appears to share many of the very large errors of the stoichiometric model. This is expected since the former employs a QRF that is fit on the residuals of the latter. When these residuals are sporadically very large, as is the case for both codes, whose stoichiometric models produce SPEs close to 100%, the combination model has a difficult task to fit these residuals.
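For clarity, the combination model just discussed amounts to only a few lines of code. The sketch below is our own simplified rendering: `X`, `y`, and `stoich` are hypothetical arrays (features, targets, and per-sample stoichiometric estimates), and scikit-learn's `RandomForestRegressor` stands in for the QRF with the custom RMSLE split criterion derived in the supplementary information:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_combination_model(X, y, stoich):
    """Fit a forest on the residuals left over by the stoichiometric model.

    X      : (n_samples, n_features) feature matrix
    y      : (n_samples,) CBS total-energy corrections (the targets)
    stoich : (n_samples,) stoichiometric estimates C^A*dE^A + C^B*dE^B
    """
    residual = np.asarray(y) - np.asarray(stoich)   # what the linear model misses
    forest = RandomForestRegressor(n_estimators=500, min_samples_leaf=5)
    forest.fit(X, residual)
    return forest

def predict_combination_model(forest, X, stoich):
    # Final estimate = stoichiometric part + learned residual correction.
    return np.asarray(stoich) + forest.predict(X)
```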
The **exciting** QRF model has a smaller median, 25% quantile, and 95% quantile than the stoichiometric model, but the 75% quantile of the QRF model is larger. The larger 95% quantile of the stoichiometric model is likely the reason why the stoichiometric model returns a significantly worse sMAPE. This means that the QRF model produces fewer very large SPEs, which is desired behavior. The combination model for **exciting** shows the best 75% and 95% quantile SPE but the worst outlier behavior, with several data points found between 100% and 200%. This is likely a similar effect as seen in the FHI-aims combination model. No meaningful trends can be identified among the test-data predictions of either code's QRF model that returned SPEs above the 95% quantile.

The Gini importance, or mean decrease of impurity (MDI) [12], of the features fed into the QRF models for both codes is displayed in fig. 3. The figure quantifies, for each variable, the QRF's ability to reduce the optimization metric using splits on that feature. Both codes' QRF models depend primarily on either \(\Delta E^{A}(N_{b})\) or \(\Delta E^{B}(N_{b})\), which are the two features of the stoichiometric model. That these variables are effective in describing the CBS energy correction of the binary compounds comes as no surprise considering the overall success of the stoichiometric model [5]. \(EA^{B}\), the electron affinity of the more electronegative element, \(B\), in the binary, is the second and fourth most important feature for the FHI-aims and **exciting** QRF models. The basis-set size variable, \(N_{b}\), turns out to be the second and fourth most selected feature for **exciting** and FHI-aims, respectively. For FHI-aims, the basis-set size variable, \(\Delta SB_{PVE}^{AB}\), is a single scalar. It maps an \(s\)-like orbital and a \(d\)-like orbital with the same weight. This loss of information in the mapping may be why we do not see the feature playing an important role. We experimented with feeding the model additional basis-set size variables, namely, the numerical and basis-set size settings encoded as integers, but saw no improvement in CV performance. The fact that the features \(\Delta E^{A}(N_{b})\) and \(\Delta E^{B}(N_{b})\) are functions of the basis-set size may explain why the basis-set size variables themselves are not important, since the basis-set size is implicitly included in these features.

The improvement of the QRF models over the corresponding stoichiometric models comes from two sources: the ability to use more variables and the ability to express non-linear functions. Based on the non-negligible feature importance of the basis-set size variable and atomic chemical data such as the electron affinity, we learn that the added features are important. However, this is not the only source of improvement. To provide evidence for this, we trained linear models (with \(\ell_{1}\) and \(\ell_{2}\) regularization) including these additional variables and found no significant improvement over the stoichiometric model. Therefore, we conclude that the QRF model benefits also from its non-linear nature.

QRF models not only provide predictions but also associated prediction intervals based on the training data. Prediction intervals provide the user with an estimate for the uncertainty of the QRF prediction [20]. When introducing quantile random forests, Meinshausen [10] uses 95% prediction intervals, which span the range between the 2.5% quantile and the 97.5% quantile.
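The per-tree construction of prediction intervals described earlier can be sketched in a few lines. This is our own minimal rendering of that idea (Meinshausen's estimator proper weights the training observations stored in the leaves, and the accompanying notebook may differ in detail):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def prediction_interval(forest, X, lower=2.5, upper=97.5):
    """Empirical prediction interval from the spread of per-tree predictions.

    forest : a fitted RandomForestRegressor
    X      : (n_samples, n_features) points at which to predict
    Returns the (lower%, upper%) quantiles of the per-tree predictions,
    i.e., a 95% interval for the defaults used here.
    """
    # One row of predictions per tree: shape (n_trees, n_samples).
    per_tree = np.stack([tree.predict(X) for tree in forest.estimators_])
    return (np.percentile(per_tree, lower, axis=0),
            np.percentile(per_tree, upper, axis=0))
```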
Figure 2: Distribution of symmetric percentage errors of the stoichiometric, QRF, and combination model on held-out test data for FHI-aims (left) and **exciting** (right) as violin plots. The box plot on top shows the 5%, 25%, 50%, 75% and 95% quantiles. The white dot shows the mean symmetric percentage error. The black dots indicate values that fall outside of the 5% and 95% quantiles. We do not see the 5% quantiles in the figure, or data points that are smaller than the 5% quantile, since the 5% quantiles are close to zero.

The target CBS energy corrections and their associated 95% prediction intervals are shown in fig. 4. They contain 92.7% of the held-out test data for FHI-aims and 85.9% for exciting. If the estimated 2.5% and 97.5% quantiles from our model were correct, we would expect 95% of the data to fall in this range. In other words, on average 2.5% of all unseen data points from the test data set should lie above, and 2.5% below, the 95% prediction interval. This implies that our prediction intervals for both best-performing models cover slightly less of the test data than desired. That said, we find the deviation acceptable, with only 10 and 11 data points for FHI-aims and exciting, respectively, being outliers. We note in passing that 99% prediction intervals (or other values) can also be easily created using the accompanying Jupyter Notebook if the user desires more conservative uncertainty estimates. We further analyze the 95% prediction intervals by plotting them against the calculated DFT values (the model target) in fig. 5. In general, we would expect larger calculated \(\Delta E^{AB}\) values to have larger associated model prediction intervals. This is indeed what we see. The exciting prediction intervals have a Pearson correlation of 0.73 with the calculated \(\Delta E^{AB}\) values. For FHI-aims, the correlation of the prediction intervals is lower, although still significantly positive at 0.69.

## Discussion

The QRF models allow us to perform CBS extrapolation of the total energy per atom. In this case, \(E^{AB}(N_{b})\) is known, and the model estimates the correction, \(\Delta E^{AB}(N_{b})\), the sum of both giving us the CBS estimate, \(E^{AB}(N_{\infty})\) (eq. 1). Assessing the overall performance of our models, for both codes, we find that the QRF models achieve improved results in all metrics except for the 95% quantile AE, for which the exciting model performs slightly worse. This indicates that for the largest CBS extrapolation corrections of the exciting data set, which are around 20-40 eV/atom, the stoichiometric model is preferred. This knowledge gives us a better understanding of the QRF model's domain of applicability: for calculated corrections of over 20 eV/atom, the user should prefer the stoichiometric model's estimate. Overall, we understand that the improvement of the QRF models over the stoichiometric models comes from the addition of new features and the ability to fit non-linear distributions.

Figure 3: Feature (Gini) importances of the QRF models for FHI-aims and exciting. Black lines show the standard deviation in the feature importance across trees in the forests. The features are ordered in terms of their importance for the FHI-aims QRF model. \(C^{A}*\Delta E^{A}\) and \(C^{B}*\Delta E^{B}\) are the first and second terms in the stoichiometric model, respectively. \(EA^{B}\) and \(EI^{B}\), respectively, are the electron affinity and ionization potential of the elemental solid composed of element B, computed with FHI-aims.
\(r_{s}^{B}\) is the mean radius for the \(s\)-like pseudo orbital of elemental solid \(B\) (computed with FHI-aims). \(N_{b}\) is the basis-set size for the respective code.

We also find the combination models to perform worse than the QRF models in terms of our metrics that cover a wide range of targets (sMAPE, MAPE), and as such we do not recommend the use of the combination model. The QRF models also provide prediction intervals, which offer the user a quantitative estimate of how uncertain the model is in each prediction it makes. We expect the model to have larger prediction intervals for larger targets, and we indeed see this: the 95% prediction intervals have a Pearson correlation of over 0.69 with the targets for both codes. The 95% prediction intervals contain slightly less than 95% of the held-out calculated CBS energy corrections. In general, however, the correlation of the prediction intervals and the percentage of data they cover indicate that the prediction intervals of the QRF models are well-behaved.

The QRF models offer users of materials databases a quantitative assessment of how far a total-energy result is from the CBS limit, which helps to evaluate how these results can be reused or repurposed. Looking forward, we believe that this work will allow such databases to provide estimated CBS corrections for hosted data. We find the overall performance of the QRF models in terms of sMAPE on the test data set (less than 25% for exciting and less than 15% for FHI-aims) acceptable, especially considering that we provide prediction intervals that indicate the precision of the estimate (and thereby the domain of applicability of the model). This will help non-experts unlock the large potential these databases have in many fields (e.g., medical, transportation, energy). For instance, data that were simulated for a molecular dynamics investigation likely have a large CBS correction and might be unsuitable for investigations where high precision is required. In the opposite case, data computed with very high precision settings, even coming from different sources, will have a low CBS correction and might be suitable for a wide range of machine-learning tasks. We also believe that this work may have future applications in recommending a basis-set size to DFT practitioners for achieving a certain degree of precision before a calculation is performed, thereby saving computational expense. We expect highly expressive non-linear models such as neural networks and related methods to offer an opportunity to improve CBS-limit predictions for DFT data [21]. However, these models are notoriously data-hungry, and more data are needed before they can be used effectively. The authors plan high-throughput calculations to obtain more dedicated data for a systematic analysis. This includes not only total energies but also more involved properties, such as electronic or elastic ones.

## Data Availability

The raw DFT data, i.e., input and output files for both exciting and FHI-aims, are hosted in the NOMAD Repository under the following DOIs: exciting: DOI:10.17172/NOMAD/2020.07.15-1, FHI-aims: DOI:10.17172/NOMAD/2020.07.27-1.

Figure 4: Prediction intervals (95%) for the QRF models for FHI-aims (left) and exciting (right). The blue points indicate the calculated \(\Delta E^{AB}\) DFT values.
The yellow shaded areas show the prediction intervals for each DFT target value (blue point). The data (DFT values and associated prediction intervals) are ordered from left to right in terms of increasing DFT values. The insets zoom into the respective region of smaller values. Note, the x-axes are different for the two codes since there is a varying amount of data for each code due to the different basis-set size parameters. As a result, the y-axes are also different. exciting contains larger calculated DFT \(\Delta E^{AB}\) values since the basis-set size was controlled manually, whereas the FHI-aims basis variation stopped at the discrete _minimal_ option given by the code.

## Code Availability

The code used to generate all figures and train all models in this paper can be found on the NOMAD AI-Toolkit [22] at this link: [https://nomad-lab.eu/aitutorials/error-estimates-qrf](https://nomad-lab.eu/aitutorials/error-estimates-qrf).
2302.03135
Monotone Function Intervals: Theory and Applications
A monotone function interval is the set of monotone functions that lie pointwise between two fixed monotone functions. We characterize the set of extreme points of monotone function intervals and apply this to a number of economic settings. First, we leverage the main result to characterize the set of distributions of posterior quantiles that can be induced by a signal, with applications to political economy, Bayesian persuasion, and the psychology of judgment. Second, we combine our characterization with properties of convex optimization problems to unify and generalize seminal results in the literature on security design under adverse selection and moral hazard.
Kai Hao Yang, Alexander K. Zentefis
2023-02-06T21:47:56Z
http://arxiv.org/abs/2302.03135v5
# Extreme Points and First-Order Stochastic Dominance:

###### Abstract

We characterize the extreme points of first-order stochastic dominance (FOSD) intervals and show how these intervals are at the heart of many topics in economics. Using these extreme points, we characterize the distributions of posterior quantiles, leading to an analog of a classical result regarding the distributions of posterior means. We apply this analog to various subjects, including the psychology of judgement, political economy, and Bayesian persuasion. In addition, FOSD intervals provide a common structure to security design. We use the extreme points to unify and generalize seminal results in that literature when either adverse selection or moral hazard pertains.

**JEL classification:** D72, D82, D83, D86, G23

**Keywords:** Extreme points, first-order stochastic dominance, posterior quantiles, overconfidence, gerrymandering, Bayesian persuasion, security design

## 1 Introduction

The notion of first-order stochastic dominance has been part of economics since at least the late 1960s. At that time, several authors established its importance for the analysis of choice under risk.1 In this paper, we show that many well-known economic questions can be recast in terms of first-order stochastic dominance. This reframing connects seemingly unrelated subjects in economics--including optimal security design, Bayesian persuasion, the psychology of judgment, and partisan redistricting--revealing that many of these subjects' insights share a common structure.

Footnote 1: See, for example, Hadar and Russell (1969); Hanoch and Levy (1969); Rothschild and Stiglitz (1970); Whitmore (1970). See also Kroll and Levy (1980), Bawa (1982), and Levy (1990) for surveys of the body of work that followed.

Our main result characterizes the extreme points of first-order stochastic dominance (FOSD) _intervals_. These intervals describe sets of distributions that dominate a distribution and are simultaneously dominated by another distribution, in the sense of FOSD. Figure I below illustrates an FOSD interval, which is the collection of cumulative distribution functions (CDFs) bounded by the blue and red CDFs. The convexity of FOSD intervals means that their _extreme points_ are fundamental to understanding their properties. We show that a distribution is an extreme point of an FOSD interval if and only if the distribution either coincides with one of the FOSD bounds or is flat. Wherever the distribution is flat, at least one end of the flat portion must be attached to one of the FOSD bounds, as illustrated by the black CDF in Figure I.

Figure I: An Extreme Point of an FOSD Interval

This characterization is useful to economics because various settings studied in different literatures can be reformulated into problems involving FOSD intervals. Many canonical and novel results in the relevant literatures follow from the characterization. We demonstrate this through two broad classes of economic applications.

In the first class of applications, we prove an analog to a celebrated result in probability theory that has been widely used in economics. Consider a random variable and a signal for it. For each signal realization, a posterior belief is determined by Bayes' rule. For every posterior belief, one can compute the posterior mean. Strassen's theorem (Strassen, 1965) implies that the distribution of these posterior means is a mean-preserving contraction of the prior, and vice versa.
Rothschild and Stiglitz (1970) made clear the economic implications of Strassen's theorem, in particular toward the theory of risk. The Bayesian persuasion literature has extensively applied this theorem to obtain explicit solutions to many persuasion problems (see, for example, Gentzkow and Kamenica, 2016 and Dworczak and Martini, 2019). Instead of posterior means, one can derive many other statistics of a posterior. Using the characterization of the extreme points of FOSD intervals, we characterize the distributions of posterior _quantiles_, leading to an analog of Strassen's theorem. The distributions of posterior quantiles coincide with an FOSD interval bounded by an upper and a lower truncation of the prior.

The characterization of the distributions of posterior quantiles further leads to many economic applications. For example, in the psychology of judgement, a seminal result on identifying overconfidence follows immediately. It is well documented that individuals can appear to be over- or underconfident when evaluating themselves.2 Observing this in the literature, Benoit and Dubra (2011) show that this finding alone does not imply irrationality. They consider a setting where individuals are asked to rank their ability on a certain task (e.g., driving skills) relative to a given population. The main result of Benoit and Dubra (2011) is a characterization of the set of self-ranking data that are rationalizable by a Bayesian model. From this characterization, they provide a necessary and sufficient condition for _apparent_ overconfidence (e.g., more than 50% of individuals ranking themselves above the population median) to imply _true_ overconfidence (i.e., individuals are not Bayesian). As an immediate corollary, our characterization of the distributions of posterior quantiles generalizes this result. This generalization extends the setting beyond self-ranking questions on a relative scale to self-evaluation questions on an absolute scale, such as raw test scores or the probability of employment after graduation, as studied in Weinstein (1980).

Footnote 2: See, for example, Alicke, Klotz, Breitenbecher, Yurak and Vredenburg (1995); De Bondt and Thaler (1995); Moore (2007); Kruger, Windschitl, Burrus, Fessel and Chambers (2008).

As another example, our characterization of the distributions of posterior quantiles leads to novel results in political economy--in particular, on gerrymandering, the manipulation of electoral district boundaries. In this setting, citizens identify with an ideal position on political issues along a spectrum. The variety of positions is represented as a distribution, which we can call a prior. An electoral map segments citizens into districts, which splits the prior distribution into different parts. This electoral map can be regarded as a signal, and the distribution of ideal positions within each district of the map can be interpreted as a posterior. If each district elects a representative holding the district's median position (Downs, 1957; Black, 1958), the composition of the legislative body (i.e., the distribution of ideal positions of elected representatives) can then be represented as a distribution of posterior medians. Our characterization of the distributions of posterior quantiles fully describes the scope of legislatures that unrestrained gerrymandering can achieve. Gerrymandering can induce _any_ legislature within the bounds of two extremes: an "all-left" body and an "all-right" body.
In the former, the composition of the legislature only reflects citizens' ideal positions that are left of the population median; whereas in the latter, the composition of the legislature only reflects citizens' ideal positions that are right of the population median. At the same time, any compositions _beyond_ the "all-left" and the "all-right" bodies (e.g., anything more right-leaning than the distribution of citizens' ideal positions that are to the right of the population median) are not possible through _any_ kind of gerrymandering. Thus, the scope of unrestrained gerrymandering is identified by the all-left and all-right bodies, as well as _anything in between_. Our third application of the distributions of posterior quantiles is to Bayesian persuasion. Kamenica and Gentzkow (2011) provide a framework for studying a sender's communication to a receiver under the commitment assumption. A practical challenge, however, is that the concavification approach used in this literature loses tractability as the number of states increases. An exception is when the state is one-dimensional and only posterior _means_ are payoff-relevant to the sender. Our characterization complements this literature, as it brings tractability to settings where only posterior _quantiles_ are payoff-relevant to the sender. For example, our characterization leads to explicit solutions to a persuasion problem where the sender's payoff is state-independent, and the receiver chooses an action to match the state and minimizes the _absolute_ loss, rather than the quadratic loss. We show how this simple change has substantive implications for the type of information the sender optimally discloses. In the second class of applications, we put our characterization of the extreme points of FOSD intervals to use in the security design literature. We show how FOSD intervals present a unifying structure to security design, and we uncover common features of the optimal securities in a wide class of security design problems. A typical setting in security design involves an entrepreneur with an asset but no money, and investors with money but no asset. The entrepreneur considers the type of security to issue to investors in exchange for funding. The entrepreneur typically has more information than the investors about how much the asset actually earned or about the effort the entrepreneur exerted to jump-start the asset. Two widely adopted assumptions in the literature make the security design problem amenable to FOSD intervals. The first is limited liability. The entrepreneur cannot pay the investors any more than all the asset's cash flow, and the investors cannot receive anything less than zero. Limited liability places natural upper and lower stochastic bounds on the security's payoff. The second assumption is that the security's payoff is monotone in the asset's cash flow. See Innes (1990), Nachman and Noe (1994), and DeMarzo and Duffie (1999) for justifications of this assumption. Monotonicity introduces a natural first-order stochastic dominance between the asset and the security. Two seminal papers adopt these assumptions in their analysis of the security design problem. Innes (1990) studies the problem under moral hazard, whereas DeMarzo and Duffie (1999) consider a situation with adverse selection. Both papers derive a standard debt contract as an optimal security, which promises either a constant payment or the asset's realized cash flow, whichever is smaller. 
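In symbols (our notation; the cited papers each use their own), if \(x\geq 0\) denotes the asset's realized cash flow and \(d\geq 0\) the face value of the debt, the standard debt security pays

\[s(x)=\min\{d,x\},\qquad 0\leq s(x)\leq x,\]

so limited liability holds by construction, and \(s\) is nondecreasing in \(x\), consistent with the monotonicity assumption discussed above.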
Many papers in security design that followed were influenced by the Innes (1990) or DeMarzo and Duffie (1999) environment. (See, for example, Schmidt 1997; Casamatta 2003; Eisfeldt 2004; Biais and Mariotti 2005.) But the optimality of standard debt in Innes (1990) and DeMarzo and Duffie (1999) relies on another crucial assumption: The asset's cash flow distribution (or signal about the cash flow's distribution) satisfies the monotone likelihood ratio property (MLRP). The assumption is reasonable, but not without limitations (Hart 1995). By recasting the security design problem using FOSD intervals, our characterization of the extreme points allows us to solve for the optimal security without reliance on MLRP. This reframing also demonstrates that many security design problems, whether afflicted by moral hazard or adverse selection, can be unified under a common framework. Although the optimal security is not necessarily standard debt without assuming MLRP, we show that _contingent_ debt is enough for optimality. For this security, the face value of the entrepreneur's debt to investors is contingent on the realized cash flow of the asset.3 Consequently, the nature of standard debt contracts--which grants the entrepreneur only residual rights and never has the entrepreneur share partial equity with investors--is preserved even without assuming MLRP. In essence, using extreme points of FOSD intervals to find the optimal security allows one to separate the effects of limited liability on security design from the effects of the monotone likelihood ratio property.

Footnote 3: Contingent debt contracts share some similarity with state-contingent debt instruments (SCDIs) from the sovereign debt literature, which tie a country's principal or interest payments to its nominal GDP (Lessard and Williamson 1987; Shiller 1994; Borensztein and Mauro 2004).

Overall, this paper uncovers the common underlying role of FOSD intervals in many topics in economics, and it offers a unifying approach to answering canonical economic questions that have been previously answered by separate, case-specific approaches. Not only do several classical results follow from our main characterization, but we also use that characterization to develop new findings that otherwise would have been challenging to obtain without it.

Related Literature. This paper relates to several areas. The main result connects to characterizations of extreme points of convex sets. In this area, Hardy, Littlewood and Pólya (1929) characterize the extreme points of a set of vectors \(x\) majorized by another vector \(x_{0}\) in \(\mathbb{R}^{n}\), which is often referred to as majorization orbits.4 They show that the extreme points of this set coincide with the permutations of \(x_{0}\). Ryff (1967) extends this result to infinite dimensional spaces. Kleiner, Moldovanu and Strack (2021) characterize the extreme points of a subset of orbits under an additional monotonicity assumption, which is equivalent to the set of probability distributions being either a mean-preserving spread or mean-preserving contraction of a probability distribution on \(\mathbb{R}\). Independently, Arieli, Babichenko, Smorodinsky and Yamashita (2023) also characterize the extreme points of mean-preserving contractions of a probability distribution on \(\mathbb{R}\) and show that these extreme points coincide with a class of signals the authors refer to as "bi-pooling."
Footnote 4: A vector \(x\in\mathbb{R}^{n}\) majorizes \(y\in\mathbb{R}^{n}\) if \(\sum_{i=1}^{k}x_{(i)}\geq\sum_{i=1}^{k}y_{(i)}\) for all \(k\in\{1,\ldots,n\}\), with equality at \(k=n\), where \(x_{(j)}\) and \(y_{(j)}\) are the \(j\)-th smallest component of \(x\) and \(y\), respectively.

Compared to Kleiner, Moldovanu and Strack (2021), this paper characterizes the extreme points of distributions under the _first-order_ stochastic dominance order, rather than the second-order stochastic dominance order. Moreover, our characterization applies to an _interval_ of distributions: those that are dominated by a distribution and dominate another distribution at the same time. This contrasts with an orbit, which contains only distributions that are either dominated by one distribution or dominate another. The qualitative structure of the extreme points of FOSD intervals shares some similarity with the extreme points of a mean-preserving spread orbit, as given by Kleiner, Moldovanu and Strack (2021). In particular, any extreme point of an FOSD interval either must coincide with one of the bounds or must pool all states in an interval into one mass point.5

Footnote 5: Note that the convex set of interest in Kleiner, Moldovanu and Strack (2021) is equivalent to extreme points of the set of distributions that dominate (or, are dominated by) some CDF, with an additional condition that every such distribution admits a nondecreasing density. Thus, our setting is different from theirs even when restricting to orbits.

Several recent papers exploit properties of extreme points to derive economic implications. Bergemann, Brooks and Morris (2015) use the extreme points of the convex set of market segments that induce the same optimal monopoly price to construct the consumer surplus-maximizing market segmentation. Lipnowski and Mathevet (2018) use the extreme points of posterior covers to reduce the support of optimal signals in a general persuasion framework. Kleiner, Moldovanu and Strack (2021) use the extreme points of majorization orbits to derive novel proofs of the celebrated Border's condition, the Bayesian-dominance equivalence result, optimality of bi-pooling signals in mean-based persuasion settings, as well as the equivalence of persuasion and a class of delegation problems. Finally, several works in mechanism design use the extreme points of feasible mechanisms to establish the optimality of rationing and randomized posted prices (e.g., Dworczak, Kominers and Akbarpour 2021; Loertscher and Muir 2022; Kang 2022). These papers, as we do in Section 4, exploit the result of Winkler (1988), which characterizes the extreme points of convex subsets defined by finitely many linear inequalities.

The first application of this paper to the distributions of posterior quantiles is related to belief-based characterizations of signals, which date back to the seminal contributions of Blackwell (1953) and Harsanyi (1967-68). The characterization of distributions of posterior means can be derived from Strassen (1965). Our application can be regarded as a complement, as it characterizes the distributions of posterior quantiles, instead of means. This characterization generalizes the results of Benoit and Dubra (2011), who identify the Bayesian-rationalizable self-ranking data where subjects place themselves relative to the population according to a posterior quantile.
Our gerrymandering results are related to the literature on redistricting, particularly to Owen and Grofman (1988), Friedman and Holden (2008), Gul and Pesendorfer (2010), and Kolotilin and Wolitzky (2020), who also adopt the belief-based approach and model a district map as a way to split the population distribution of voters. Existing work mainly focuses on a political party's optimal gerrymandering when maximizing either its expected number of seats or its probability of winning a majority. In contrast, our result characterizes the _feasible compositions_ of a legislative body that a district map can induce.

Our application to Bayesian persuasion relates to that large literature (see Kamenica 2019 for a comprehensive survey), in particular to communication problems where only posterior means are payoff-relevant (e.g., Gentzkow and Kamenica 2016; Roesler and Szentes 2017; Dworczak and Martini 2019; Ali, Haghpanah, Lin and Siegel 2022). We complement this literature by providing a foundation for solving communication problems where only the posterior _quantiles_ are payoff-relevant.

Finally, our reframing of security design using FOSD intervals connects this paper to that large literature. Allen and Barbalau (2022) provide a recent survey. In this application, we base our economic environments on Innes (1990), which involves moral hazard, and DeMarzo and Duffie (1999), which involves adverse selection. We generalize and unify results in those seminal papers under a common structure, revealing how security design problems can be solved using FOSD intervals when either type of asymmetric information is at play.

## 2 Extreme Points of First-Order Stochastic Dominance Intervals

### Notation

Let \(\mathcal{F}\) be the collection of CDFs on \(\mathbb{R}\).6 For any \(F,G\in\mathcal{F}\) such that \(G(x)\leq F(x)\) for all \(x\in\mathbb{R}\), let

\[\mathcal{I}(F,G):=\{H\in\mathcal{F}\,|G(x)\leq H(x)\leq F(x),\,\forall x\in\mathbb{R}\}.\]

Footnote 6: \(\mathcal{F}\) is endowed with the weak-* topology and the induced Borel \(\sigma\)-algebra, unless otherwise specified.

Namely, \(\mathcal{I}(F,G)\) is the collection of distributions that dominate \(F\) and simultaneously are dominated by \(G\) in the sense of _first-order stochastic dominance_ (FOSD). In other words, \(\mathcal{I}(F,G)\) is the first-order stochastic dominance _interval_ between \(G\) and \(F\). For any \(F\in\mathcal{F}\) and for any \(x\in\mathbb{R}\), let \(F(x^{-}):=\lim_{y\uparrow x}F(y)\) denote the left-limit of \(F\) at \(x\). Meanwhile, for any \(F\in\mathcal{F}\) and for any \(\tau\in(0,1)\), let \(F^{-1}\) be the _quantile_ function of \(F\). Namely, \(F^{-1}(\tau):=\inf\{x\in\mathbb{R}|F(x)\geq\tau\}\).7

Footnote 7: Note that \(F^{-1}\) is nondecreasing and left-continuous for all \(F\in\mathcal{F}\). Moreover, for any \(\tau\in(0,1)\) and for any \(x\in\mathbb{R}\), \(F^{-1}(\tau)\leq x\) if and only if \(F(x)\geq\tau\).

### Extreme Points of First-Order Stochastic Dominance Intervals

For any two distributions \(F\) and \(G\), the FOSD interval \(\mathcal{I}(F,G)\) is a convex set. \(H\) is said to be an extreme point of \(\mathcal{I}(F,G)\) if \(H\) cannot be written as a convex combination of two distinct elements of \(\mathcal{I}(F,G)\). Theorem 1 characterizes the extreme points of \(\mathcal{I}(F,G)\).
**Theorem 1** (Extreme Points of \(\mathcal{I}(F,G)\)).: _For any \(F,G,H\in\mathcal{F}\) such that \(G(x)\leq H(x)\leq F(x)\) for all \(x\in\mathbb{R}\), \(H\) is an extreme point of \(\mathcal{I}(F,G)\) if and only if there exists a countable collection of intervals \(\{[\underline{x}_{n},\overline{x}_{n})\}_{n=1}^{\infty}\) such that:_

1. \(H(x)\in\{G(x),F(x)\}\) _for all_ \(x\notin\cup_{n=1}^{\infty}[\underline{x}_{n},\overline{x}_{n})\)_._

2. _For all_ \(n\in\mathbb{N}\)_,_ \(H\) _is constant on_ \([\underline{x}_{n},\overline{x}_{n})\) _and either_ \(H(\overline{x}_{n}^{-})=G(\overline{x}_{n}^{-})\) _or_ \(H(\underline{x}_{n})=F(\underline{x}_{n})\)_._

Figure IA depicts an extreme point of an FOSD interval \(\mathcal{I}(F,G)\), where the blue CDF is the lower bound \(F\), and the red CDF is the upper bound \(G\). According to Theorem 1, any extreme point \(H\) of \(\mathcal{I}(F,G)\) must either coincide with one of the bounds, or be constant on an interval, where at least one end of the interval reaches one of the bounds. Appendix A.1 contains the proof of Theorem 1. We briefly summarize the argument below.

For the sufficiency part, consider any \(H\) that satisfies conditions 1 and 2 of Theorem 1. Suppose that \(H\) can be expressed as a convex combination of two distinct \(H_{1}\) and \(H_{2}\) in \(\mathcal{I}(F,G)\). Then, for any \(x\notin\cup_{n=1}^{\infty}[\underline{x}_{n},\overline{x}_{n})\), it must be that \(H_{1}(x)=H_{2}(x)=H(x)\), since otherwise at least one of \(H_{1}(x)\) and \(H_{2}(x)\) would be either above \(F(x)\) or below \(G(x)\). Thus, since \(H_{1}\neq H_{2}\), there exists \(n\in\mathbb{N}\) such that \(H_{1}(x)\neq H_{2}(x)\) and \(\lambda H_{1}(x)+(1-\lambda)H_{2}(x)=H(x)\) for all \(x\in[\underline{x}_{n},\overline{x}_{n})\), for some \(\lambda\in(0,1)\). Without loss, suppose that \(H_{1}(x)<H(x)<H_{2}(x)\) for all \(x\in[\underline{x}_{n},\overline{x}_{n})\). If \(H(\underline{x}_{n})=F(\underline{x}_{n})\), then \(F(\underline{x}_{n})=H(\underline{x}_{n})<H_{2}(\underline{x}_{n})\); whereas if \(H(\overline{x}_{n}^{-})=G(\overline{x}_{n}^{-})\), then \(H_{1}(\overline{x}_{n}^{-})<H(\overline{x}_{n}^{-})=G(\overline{x}_{n}^{-})\). In either case, one of \(H_{1}\) and \(H_{2}\) must not be an element of \(\mathcal{I}(F,G)\), a contradiction.

For the necessity part, consider any \(H\) that does not satisfy conditions 1 and 2 of Theorem 1. In this case, as depicted in Figure II, there exists a rectangle that lies between the graphs of \(F\) and \(G\), so that when restricted to this rectangle, the graph of \(H\) is not a step function. Then, since the extreme points of uniformly bounded nondecreasing functions are exactly the step functions (see, for example, Skreta 2006; Börgers 2015), \(H\) can be written as a convex combination of two distinct nondecreasing functions when restricted to this rectangle. Since the rectangle lies in between the graphs of \(F\) and \(G\), this, in turn, implies that \(H\) can be written as a convex combination of two distinct distributions in \(\mathcal{I}(F,G)\).

In what follows, we demonstrate how the characterization of extreme points of FOSD intervals can be applied to various economic settings. These applications rely on two crucial properties of extreme points. The first property--formally known as Choquet's theorem--allows us to express any element \(H\) of \(\mathcal{I}(F,G)\) as a mixture of its extreme points.
As a result, if one wishes to establish some property for every element of \(\mathcal{I}(F,G)\), and if this property is preserved under convex combinations, then it suffices to establish the property for all extreme points of \(\mathcal{I}(F,G)\), which is a much smaller set. The second property of extreme points that we rely on is that, for any convex optimization problem, one of the solutions must be an extreme point of the feasible set. This property is useful for economic applications because it immediately provides knowledge about the solutions to the underlying economic problem if it is convex and if the feasible set is related to an FOSD interval.

## 3 Distributions of Posterior Quantiles

In this section, we use Theorem 1, together with Choquet's theorem, to characterize the distributions of posterior quantiles. This characterization is an analog of the celebrated characterization of the distributions of posterior means that follows from Strassen's theorem (Strassen, 1965). We also show how the characterization of distributions of posterior quantiles leads to several economic applications. The first among these is generalizing (and simplifying the proof of) a widely known result due to Benoit and Dubra (2011) in the literature on the psychology of judgment. The second application is to political redistricting, and the third application is to Bayesian persuasion.

### Characterization of the Distributions of Posterior Quantiles

Consider a one-dimensional variable \(x\in\mathbb{R}\) that is drawn from a prior \(F_{0}\). A _signal_ for \(x\) is defined as a probability measure \(\mu\in\Delta(\mathcal{F})\) such that

\[\int_{\mathcal{F}}F(x)\mu(\mathrm{d}F)=F_{0}(x), \tag{1}\]

for all \(x\in\mathbb{R}\). Let \(\mathcal{M}\) denote the collection of all signals.8

Footnote 8: From Blackwell's theorem (Blackwell, 1953), given any \(\mu\in\mathcal{M}\), each \(F\in\mathrm{supp}(\mu)\) can be interpreted as a _posterior_ for \(x\) obtained via Bayes' rule under a prior \(F_{0}\), after observing the realization of a signal that is correlated with \(x\). The marginal distribution of this signal is summarized by \(\mu\).

For any distribution \(F\in\mathcal{F}\) and for any \(\tau\in(0,1)\), denote the set of \(\tau\)-quantiles of \(F\) by \([F^{-1}(\tau),F^{-1}(\tau^{+})]\).9 Furthermore, we say that a transition probability \(r:\mathcal{F}\times[0,1]\to\Delta(\mathbb{R})\) is a _quantile selection rule_ if, for all \(F\in\mathcal{F}\) and for all \(\tau\in(0,1)\), \(r(\cdot|F,\tau)\) assigns probability \(1\) to a subset of \(\tau\)-quantiles of \(F\). In other words, a quantile selection rule \(r\) selects (possibly through randomization) a \(\tau\)-quantile for every CDF \(F\) and for every \(\tau\in(0,1)\), whenever it is not unique. Let \(\mathcal{R}\) be the collection of all selection rules.

Footnote 9: \(F^{-1}(\tau^{+}):=\lim_{q\downarrow\tau}F^{-1}(q)\) denotes the right-limit of \(F^{-1}\) at \(\tau\).

For any \(\tau\in(0,1)\), for any signal \(\mu\in\mathcal{M}\), and for any selection rule \(r\in\mathcal{R}\), let \(H^{\tau}(\cdot|\mu,r)\) denote the distribution of the \(\tau\)-quantile induced by \(\mu\) and \(r\). For any \(\tau\in(0,1)\), let \(\mathcal{H}_{\tau}\) denote the set of distributions of posterior \(\tau\)-quantiles that can be induced by some signal \(\mu\in\mathcal{M}\) and selection rule \(r\in\mathcal{R}\). Using Theorem 1, we provide a complete characterization of the distributions of posterior quantiles induced by arbitrary signals and selection rules.
To this end, define two distributions \(\underline{F}_{0}^{\tau}\) and \(\overline{F}_{0}^{\tau}\) as follows:

\[\underline{F}_{0}^{\tau}(x):=\min\left\{\frac{1}{\tau}F_{0}(x),1\right\},\quad\overline{F}_{0}^{\tau}(x):=\max\left\{\frac{F_{0}(x)-\tau}{1-\tau},0\right\}.\]

Note that \(\overline{F}_{0}^{\tau}(x)\leq\underline{F}_{0}^{\tau}(x)\) for all \(x\in\mathbb{R}\) and for all \(\tau\in(0,1)\). In essence, \(\underline{F}_{0}^{\tau}\) is the conditional distribution of \(F_{0}\) in the event that \(x\) is smaller than a \(\tau\)-quantile of \(F_{0}\); whereas \(\overline{F}_{0}^{\tau}\) is the conditional distribution of \(F_{0}\) in the event that \(x\) is larger than the same \(\tau\)-quantile. Theorem 2 below characterizes the distributions of posterior quantiles \(\mathcal{H}_{\tau}\).

**Theorem 2** (Distributions of Posterior Quantiles).: _For any \(\tau\in(0,1)\),_

\[\mathcal{H}_{\tau}=\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau}).\]

Theorem 2 completely characterizes the distributions of posterior \(\tau\)-quantiles by the FOSD interval \(\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\). Figure III illustrates Theorem 2 for the case when \(\tau=\nicefrac{1}{2}\). The distribution \(\underline{F}_{0}^{\nicefrac{1}{2}}\) is colored blue, whereas the distribution \(\overline{F}_{0}^{\nicefrac{1}{2}}\) is colored red. The green dotted curve represents the prior, \(F_{0}\). According to Theorem 2, any distribution \(H\) bounded by \(\underline{F}_{0}^{\nicefrac{1}{2}}\) and \(\overline{F}_{0}^{\nicefrac{1}{2}}\) (for instance, the black curve in the figure) can be induced by a signal \(\mu\in\mathcal{M}\) and a selection rule \(r\in\mathcal{R}\). Conversely, for any signal and for any selection rule, the induced graph of the distribution of posterior \(\tau\)-quantiles must fall in the area bounded by the blue and red curves. For example, under the signal that reveals all the information, the distribution of posterior \(\nicefrac{1}{2}\)-quantiles coincides with the prior, whereas under the signal that does not reveal any information, the distribution of posterior \(\nicefrac{1}{2}\)-quantiles coincides with the step function that has a jump (of size 1) at \(F_{0}^{-1}(\nicefrac{1}{2})\).

Theorem 2 can be regarded as a natural analog of the well-known characterization of the distributions of posterior _means_ that follows from Strassen (1965). Strassen's theorem implies that a CDF \(H\in\mathcal{F}\) is a distribution of posterior means if and only if \(H\) is a mean-preserving contraction of the prior \(F_{0}\) (i.e., \(H\) _majorizes_ \(F_{0}\)). Instead of posterior means, Theorem 2 pertains to posterior quantiles. According to Theorem 2, \(H\) is a distribution of posterior \(\tau\)-quantiles if and only if \(H\) dominates the lower-truncated prior \(\underline{F}_{0}^{\tau}\) and is dominated by the upper-truncated prior \(\overline{F}_{0}^{\tau}\), in the sense of FOSD.

The necessity part of Theorem 2 is straightforward from the martingale property of posterior beliefs. Indeed, for any signal \(\mu\in\mathcal{M}\) and for any \(r\in\mathcal{R}\),

\[H^{\tau}(x|\mu,r)\leq\mu(\{F\in\mathcal{F}|F^{-1}(\tau)\leq x\})=\mu(\{F\in\mathcal{F}|F(x)\geq\tau\}),\]

for all \(x\in\mathbb{R}\), where the first inequality holds because the right-hand side corresponds to the distribution of posterior quantiles induced by \(\mu\) when the lowest \(\tau\)-quantile is selected with probability \(1\).
Furthermore, for any \(x\in\mathbb{R}\), if we regard \(F(x)\in[0,1]\) as a random variable whose distribution is implied by \(\mu\), it then follows from (1) that its distribution must be a mean-preserving spread of (the Dirac measure at) \(F_{0}(x)\). As a result, \(\mu(\{F\in\mathcal{F}|F(x)\geq\tau\})\) can be at most \(\min\{F_{0}(x)/\tau,1\}\), since otherwise, the mean of \(F(x)\) can never be \(F_{0}(x)\). This implies that \(H^{\tau}(x|\mu,r)\leq\underline{F}_{0}^{\tau}(x)\). A similar argument leads to the conclusion that \(H^{\tau}(x|\mu,r)\geq\overline{F}_{0}^{\tau}(x)\).

The sufficiency part, however, is more challenging. To prove this, one would in principle need to construct a signal that generates the desired distribution of posterior quantiles for _every_ distribution \(H\in\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\). Although it might be easier to construct a signal that induces some specific distribution of posterior quantiles, constructing a signal for any arbitrary distribution \(H\in\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\) does not seem to be tractable.10 Nonetheless, Theorem 1 allows us to bypass this challenge and focus on distributions that satisfy its conditions 1 and 2. Indeed, since the mapping \((\mu,r)\mapsto H^{\tau}(\cdot|\mu,r)\) is affine, it suffices to construct signals that induce the extreme points of \(\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\) as posterior quantile distributions. The proof of Theorem 2 in Appendix A.2 explicitly constructs a signal (and a selection rule) for each extreme point of \(\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\). To illustrate the intuition, consider an extreme point \(H\) of \(\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\) that takes the following form:

\[H(x)=\left\{\begin{array}{ll}\underline{F}_{0}^{\tau}(x),&\text{ if }x<\underline{x}\\ \underline{F}_{0}^{\tau}(\underline{x}),&\text{ if }x\in[\underline{x},\overline{x})\\ \overline{F}_{0}^{\tau}(x),&\text{ if }x\geq\overline{x}\end{array}\right.,\]

for some \(\underline{x},\overline{x}\) such that \(\underline{F}_{0}^{\tau}(\underline{x})=\overline{F}_{0}^{\tau}(\overline{x}^{-})\), as depicted by Figure IVA. To construct a signal that has \(H\) as its distribution of posterior quantiles, separate all the states \(x\notin[\underline{x},\overline{x}]\). Then, take an \(\alpha\) fraction of the states in \([\underline{x},\overline{x}]\) and pool them uniformly with each separated state below \(\underline{x}\), while pooling the remaining \((1-\alpha)\) fraction uniformly with the separated states above \(\overline{x}\). Since \(\underline{F}_{0}^{\tau}(\underline{x})=\overline{F}_{0}^{\tau}(\overline{x}^{-})\), by choosing \(\alpha\) correctly, each \(x<\underline{x}\), after being pooled with states in \([\underline{x},\overline{x}]\), would become a \(\tau\)-quantile of the posterior it belongs to, as illustrated in Figure IVB.11 Similarly, each \(x>\overline{x}\) would become a \(\tau\)-quantile of the posterior it belongs to as well. Together, by properly selecting the posterior quantiles, the induced distribution of posterior quantiles under this signal would indeed be \(H\).

Footnote 11: Specifically, \(\alpha=\frac{1-\tau}{\tau}F_{0}(\underline{x})/(\frac{\tau}{1-\tau}(1-F_{0}(\overline{x}^{-}))+\frac{1-\tau}{\tau}F_{0}(\underline{x}))\).

Although the characterization of Theorem 2 may seem to rely on selection rules \(r\in\mathcal{R}\), the result remains (essentially) the same even when restricted to signals that always induce a unique posterior \(\tau\)-quantile, provided that the prior \(F_{0}\) has full support on an interval.
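Before stating that formalization, here is a quick numerical sanity check of Theorem 2 (a grid-based construction of our own, not part of the paper's argument). For a uniform prior and \(\tau=\nicefrac{1}{2}\), both the full-disclosure and no-disclosure distributions of posterior medians lie inside \(\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\), as does a flat-piece extreme point of the kind described in Theorem 1:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)   # grid on the support of the prior
F0 = x                            # uniform prior on [0, 1]
tau = 0.5

under = np.minimum(F0 / tau, 1.0)                 # \underline{F}_0^tau: pointwise upper envelope
over = np.maximum((F0 - tau) / (1.0 - tau), 0.0)  # \overline{F}_0^tau: pointwise lower envelope

def in_interval(H):
    # H lies in I(\underline{F}_0^tau, \overline{F}_0^tau) iff over <= H <= under pointwise.
    return np.all(over <= H + 1e-12) and np.all(H <= under + 1e-12)

assert in_interval(F0)                           # full disclosure: H equals the prior
assert in_interval((x >= 0.5).astype(float))     # no disclosure: point mass at the median

# An extreme point in the sense of Theorem 1: follow the pointwise larger
# bound, stay flat at 0.6 on [0.3, 0.8), then follow the pointwise smaller
# bound.  Both ends of the flat piece touch a bound: under(0.3) = over(0.8) = 0.6.
H = np.where(x < 0.3, under, np.where(x < 0.8, 0.6, over))
assert in_interval(H) and np.all(np.diff(H) >= -1e-12)   # valid CDF inside the interval
```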
Although the characterization of Theorem 2 may seem to rely on selection rules \(r\in\mathcal{R}\), the result remains (essentially) the same even when restricted to signals that always induce a unique posterior \(\tau\)-quantile, provided that the prior \(F_{0}\) has full support on an interval. Theorem 3 below formalizes this statement. To this end, let \(\widetilde{\mathcal{H}}_{\tau}\subseteq\mathcal{H}_{\tau}\) be the collection of distributions of posterior \(\tau\)-quantiles that can be induced by some signal where (almost) all posteriors have a unique \(\tau\)-quantile. The characterization of \(\widetilde{\mathcal{H}}_{\tau}\) relates to a family of perturbations of the set \(\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\), denoted by \(\{\mathcal{I}(\underline{F}_{0}^{\tau,\varepsilon},\overline{F}_{0}^{\tau,\varepsilon})\}_{\varepsilon>0}\), where \[\underline{F}_{0}^{\tau,\varepsilon}(x):=\left\{\begin{array}{rl}\frac{1}{\tau+\varepsilon}F_{0}(x),&\text{if }x<F_{0}^{-1}(\tau)\\ 1,&\text{if }x\geq F_{0}^{-1}(\tau)\end{array}\right.;\text{ and }\overline{F}_{0}^{\tau,\varepsilon}(x):=\left\{\begin{array}{rl}0,&\text{if }x<F_{0}^{-1}(\tau)\\ \frac{F_{0}(x)-(\tau-\varepsilon)}{1-(\tau-\varepsilon)},&\text{if }x\geq F_{0}^{-1}(\tau)\end{array}\right.,\] for all \(\varepsilon\geq 0\) and for all \(x\in\mathbb{R}\). Note that \(\mathcal{I}(\underline{F}_{0}^{\tau,0},\overline{F}_{0}^{\tau,0})=\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\), and that \(\{\mathcal{I}(\underline{F}_{0}^{\tau,\varepsilon},\overline{F}_{0}^{\tau,\varepsilon})\}_{\varepsilon>0}\) is decreasing in \(\varepsilon\) under the set-inclusion order.12 Footnote 12: As a convention, let \(\mathcal{I}(\underline{F}_{0}^{\tau,\varepsilon},\overline{F}_{0}^{\tau,\varepsilon}):=\emptyset\) when \(\varepsilon\geq\max\{\tau,1-\tau\}\). **Theorem 3** (Distributions of Unique Posterior Quantiles).: _For any \(\tau\in(0,1)\) and for any \(F_{0}\in\mathcal{F}\) that has full support on an interval,_ \[\bigcup_{\varepsilon>0}\mathcal{I}(\underline{F}_{0}^{\tau,\varepsilon},\overline{F}_{0}^{\tau,\varepsilon})\subseteq\widetilde{\mathcal{H}}_{\tau}\subseteq\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau}).\] As an immediate corollary of Theorem 2 and Theorem 3, we now have an analog of the celebrated law of iterated expectations, which we refer to as the _law of iterated quantiles_. **Corollary 1** (Law of Iterated Quantiles).: _Consider any \(\tau,q\in(0,1)\)._ 1. _For any_ \(F_{0}\in\mathcal{F}\) _and for any closed interval_ \(Q\subseteq\mathbb{R}\)_,_ \(Q=[H^{-1}(\tau),H^{-1}(\tau^{+})]\) _for some_ \(H\in\mathcal{H}_{q}\) _if and only if_ \(Q\subseteq[(\underline{F}_{0}^{q})^{-1}(\tau),(\overline{F}_{0}^{q})^{-1}(\tau^{+})]\)_._ 2. _For any continuous_ \(F_{0}\in\mathcal{F}\) _that has full support on an interval and for any_ \(\hat{x}\in\mathbb{R}\)_,_ \(\hat{x}\in[H^{-1}(\tau),H^{-1}(\tau^{+})]\) _for some_ \(H\in\widetilde{\mathcal{H}}_{q}\) _if and only if_ \(\hat{x}\in[(\underline{F}_{0}^{q})^{-1}(\tau),(\overline{F}_{0}^{q})^{-1}(\tau)]\)_._ According to Corollary 1, while the expectation of posterior means under any signal is always the expectation under the prior, the possible \(\tau\)-quantiles of posterior \(q\)-quantiles are exactly \([(\underline{F}_{0}^{q})^{-1}(\tau),(\overline{F}_{0}^{q})^{-1}(\tau^{+})]\). 
For example, the collection of all possible _medians_ of posterior _medians_ is exactly the _interquartile range_ \([F_{0}^{-1}(\nicefrac{{1}}{{4}}),F_{0}^{-1}(\nicefrac{{3}}{{4}})]\) of the prior. ### Economic Applications #### Apparent Overconfidence A key issue in the psychology of judgment is explaining why people rank themselves better or worse than others in certain tasks. By the 2000s, a consensus had emerged among researchers that most people commonly rank themselves as better than average on simple tasks and worse than average on difficult tasks (Moore, 2007; Kruger, Windschitl, Burrus, Fessel and Chambers, 2008). Up for debate, however, was whether this behavior was rational. Here we show how Theorem 3 can speak to this debate. Consider the following setting of individual self-evaluation, a setting due to Benoit and Dubra (2011). There is a unit mass of individuals, and each one of them is attached to a "type" \(x\in[0,1]\), which is distributed according to a CDF \(F_{0}\in\mathcal{F}\). Common interpretations of \(x\) in the literature include skill levels, scores on a standardized test, the probability of being successful at a task, or simply an individual's ranking in the population in percentage terms. Individuals are asked to predict their own type \(x\). Given a finite partition \(0=z_{0}<z_{1}<\ldots<z_{K}=1\) of \([0,1]\), a _prediction dataset_ is a vector \((\theta_{k})_{k=1}^{K}\in[0,1]^{K}\) with \(\sum_{k=1}^{K}\theta_{k}=1\), where \(\theta_{k}\) denotes the share of individuals who predict their own type is in \([z_{k-1},z_{k})\). It is well-documented in the experimental literature that a prediction dataset can be very different from the population distribution \(F_{0}\). One common explanation found in this literature is that individuals are truly overconfident or truly underconfident (Alicke, Klotz, Breitenbecher and Yurak, 1995; De Bondt and Thaler, 1995; Camerer, 1997). But Benoit and Dubra (2011) proposed an alternative explanation: This difference can simply be caused by noise in each individual's signal. People are only _apparently_ misconfident. Individuals can still be fully Bayesian even if the prediction dataset is different from the population distribution. We show next how a general version of Benoit and Dubra's (2011) insight follows immediately from Theorem 3. Consider the following Bayesian framework: Each individual receives a signal \(s\in S\) for their type \(x\), which is drawn from a conditional distribution given each realized \(x\). After observing their signal realizations, individuals then update their beliefs via Bayes' rule, and they predict their types according to their posterior medians (e.g., Hoelzl and Rustichini, 2005).13 Given the distribution \(F_{0}\) of types and a partition \(0=z_{0}<z_{1}<\ldots<z_{K}=1\), a prediction dataset \((\theta_{k})_{k=1}^{K}\) is said to be _median rationalizable (\(\tau\)-quantile rationalizable)_, if there exists a signal for \(x\) such that the induced posterior has a unique median (\(\tau\)-quantile) with probability 1, and that for all \(k\in\{1,\ldots,K\}\), the probability of the posterior median (\(\tau\)-quantile) being in the interval \([z_{k-1},z_{k})\) is \(\theta_{k}\).14 Footnote 13: Not all experiments might clearly instruct subjects on which posterior statistic to use when predicting their abilities. Nonetheless, when experiments do provide instructions, the most common statistics requested are posterior means or medians. 
When subjects use the posterior mean to predict their types, the set of rationalizable data would be given by mean-preserving contractions of the prior, which follows immediately from Strassen's theorem. Therefore, combining mean-rationalizable and median-rationalizable data covers most of the experiments in the literature. Under this framework, theorem 1 (and theorem 4) of Benoit and Dubra (2011) characterizes the collection of median (\(\tau\)-quantile) rationalizable datasets, under the assumption that \(F_{0}(z_{k})=\nicefrac{{k}}{{K}}\) for all \(k\in\{1,\ldots,K\}\). In other words, Benoit and Dubra (2011) characterize the collection of rationalizable datasets in the context of _self-ranking_, where individuals are asked to place themselves into a \(K\)-cile relative to the population according to their posterior medians (\(\tau\)-quantiles). Although relative self-ranking is one of the common types of experiments in the literature, as noted by Benoit and Dubra (2011), many other experiments involve some _absolute_ scales. For example, a large overconfidence literature asks students to forecast their exam scores (e.g., Murstein 1965; Grimes 2002; Hossain and Tsigaris 2015), which are typically on an absolute scale of 0 to 100. Alternatively, Weinstein (1980) asks students to predict their employment probabilities after graduation, which are also on an absolute scale of 0 to 1. As an immediate corollary of Theorem 3, we generalize the result of Benoit and Dubra (2011) and characterize the collection of \(\tau\)-quantile rationalizable datasets on an arbitrary scale.15 Footnote 15: Note that the scale on which the dataset lies is unrelated to the statistics that individuals use to predict their types. Therefore, it would be reasonable to assume that individuals predict their performance—both on a relative scale and an absolute scale—using either the median, a \(\tau\)-quantile, or the mean of their posteriors. As Benoit and Dubra (2011) note when discussing which statistics to use for individuals' evaluations and population benchmarks on an absolute scale: "Just considering medians and means, there are four ways to interpret answers to scale questions." **Corollary 2** (Rationalizable Apparent Misconfidence).: _For any \(\tau\in(0,1)\), for any \(F_{0}\in\mathcal{F}\) with full support on \([0,1]\), and for any partition \(0=z_{0}<z_{1}<\ldots<z_{K}=1\) of \([0,1]\), a prediction dataset \((\theta_{k})_{k=1}^{K}\) is \(\tau\)-quantile rationalizable if and only if for all \(k\in\{1,\ldots,K\}\),_ \[\sum_{i=1}^{k}\theta_{i}<\frac{1}{\tau}F_{0}(z_{k}) \tag{2}\] _and_ \[\sum_{i=k}^{K}\theta_{i}<\frac{1-F_{0}(z_{k-1}^{-})}{1-\tau}. \tag{3}\] Proof.: The necessity part follows directly from the proof of theorem 4 of Benoit and Dubra (2011). For sufficiency, consider any prediction dataset \((\theta_{k})_{k=1}^{K}\) such that (2) and (3) hold. Let \(H(x)\) be the distribution that assigns probability \(\theta_{k}\) to the point \((z_{k-1}+z_{k})/2\) for each \(k\in\{1,\ldots,K\}\). Then, there exists \(\varepsilon>0\) such that \(H\in\mathcal{I}(\underline{F}_{0}^{\tau,\varepsilon},\overline{F}_{0}^{\tau,\varepsilon})\). By Theorem 3, there exists a signal \(\mu\) with \(\mu(\{F\in\mathcal{F}|F^{-1}(\tau)<F^{-1}(\tau^{+})\})=0\) such that \(H(x)=H^{\tau}(x|\mu)\) for all \(x\in\mathbb{R}\), which in turn implies that \(\mu\) \(\tau\)-quantile-rationalizes \((\theta_{k})_{k=1}^{K}\), as desired. 
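As a quick numerical illustration of Corollary 2 (our own sketch; the function and variable names are hypothetical), the check below tests conditions (2) and (3) for a uniform prior with quartile bins. It shows, for instance, that a dataset in which 40% of individuals place themselves in the top quartile is median-rationalizable, while one with 60% is not.

```python
import numpy as np

# Sketch of the rationalizability check in Corollary 2 (assumes F0 is continuous
# with full support on [0, 1], so F0(z^-) = F0(z)).
def quantile_rationalizable(theta, z, F0, tau):
    """theta: shares (theta_1, ..., theta_K); z: partition 0 = z_0 < ... < z_K = 1."""
    theta = np.asarray(theta, dtype=float)
    K = len(theta)
    for k in range(1, K + 1):
        if not theta[:k].sum() < F0(z[k]) / tau:                      # condition (2)
            return False
        if not theta[k - 1:].sum() < (1 - F0(z[k - 1])) / (1 - tau):  # condition (3)
            return False
    return True

F0 = lambda t: t                   # uniform prior on [0, 1]
z = [0.0, 0.25, 0.5, 0.75, 1.0]    # quartile bins

print(quantile_rationalizable([0.1, 0.2, 0.3, 0.4], z, F0, tau=0.5))  # True: 40% in top quartile
print(quantile_rationalizable([0.1, 0.1, 0.2, 0.6], z, F0, tau=0.5))  # False: 0.6 >= 0.25/0.5
```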
_Remark 1_.: For comparison, when \(z_{k}=\nicefrac{{k}}{{K}}\) for all \(k\), and when \(F_{0}\) is uniform, Corollary 2 specializes to theorem 4 of Benoit and Dubra (2011), whose proof relies on projection and perturbation arguments and is not constructive. In addition to having a more straightforward proof and yielding a more general result, another benefit of Theorem 3 is that the signals rationalizing a feasible prediction dataset are semi-constructive: The extreme points of \(\mathcal{I}(\underline{F}_{0}^{\tau,\varepsilon},\overline{F}_{0}^{\tau,\varepsilon})\) are attained by explicitly constructed signals, as shown in the proof of Theorem 3.16 Footnote 16: It is also noteworthy that, although theorem 4 of Benoit and Dubra (2011) can be used to prove Theorem 2 indirectly when \(F_{0}\) admits a density (by taking \(K\to\infty\) and establishing proper continuity properties), the same argument cannot be used to prove Theorem 3, which is crucial for the proof of Corollary 2. This is because of the failure of upper-hemicontinuity when signals that induce multiple quantiles are excluded. #### Limits of Gerrymandering Beyond the psychology of judgment, Theorem 2 and Theorem 3 can be applied to political redistricting. The study of redistricting ranges across many fields: Legal scholars, political scientists, mathematicians, computer scientists, and economists have all contributed to this vast literature.17 Footnote 17: See, for example, Shotts (2001); Besley and Preston (2007); Coate and Knight (2007); McCarty, Poole and Rosenthal (2009); Fryer Jr and Holden (2011); McGhee (2014); Stephanopoulos and McGhee (2015); Alexeev and Mixon (2018). While existing economic theory on redistricting has primarily focused on optimal redistricting or fair redistricting mechanisms (e.g., Owen and Grofman, 1988; Friedman and Holden, 2008; Gul and Pesendorfer, 2010; Pegden, Procaccia and Yu, 2017; Ely, 2019; Friedman and Holden, 2020; Kolotilin and Wolitzky, 2020), another fundamental question is the scope of redistricting's impact on a legislature. If _any_ electoral map can be drawn, what kinds of legislatures can be created? In other words, what are the "limits of gerrymandering"? Theorem 2 and Theorem 3 describe the extent to which unrestrained gerrymandering can shape the composition of elected representatives. Consider an environment in which a continuum of citizens vote, and each citizen has single-peaked preferences over positions on political issues. Citizens have different ideal positions \(x\in\mathbb{R}\), and these positions are distributed according to some \(F_{0}\in\mathcal{F}\). In this setting, a signal \(\mu\in\mathcal{M}\) can be thought of as an electoral _map_, which segments citizens into electoral _districts_, such that a district \(F\in\mathrm{supp}(\mu)\) is described by the conditional distribution of the ideal positions of citizens who belong to it.18 Each district elects a _representative_, and election results at the district level follow the median voter theorem. That is, given any map \(\mu\in\mathcal{M}\), the elected representative of each district \(F\) must have an ideal position that is a median of \(F\). 
When there are multiple medians in a district, the representative's ideal position is determined by a selection rule \(r\in\mathcal{R}\), which is either flexible or stipulated by election laws.19 Footnote 19: Recall that any voting method that meets the Condorcet criterion (e.g., majority voting with two office-seeking candidates) satisfies the median voter property in this setting (Downs 1957; Black 1958). Given any \(\mu\in\mathcal{M}\) and any selection rule \(r\in\mathcal{R}\), the induced distribution of posterior medians \(H^{\nicefrac{{1}}{{2}}}(\cdot|\mu,r)\) can be interpreted as a distribution of the ideal positions of the elected representatives. Meanwhile, the bounds \(\underline{F}_{0}^{\nicefrac{{1}}{{2}}}\) and \(\overline{F}_{0}^{\nicefrac{{1}}{{2}}}\) can be interpreted as distributions of representatives that only reflect one side of voters' political positions relative to the median of the population. Specifically, \(\underline{F}_{0}^{\nicefrac{{1}}{{2}}}\) describes an "all-left" legislature, which only reflects citizens' ideal positions that are left of the population median. Likewise, \(\overline{F}_{0}^{\nicefrac{{1}}{{2}}}\) represents an "all-right" legislature, which only reflects citizens' ideal positions that are right of the population median. As an immediate implication of Theorem 2 and Theorem 3, Proposition 1 below completely characterizes the set of possible compositions of the legislature across all election maps. **Proposition 1** (Limits of Gerrymandering).: _For any \(H\in\mathcal{F}\), the following are equivalent:_ 1. \(H\in\mathcal{I}(\underline{F}_{0}^{\nicefrac{{1}}{{2}}},\overline{F}_{0}^{\nicefrac{{1}}{{2}}})\)_._ 2. \(H\) _is a distribution of the representatives' ideal positions under some map_ \(\mu\in\mathcal{M}\) _and some selection rule_ \(r\in\mathcal{R}\)_._ _Furthermore, for any fixed selection rule \(\hat{r}\in\mathcal{R}\), every \(H\in\cup_{\varepsilon>0}\mathcal{I}(\underline{F}_{0}^{\nicefrac{{1}}{{2}},\varepsilon},\overline{F}_{0}^{\nicefrac{{1}}{{2}},\varepsilon})\) is a distribution of the representatives' ideal positions under some map \(\mu\in\mathcal{M}\) and selection \(\hat{r}\)._ Hence, _any_ composition of the legislative body ranging from the "all-left" to the "all-right," and anything in between those two extremes, can be procured by some gerrymandered map. Meanwhile, _any_ composition that is more extreme than the "all-left" or the "all-right" bodies is not possible, regardless of how the districts are drawn.20 Footnote 20: Gomberg, Pancs and Sharma (forthcoming) also study how gerrymandering affects the composition of the legislature. However, the authors assume that each district elects a _mean_ candidate as opposed to the median. If we further specify a model of how the legislature enacts legislation, we can explore the set of possible _legislative outcomes_ that can be enacted. One natural assumption for the outcomes, regardless of the details of the legislative model, is that the enacted legislation must be a median of the representatives (i.e., the median voter property holds at the legislative level).21 Under this assumption, an immediate implication of Corollary 1 is that the set of achievable legislative outcomes coincides with the interquartile range of the citizenry's ideal positions, as summarized by Corollary 3 below. **Corollary 3** (Limits of Legislative Outcomes).: _Suppose that the median voter property holds both at the district level and at the legislative level. 
Then an outcome \(x\in\mathbb{R}\) can be enacted as legislation under some map if and only if \(x\in[F_{0}^{-1}(\nicefrac{{1}}{{4}}),F_{0}^{-1}(\nicefrac{{3}}{{4}})]\)._ According to Corollary 3, while the only Condorcet winners in this setting are the population medians, gerrymandering expands the set of possible legislation to the entire interquartile range of the population's views. Moreover, if the population is more polarized (i.e., the interquartile range is wider), more extreme legislation can pass. Conversely, Corollary 3 also suggests it is impossible to enact any legislative outcome _beyond_ the interquartile range, regardless of how the districts are drawn. Finally, Proposition 1 can help identify the citizenry's distribution of ideal positions. A common approach to identifying that distribution is to map public opinion survey responses to an ideological spectrum. But a disadvantage of this approach is the absence of consistent questions asked over time to create a stable mapping and the lack of representativeness in some surveys (Lax and Phillips, 2009). Identifying the ideal positions of elected officials has been more successful because of the abundance of roll-call voting records available in the estimation (Poole and Rosenthal, 1985; Shor and McCarty, 2011). Nonetheless, inferring the citizenry's distribution of ideal positions from that of elected officials is difficult, as the distribution of ideal positions of elected officials might be very different from that of the citizenry due to gerrymandering. Using Proposition 1, one can identify the possible distributions of citizens' ideal positions from the observed distribution of representatives' ideal positions. Suppose that \(H\) is the observed distribution of representatives' ideal positions. Proposition 1 implies that the population distribution \(F_{0}\) must be such that \(H\) is dominated by \(\overline{F}_{0}^{\nicefrac{{1}}{{2}}}\) and dominates \(\underline{F}_{0}^{\nicefrac{{1}}{{2}}}\) at the same time. This leads to Corollary 4 below. **Corollary 4** (Identification Set of \(F_{0}\)).: _Suppose that \(H\in\mathcal{F}\) is the distribution of ideal positions of a legislature. Then the distribution of citizens' ideal positions \(F_{0}\) must satisfy_ \[\frac{1}{2}H(x)\leq F_{0}(x)\leq\frac{1+H(x)}{2}, \tag{4}\] _for all \(x\in\mathbb{R}\). Conversely, for any \(F_{0}\in\mathcal{F}\) satisfying (4), there exists a map \(\mu\in\mathcal{M}\) and a selection rule \(r\in\mathcal{R}\), such that \(H\) is the distribution of ideal positions of the legislature._ According to Corollary 4, the distribution of citizens' ideal positions can be identified by (4), even when only the distribution of the representatives' ideal positions can be observed.22 #### Quantile-Based Persuasion Theorem 2 and Theorem 3 also lead to applications in Bayesian persuasion. Consider the Bayesian persuasion problem formulated by Kamenica and Gentzkow (2011): A state \(x\in\mathbb{R}\) is distributed according to a common prior \(F_{0}\). A sender chooses a signal \(\mu\in\mathcal{M}\) to inform the receiver, who then picks an action \(a\in A\) after seeing the signal's realization. The ex-post payoffs of the sender and receiver are \(u_{S}(x,a)\) and \(u_{R}(x,a)\), respectively. 
Kamenica and Gentzkow (2011) show that the sender's optimal signal and the value of persuasion can be characterized by the concave closure of the function \(\hat{v}:\mathcal{F}\to\mathbb{R}\), where \(\hat{v}(F):=\mathbb{E}_{F}[u_{S}(x,a^{*}(F))]\) is the reduced-form value function of the sender, and \(a^{*}(F)\in A\) is the optimal action of the receiver under posterior \(F\in\mathcal{F}\).23 Footnote 23: When there are multiple optimal actions, subgame-perfection would always select the one that the sender prefers most. When \(|\mathrm{supp}(F_{0})|\geq 2\), this "concavification" method requires finding the concave closure of a multivariate function, which is known to be computationally challenging, especially when \(|\mathrm{supp}(F_{0})|=\infty\). For tractability, many papers have restricted attention to preferences where the only payoff-relevant statistic of a posterior is its mean (i.e., \(\hat{v}(F)\) is measurable with respect to \(\mathbb{E}_{F}[x]\)). See, for example, Gentzkow and Kamenica (2016); Kolotilin, Li, Mylovanov and Zapechelnyuk (2017); Dworczak and Martini (2019); Kolotilin, Mylovanov and Zapechelnyuk (2020); and Arieli, Babichenko, Smorodinsky and Yamashita (2023). A natural analog of this "mean-based" setting is for the payoffs to depend only on the posterior quantiles. Just as mean-based persuasion problems are tractable because distributions of posterior means are mean-preserving contractions of the prior, Theorem 2 and Theorem 3 provide a tractable formulation of any "quantile-based" persuasion problem, as described in Proposition 2 below. **Proposition 2** (Quantile-Based Persuasion).: _Suppose that the prior \(F_{0}\) has full support on some interval, and suppose that there exists \(\tau\in(0,1)\), a selection rule \(r\in\mathcal{R}\), and a measurable function \(v_{S}:\mathbb{R}\to\mathbb{R}\) such that \(\hat{v}(F)=\int_{\mathbb{R}}v_{S}(x)r(\mathrm{d}x|F,\tau)\), for all \(F\in\mathcal{F}\). Then_ \[\mathrm{cav}(\hat{v})[F_{0}]=\sup_{H\in\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})}\int_{\mathbb{R}}v_{S}(x)H(\mathrm{d}x). \tag{5}\] Proof.: Let \(\bar{v}(F):=\sup_{x\in[F^{-1}(\tau),F^{-1}(\tau^{+})]}v_{S}(x)\) for all \(F\in\mathcal{F}\). Then, by Theorem 2, \[\mathrm{cav}(\hat{v})[F_{0}]\leq\mathrm{cav}(\bar{v})[F_{0}]=\sup_{H\in\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})}\int_{\mathbb{R}}v_{S}(x)H(\mathrm{d}x).\] Meanwhile, by Theorem 3, \[\sup_{H\in\cup_{\varepsilon>0}\mathcal{I}(\underline{F}_{0}^{\tau,\varepsilon},\overline{F}_{0}^{\tau,\varepsilon})}\int_{\mathbb{R}}v_{S}(x)H(\mathrm{d}x)\leq\mathrm{cav}(\hat{v})[F_{0}].\] Together, since \(\mathrm{cl}(\cup_{\varepsilon>0}\mathcal{I}(\underline{F}_{0}^{\tau,\varepsilon},\overline{F}_{0}^{\tau,\varepsilon}))=\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\), (5) then follows. By Proposition 2, any \(\tau\)-quantile-based persuasion problem can be solved by simply choosing a distribution in \(\mathcal{I}(\underline{F}_{0}^{\tau},\overline{F}_{0}^{\tau})\) to maximize the expected value of \(v_{S}(x)\), rather than concavifying the infinite-dimensional functional \(\hat{v}\). Furthermore, since the objective function of (5) is affine, Theorem 1 further reduces the search for the solution to only distributions that satisfy its conditions 1 and 2.24 Footnote 24: A recent elegant contribution by Kolotilin, Corrao and Wolitzky (2022a) provides a tractable method that simplifies persuasion problems in certain environments. 
One of these environments is when the receiver’s payoff is supermodular and the sender’s payoff is state-independent and increasing in the receiver’s action. One of their examples in this environment is for the receiver’s optimal action for each posterior to be quantile-measurable. When one further assumes that the sender’s payoff is increasing, the conditions of Proposition 2 lead to the same example. Since we allow for arbitrary (state-independent) sender payoffs, Proposition 2 generalizes this example in an orthogonal direction and complements their method. For example, consider the canonical setting where the receiver chooses an action to match the state and minimizes some loss function, while the sender's payoff is state-independent. To fix ideas, we can let the sender be a financial advisor and the receiver be a client. The financial advisor wishes to persuade the client to allocate a fraction \(a\in[0,1]\) of wealth in stocks and the remaining \(1-a\) fraction in bonds. The client would prefer different portfolio allocations under different states \(x\in[0,1]\) of the economy. A standard assumption in this setting is that the receiver's loss function is quadratic, so that \(u_{R}(x,a):=-(x-a)^{2}\). Under this assumption, the receiver's optimal action \(a^{*}(F)\), given a posterior \(F\), equals the posterior expected value \(\mathbb{E}_{F}[x]\), and hence, the sender's problem is mean-measurable. This leads to a tractable problem since the distributions of the receiver's actions are equivalent to mean-preserving contractions of the prior.25 With Proposition 2, we are now able to completely solve the sender's problem when the receiver's loss function is _absolute_ rather than quadratic. That is, when \(u_{R}(x,a):=-|x-a|\), or more generally, when \(u_{R}(x,a):=-\rho_{\tau}(x-a)\), with \(\rho_{\tau}(y):=y(\tau-\mathbf{1}\{y<0\})\) being the "pinball" loss function. For any \(\tau\in(0,1)\), when the receiver's payoff is given by \(u_{R}(x,a)=-\rho_{\tau}(x-a)\) and the sender's payoff is \(u_{S}(x,a)=v_{S}(a)\), since any \(a\in[F^{-1}(\tau),F^{-1}(\tau^{+})]\) is optimal for the receiver when the posterior is \(F\), Proposition 2 applies, and the sender's problem can be rewritten via (5). Footnote 25: See Dworczak and Martini (2019) for a characterization of the solutions and an interpretation of the Lagrange multipliers. For instance, if the sender's payoff \(v_{S}\) is nondecreasing, then \(\overline{F}_{0}^{\tau}\) is optimal, whereas if \(v_{S}\) is nonincreasing, \(\underline{F}_{0}^{\tau}\) is optimal. Or, as in many settings, \(v_{S}\) may be non-monotonic. In the example of the financial advisor and the client, the advisor's commission might be tied to cross-selling some of the firm's newer mutual funds over others. If one of those newer funds is a blended portfolio of stocks and bonds, the advisor's payoff might be quasi-concave in the client's chosen portfolio weight, with a peak at some \(a_{0}\in(0,1)\) that has the client put some wealth in stocks and the remainder in bonds, rather than all wealth in either asset class alone. In this case, assuming that \(a_{0}<F_{0}^{-1}(\tau)\), the solution to (5) is given by \[H^{*}(x):=\left\{\begin{array}{rl}0,&\mbox{if }x<a_{0}\\ \underline{F}_{0}^{\tau}(x),&\mbox{if }x\geq a_{0}\end{array}\right..\] Notice that if \(v_{S}\) is concave, then the sender's optimal signal is always the null signal if the receiver's loss function is quadratic. 
In contrast, when the receiver's loss function is absolute, the sender would optimally reveal some information about the state. In other words, the shape of the receiver's loss function has substantive implications for the type of information the sender optimally discloses. ## 4 Security Design with Limited Liability In this second class of applications, we show how FOSD intervals pertain to security design with limited liability. Security design searches for optimal ways to divide the cash flows of assets across financial claims as a way to mitigate informational frictions. Recognizing that feasible securities are described by FOSD intervals, we use the second crucial property of extreme points--namely, for any convex optimization problem, one of the solutions must be an extreme point of the feasible set--to generalize and unify several results in security design under a common framework. To do so, we revisit the environments of two seminal papers in the literature: Innes (1990), which has moral hazard, and DeMarzo and Duffie (1999), which has adverse selection. ### Security Design with Moral Hazard Consider the following setting of security design in the presence of moral hazard, a setting due to Innes (1990). A risk-neutral entrepreneur issues a security to an investor to fund a project. The project needs an investment \(I>0\). If the project is funded, the entrepreneur then exerts costly effort to develop the project. If the effort level is \(e\geq 0\), the project's profit is distributed according to \(\Phi(\cdot|e)\in\mathcal{F}\), and the (additively separable) effort cost to the entrepreneur is \(C(e)\geq 0\). A security specifies the return to the investor for every realized profit \(x\geq 0\) of the project. Both the entrepreneur and the investor have limited liability, and therefore, any security must be a (measurable) function \(H:\mathbb{R}_{+}\to\mathbb{R}\) such that \(0\leq H(x)\leq x\) for all \(x\geq 0\). Moreover, a security is required to be monotone in the project's profit.26 Given a security \(H\), the entrepreneur chooses an effort level to solve Footnote 26: Requiring securities to be monotone is a standard assumption in the security design literature (Innes 1990; Nachman and Noe 1994; DeMarzo and Duffie 1999). Monotonicity can be justified without loss of generality if the entrepreneur could contribute additional funds to the project so that only monotone profits would be observed. \[\sup_{e\geq 0}\int_{0}^{\infty}(x-H(x))\Phi(\mathrm{d}x|e)-C(e). \tag{6}\] For simplicity, we make the following technical assumptions: 1) The supports of the profit distributions \(\{\Phi(\cdot|e)\}_{e\geq 0}\) are all contained in a compact interval, which is normalized to \([0,1]\). 2) \(\Phi(\cdot|e)\) admits a density \(\phi(\cdot|e)\) for all \(e\geq 0\). 3) \(\{\Phi(\cdot|e)\}_{e\geq 0}\) and \(C\) are such that (6) admits a solution and every solution to (6) can be characterized by the first-order condition.27 Footnote 27: For example, we may assume that \(C\) is strictly increasing and strictly convex and that \(\frac{\partial}{\partial e}\phi(x|e)>0\), \(\frac{\partial^{2}}{\partial e^{2}}\phi(x|e)\leq 0\) for all \(x\) and for all \(e\). The entrepreneur's goal is to design a security to acquire funding from the investor while maximizing the entrepreneur's expected payoff. Specifically, let \(F(x):=x\) and let \(G(x):=\mathbf{1}\{x=1\}\) for all \(x\in[0,1]\). The set of securities can be written as \(\mathcal{I}(F,G)\). 
The entrepreneur solves \[\sup_{H\in\mathcal{I}(F,G),\,e\geq 0} \left[\int_{0}^{1}[x-H(x)]\phi(x|e)\,\mathrm{d}x-C(e)\right]\] s.t. \[\int_{0}^{1}[x-H(x)]\frac{\partial}{\partial e}\phi(x|e)\, \mathrm{d}x=C^{\prime}(e) \tag{7}\] \[\int_{0}^{1}H(x)\phi(x|e)\,\mathrm{d}x\geq(1+r)I,\] where \(r>0\) is the rate of return on a risk-free asset. Innes (1990) characterizes the optimal security in this setting using an additional crucial assumption: The project profit distributions \(\{\phi(\cdot|e)\}_{e\geq 0}\) satisfy the monotone likelihood ratio property (Milgrom 1981). Under this assumption, he shows that every optimal security must be a standard debt contract \(H^{d}(x):=\min\{x,d\}\) for some \(d>0\). While the simplicity of a standard debt contract is a desirable feature, the monotone likelihood ratio property is arguably a strong condition (Hart 1995), where higher effort leads to higher probability weights on all higher project profits at any profit level. It remains unclear what the optimal security might be under a more general class of distributions. Using Theorem 1, we can generalize Innes (1990) and solve the entrepreneur's problem (7) without the monotone likelihood ratio property. As we show in Proposition 3 below, _contingent_ debt contracts are now optimal. We say that a security \(H\in\mathcal{I}(F,G)\) is a _contingent debt contract_, if there exists an interval partition \(\{I_{n}\}\) of \([0,1]\) and a sequence \(\{d_{n}\}\subseteq(0,1]\) such that \(H(x)=H^{d_{n}}(x)\) for all \(x\in I_{n}\). Figure V illustrates a contingent debt contract \(\widehat{H}\) with \(I_{1}=[0,\nicefrac{{1}}{{2}})\), \(I_{2}=[\nicefrac{{1}}{{2}},1]\), \(d_{1}=\nicefrac{{1}}{{4}}\), and \(d_{2}=\nicefrac{{3}}{{4}}\). Under \(\widehat{H}\), if the project's profit \(x\) is below \(\nicefrac{{1}}{{2}}\), the entrepreneur owes debt with face value \(\nicefrac{{1}}{{4}}\); instead, if the profit is above \(\nicefrac{{1}}{{2}}\), the entrepreneur owes debt with a higher face value \(\nicefrac{{3}}{{4}}\). The entrepreneur's required debt payment to the investor is contingent on the entrepreneur's capacity to pay, which itself is linked to the realized profit of the project.28 Footnote 28: Contingent debt contracts share some similarity with state-contingent debt instruments (SCDIs) from the sovereign debt literature, which tie a country’s principal or interest payments to its nominal GDP (Lessard and Williamson 1987; Shiller 1994; Borensztein and Mauro 2004). Clearly, every standard debt contract with face value \(d\) is a contingent debt contract where \(I_{1}=[0,1]\) and \(d_{1}=d\). Moreover, a contingent debt contract never involves the entrepreneur and investor sharing in the equity of the project. To see how the cash flow is split between parties, suppose the project earned \(x\in(\nicefrac{{1}}{{2}},\nicefrac{{3}}{{4}})\). The entrepreneur would default on the high face-value debt contract (\(d_{2}=\nicefrac{{3}}{{4}}\)), and the investor would take claim of all project profits \(x\). If, instead, the project earned \(x\in(\nicefrac{{1}}{{4}},\nicefrac{{1}}{{2}})\), the investor would receive the low face-value amount (\(d_{1}=\nicefrac{{1}}{{4}}\)), and the entrepreneur would retain the amount \(x-\nicefrac{{1}}{{4}}\). In general, under any contingent debt contract, either the entrepreneur defaults and the investor absorbs all rights to the project's worth, or the entrepreneur pays a certain face value and retains the residual profit. 
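Because a contingent debt contract is just a piecewise application of \(H^{d}(x)=\min\{x,d\}\), it is straightforward to compute. The short sketch below (our own illustration, not from the paper; the names are hypothetical) encodes the contract \(\widehat{H}\) of Figure V and reproduces the payoff split described above.

```python
import numpy as np

# Sketch of the contingent debt contract \widehat{H} from Figure V:
# face value 1/4 when profit x < 1/2, face value 3/4 when x >= 1/2.
def contingent_debt(x, cutoffs=(0.5,), faces=(0.25, 0.75)):
    """H(x) = min(x, d_n) on the n-th cell of the interval partition."""
    n = int(np.searchsorted(cutoffs, x, side="right"))  # which cell x falls in
    return min(x, faces[n])

for x in [0.2, 0.3, 0.6, 0.8]:
    print(x, contingent_debt(x))
# 0.2 -> 0.20 (default on face value 1/4: investor takes all profit)
# 0.3 -> 0.25 (entrepreneur pays 1/4 and keeps 0.05)
# 0.6 -> 0.60 (default on face value 3/4: investor takes all profit)
# 0.8 -> 0.75 (entrepreneur pays 3/4 and keeps 0.05)
```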
From Theorem 1, we show that a portfolio of at most three contingent debt contracts is optimal. **Proposition 3**.: _There exist contingent debt contracts \(\{H_{i}^{*}\}_{i=1}^{3}\) and \(\{\lambda_{i}\}_{i=1}^{3}\subseteq[0,1]\), with \(\lambda_{1}+\lambda_{2}+\lambda_{3}=1\), such that \(H^{*}:=\lambda_{1}H_{1}^{*}+\lambda_{2}H_{2}^{*}+\lambda_{3}H_{3}^{*}\) is a solution to the entrepreneur's problem (7)._ Proof.: For any fixed \(e\geq 0\), the objective function of the entrepreneur's security design problem (7) is linear, and the two constraints are linear. Thus, for any fixed \(e\), (7) must have a solution that is an extreme point of the feasible set. By proposition 2.1 of Winkler (1988), extreme points of the feasible set must take the form of a convex combination of at most three extreme points of \(\mathcal{I}(F,G)\). The proof is then completed by noticing that \(H\) is an extreme point of \(\mathcal{I}(F,G)\) if and only if \(H\) is a contingent debt contract. According to Proposition 3, it is sufficient for the entrepreneur to use a portfolio of contingent debt contracts without sharing the equity of the project with the investor. The nature of standard debt contracts, which grant the entrepreneur only residual rights, is preserved even without the monotone likelihood ratio assumption. The only difference is that the entrepreneur may be liable for more when the project earns more. To better understand Proposition 3, recall that the optimality of standard debt contracts in Innes (1990) is due to (i) the risk-neutrality and the limited-liability structure of the problem, and (ii) the monotone likelihood ratio property of the profit distributions. Indeed, for any incentive-compatible and individually-rational contract, risk neutrality allows one to construct an individually-rational standard debt contract with the same expected payment. Meanwhile, the monotone likelihood ratio property ensures that this debt contract incentivizes the entrepreneur to exert higher effort, thus relaxing the incentive-compatibility constraint. Without the monotone likelihood ratio assumption, simply replicating an individually-rational contract with a standard debt contract may distort incentives and lead to less efficient effort and suboptimal outcomes. In this regard, Proposition 3 shows that simple portfolios of contingent debt contracts are enough to replicate the profit level of all other feasible contracts while preserving incentive compatibility and individual rationality. In essence, the proposition separates the effects of risk neutrality and limited liability on security design from the effects of the monotone likelihood ratio property. At a more technical level, Proposition 3 is reminiscent of mechanism design problems whose solutions feature rationing or randomized posted prices (see, for example, Samuelson, 1984; Dworczak, Kominers and Akbarpour, 2021; Loertscher, 2022; Kang, 2022; Vaidya, 2022). The common structure of these problems is that the objective function is affine, the feasible set is the collection of uniformly bounded monotone functions, and the constraints are affine in the choice variables. Proposition 2.1 of Winkler (1988) implies that there must be at least one solution that can be represented as a convex combination of at most \(n+1\) extreme points of the feasible set, where \(n\) is the number of constraints. 
Just as rationing and randomized posted-price mechanisms are mixtures of posted-price mechanisms--which are extreme points of the feasible set--portfolios of contingent debt contracts are mixtures of extreme points of the feasible set \(\mathcal{I}(F,G)\) in problem (7) as well. ### Security Design with Adverse Selection Consider the following setting of security design in the presence of adverse selection, a setting due to DeMarzo and Duffie (1999). There is a risk-neutral security issuer with discount rate \(\delta\in(0,1)\) and a unit mass of risk-neutral investors. The issuer has an asset that generates a random cash flow \(x\geq 0\). The cash flow is distributed according to \(\Phi_{0}\in\mathcal{F}\), which is supported on a compact interval normalized to \([0,1]\). Because \(\delta<1\), the issuer has demand for liquidity and therefore has an incentive to sell a limited-liability security backed by the asset to raise cash. A security is a nondecreasing, right-continuous function \(H:[0,1]\rightarrow\mathbb{R}_{+}\) such that \(0\leq H(x)\leq x\) for all \(x\). Let \(F(x):=x\) and \(G(x):=\mathbf{1}\{x=1\}\) for all \(x\in[0,1]\). The set of securities can again be written as \(\mathcal{I}(F,G)\). Given any security \(H\in\mathcal{I}(F,G)\), the issuer first observes a signal \(s\in S\) for the asset's cash flow. Then, taking as given an inverse demand schedule \(P:[0,1]\rightarrow\mathbb{R}_{+}\), she chooses a fraction \(q\in[0,1]\) of the security to sell. If a fraction \(q\) of the security is sold and the signal realization is \(s\), the issuer's expected return is \[\delta\left(\mathbb{E}[x-H(x)|s]+(1-q)\mathbb{E}[H(x)|s]\right)+qP(q)=q(P(q)- \delta\mathbb{E}[H(x)|s])+\delta\mathbb{E}[x|s].\] Investors observe the quantity \(q\), update their beliefs about \(x\), and decide whether to purchase. DeMarzo and Duffie (1999) show that, in the unique equilibrium that survives the D1 criterion,29 the issuer's profit under a security \(H\), when the posterior expected value of the security is \(\mathbb{E}[H(x)|s]=z\), is given by Footnote 29: An equilibrium in this market is a pair \((P,Q)\) of measurable functions such that \(Q(\mathbb{E}[H(x)|s])(P\circ Q(\mathbb{E}[H(x)|s])-\delta\mathbb{E}[H(x)|s])\geq q (P(q)-\delta\mathbb{E}[H(x)|s])\) for all \(q\in[0,1]\) with probability \(1\), and \(P\circ Q(\mathbb{E}[H(x)|s])=\mathbb{E}[H(x)|Q(\mathbb{E}[H(x)|s])]\) with probability \(1\). \[\Pi(z|H):=(1-\delta)z_{0}^{\frac{1}{1-\delta}}z^{-\frac{\delta}{1-\delta}},\] where \(z_{0}\) is the lower bound of the support of \(\mathbb{E}[H(x)|s]\). Therefore, let \(\Phi(\cdot|s)\) be the conditional distribution of the cash flow \(x\) given signal \(s\), and let \(\Psi:S\rightarrow[0,1]\) be the marginal distribution of the signal \(s\). The expected value of a security \(H\) is then \[\Pi(H):=(1-\delta)\left(\inf_{s\in S}\int_{0}^{1}H(x)\Phi(\mathrm{d}x|s)\right)^{ \frac{1}{1-\delta}}\int_{S}\left(\int_{0}^{1}H(x)\Phi(\mathrm{d}x|s)\right)^{- \frac{\delta}{1-\delta}}\Psi(\mathrm{d}s).\] As a result, the issuer's security design problem can be written as \[\sup_{H\in\mathcal{I}(F,G)}\Pi(H).\] Using a variational approach, DeMarzo and Duffie (1999) characterize several general properties of the optimal securities without solving for them explicitly. 
They then specialize the model by assuming that the signal structure \(\{\Phi(\cdot|s)\}_{s\in S}\) has a _uniform worst case_, a condition slightly weaker than the monotone likelihood ratio property that requires the cash flow distribution to be smallest in the sense of FOSD under some \(s_{0}\), conditional on every interval \(I\) of \([0,1]\).30 With this assumption, DeMarzo and Duffie (1999) show that a standard debt contract \(H^{d}(x):=\min\{x,d\}\) is optimal. Footnote 30: Specifically, they assume that there exists some \(s_{0}\in S\) such that, for any \(s\in S\) and for any interval \(I\subset[0,1]\), (i) \(\Phi(I|s_{0})=0\) implies \(\Phi(I|s)=0\), and (ii) the conditional distribution of the asset's cash flow given signal realization \(s\) and given that the cash flow falls in the interval \(I\), denoted \(\Phi(\cdot\cap I|s)/\Phi(I|s)\), dominates the corresponding conditional distribution given signal realization \(s_{0}\), denoted \(\Phi(\cdot\cap I|s_{0})/\Phi(I|s_{0})\), in the sense of first-order stochastic dominance. With Theorem 1, we are able to generalize this result and solve for an optimal security while relaxing the uniform-worst-case assumption. As in Section 4.1, we say that a security is a contingent debt contract if there exists an interval partition \(\{I_{n}\}\) of \([0,1]\) and \(\{d_{n}\}\subseteq(0,1]\) such that \(H(x)=H^{d_{n}}(x)\) for all \(x\in I_{n}\). Instead of a uniform worst case, we only assume that there is a worst signal \(s_{0}\) such that \(\Phi(\cdot|s)\) dominates \(\Phi(\cdot|s_{0})\) in the sense of FOSD for all \(s\in S\). With this assumption, the issuer's security design problem can be written as \[\sup_{H\in\mathcal{I}(F,G),\,\underline{z}\geq 0} \left[(1-\delta)\underline{z}^{\frac{1}{1-\delta}}\int_{S}\left(\int_{0}^{1}H(x)\Phi(\mathrm{d}x|s)\right)^{-\frac{\delta}{1-\delta}}\Psi(\mathrm{d}s)\right]\] \[\mathrm{s.t.} \int_{0}^{1}H(x)\Phi(\mathrm{d}x|s_{0})=\underline{z}. \tag{8}\] As shown by Proposition 4 below, there always exists an optimal security in this setting that is a portfolio of at most two contingent debt contracts. **Proposition 4**.: _There exist contingent debt contracts \(H^{*}_{1},H^{*}_{2}\) and \(\lambda\in[0,1]\) such that \(H^{*}:=\lambda H^{*}_{1}+(1-\lambda)H^{*}_{2}\) is a solution to the issuer's problem (8). Furthermore, if \(\Phi(\cdot|s)\) has full support on \([0,1]\) for all \(s\in S\), this solution is unique._ Proof.: For any fixed \(\underline{z}\geq 0\), the objective function of the issuer's problem (8) is convex, and the constraint is linear. Thus, some extreme point of the feasible set must be a solution to (8). By proposition 2.1 of Winkler (1988), such an extreme point can be written as a convex combination of at most two extreme points of \(\mathcal{I}(F,G)\), as desired, since \(H\) is an extreme point of \(\mathcal{I}(F,G)\) if and only if \(H\) is a contingent debt contract. For uniqueness, notice that when \(\Phi(\cdot|s)\) has full support for all \(s\), the objective function of (8) is strictly convex in \(H\). Therefore, every solution must be an extreme point of the feasible set. This completes the proof. Overall, this section showcases the unifying role of extreme points of FOSD intervals in security design. Rationalizing the existence of different financial securities observed in practice has been a crowning achievement of this literature. 
The literature has done this under a variety of economic environments and assumptions, which underscores the robustness of these securities as optimal contracts. But that variety also makes it hard to sort the essential modeling ingredients from the inessential ones. And the core features that connect these environments are not readily apparent. An advantage of recasting feasible securities as an FOSD interval is that it strips the problem down to its basic elements. Whether the setting has hidden action or hidden information, and whether the asset's cash flow distributions exhibit the monotone likelihood ratio property (MLRP), are not the defining features. Limited liability, monotone contracts, and convexity of the issuer's objective function are the core elements that deliver debt as an optimal security. The terms of the debt contract somewhat differ from those of a standard one, as the face value of the debt is now contingent on the asset's cash flow, but the nature of debt contracts, which never has the issuer and investor share in the asset's equity and grants the issuer only residual rights, still prevails. Without knowledge of the extreme points of FOSD intervals, solving the security design problem without the MLRP assumption would have been substantially harder. Thus, just as in the other economic applications of this paper, Theorem 1 offers a unified approach to answering classic economic questions that have been previously answered by case-specific approaches. Well-known results directly follow, but so do new insights that are straightforward to uncover using this framework. ## 5 Conclusion We characterize the extreme points of first-order stochastic dominance (FOSD) intervals, and we reveal how these intervals are at the heart of many distinct topics in economics. We show that any extreme point of an FOSD interval must either coincide with one of the FOSD interval's bounds, or be constant on an interval, where at least one end of the interval reaches one of the bounds. FOSD intervals describe the distributions of posterior quantiles. We apply this insight to topics in the psychology of judgment, political economy, and Bayesian persuasion. We also use this insight to prove the law of iterated quantiles. Finally, FOSD intervals provide a common structure to security design. We unify and generalize seminal results in that literature when either adverse selection or moral hazard afflicts the environment. Other applications involving FOSD intervals undoubtedly exist. For instance, their link to the distributions of posterior quantiles opens many potential research avenues. When consumers' values or firms' marginal costs follow distributions, different points on the inverse supply and demand curves are quantiles, which might contain further applications in consumer or firm theory. Inequality is often measured as an upper percentile of the wealth or income distribution, making it amenable to the same analysis. Likewise, settings in which the feasible set can be represented as an FOSD interval, such as R&D investments and screening problems with stochastic inventories, are yet other directions for future work.
2310.13137
Mean Estimation Under Heterogeneous Privacy Demands
Differential Privacy (DP) is a well-established framework to quantify privacy loss incurred by any algorithm. Traditional formulations impose a uniform privacy requirement for all users, which is often inconsistent with real-world scenarios in which users dictate their privacy preferences individually. This work considers the problem of mean estimation, where each user can impose their own distinct privacy level. The algorithm we propose is shown to be minimax optimal and has a near-linear run-time. Our results elicit an interesting saturation phenomenon that occurs. Namely, the privacy requirements of the most stringent users dictate the overall error rates. As a consequence, users with less but differing privacy requirements are all given more privacy than they require, in equal amounts. In other words, these privacy-indifferent users are given a nontrivial degree of privacy for free, without any sacrifice in the performance of the estimator.
Syomantak Chaudhuri, Konstantin Miagkov, Thomas A. Courtade
2023-10-19T20:29:19Z
http://arxiv.org/abs/2310.13137v1
# Mean Estimation Under Heterogeneous Privacy Demands ###### Abstract Differential Privacy (DP) is a well-established framework to quantify privacy loss incurred by any algorithm. Traditional formulations impose a uniform privacy requirement for all users, which is often inconsistent with real-world scenarios in which users dictate their privacy preferences individually. This work considers the problem of mean estimation, where each user can impose their own distinct privacy level. The algorithm we propose is shown to be minimax optimal and has a near-linear run-time. Our results elicit an interesting saturation phenomenon that occurs. Namely, the privacy requirements of the most stringent users dictate the overall error rates. As a consequence, users with less but differing privacy requirements are all given more privacy than they require, in equal amounts. In other words, these privacy-indifferent users are given a nontrivial degree of privacy for free, without any sacrifice in the performance of the estimator. Heterogeneous Differential Privacy, Mean Estimation, Minimax Optimality ## I Introduction Increased computing power and storage technology, combined with fast internet, have made data a precious commodity. Platforms including social media and tech companies have established markets for data that are of great value for targeted advertisements [2, 3]. In tandem, our ever-growing digital footprint has led to a rise in privacy concerns. Thus, privacy-preserving techniques in data mining and statistical analysis are important and, at times, mandated by laws such as the GDPR in Europe [4] and the California Consumer Privacy Act (CCPA) [5]. The study of privacy-preserving techniques for data analysis has a long history [6, 7, 8], and simple techniques such as not answering queries specific to a small portion of the dataset do not provide adequate privacy [9]. While several notions of privacy have been proposed, such as \(K\)-Anonymization [10], L-diversity [11], information-theoretic notions (for example, see [12]), and randomization techniques, the current de facto standard for privacy - Differential Privacy (DP) - was proposed by Dwork et al. [13, 14]. DP is used in real-world applications by the US Census Bureau [15], Google [16], and Apple [17]. One convenient property of DP is that it allows quantification of an algorithm's privacy loss, as opposed to privacy being a binary property. Recent extensions of DP include Renyi-DP [18], Concentrated-DP [19], and Zero-Concentrated-DP [20]. One of the key tasks in machine learning and statistics is estimating parameters of a distribution given independently drawn samples from it. When the observations correspond to sensitive information, such as user data on social media, the need for privacy arises. Therefore, statistical problems like mean estimation under privacy constraints are important, and we must understand the implicit trade-off between accuracy and privacy. The majority of existing literature on this topic considers a uniform privacy level for all users (see [21] for reference). However, this does not capture the real world, where heterogeneous privacy requirements are ubiquitous (e.g., [22, Example 2]). Indeed, on virtually all digital platforms, users independently balance their individual privacy options against the utility they desire, typically through a menu of options provided by the platform. ### _Our Contribution_ We consider mean estimation under the Central-DP (_CDP_) model, with heterogeneous privacy demands. 
In the CDP model, also known as the Trusted-Curator model, users send their true data to a central server which is expected to respect the privacy constraint set forth by the users [13, 14]. We assume each user has a datapoint sampled from an unknown distribution, and users are allowed to select their own privacy level. The class of distributions is assumed to be univariate and bounded in a known range, and we consider the minimax expected squared error as our metric. We propose a certain affine estimator with judiciously chosen weights, and prove it to be minimax optimal. Our algorithm for computing the weights is efficient and has a near-linear (in the number of users) run-time and linear space complexity. As is the case in homogeneous DP, keeping the privacy requirements of some users fixed, one might expect to get better accuracy in mean estimation as the privacy requirements of the other users are relaxed. However, we show that beyond a certain critical value, relaxing the privacy requirements provides no further improvement in the accuracy of our estimator. By matching upper and lower bounds, we show that this phenomenon is fundamental to the problem itself and not an artifact of our algorithm. As a corollary of this saturation phenomenon, having an additional public dataset may provide no more benefit for mean estimation than a private dataset. Thus, the central server can advertise and offer some extra privacy, up to the critical value, to the privacy-indifferent users without sacrificing the estimation performance. Experiments confirm the superior performance of our proposed algorithm over other baseline methods. In addition, the approach we use to show that the upper and the lower bounds are within a constant factor of each other may be of independent interest. _Organization:_ In Section II, we define the problem setting. The proposed algorithm, along with the upper bounds, is presented in Section III, and the lower bound is presented in Section IV. The fact that the lower and the upper bounds are within a constant factor of each other is proved in Section V. Experiments and other baseline methods are presented in Section VI to support the theoretical claims made in this work. Conclusions and possible future directions are outlined in Section VII. ### _Related Work_ Estimation error in the homogeneous DP case has been studied in great detail in recent years (see [23, 24, 25, 26]) under both the CDP model and the Local-DP (_LDP_) model. In the LDP model, users do not trust the central server and send their data through a noisy channel to the server to preserve privacy [27, 28]. Tasks like query release, estimation, learning, and optimization have been considered in the setting of a private dataset assisted by some public data [29, 30, 31, 32, 33, 34, 35, 36]. An importance sampling scheme based on logistic regression, which releases statistical queries on a private dataset with the help of a public dataset, is proposed in [37]. This method is unsuitable for our setting, where all users have the same underlying distribution. Bie et al. [38] consider using a few public samples to estimate Gaussian distributions with unknown mean and covariance matrix. The public samples eliminate the need for prior knowledge of the range of the mean, but the effect on accuracy of having more public samples is not studied. One-Sided DP to combat 'exclusion attacks' on a private dataset in the presence of a public dataset is considered in [22]. 
Another form of heterogeneity is a hybrid model where some users are satisfied with the CDP model while other users prefer the LDP model [39, 40]. Heterogeneous DP (_HDP_) for federated learning is considered in [41, 42]. Alaggan et al. [43] give a general recipe for dealing with HDP, but their idea of scaling the data using a shrinkage matrix induces a bias in the estimator. Further, their approach cannot deal with public datasets. Personalized Differential Privacy (_PDP_) is another term for HDP in the literature. Li et al. [44] studied PDP and proposed a computationally expensive way to partition users into groups with similar privacy levels. Jorgensen et al. [45] propose a mechanism that samples users with high privacy requirements with lower probability; the sub-sampled dataset can then be analyzed with conventional homogeneous DP algorithms at a suitable privacy level. While this is a general approach for dealing with heterogeneity, it is not optimal for mean estimation. Indeed, following the authors' recommendation for setting the parameters of the algorithm, in the presence of some public data in the dataset, the mechanism would ignore all private data. HDP mean estimation under the assumption that the variance of the unknown distribution is known is considered by Ferrando et al. [46]. However, as they mention, they add more noise than necessary for privacy. The reason they add more noise than required is that they essentially operate in the LDP model instead of the more powerful CDP model. As a result, no saturation phenomenon can be deduced from their method. Some works [45, 47] also consider the PDP setting for finite sets and give algorithms inspired by the Exponential Mechanism [48]. Heterogeneous privacy problems for recommendation systems are also considered in [49, 50]. PDP in the LDP setting has been studied by [51] for learning the locations of users from a finite set of possible locations. More general notions of DP which encompass HDP have also been considered in the literature [52, 53]. Another line of work in the literature considers DP under heterogeneity in the data [54, 55]. Mean estimation from user data in the Bayesian setting, in a private manner, is considered in [56], where the heterogeneity is in the number of samples and the distribution of each user's data, and not in the desired privacy levels. Most closely related to the present work is that of Fallah et al. [57], which considers the general HDP setting for mean estimation in the context of efficient auction mechanism design from a Bayesian perspective. While they encounter a saturation-type phenomenon in their algorithm, their analysis cannot tightly characterize the saturation condition (see Section VI). In addition, for the homogeneous case, their algorithm does not agree with the minimax optimal estimator when the privacy demand is low (\(\epsilon>1/\sqrt{n}\), where \(n\) is the total number of users and \(\epsilon\) is the DP privacy level defined in the next section). They also assume that all the privacy levels are less than \(1\). This assumption is central to their upper and lower bounds, and therefore one cannot draw conclusions for the case when there is a public dataset. Section VI contains more comparisons of our proposed method with that of [57]. Concurrent with our work [1], Cummings et al. [58, Section 6] consider the same optimization problem that we solve to obtain an affine estimator of the mean. 
They also make observations related to the solution structure which are similar to ours (Lemma 1), including the saturation phenomenon. In their solution, to find the optimal weights and the optimal noise level, one needs to know the index of saturation in their Claim 7. The method they prescribe is via enumerating the \(n+1\) indices, where \(n\) is the total number of users. While they point out that this requires \(O(n)\) time, this does not provide a full accounting of the complexity for two reasons. First, sorting the privacy levels of the agents requires \(O(n\log n)\) time. Second, their method makes \(O(n)\) calls to a function that requires \(O(n)\) compute time, rendering an overall complexity of \(O(n^{2})\). ADPM, our proposed solution, has an \(O(n\log n)\) runtime (\(O(n)\) if the sorting step is excluded). We are able to do this since ADPM finds the index of saturation naturally, without a brute-force search. Further, the approach we use is different from theirs and provides some more insight into how the weights depend on the privacy parameters. Finally, we also prove that our method is within a constant factor of the lower bound, which establishes optimality; lower bounds are not considered in Cummings et al. [58]. ## II Problem Definition and Main Result ### _Problem Definition_ Heterogeneous Differential Privacy (HDP) allows different users to have different privacy requirements. We begin by defining a natural notion of heterogeneous Differential Privacy. Similar definitions were also considered in [43, 57]. We denote positive real numbers by \(\mathbb{R}_{>0}\). As we consider univariate data-points in our datasets, we use boldface, such as \(\mathbf{x}\), to denote a dataset, or equivalently, a vector. Capital boldface, such as \(\mathbf{X}\), is used to denote a random dataset, i.e., a random vector. A subscript \(i\), e.g., \(x_{i}\), refers to the \(i\)-th entry of a vector, while we use the notation \(\mathbf{x}^{\prime i}\) for a vector differing from \(\mathbf{x}\) at the \(i\)-th position. The notation \([n]\) refers to the set \(\{1,2,\ldots,n\}\). The probability simplex in \(n\) dimensions is represented by \(\Delta_{n}\). Throughout this work, the \(\ell^{1}\) norm, \(\|\cdot\|_{1}\), is interchangeably used for the sum of the elements of vectors with positive components. The notation \(a\wedge b\) is used to denote \(\min\{a,b\}\). In this work, with some abuse of notation, we shall represent a _sample_ from a randomized algorithm \(M\) mapping \(\mathcal{X}^{n}\) to a probability distribution on \(\mathcal{Y}\) as \(M(\mathbf{x})\), where \(\mathbf{x}\in\mathcal{X}^{n}\). 
**Definition 1** (Heterogeneous Differential Privacy).: _A randomized algorithm \(M:\mathcal{X}^{n}\to\mathcal{Y}\) is said to be \(\mathbf{\epsilon}\)-DP for \(\mathbf{\epsilon}\in\mathbb{R}_{>0}^{n}\) if_ \[\mathbb{P}\{M(\mathbf{x})\in S\}\leq e^{\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}\quad\forall i\in[n], \tag{1}\] _for all measurable sets \(S\subseteq\mathcal{Y}\), where \(\mathbf{x},\mathbf{x}^{\prime i}\in\mathcal{X}^{n}\) are any two 'neighboring' datasets that differ arbitrarily in only the \(i\)-th component._ **Remark 1**.: _Note that the probability in the above definition is taken over the randomized algorithm conditioned on the given datasets \(\mathbf{x},\mathbf{x}^{\prime i}\), i.e., it is a conditional probability._ Without loss of generality, we consider the case \(\mathcal{X}=[-0.5,0.5]\) and let \(\mathcal{P}\) denote the set of all distributions with support on \(\mathcal{X}\). Our results can be directly extended to distributions on any known finite-length domain \(\mathcal{X}\). Under this privacy setting, we investigate the problem of estimating the mean from the users' data. There are \(n\) users and each user's data point is sampled i.i.d. from a distribution \(P\in\mathcal{P}\) over \(\mathcal{X}\), with mean denoted by \(\mu_{P}\in[-0.5,0.5]\) henceforth. Each 'data point' corresponds to a user's data in \(\mathcal{X}\), i.e., user \(i\) has a datapoint (sample) \(x_{i}\), and the user has a privacy requirement of \(\epsilon_{i}\) (in the sense of Definition 1). We assume that the privacy constraint \(\epsilon_{i}\) does not depend on the realization \(x_{i}\), and is itself not private. Without loss of generality, we assume that the vector of privacy levels \(\mathbf{\epsilon}\) is arranged in a non-decreasing order. Let the set of all \(\mathbf{\epsilon}\)-DP algorithms from \(\mathcal{X}^{n}\) to \(\mathcal{Y}=[-0.5,0.5]\) be denoted by \(\mathcal{M}_{\mathbf{\epsilon}}\). We take the error metric to be the Mean-Squared Error (MSE) and are interested in characterizing the minimax estimation error over all \(\mathbf{\epsilon}\)-DP algorithms. For an algorithm \(M(\cdot)\in\mathcal{M}_{\mathbf{\epsilon}}\), let \(E(M)\) denote the worst-case error attained by it, \[E(M):=\max_{P\in\mathcal{P}}\ \mathbb{E}_{\mathbf{X}\sim P^{n},M(\cdot)}[(M(\mathbf{X})-\mu_{P})^{2}]. \tag{2}\] In (2), the expectation is taken over the randomness in the dataset \(\mathbf{X}\) and the algorithm \(M(\cdot)\). Let \(L(\mathbf{\epsilon})\) denote the minimax estimation error given by \[L(\mathbf{\epsilon}):=\min_{M\in\mathcal{M}_{\mathbf{\epsilon}}}E(M).\] Our goal is to characterize \(L(\mathbf{\epsilon})\) and provide an algorithm that achieves an MSE of the same order. ### _Main Result_ Our main result is that our proposed algorithm ADPM (Algorithm 1) is minimax optimal, as stated in Theorem 1 below. Note that Theorem 1 is instance optimal, in the sense that it establishes minimax optimality for each \(n\) and \((\epsilon_{1},\ldots,\epsilon_{n})\). ADPM has near-linear time and linear space complexity. More precisely, the initial sorting of \(\mathbf{\epsilon}\) requires \(O(n\log n)\) time and \(O(n)\) space, and computing the weights \(\mathbf{w}\) and the inner product \(\langle\mathbf{w},\,\mathbf{x}\rangle\) requires \(O(n)\) time and space each. Note that even for non-private mean estimation, \(O(n)\) time is required to compute the mean. 
For a special setting, if we know that there are \(n\) users with any user's privacy parameter taking values in a known discrete set of values \(\mathcal{E}\) of size \(|\mathcal{E}|=k\) (\(k<n\)), then computing the weights can be done in \(O(k\log k)\) time instead. **Theorem 1**.: _ADPM described in Algorithm 1 achieves worst-case error within a (universal) constant factor of \(L(\mathbf{\epsilon})\), for all \(n\) and privacy constraints \(\mathbf{\epsilon}=(\epsilon_{1},\ldots,\epsilon_{n})\)._ ```
ADPM(ε, x):
    n ← length(ε)
    ε ← Sort(ε)                      (ascending order)
    r_1 ← ε_1;  L_1 ← ε_1;  L_2 ← ε_1^2;  k ← 1
    while k < n do
        r_{k+1} ← min{ ε_{k+1}, (L_2 + 8)/L_1 }
        L_1 ← L_1 + r_{k+1}
        L_2 ← L_2 + r_{k+1}^2
        k ← k + 1
    end while
    if (L_2 + 8)/L_1^2 > 1/4 then
        return 0
    else
        w ← r/L_1
        sample N ~ Laplace(1/L_1)
        return ⟨w, x⟩ + N
    end if
``` **Algorithm 1** Affine Differentially Private Mean (ADPM) In Theorem 2 (Section III), we prove an upper bound on the MSE for ADPM, which is an upper bound on \(L(\mathbf{\epsilon})\). A lower bound on \(L(\mathbf{\epsilon})\) is shown in Theorem 3 (Section IV). Due to the different forms of the lower and upper bounds, it is non-trivial to compare them. In Theorem 4 (Section V), it is shown that the lower and upper bounds are within a constant factor of each other, proving the minimax optimality of ADPM and thus proving Theorem 1. We shall switch to working with variable-length vectors at times, so we define some new notation. Consider any sequence of non-decreasing privacy values \(\{\epsilon_{i}\}_{i=1}^{n}\). The notation \(\epsilon_{i}^{j}\) refers to the vector \((\epsilon_{i},\ldots,\epsilon_{j})\) for \(j\geq i\). We now describe an important phenomenon in ADPM that, by the minimax optimality of ADPM implied by Theorem 1, is fundamental to the problem. Assume \(\mathbf{\epsilon}\) to be in non-decreasing order, i.e., the users with stricter privacy requirements are arranged earlier in the vector \(\mathbf{\epsilon}\). Let \(k\) be the minimum index at which \(\epsilon_{k+1}\geq\frac{\|\epsilon_{1}^{k}\|_{2}^{2}+8}{\|\epsilon_{1}^{k}\|_{1}}\). Note that such a \(k\) need not exist if all the privacy levels are sufficiently close to each other. When such a \(k\) exists, the ADPM algorithm provides a privacy level of exactly \(\epsilon_{i}\) to user \(i\) for \(i\in[k]\), and for the rest of the users it provides a common privacy level of \(\frac{\|\epsilon_{1}^{k}\|_{2}^{2}+8}{\|\epsilon_{1}^{k}\|_{1}}\). Thus, for these latter users with lower privacy requirements, ADPM offers extra privacy without sacrificing MSE. It is interesting to note that among the latter users, regardless of their relative privacy requirements, they all receive the same level of privacy. Thus, even if there are some users who do not care about privacy (\(\epsilon_{i}\rightarrow\infty\)), it is still optimal to give them some privacy. 
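For concreteness, the following is a minimal Python sketch of Algorithm 1; the function name `adpm` and the use of NumPy's Laplace sampler are our own choices (NumPy's `scale` parameter matches the \(\eta\) in \(\text{Laplace}(\eta)\) below).

```python
import numpy as np

def adpm(eps, x, rng=None):
    """Minimal sketch of Algorithm 1 (ADPM); names are ours, not the paper's."""
    rng = np.random.default_rng() if rng is None else rng
    eps = np.asarray(eps, dtype=float)
    x = np.asarray(x, dtype=float)
    order = np.argsort(eps)              # sort privacy levels ascending
    eps, x = eps[order], x[order]

    r = np.empty_like(eps)
    r[0] = eps[0]
    L1, L2 = eps[0], eps[0] ** 2         # running ||r||_1 and ||r||_2^2
    for k in range(1, len(eps)):
        r[k] = min(eps[k], (L2 + 8.0) / L1)   # saturated users get the cap
        L1 += r[k]
        L2 += r[k] ** 2

    if (L2 + 8.0) / L1 ** 2 > 0.25:      # bound worse than outputting 0
        return 0.0
    w = r / L1                           # optimal affine weights on the simplex
    return float(w @ x + rng.laplace(scale=1.0 / L1))
```

For instance, with \(10^{3}\) users at \(\epsilon=0.1\) and all remaining users above \(0.5\), the loop caps every later \(r_{k}\) at \((10+8)/100=0.18\), matching the example that follows.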
As an example, if out of the \(n\)\((>10^{3})\) users, there are \(10^{3}\) users wanting a privacy requirement of \(\epsilon=0.1\) and the rest of the users have privacy requirements ranging from \(\epsilon=0.5\) to \(\epsilon\rightarrow\infty\), then all the users in the latter category receive a privacy guarantee of \(\epsilon=0.18\) by ADPM. ## III Upper Bound The main result of this section, an upper bound on the performance of ADPM (and therefore on \(L(\mathbf{\epsilon})\)), is stated in Theorem 2. Subsequently, we motivate ADPM, observe some properties, and then prove the theorem at the end of this section. **Theorem 2** (Upper Bound).: _Let \(r_{1}=\epsilon_{1}\) and_ \[r_{k+1}=\min\left\{\epsilon_{k+1},\frac{\|r_{1}^{k}\|_{2}^{2}+8}{\|r_{1}^{k}\|_{1}}\right\}\ \forall\ k\in[n-1]. \tag{3}\] _Then ADPM, defined in Algorithm 1, has a worst-case MSE of \(\frac{\|r_{1}^{n}\|_{2}^{2}+8}{4\|r_{1}^{n}\|_{1}^{2}}\wedge\frac{1}{4}\), and thus,_ \[L(\mathbf{\epsilon})\leq\frac{\|r_{1}^{n}\|_{2}^{2}+8}{4\|r_{1}^{n}\|_{1}^{2}}\wedge\frac{1}{4}.\] The upper bound, given by the MSE of ADPM, is centered around finding the optimal affine estimator \(\langle\mathbf{w},\,\mathbf{x}\rangle+N\) of the mean, where \(\mathbf{w}\in\Delta_{n}\) and \(N\) is some suitable noise to satisfy the \(\mathbf{\epsilon}\)-DP constraint. In particular, we prove that \(\mathbf{w}=\mathbf{r}/\|\mathbf{r}\|_{1}\), for the vector \(\mathbf{r}\) recursively defined in (3), is a global minimizer of the MSE. ### _ADPM Motivation_ Let us constrain ourselves to affine estimators of the form \(M(\mathbf{x})=\langle\mathbf{w},\,\mathbf{x}\rangle+N\), where a zero-mean random noise \(N\) is chosen appropriately to satisfy the privacy constraint and \(\mathbf{w}\in\Delta_{n}\) to make the estimator unbiased. Recall that the \(\text{Laplace}(\eta)\) distribution has pdf \(\frac{1}{2\eta}e^{-|\cdot|/\eta}\). If the noise is distributed as \(\text{Laplace}(\eta)\), then it can be shown that the estimator \(\langle\mathbf{w},\,\mathbf{x}\rangle+N\) is \((\mathbf{w}/\eta)\)-DP (see Lemma 12 in Appendix A). Thus, from the privacy constraint, we impose the condition \[w_{i}\leq\eta\epsilon_{i}\ \forall i. \tag{4}\] The variance (or MSE) of the estimator under distribution \(P\) is given by \(\text{Var}(P)\|\mathbf{w}\|_{2}^{2}+2\eta^{2}\leq\|\mathbf{w}\|_{2}^{2}/4+2\eta^{2}\) (\(1/4\) is the worst-case variance for distributions in \(\mathcal{P}\)). To minimize this, subject to (4), we set \(\eta=\max_{i}w_{i}/\epsilon_{i}=\|\frac{\mathbf{w}}{\mathbf{\epsilon}}\|_{\infty}\), where the latter notation denotes element-wise division. Therefore, we have \[\text{MSE}\leq\frac{\|\mathbf{w}\|_{2}^{2}}{4}+2\left\|\frac{\mathbf{w}}{\mathbf{\epsilon}}\right\|_{\infty}^{2}. \tag{5}\] Thus, finding the optimal affine estimator requires us to solve the minimization problem \[\mathbf{w}^{*}=\arg\min_{\mathbf{w}\in\Delta_{n}}\frac{\|\mathbf{w}\|_{2}^{2}}{4}+2\left\|\frac{\mathbf{w}}{\mathbf{\epsilon}}\right\|_{\infty}^{2}. \tag{6}\] Although (6) can be solved by any modern convex optimization solver, we need to solve it analytically to get an upper bound on the MSE for showing minimax optimality. In the next subsection, we show how to solve this optimization problem in a recursive manner; ADPM uses this solution. **Remark 2** (Sub-Optimality of Proportional Weighing).: _Note that (4) can be satisfied by taking \(\mathbf{w}\propto\mathbf{\epsilon}\), i.e., \(\mathbf{w}=\mathbf{\epsilon}/\|\mathbf{\epsilon}\|_{1}\) and \(\eta=1/\|\mathbf{\epsilon}\|_{1}\). 
This corresponds to assigning users with stricter privacy requirements a lower weight in the estimator. While intuitive, this is not optimal. Consider the case where there are a total of \(1000\) users, out of which \(999\) demand a privacy requirement of \(\epsilon=0.1\) and one user has no privacy requirement (\(\epsilon\rightarrow\infty\)). In this case, the above estimator just considers a single data point and has a worst-case MSE of \(\frac{1}{4}\). One can do better by using the weights \(\mathbf{w}^{*}\) in (6), which would give a worst-case MSE of the order \(10^{-4}\)._ ### _Solving the Minimization Problem_ We use the function \(f(\mathbf{x},\mathbf{\epsilon})\) to denote the upper bound on the MSE when using the weights \(\mathbf{x}\in\mathbb{R}_{\geq 0}^{n}\) for privacy requirement \(\mathbf{\epsilon}\) (see (5)); from here on, \(\mathbf{x}\) no longer refers to the dataset. For convenience, we do not restrict the weights \(\mathbf{x}\) to be on the simplex; instead, we scale them to the simplex by considering \(\mathbf{w}=\frac{\mathbf{x}}{\|\mathbf{x}\|_{1}}\) to be the actual weights, i.e., \[f(\mathbf{x},\mathbf{\epsilon})=\frac{\|\mathbf{x}\|_{2}^{2}}{4\|\mathbf{x}\|_{1}^{2}}+2\frac{\left\|\mathbf{x}/\mathbf{\epsilon}\right\|_{\infty}^{2}}{\|\mathbf{x}\|_{1}^{2}}.\] The reason we do not restrict \(\mathbf{x}\) to the simplex is that it is easier to work with a linearly scaled domain instead of the simplex, due to a pattern that emerges in the solution, which we describe in this subsection. We remind the readers that \(\mathbf{\epsilon}\) is arranged in a non-decreasing order, i.e., \(\epsilon_{i}\leq\epsilon_{j},\ \forall i<j\). With this in mind, Lemma 1 shows an important property of the optimal weights \(\mathbf{w}^{*}\) (in the simplex) for the minimization problem under consideration. **Lemma 1**.: _Consider a fixed-length non-decreasing privacy constraint vector \(\mathbf{\epsilon}\). Let \(\mathbf{w}^{*}=\arg\min_{\mathbf{w}\in\Delta_{n}}f(\mathbf{w},\mathbf{\epsilon})\); then_ \[\frac{w_{i}^{*}}{\epsilon_{i}}\geq\frac{w_{j}^{*}}{\epsilon_{j}}\quad\forall i<j.\] Proof.: We show that for any \(\mathbf{w}\in\Delta_{n}\) such that \(\exists i<j\) with \(\frac{w_{i}}{\epsilon_{i}}<\frac{w_{j}}{\epsilon_{j}}\), it is possible to find a \(\tilde{\mathbf{w}}\in\Delta_{n}\) such that \(f(\tilde{\mathbf{w}},\mathbf{\epsilon})<f(\mathbf{w},\mathbf{\epsilon})\). Let \(\frac{w_{j}}{w_{i}}=K\) and \(\lambda=\frac{\epsilon_{j}}{\epsilon_{i}}\). Hence, \(K>\lambda\geq 1\). Consider \(\tilde{\mathbf{w}}\) that is equal to \(\mathbf{w}\) except \(\tilde{w}_{j}=w_{i}(K-\delta)\) and \(\tilde{w}_{i}=w_{i}(1+\delta)\). Thus, \(\|\tilde{\mathbf{w}}\|_{1}=\|\mathbf{w}\|_{1}\). Choosing any \(0<\delta\leq\frac{K-\lambda}{1+\lambda}\) implies \(\frac{w_{j}}{\epsilon_{j}}>\frac{\tilde{w}_{j}}{\epsilon_{j}}\geq\frac{\tilde{w}_{i}}{\epsilon_{i}}>\frac{w_{i}}{\epsilon_{i}}\). Thus, \(\|\tilde{\mathbf{w}}/\mathbf{\epsilon}\|_{\infty}\leq\|\mathbf{w}/\mathbf{\epsilon}\|_{\infty}\). Next, \((w_{j}^{2}+w_{i}^{2})-(\tilde{w}_{j}^{2}+\tilde{w}_{i}^{2})=2\delta w_{i}^{2}(K-1-\delta)\). Thus, choosing \(0<\delta<K-1\), we have \(\|\mathbf{w}\|_{2}^{2}>\|\tilde{\mathbf{w}}\|_{2}^{2}\). Thus, overall, any \(0<\delta\leq\frac{K-\lambda}{1+\lambda}\) (which also satisfies \(\delta<K-1\), since \(K\lambda>1\)) results in \(f(\tilde{\mathbf{w}},\mathbf{\epsilon})<f(\mathbf{w},\mathbf{\epsilon})\). We shall instead minimize (globally) \(f(\mathbf{x},\mathbf{\epsilon})\) over \(\mathbf{x}\) with the constraint \(x_{1}=\epsilon_{1}\). 
Since the optimal weights \[\mathbf{r}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{n}_{\geq 0}:x_{1}=\epsilon_{1}}f(\mathbf{x},\mathbf{\epsilon})\] are given by the linear scaling \(r_{i}=\left(\frac{\epsilon_{1}}{w_{1}^{*}}\right)w_{i}^{*}\) (note that Lemma 1 also implies that \(w_{1}^{*}\) is strictly greater than \(0\)), they satisfy Lemma 1 as well, i.e., \[\frac{r_{i}}{\epsilon_{i}}\geq\frac{r_{j}}{\epsilon_{j}}\ \forall i\leq j.\] This allows us to constrain our search for the global optimizer to a smaller region. Thus, it is sufficient to perform a constrained minimization over the domain \[\mathcal{D}(\mathbf{\epsilon})=\left\{\mathbf{x}\in\mathbb{R}^{n}_{\geq 0}:x_{1}=\epsilon_{1},\ \frac{x_{i}}{\epsilon_{i}}\geq\frac{x_{j}}{\epsilon_{j}}\ \forall j>i\right\}.\] Note that this is a closed convex domain. Further, constraining to this domain allows us to write \(\|\frac{\mathbf{x}}{\mathbf{\epsilon}}\|_{\infty}=1\ \forall\ \mathbf{x}\in\mathcal{D}(\mathbf{\epsilon})\). Thus, this observation and Lemma 1 allow us to obtain Corollary 1. **Corollary 1**.: _We note that_ \[\mathbf{r}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{D}(\mathbf{\epsilon})}f(\mathbf{x},\mathbf{\epsilon})=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{D}(\mathbf{\epsilon})}\frac{\|\mathbf{x}\|_{2}^{2}+8}{4\|\mathbf{x}\|_{1}^{2}}, \tag{7}\] _and_ \[\operatorname*{arg\,min}_{\mathbf{w}\in\Delta_{n}}f(\mathbf{w},\mathbf{\epsilon})=\frac{\mathbf{r}}{\|\mathbf{r}\|_{1}}.\] In Lemma 2, we show that the function \(\frac{\|\mathbf{x}\|_{2}^{2}+8}{4\|\mathbf{x}\|_{1}^{2}}\) is strictly quasi-convex, which means that a local minimizer in (7) is the unique global minimizer. **Lemma 2**.: _The function_ \[K(\mathbf{x})=\frac{\|\mathbf{x}\|_{2}^{2}+8}{\|\mathbf{x}\|_{1}^{2}}\] _is strictly quasi-convex (domain restricted to the non-negative reals with \(\|\mathbf{x}\|_{1}>0\))._ Proof.: Since the domain is the non-negative reals, we use \(\|\cdot\|_{1}\) to represent sums. Consider \(\mathbf{x},\mathbf{y}\) in the domain with \(\mathbf{x}\neq\mathbf{y}\). Let \(\lambda\in(0,1)\); then \[K(\lambda\mathbf{x}+(1-\lambda)\mathbf{y})=\frac{\lambda^{2}\|\mathbf{x}\|_{2}^{2}+(1-\lambda)^{2}\|\mathbf{y}\|_{2}^{2}+2\lambda(1-\lambda)\langle\mathbf{x},\,\mathbf{y}\rangle+8}{\lambda^{2}\|\mathbf{x}\|_{1}^{2}+(1-\lambda)^{2}\|\mathbf{y}\|_{1}^{2}+2\lambda(1-\lambda)\|\mathbf{x}\|_{1}\|\mathbf{y}\|_{1}}\leq\max\left\{K(\mathbf{x}),K(\mathbf{y}),\frac{\langle\mathbf{x},\,\mathbf{y}\rangle+8}{\|\mathbf{x}\|_{1}\|\mathbf{y}\|_{1}}\right\},\] where the inequality is the mediant inequality after splitting \(8=\lambda^{2}\cdot 8+(1-\lambda)^{2}\cdot 8+2\lambda(1-\lambda)\cdot 8\) across the three terms. In the above, for equality to hold, it is necessary that \(K(\mathbf{x})=K(\mathbf{y})=\frac{\langle\mathbf{x},\mathbf{y}\rangle+8}{\|\mathbf{x}\|_{1}\|\mathbf{y}\|_{1}}\). We show that this is not possible, implying strict quasi-convexity. To show \(\frac{\langle\mathbf{x},\mathbf{y}\rangle+8}{\|\mathbf{x}\|_{1}\|\mathbf{y}\|_{1}}<\max\{K(\mathbf{x}),K(\mathbf{y})\}\), let \(\mathbf{x},\mathbf{y}\) be \(n\)-dimensional vectors. Then, in the following, consider \(\mathbf{x}^{\prime},\mathbf{y}^{\prime}\) to be the \((n+1)\)-dimensional vectors obtained by appending a \(\sqrt{8}\) at the end of \(\mathbf{x},\mathbf{y}\), respectively. 
We have \[\frac{\langle\mathbf{x},\,\mathbf{y}\rangle+8}{\|\mathbf{x}\|_{1}\|\mathbf{y}\|_{1}}=\frac{\langle\mathbf{x}^{\prime},\,\mathbf{y}^{\prime}\rangle}{\|\mathbf{x}\|_{1}\|\mathbf{y}\|_{1}}<\frac{\|\mathbf{x}^{\prime}\|_{2}\|\mathbf{y}^{\prime}\|_{2}}{\|\mathbf{x}\|_{1}\|\mathbf{y}\|_{1}}\leq\max\left\{\frac{\|\mathbf{x}^{\prime}\|_{2}^{2}}{\|\mathbf{x}\|_{1}^{2}},\frac{\|\mathbf{y}^{\prime}\|_{2}^{2}}{\|\mathbf{y}\|_{1}^{2}}\right\}=\max\left\{K(\mathbf{x}),K(\mathbf{y})\right\}, \tag{8}\] where the strict inequality in (8) follows from the Cauchy-Schwarz inequality: \(\mathbf{x}^{\prime}\) cannot be proportional to \(\mathbf{y}^{\prime}\), since they have equal values in the \((n+1)\)-th coordinate and \(\mathbf{x}\neq\mathbf{y}\). We shall now overload the function \(f(\cdot,\cdot)\) to map variable-length vectors to \(\mathbb{R}\). Corresponding to the \(\epsilon\) series, we construct a series of weights denoted by \(\{r_{i}\}_{i=1}^{n}\) which is the solution to (7). Lemma 2 implies that it is sufficient to search for a local minimizer in (7). We can find the local minimizer recursively. We drop the privacy argument in \(f(\cdot,\cdot)\) for convenience in the following, i.e., \(f(x_{1}^{k})\) refers to \(f(x_{1}^{k},\epsilon_{1}^{k})=\frac{\|x_{1}^{k}\|_{2}^{2}+8}{4\|x_{1}^{k}\|_{1}^{2}}\) (when \(x_{1}^{k}\in\mathcal{D}(\epsilon_{1}^{k})\)). Next, we recursively build a sequence of scaled weights that gives the solution to (7). We remind the readers that the \(\{\epsilon_{i}\}_{1}^{n}\) sequence is arranged in a non-decreasing order. Let \(r_{1}=\epsilon_{1}\) and define \[r_{k+1}=\min\left\{\epsilon_{k+1},4f(r_{1}^{k})\|r_{1}^{k}\|_{1}\right\}. \tag{9}\] For clarity, \(4f(r_{1}^{k})\|r_{1}^{k}\|_{1}=\frac{\|r_{1}^{k}\|_{2}^{2}+8}{\|r_{1}^{k}\|_{1}}\). Algorithm 1 is an efficient implementation of the recursion described above in (9) that runs in near-linear time. We shall prove that this sequence generates the optimal weights \(r_{1}^{j}\) for privacy constraint \(\epsilon_{1}^{j}\) for any \(j>0\). An intuition behind the particular recursive definition is presented in Section III-C. A notion of saturation, useful in tracking some properties of the \(\{r_{i}\}_{i=1}^{n}\) sequence, is defined next. **Definition 2** (Saturation).: _We say that saturation occurs at index \(k+1\), denoted \(\mathsf{Sat}(k+1)\), if \(\epsilon_{k+1}\geq 4f(r_{1}^{k})\|r_{1}^{k}\|_{1}\), i.e., if the minimum in (9) is attained by the second term, so that \(r_{k+1}=4f(r_{1}^{k})\|r_{1}^{k}\|_{1}\); otherwise we write \(\neg\mathsf{Sat}(k+1)\)._ Note that the \(\{r_{i}\}\) sequence satisfies \(r_{1}^{k}\in\mathcal{D}(\epsilon_{1}^{k})\)\(\forall k\) (in particular, \(\|r_{1}^{k}/\epsilon_{1}^{k}\|_{\infty}=1\)), and hence the sequence is in the domain of the optimization in (7). **Lemma 3**.: _If \(\mathsf{Sat}(j+1)\) occurs, then we have the following: A) \(\bigwedge_{i=j+1}^{n}\mathsf{Sat}(i)\) occurs, B) \(f(r_{1}^{j+1})=f(r_{1}^{j})/(1+4f(r_{1}^{j}))\), C) \(r_{j+1}=r_{j+2}=\ldots=r_{n}\). Further, we have_ \[r_{i}\leq r_{k},\text{ for any }i<k. \tag{10}\] 
Proof.: If \(\mathsf{Sat}(j+1)\), we have \[4f(r_{1}^{j})\|r_{1}^{j}\|_{1}\leq\epsilon_{j+1},\qquad r_{j+1}=4f(r_{1}^{j})\|r_{1}^{j}\|_{1}.\] Evaluating \(f(r_{1}^{j+1})\) using this, \[f(r_{1}^{j+1})=\frac{\|r_{1}^{j}\|_{2}^{2}+r_{j+1}^{2}+8}{4\|r_{1}^{j}\|_{1}^{2}(1+4f(r_{1}^{j}))^{2}}=\frac{4\|r_{1}^{j}\|_{1}^{2}f(r_{1}^{j})+r_{j+1}^{2}}{4\|r_{1}^{j}\|_{1}^{2}(1+4f(r_{1}^{j}))^{2}}=\frac{4\|r_{1}^{j}\|_{1}^{2}f(r_{1}^{j})+4^{2}f(r_{1}^{j})^{2}\|r_{1}^{j}\|_{1}^{2}}{4\|r_{1}^{j}\|_{1}^{2}(1+4f(r_{1}^{j}))^{2}}=f(r_{1}^{j})/(1+4f(r_{1}^{j})).\] This proves (B). To check for \(\mathsf{Sat}(j+2)\), compare \(\epsilon_{j+2}\) and \(4f(r_{1}^{j+1})\|r_{1}^{j+1}\|_{1}\). Note that \(\epsilon_{j+2}\geq\epsilon_{j+1}\), since the \(\epsilon\)-sequence is non-decreasing. Further, \(\|r_{1}^{j+1}\|_{1}=\|r_{1}^{j}\|_{1}(1+4f(r_{1}^{j}))\), which implies \(4f(r_{1}^{j+1})\|r_{1}^{j+1}\|_{1}=4f(r_{1}^{j})\|r_{1}^{j}\|_{1}\) by (B). Therefore, \(\mathsf{Sat}(j+2)\) occurs as well, and (A) is proved by repeating the same argument for \(\mathsf{Sat}(j+3),\ldots,\mathsf{Sat}(n)\). Further, as noted, \(4f(r_{1}^{j+1})\|r_{1}^{j+1}\|_{1}=4f(r_{1}^{j})\|r_{1}^{j}\|_{1}\) when \(\mathsf{Sat}(j+1)\); along with (A), this implies \(r_{j+1}=r_{j+2}\). Repeating the same argument proves (C). Finally, \(r_{i}\leq r_{k}\) for any \(i<k\) follows immediately: if saturation does not occur until index \(j\), then \(r_{1}^{j-1}=\epsilon_{1}^{j-1}\), which is non-decreasing. Once saturation happens at any index (it need not occur at all), the \(r\) values stay constant by part (C). Figure 1 shows a specific example of \(\{\epsilon_{i}\}_{i=1}^{50}\) and the corresponding \(\{r_{i}\}_{i=1}^{50}\). After saturation, the \(r_{i}\) values remain constant, as proved in Lemma 3(C). Finally, we show that \(r_{1}^{n}=\arg\min_{\mathbf{x}_{1}^{n}\in\mathcal{D}(\epsilon_{1}^{n})}\frac{\|\mathbf{x}_{1}^{n}\|_{2}^{2}+8}{4\|\mathbf{x}_{1}^{n}\|_{1}^{2}}\). **Lemma 4**.: _For a fixed \(\epsilon_{1}^{n}\), construct \(r_{1}^{n}\) as described in (9). Then, \(r_{1}^{n}\) is a local optimum for \(f(x_{1}^{n})\) on the domain \(\mathcal{D}(\epsilon_{1}^{n})\)._ Proof.: Consider the partial derivative of \(f(\cdot)\) with respect to \(x_{i}\) at \(x_{1}^{n}=r_{1}^{n}\). We have \[\frac{\partial f(r_{1}^{n})}{\partial r_{i}}=\frac{r_{i}\|r_{1}^{n}\|_{1}-\|r_{1}^{n}\|_{2}^{2}-8}{2\|r_{1}^{n}\|_{1}^{3}}=\frac{Y_{A}+Y_{B}}{2\|r_{1}^{n}\|_{1}^{3}},\] where \(Y_{A}=r_{i}\|r_{i+1}^{n}\|_{1}-\|r_{i+1}^{n}\|_{2}^{2}\) and \(Y_{B}=\|r_{1}^{i-1}\|_{1}(r_{i}-4\|r_{1}^{i-1}\|_{1}f(r_{1}^{i-1}))\). Now consider the two cases: * \(\mathsf{Sat}(i)\): We have \(r_{i}=4\|r_{1}^{i-1}\|_{1}f(r_{1}^{i-1})\), so \(Y_{B}=0\), and by Lemma 3(C), \(Y_{A}=0\). Thus, \(\frac{\partial f(r_{1}^{n})}{\partial r_{i}}=0\). * \(\neg\mathsf{Sat}(i)\): We have \(Y_{B}\leq 0\) and, by (10), \(Y_{A}\leq 0\). Therefore, the only local perturbation that decreases the objective would need to increase \(r_{i}\), which is not possible since \(r_{i}=\epsilon_{i}\) and we have the domain restriction \(\frac{r_{i}}{\epsilon_{i}}\leq\frac{r_{1}}{\epsilon_{1}}=1\). The above two cases show that \(r_{1}^{n}\) is indeed a local minimum for \(f(x_{1}^{n})\) on the domain \(\mathcal{D}(\epsilon_{1}^{n})\). Proof of Theorem 2.: Note that \(f(x_{1}^{n})\) is strictly quasi-convex on the closed convex domain \(\mathcal{D}(\epsilon_{1}^{n})\) by Lemma 2. 
Lemma 4 shows that \(r_{1}^{n}\) is a local minimizer of the function; thus, it is also a global minimizer. By Corollary 1, we can use the weights \(r_{1}^{n}/\|r_{1}^{n}\|_{1}\). Note that if the upper bound on the MSE, \(f(r_{1}^{n})\), is too high, then one can simply output \(0\) as the estimator and incur at most \(\frac{1}{4}\) as the MSE, as is done in the ADPM algorithm. ### _Interpreting ADPM_ ADPM exploits a crucial property of the solution of (7): \(\mathbf{r}^{*}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{D}(\epsilon_{1}^{n})}f(\mathbf{x},\epsilon_{1}^{n})\) is closely related to \(\mathbf{r}^{\prime}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{D}(\epsilon_{1}^{n-1})}f(\mathbf{x},\epsilon_{1}^{n-1})\). That is, the optimal weights when there are \(n-1\) users with privacy constraints \(\epsilon_{1}^{n-1}\) and the optimal weights when there are \(n\) users with privacy constraint \(\epsilon_{1}^{n}\) are closely related. In fact, \(r_{i}^{*}=r_{i}^{\prime}\) for \(1\leq i\leq n-1\). This recursive property allows us to efficiently obtain \(\mathbf{r}^{*}\) from \(\mathbf{r}^{\prime}\), where the last component of the vector \(\mathbf{r}^{*}\) can be found by performing a local minimization while fixing the first \(n-1\) components to be \(\mathbf{r}^{\prime}\). This is precisely why we chose to work with the scaled domain \(\mathcal{D}(\epsilon_{1}^{n})\) instead of the simplex. From Lemma 3, we see that upon saturation at some index \(k+1\), all the subsequent indices remain saturated and they all have the same weight \(r_{k+1}=4\|r_{1}^{k}\|_{1}f(r_{1}^{k})\) (and \(\leq\epsilon_{k+1}\)). Since no saturation occurred before index \(k+1\), we have \(r_{1}^{k}=\epsilon_{1}^{k}\), and thus, \(r_{j}=\frac{\|\epsilon_{1}^{k}\|_{2}^{2}+8}{\|\epsilon_{1}^{k}\|_{1}}\)\(\forall j\geq k+1\). By the discussion around (4), using weights \(\mathbf{w}=r_{1}^{n}/\|r_{1}^{n}\|_{1}\) and \(\eta=\|\mathbf{w}/\mathbf{\epsilon}\|_{\infty}=1/\|r_{1}^{n}\|_{1}\) gives a privacy level of \(r_{i}\) to user \(i\). Thus, all the privacy-desiring users with lower \(\epsilon\) requirements, up to user \(k\), get exactly the privacy they ask for. However, the users who want less privacy, with \(\epsilon_{j}\geq\epsilon_{k+1}\), receive a higher privacy guarantee of \(\frac{\|\epsilon_{1}^{k}\|_{2}^{2}+8}{\|\epsilon_{1}^{k}\|_{1}}\) for free. **Remark 3** (Special Case of Two Groups of Privacy).: _While the minimax optimality of ADPM is proven in this work, let us consider a special setting here to get better intuition. Consider the case of \(n\) users where a fraction \(f\) of the users all have a common privacy level \(\epsilon_{1}\) and the rest of the users have a common privacy level \(\epsilon_{2}\) (without loss of generality, assume \(\epsilon_{1}\leq\epsilon_{2}\)). This setting was considered in [1]._ _The condition for saturation was found to be \(\epsilon_{2}\geq\epsilon_{1}+\frac{8}{nf\epsilon_{1}}\). It is easy to see that we recover the same condition from (9). One can also verify that the weights assigned according to (9) match the optimal weights derived in [1]. Thus, keeping \(n\), \(\epsilon_{1}\) and \(f\) fixed, if \(\epsilon_{2}\) is increased from \(\epsilon_{1}\), then until \(\epsilon_{2}\leq\epsilon_{1}+\frac{8}{nf\epsilon_{1}}\), the optimal affine estimator weighs the datapoints proportionally to the privacy level. 
After this saturation point, the weights do not change, and the latter group receives a privacy of \(\epsilon_{1}+\frac{8}{nf\epsilon_{1}}\) for free despite possibly having no privacy requirements \((\epsilon_{2}\rightarrow\infty)\)._ ## IV Lower Bound In a system with \(n\) users with a homogeneous differential privacy requirement \(\epsilon\), the minimax rate for mean estimation under mean-squared error is known to be \(\Theta(\frac{1}{n}+\frac{1}{(n\epsilon)^{2}})\). Many of the lower bound techniques in the literature for DP separately obtain the \(1/n^{2}\) term and add the \(1/n\) term by citing classical statistics results [23, 59]. Such an approach is not suitable here, since the \(1/n\) and the \(1/n^{2}\) terms become intertwined through the privacy levels in the heterogeneous setting. For example, consider a special case where out of \(n\) users, \(n-m\) users have a privacy level of \(\epsilon\to 0\), and the remaining \(m\) users have privacy level \(\epsilon\rightarrow\infty\). This corresponds to the classical mean estimation problem with \(m\) samples, and the mean-squared error should be of order \(O(\frac{1}{m})\). Simply adding \(1/n\) to the lower bound cannot give tight results (consider the case where \(m\) is a constant and \(n\) is large). We use a form of Le Cam's method adapted to the differential privacy constraint to obtain a lower bound, based on ideas from [59, 28, 60]. Our method is similar to that of [57], but it is stronger since it can handle arbitrarily large \(\epsilon\) values, as is required for the case when we have a public dataset. Intuitively, DP restricts the variation in the output probability with varying inputs, which helps bound the total-variation term in Le Cam's method. We remind the readers that \(\mathbf{\epsilon}\) is assumed to be in a non-decreasing order. **Theorem 3** (Lower Bound).: _For a privacy vector \(\mathbf{\epsilon}\) of length \(n\), we have_ \[L(\mathbf{\epsilon})\gtrsim H(\mathbf{\epsilon})\wedge\frac{1}{4},\] _where_ \[H(\mathbf{\epsilon})=\max_{i=0}^{n}\frac{1}{\|\epsilon_{1}^{i}\|_{1}^{2}+n-i}.\] Proof.: Let \(P_{1}\), \(P_{2}\) be two distributions in \(\mathcal{P}\) and let \(M\) be any \(\mathbf{\epsilon}\)-DP estimator of the mean; denote the output distribution of \(M(\mathbf{X})\) with \(\mathbf{X}\sim P_{i}^{n}\) as \(Q_{i}\) for \(i=1,2\). In other words, \(Q_{i}\) is a distribution over \(\mathcal{Y}\) and \(Q_{i}(A)=\mathbb{P}_{\mathbf{X}\sim P_{i}^{n}}\{M(\mathbf{X})\in A\}\). Let \(\delta\in[0,0.5]\); consider the distribution \(P_{1}\) which is \(0.5\) with probability \(\frac{1+\delta}{2}\) and \(-0.5\) with probability \(\frac{1-\delta}{2}\). Similarly, \(P_{2}\) is \(0.5\) with probability \(\frac{1-\delta}{2}\) and \(-0.5\) with probability \(\frac{1+\delta}{2}\). In this case, \(\mu_{P_{1}}=\delta/2\) and \(\mu_{P_{2}}=-\delta/2\). Further, \(\|P_{1}-P_{2}\|_{TV}=\delta\) and \(D_{\text{KL}}(P_{1}\|P_{2})\leq 3\delta^{2}\) (for \(\delta\in[0,0.5]\)). Define \(\gamma=\frac{1}{2}\left|\mu_{P_{1}}-\mu_{P_{2}}\right|=\delta/2\); then Le Cam's method specialized to the differential privacy setting (see [59, 28, 60]) yields the lower bound \[L(\mathbf{\epsilon})\geq\frac{\gamma^{2}}{2}(1-\|Q_{1}-Q_{2}\|_{TV}). \tag{11}\] Using Lemma 14 in the Appendix, and \(1-x\leq e^{-x}\,\forall x\geq 0\), we obtain \[\|Q_{1}-Q_{2}\|_{TV}\leq 2\delta\|\epsilon_{1}^{k}\|_{1}+\delta\sqrt{\frac{3(n-k)}{2}}\leq 2\delta\|\epsilon_{1}^{k}\|_{1}+\delta\sqrt{4(n-k)}\quad\forall k\in\{0,\ldots,n\}. \tag{12}\] 
Note that (12) holds for arbitrarily large \(\epsilon_{i}\) values and degrades gracefully compared to the \(e^{\epsilon_{i}}-1\) bound obtained in [57, Lemma 3]. We achieve this due to the stronger bound we derive in Lemma 13 and Lemma 14. In particular, this allows us to deal with the general case when one of the datasets is public. Using (12) in (11), we obtain \[L(\mathbf{\epsilon})\geq\frac{\delta^{2}}{8}\left[1-\delta\left(2\|\epsilon_{1}^{k}\|_{1}+\sqrt{4(n-k)}\right)\right]\quad\forall k. \tag{13}\] Setting \(\delta=\frac{1}{4\|\epsilon_{1}^{k}\|_{1}+4\sqrt{n-k}}\wedge\frac{1}{2}\) in (13), we get \[L(\mathbf{\epsilon})\geq\frac{1}{16}\left(\frac{1}{16(\|\epsilon_{1}^{k}\|_{1}+\sqrt{n-k})^{2}}\wedge\frac{1}{4}\right)\geq\frac{1}{16}\left(\frac{1}{32(\|\epsilon_{1}^{k}\|_{1}^{2}+(n-k))}\wedge\frac{1}{4}\right)\geq\frac{1}{512}\left(\frac{1}{\|\epsilon_{1}^{k}\|_{1}^{2}+n-k}\wedge\frac{1}{4}\right)\quad\forall k,\] which implies \[L(\mathbf{\epsilon})\geq\frac{1}{512}\left(H(\mathbf{\epsilon})\wedge\frac{1}{4}\right).\] ## V Optimality It remains to show that the lower and the upper bound are within a constant factor of each other. Concretely, we prove that there exists a universal constant \(c\), independent of \(n\) and \(\epsilon_{1},\ldots,\epsilon_{n}\), such that \[c\left(f(r_{1}^{n})\wedge\frac{1}{4}\right)\leq\frac{1}{512}\left(H(\epsilon_{1}^{n})\wedge\frac{1}{4}\right).\] In the above, \(r_{1}^{n}\) denotes the ADPM weights defined in (9). Showing the above inequality would imply \[c\left(f(r_{1}^{n})\wedge\frac{1}{4}\right)\leq L(\epsilon_{1}^{n})\leq\left(f(r_{1}^{n})\wedge\frac{1}{4}\right),\] proving the minimax optimality of ADPM. Thus, it suffices to show \(c^{\prime}f(r_{1}^{n})\leq H(\epsilon_{1}^{n})\) for all \(n\) and \(\epsilon_{1}^{n}\), leading to \((c^{\prime}\wedge 1)\left(f(r_{1}^{n})\wedge\frac{1}{4}\right)\leq\left(H(\epsilon_{1}^{n})\wedge\frac{1}{4}\right)\). In this section, we show that this is indeed true, despite \(f(r_{1}^{n})\) and \(H(\epsilon_{1}^{n})\) being expressed in rather different forms. We show this via an indirect, recursion-like comparison of the two. Theorem 4 states our main result on this. It should be noted that the proof of Theorem 4 aims only to show that the lower and upper bounds are within constant factors of each other, without attempting to make the constants sharp. One can possibly get better constants with a finer analysis (see Section VI-C). **Theorem 4** (Optimality).: _For any \(n\) and \(\epsilon_{1}^{n}\), it holds that_ \[\frac{1}{443}f(r_{1}^{n})\leq H(\epsilon_{1}^{n}).\] _Therefore, we have_ \[\frac{1}{226816}\left(f(r_{1}^{n})\wedge\frac{1}{4}\right)\leq L(\epsilon_{1}^{n})\leq f(r_{1}^{n})\wedge\frac{1}{4}.\] The proof of Theorem 4 can be found at the end of this section and has roughly two main parts. Consider a privacy constraint \(\epsilon_{1}^{n}\), arranged in ascending order. Suppose saturation first occurs at index \(k+1\), i.e., \(\mathsf{Sat}(k+1)\) holds. In Section V-A, we prove an algebraic result showing \(c^{\prime}f(r_{1}^{k})\leq H(\epsilon_{1}^{k})\), where \(c^{\prime}\) is independent of \(k\) and \(\epsilon_{1}^{k}\). Next, in Section V-B, we show that if \(\mathsf{Sat}(k+1)\) and \(c^{\prime}f(r_{1}^{k})\leq H(\epsilon_{1}^{k})\), then \(c^{\prime}f(r_{1}^{k+1})\leq H(\epsilon_{1}^{k+1})\). By Lemma 3(A), noting that saturation occurs at all the following indices, the theorem follows. 
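Before the case analysis, the claim of Theorem 4 can be sanity-checked numerically. The sketch below (our own helper code; the function names are not from the paper) evaluates \(f(r_{1}^{n})\) via the recursion (9) and \(H(\epsilon_{1}^{n})\) from Theorem 3 for random privacy vectors, and verifies that their ratio stays below \(443\).

```python
import numpy as np

def f_upper(eps):
    """f(r_1^n): the Theorem 2 upper-bound quantity, via recursion (9)."""
    eps = np.sort(np.asarray(eps, dtype=float))
    L1, L2 = eps[0], eps[0] ** 2         # running ||r||_1 and ||r||_2^2
    for e in eps[1:]:
        r = min(e, (L2 + 8.0) / L1)
        L1, L2 = L1 + r, L2 + r ** 2
    return (L2 + 8.0) / (4.0 * L1 ** 2)

def H_lower(eps):
    """H(eps) from Theorem 3: maximum over prefixes i = 0, ..., n."""
    eps = np.sort(np.asarray(eps, dtype=float))
    n = len(eps)
    prefix = np.concatenate(([0.0], np.cumsum(eps)))   # ||eps_1^i||_1
    return max(1.0 / (prefix[i] ** 2 + n - i) for i in range(n + 1))

rng = np.random.default_rng(1)
for _ in range(5):
    eps = 10.0 ** rng.uniform(-4, 2, size=int(rng.integers(10, 2000)))
    assert f_upper(eps) <= 443 * H_lower(eps)   # Theorem 4 relation
```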
### _Unsaturated Regime_ Consider a sequence \(\epsilon_{1}^{n}\), of length \(n\), such that no saturation occurs. Since there is no saturation, \(r_{1}^{n}=\epsilon_{1}^{n}\). We prove \(f(\epsilon_{1}^{n})\leq 443H(\epsilon_{1}^{n})\) for this unsaturated case; the result is stated in Lemma 5. By Definition 2, no saturation implies \(\epsilon_{k-1}\leq\epsilon_{k}<4\|\epsilon_{1}^{k-1}\|_{1}f(\epsilon_{1}^{k-1})\ \forall\ 1<k\leq n\). For the curious reader, Lemma 15 in the Appendix shows that \(\epsilon_{k-1}<4\|\epsilon_{1}^{k-1}\|_{1}f(\epsilon_{1}^{k-1})\ \forall\ 1<k\leq n\); thus, the above intervals are valid. **Lemma 5**.: _If \(\bigwedge_{i=2}^{n}(\neg\mathsf{Sat}(i))\), then_ \[\frac{1}{443}f(\epsilon_{1}^{n})\leq H(\epsilon_{1}^{n}).\] Proof.: Recall that \(H(\epsilon_{1}^{n})=\max_{i=0}^{n}\frac{1}{\|\epsilon_{1}^{i}\|_{1}^{2}+n-i}\). Observe that \(H(\epsilon_{1}^{n})\geq 1/\|\epsilon_{1}^{n}\|_{1}^{2}\). Further, it is easy to see that the maximum in the definition of \(H(\epsilon_{1}^{n})\) occurs at the index \(p\) which is the largest such that \(\epsilon_{p}(\epsilon_{p}+2\|\epsilon_{1}^{p-1}\|_{1})\leq 1\) (or at \(p=0\)). Thus, \[\epsilon_{p}\|\epsilon_{1}^{p}\|_{1}\leq 1 \tag{14}\] unless \(p=0\). Note that if \(0<p<n\), then we have \(\epsilon_{p+1}(\epsilon_{p+1}+2\|\epsilon_{1}^{p}\|_{1})>1\), which implies \[2\epsilon_{p+1}\|\epsilon_{1}^{p+1}\|_{1}>1.\] Consider the three cases regarding \(p\). \(\bullet\)\(p=0\): if \(p=0\), then \(\epsilon_{1}>1\) and \(H(\epsilon_{1}^{n})=1/n\). By Lemma 8 below (with \(C=1\)), we have \(\frac{9^{2}}{4}H(\epsilon_{1}^{n})\geq\frac{\|\epsilon_{1}^{n}\|_{2}^{2}}{4\|\epsilon_{1}^{n}\|_{1}^{2}}\). Recall that \(f(\epsilon_{1}^{n})=\frac{\|\epsilon_{1}^{n}\|_{2}^{2}+8}{4\|\epsilon_{1}^{n}\|_{1}^{2}}\). Since \(2H(\epsilon_{1}^{n})\geq 8/4\|\epsilon_{1}^{n}\|_{1}^{2}\), we get \(23H(\epsilon_{1}^{n})\geq f(\epsilon_{1}^{n})\). \(\bullet\)\(p=n\): if \(p=n\), then by (14), \[\|\epsilon_{1}^{n}\|_{2}^{2}\leq\epsilon_{n}\|\epsilon_{1}^{n}\|_{1}\leq 1,\] so we have \(f(\epsilon_{1}^{n})\leq\frac{9}{4\|\epsilon_{1}^{n}\|_{1}^{2}}\leq 3H(\epsilon_{1}^{n})\). \(\bullet\)\(0<p<n\): This case requires a more careful analysis. By Lemma 10 below, we have \(441H(\epsilon_{1}^{n})=\frac{441}{\|\epsilon_{1}^{p}\|_{1}^{2}+n-p}\geq\frac{\|\epsilon_{1}^{n}\|_{2}^{2}}{4\|\epsilon_{1}^{n}\|_{1}^{2}}\). Thus, \(443H(\epsilon_{1}^{n})\geq f(\epsilon_{1}^{n})\). The above three cases combined prove the required inequality in the unsaturated regime. We need Lemma 6, Lemma 7, Lemma 8, and Lemma 9 to prove Lemma 10. Lemma 6 gives an important inequality that is utilized in the lemmata that follow. **Lemma 6**.: _If \(\bigwedge_{i=2}^{n}(\neg\mathsf{Sat}(i))\), then_ \[\epsilon_{j}\leq\epsilon_{i}+\frac{8}{\|\epsilon_{1}^{i}\|_{1}}\ \forall\ n\geq j>i\geq 1. \tag{15}\] 
Proof.: From the unsaturation condition, we get \[\epsilon_{k}\leq 4\|\epsilon_{1}^{k-1}\|_{1}f(\epsilon_{1}^{k-1})\implies\epsilon_{k}\leq\frac{\|\epsilon_{1}^{k-1}\|_{2}^{2}+8}{\|\epsilon_{1}^{k-1}\|_{1}}\implies\epsilon_{k}\|\epsilon_{1}^{k-1}\|_{1}\leq\|\epsilon_{1}^{k-1}\|_{2}^{2}+8\implies\sum_{m=1}^{k-1}\epsilon_{m}(\epsilon_{k}-\epsilon_{m})\leq 8\implies\sum_{m=1}^{k}\epsilon_{m}(\epsilon_{k}-\epsilon_{m})\leq 8\quad\forall\ 1<k\leq n.\] As a consequence, for any \(j>i\geq 1\), since \(\epsilon_{m}\leq\epsilon_{i}\) for \(m\leq i\), \[(\epsilon_{j}-\epsilon_{i})\sum_{m=1}^{i}\epsilon_{m}\leq\sum_{m=1}^{i}\epsilon_{m}(\epsilon_{j}-\epsilon_{m})\leq 8\implies\epsilon_{j}\leq\epsilon_{i}+\frac{8}{\|\epsilon_{1}^{i}\|_{1}}.\] Now we prove two lemmata that are used in the proof of Lemma 9. **Lemma 7**.: _Suppose \(\epsilon_{n}\leq C\epsilon_{i}\) for some \(C>0\) and \(i\); then_ \[C^{2}\|\epsilon_{i}^{n}\|_{1}^{2}\geq(n-i+1)\|\epsilon_{i}^{n}\|_{2}^{2}.\] Proof.: \(C^{2}\|\epsilon_{i}^{n}\|_{1}^{2}\geq C^{2}(n-i+1)^{2}\epsilon_{i}^{2}\geq(n-i+1)^{2}\epsilon_{n}^{2}\geq(n-i+1)\|\epsilon_{i}^{n}\|_{2}^{2}.\) **Lemma 8**.: _Suppose \(\epsilon_{i}>C\) for some \(C>0\) and \(i\) (for the unsaturated sequence of Lemma 5); then_ \[(1+8C^{-2})^{2}\|\epsilon_{i}^{n}\|_{1}^{2}\geq(n-i+1)\|\epsilon_{i}^{n}\|_{2}^{2}.\] Proof.: From (15), \(\epsilon_{n}\leq\epsilon_{i}+\frac{8}{\|\epsilon_{1}^{i}\|_{1}}\leq\epsilon_{i}+\frac{8}{C}\leq\epsilon_{i}(1+8C^{-2})\). The result then follows from Lemma 7. Before going to Lemma 10, we state and prove another lemma. **Lemma 9**.: _Suppose that for some \(i\), \(\epsilon_{i}\|\epsilon_{1}^{i}\|_{1}\leq 1\) and \(2\epsilon_{i+1}\|\epsilon_{1}^{i+1}\|_{1}>1\); then_ \[42^{2}\|\epsilon_{i+1}^{n}\|_{1}^{2}\geq(n-i)\|\epsilon_{i+1}^{n}\|_{2}^{2}.\] Proof.: The second inequality in the statement of the lemma implies that \[\epsilon_{i+1}>\frac{1}{2}\|\epsilon_{1}^{i+1}\|_{1}^{-1}.\] On the other hand, (15) gives us \(\epsilon_{n}\leq\epsilon_{i+1}+8\|\epsilon_{1}^{i+1}\|_{1}^{-1}\). Thus, \[\frac{\epsilon_{n}}{\epsilon_{i+1}}<\frac{\epsilon_{i+1}+8\|\epsilon_{1}^{i+1}\|_{1}^{-1}}{\frac{1}{2}\|\epsilon_{1}^{i+1}\|_{1}^{-1}}=2(8+\epsilon_{i+1}\|\epsilon_{1}^{i+1}\|_{1}). \tag{16}\] Note that \[\epsilon_{i+1}\|\epsilon_{1}^{i+1}\|_{1}=\epsilon_{i+1}(\epsilon_{i+1}+\|\epsilon_{1}^{i}\|_{1})=\epsilon_{i+1}^{2}+(\epsilon_{i+1}-\epsilon_{i})\|\epsilon_{1}^{i}\|_{1}+\epsilon_{i}\|\epsilon_{1}^{i}\|_{1}\leq\epsilon_{i+1}^{2}+(\epsilon_{i+1}-\epsilon_{i})\|\epsilon_{1}^{i}\|_{1}+1. \tag{17}\] From (15) we know that \[\epsilon_{i+1}-\epsilon_{i}\leq 8\|\epsilon_{1}^{i}\|_{1}^{-1}. \tag{18}\] Combining (17) and (18) into (16), we get \[\frac{\epsilon_{n}}{\epsilon_{i+1}}<2(8+\epsilon_{i+1}^{2}+9)=34+2\epsilon_{i+1}^{2}.\] Now if \(\epsilon_{i+1}\leq 2\), then \(\epsilon_{n}/\epsilon_{i+1}<42\) and the result follows from Lemma 7 (with \(C=42\), applied to the sequence \(\epsilon_{i+1}^{n}\)). Otherwise, it follows from Lemma 8 (with \(C=2\), since \((1+8\cdot 2^{-2})^{2}=9\leq 42^{2}\)). We now prove the main lemma. **Lemma 10**.: _Let \(p\) be as in the proof of Lemma 5, with \(0<p<n\). Then_ \[42^{2}\|\epsilon_{1}^{n}\|_{1}^{2}\geq\|\epsilon_{1}^{n}\|_{2}^{2}(n-p+\|\epsilon_{1}^{p}\|_{1}^{2}).\] Proof.: From (14), \[\|\epsilon_{1}^{p}\|_{2}^{2}\leq\epsilon_{p}\|\epsilon_{1}^{p}\|_{1}\leq 1.\] Thus, \[\|\epsilon_{1}^{p}\|_{1}^{2}\geq\|\epsilon_{1}^{p}\|_{2}^{2}\cdot\|\epsilon_{1}^{p}\|_{1}^{2}. \tag{19}\] Applying Lemma 9, we also have \[42^{2}\|\epsilon_{p+1}^{n}\|_{1}^{2}\geq(n-p)\|\epsilon_{p+1}^{n}\|_{2}^{2}. \tag{20}\] Further, observe that \[\|\epsilon_{1}^{p}\|_{1}\cdot\|\epsilon_{p+1}^{n}\|_{1}\geq(n-p)\|\epsilon_{1}^{p}\|_{2}^{2}, \tag{21}\] since each \(\epsilon_{j}\geq\epsilon_{p}\) for \(j>p\) (the \(\{\epsilon_{i}\}\) are non-decreasing) and \(\epsilon_{p}\|\epsilon_{1}^{p}\|_{1}\geq\|\epsilon_{1}^{p}\|_{2}^{2}\). 
From (15), note that for all \(i>p\) we have \[\epsilon_{i}\|\epsilon_{1}^{p}\|_{1}\leq\epsilon_{p}\|\epsilon_{1}^{p}\|_{1}+8\leq 9.\] Multiplying by \(\epsilon_{i}\) and summing over all \(i>p\), we get \[9\|\epsilon_{p+1}^{n}\|_{1}\geq\|\epsilon_{p+1}^{n}\|_{2}^{2}\cdot\|\epsilon_{1}^{p}\|_{1}\implies 9\|\epsilon_{p+1}^{n}\|_{1}\|\epsilon_{1}^{p}\|_{1}\geq\|\epsilon_{p+1}^{n}\|_{2}^{2}\cdot\|\epsilon_{1}^{p}\|_{1}^{2}. \tag{22}\] Adding (19), (20), (21) and (22), and noting that the resulting right-hand side is at most \(42^{2}\|\epsilon_{1}^{n}\|_{1}^{2}\), we get \[42^{2}\|\epsilon_{1}^{n}\|_{1}^{2}\geq\|\epsilon_{1}^{n}\|_{2}^{2}(n-p+\|\epsilon_{1}^{p}\|_{1}^{2}).\] ### _Saturated Regime_ In the saturated regime, we show that if \(\mathsf{Sat}(k+1)\) occurs and we have \(H(\epsilon_{1}^{k})\geq\frac{1}{443}f(r_{1}^{k})\), then \(H(\epsilon_{1}^{k+1})\geq\frac{1}{443}f(r_{1}^{k+1})\). **Lemma 11**.: _If \(\mathsf{Sat}(k+1)\) and \(H(\epsilon_{1}^{k})\geq\frac{1}{443}f(r_{1}^{k})\), then \(H(\epsilon_{1}^{k+1})\geq\frac{1}{443}f(r_{1}^{k+1})\)._ Proof.: Suppose the maximum in \(H(\epsilon_{1}^{k})\) is attained at index \(p\), i.e., \(H(\epsilon_{1}^{k})=\frac{1}{\|\epsilon_{1}^{p}\|_{1}^{2}+k-p}\). Then, \[H(\epsilon_{1}^{k+1})\geq\frac{1}{\|\epsilon_{1}^{p}\|_{1}^{2}+k+1-p}=\frac{H(\epsilon_{1}^{k})}{1+H(\epsilon_{1}^{k})}.\] Noting that \(x/(1+x)\) is an increasing function of \(x\) and that \(H(\epsilon_{1}^{k})\) is lower bounded by \(\frac{1}{443}f(r_{1}^{k})\), we have \[H(\epsilon_{1}^{k+1})\geq\frac{f(r_{1}^{k})}{443+f(r_{1}^{k})}\geq\frac{1}{443}\frac{f(r_{1}^{k})}{1+4f(r_{1}^{k})}=\frac{1}{443}f(r_{1}^{k+1}),\] where we used \(f(r_{1}^{k+1})=\frac{f(r_{1}^{k})}{1+4f(r_{1}^{k})}\) due to \(\mathsf{Sat}(k+1)\) (see Lemma 3(B)). Proof of Theorem 4.: Lemma 5 and Lemma 11 together prove Theorem 4: suppose that for \(\epsilon_{1}^{n}\), \(k+1\) is the least index at which saturation occurs. Then, by Lemma 5, since \(\epsilon_{1}^{k}\) is an unsaturated sequence, \(H(\epsilon_{1}^{k})\geq\frac{1}{443}f(r_{1}^{k})\). By Lemma 3(A), all indices from \(k+1\) onward are saturated, so applying Lemma 11 \(n-k\) times, once per subsequent index, results in \(H(\epsilon_{1}^{n})\geq\frac{1}{443}f(r_{1}^{n})\). If no saturation occurs at all, Lemma 5 applied to \(\epsilon_{1}^{n}\) gives the claim directly. ## VI Experiments ### _Baseline Schemes_ We first describe several baseline DP techniques and discuss why they are not optimal in HDP. Supporting experiments follow. **Uniformly enforce \(\epsilon_{1}\)-DP (UNI):** One approach to this problem is to offer \(\epsilon_{1}\) (the lowest value in \(\mathbf{\epsilon}\)) privacy to all the datapoints and use the minimax estimator, i.e., the sample mean with added Laplace noise, to get an error of \(O(1/n+1/(n\epsilon_{1})^{2})\). UNI can be arbitrarily worse than ADPM: consider the case when only one datapoint has an extremely stringent privacy requirement while the rest of the data is public. **Sampling Mechanism (SM) [45]:** Let \(t=\|\mathbf{\epsilon}\|_{\infty}\); then sample the \(i\)-th datapoint independently with probability \((e^{\epsilon_{i}}-1)/(e^{t}-1)\) and apply any homogeneous \(t\)-DP algorithm on the sub-sampled dataset. [45] proved this mechanism is \(\mathbf{\epsilon}\)-DP. For our case, take the sample mean of the sub-sampled dataset and add Laplace noise with variance \(2/(N_{s}t)^{2}\), where \(N_{s}\) is the realized number of sub-sampled datapoints. However, this approach can easily be seen to be suboptimal: when one datapoint is public, the SM algorithm disregards the rest of the private data, as the sketch below illustrates. 
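To make this failure mode concrete, here is a minimal sketch of SM specialized to mean estimation as described above. The code is our own; note that when a single point is effectively public, \(t=\|\mathbf{\epsilon}\|_{\infty}\) is very large and the sampling probabilities of all strictly private points collapse toward zero.

```python
import numpy as np

def sampling_mechanism(eps, x, rng=None):
    """SM of Jorgensen et al., specialized to mean estimation (a sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    eps = np.asarray(eps, dtype=float)
    x = np.asarray(x, dtype=float)
    t = eps.max()
    p = np.expm1(eps) / np.expm1(t)   # keep point i w.p. (e^eps_i - 1)/(e^t - 1)
    keep = rng.random(len(x)) < p
    n_s = int(keep.sum())             # realized sub-sample size N_s
    if n_s == 0:
        return 0.0
    # Laplace noise with variance 2/(N_s * t)^2, i.e., scale 1/(N_s * t)
    return float(x[keep].mean() + rng.laplace(scale=1.0 / (n_s * t)))
```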
**Local Differentially Private Estimator (LDPE):** For lack of a good estimator for HDP, we also consider a Local-DP estimator. Each user with privacy requirement \(\epsilon_{i}\) adds \(\text{Laplace}(1/\epsilon_{i})\) noise to their data and sends it to the central server. The central server produces a weighted combination of the noisy data as the estimator. One can take the optimal linear combination of these noisy values to minimize the mean squared error if the variance of the unknown distribution is known (see [46] for details); we take the worst-case variance as a proxy in our problem setting. In general, the Local-DP setting adds more noise to obtain privacy from the central server; this is a known shortcoming of the Local-DP model, so LDPE should not be expected to be on par with the other baseline techniques. **Fallah et al.'s mean estimator (FME) [57]:** For brevity, we direct the readers to [57, Theorem 1] for details on the algorithm; we refer to it as FME in the rest of this work. One of the shortcomings of this method is that it assumes \(\|\mathbf{\epsilon}\|_{\infty}\leq 1\) for its theoretical guarantees. For our experiments, we still use the algorithm as stated even when \(\|\mathbf{\epsilon}\|_{\infty}>1\). Even when \(\|\mathbf{\epsilon}\|_{\infty}\leq 1\), FME may perform orders of magnitude worse than ADPM (see Table I). Further, if the example in Remark 2 is considered, then FME obtains a worst-case MSE of the order \(10^{-3}\), as compared to ADPM's MSE of order \(10^{-4}\). **Proportional DP (PropDPM):** We refer to the affine estimator with weights proportional to the \(\mathbf{\epsilon}\) vector and appropriate Laplace noise as PropDPM. This estimator also suffers from the problem of disregarding private data if even one of the datapoints is public, as pointed out in Remark 2. ### _Experiments_ We run two types of experiments comparing ADPM to the other algorithms under heterogeneous DP constraints. We consider two cases for \(\mathbf{\epsilon}\) of dimension \(n=10^{3}\): high variance and low variance in \(\mathbf{\epsilon}\). The low variance case is obtained by uniformly sampling \(\log\mathbf{\epsilon}\) in \([-3,-2]\); independently, the high variance case corresponds to sampling \(\log\mathbf{\epsilon}\) in \([-4,2]\). Keeping the sampled \(\mathbf{\epsilon}\) fixed, the average of the squared errors was taken over 20K simulations under a \(\text{Beta}(2,3)\) distribution on \(\mathcal{X}\). The results are presented in Table I. It is not surprising that UNI, PropDPM and ADPM enjoy similar performance in the low variance regime, while they diverge in the higher variance regime. LDPE performing poorly in the low variance regime is also not surprising, as explained earlier; however, in the high variance regime, LDPE performs decently. It should be noted that in the low variance regime, the realization of \(\mathbf{\epsilon}\) satisfied \(\|\mathbf{\epsilon}\|_{\infty}\leq 1\), the condition required by FME. However, FME is still two orders of magnitude worse than ADPM. ### _On Lemma 5_ In order to check the tightness of the constant derived in Lemma 5, we plot various values of \(\log H(\epsilon_{1}^{n})\) against \(\log f(r_{1}^{n})\) in Figure 2. Here \(n\) is randomly chosen, and \(\epsilon_{i+1}\) is sampled from \([\epsilon_{i},4\|\epsilon_{1}^{i}\|_{1}f(\epsilon_{1}^{i}))\) for \(1\leq i<n\). The resulting values of \(\log H(\epsilon_{1}^{n})\) and \(\log f(r_{1}^{n})\) are shown in a scatter plot. 
The figure also includes the lines \(y=x-\log c\) for \(c\in\{4,443\}\). While Figure 2 is not conclusive, it empirically suggests that the constant \(443\) in Lemma 5 can be improved to \(4\). ## VII Conclusion Bounded univariate mean estimation under heterogeneous privacy constraints is studied under the Central-DP model. We propose an efficient algorithm (ADPM) and prove the minimax optimality of ADPM up to constant factors. In order to do this, we provide a recursive solution to an optimization problem and a lower bound. We further show the order-equivalence of the lower and the upper bound. Experimentally, we confirm the superior performance of our algorithm. Further research directions include studying the problem for sub-Gaussian random variables and the multivariate case. ## Acknowledgments The authors thank Yigit Efe Erginbas and Justin Singh Kang for their valuable insights. Fig. 2: Scatter plot of \(\log H(\epsilon_{1}^{n})\) vs \(\log f(r_{1}^{n})\). The dotted green line plots \(y=x-\log 4\) and the dotted blue line plots \(y=x-\log 443\). ## Appendix A Proofs Lemma 12 is included for completeness (see [57, 61]). **Lemma 12** (Laplace Mechanism).: _The affine estimator \(M_{\mathbf{w}}(\mathbf{x})=\langle\mathbf{x},\,\mathbf{w}\rangle+L(\eta)\), where \(L(\eta)\) denotes \(\text{Laplace}(\eta)\) noise, is \((\mathbf{w}/\eta)\)-DP when \(\mathcal{X}=[-0.5,0.5]\)._ Proof.: We verify this by comparing the densities of the output of the algorithm on neighboring datasets. We also drop the subscript for the algorithm \(M_{\mathbf{w}}\). \[\frac{p(M(\mathbf{x})=s)}{p(M(\mathbf{x}^{\prime i})=s)}=\frac{\exp\{-|\langle\mathbf{x},\,\mathbf{w}\rangle-s|/\eta\}}{\exp\{-|\langle\mathbf{x}^{\prime i},\,\mathbf{w}\rangle-s|/\eta\}}\leq\exp\{|\langle\mathbf{x}^{\prime i},\,\mathbf{w}\rangle-\langle\mathbf{x},\,\mathbf{w}\rangle|/\eta\}\leq\exp\{w_{i}/\eta\}, \tag{23}\] where the last inequality in (23) follows from the fact that \(\langle\mathbf{x}^{\prime i},\,\mathbf{w}\rangle\) and \(\langle\mathbf{x},\,\mathbf{w}\rangle\) can differ by at most \(w_{i}\), since \(w_{i}\) is the coefficient of the \(i\)-th element and \(x_{i},x_{i}^{\prime i}\in[-0.5,0.5]\). **Lemma 13**.: _By the definition of \(\mathbf{\epsilon}\)-DP in (1), it follows that for all measurable sets \(S\subseteq\mathcal{Y}\),_ \[e^{-\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}\leq\mathbb{P}\{M(\mathbf{x})\in S\}\leq\begin{cases}e^{\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}\\ 1-e^{-\epsilon_{i}}+e^{-\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}\end{cases} \tag{24}\] _Further, it follows that_ \[|\mathbb{P}\{M(\mathbf{x})\in S\}-\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}|\leq 1-e^{-\epsilon_{i}}. \tag{25}\] Proof.: Note that the DP definition also implies \(e^{-\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}\leq\mathbb{P}\{M(\mathbf{x})\in S\}\). 
By applying the definition to \(S^{C}\) as well and combining the resulting conditions, one obtains \[\max\left\{e^{-\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\},\ 1-e^{\epsilon_{i}}+e^{\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}\right\}\leq\mathbb{P}\{M(\mathbf{x})\in S\}\] and \[\mathbb{P}\{M(\mathbf{x})\in S\}\leq\min\left\{e^{\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\},\ 1-e^{-\epsilon_{i}}+e^{-\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}\right\}.\] The lower bound \(1-e^{\epsilon_{i}}+e^{\epsilon_{i}}\mathbb{P}\{M(\mathbf{x}^{\prime i})\in S\}\leq\mathbb{P}\{M(\mathbf{x})\in S\}\) is dropped in (24); discarding a lower bound only weakens the statement, and the remaining bounds suffice to derive (25). The lemma gives a much stronger bound than the straightforward DP definition when \(\epsilon_{i}\) is large. Using (24), one can obtain (25). **Lemma 14**.: _For any \(k\in\{0,1,\ldots,n\}\),_ \[\|Q_{1}-Q_{2}\|_{TV}\leq 2\|P_{1}-P_{2}\|_{TV}\sum_{i=1}^{k}(1-e^{-\epsilon_{i}})+\sqrt{\frac{n-k}{2}D_{\text{KL}}(P_{1}\|P_{2})}.\] Proof.: We use a method similar to [57] to prove this, but obtain stronger results due to Lemma 13. Let \(\tilde{Q}\) be the distribution of the output of the \(\mathbf{\epsilon}\)-DP estimator \(M(\cdot)\) when the input dataset \(\mathbf{X}\) is drawn from the product distribution \(P_{1}^{k}P_{2}^{n-k}\). By the triangle inequality, \[\|Q_{1}-Q_{2}\|_{TV}\leq\|Q_{1}-\tilde{Q}\|_{TV}+\|\tilde{Q}-Q_{2}\|_{TV}.\] By the data processing inequality and Pinsker's inequality, \[\|Q_{1}-\tilde{Q}\|_{TV}\leq\|P_{1}^{n}-P_{1}^{k}P_{2}^{n-k}\|_{TV}\leq\sqrt{\frac{n-k}{2}D_{\text{KL}}(P_{1}\|P_{2})}.\] We remind the readers that in the DP literature, the expression \(\mathbb{P}\{M(\mathbf{X})\in A\}\) generally refers to the conditional probability \(\mathbb{P}\{M(\mathbf{X})\in A|\mathbf{X}\}\), as pointed out in Remark 1. With this convention, consider the term \(\|\tilde{Q}-Q_{2}\|_{TV}\): for any measurable \(A\), \[|\tilde{Q}(A)-Q_{2}(A)|=\Big{|}\mathbb{E}_{\mathbf{X}\sim P_{1}^{k}P_{2}^{n-k}}\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{E}_{\mathbf{X}\sim P_{2}^{n}}\mathbb{P}\{M(\mathbf{X})\in A\}\Big{|}=\Big{|}\sum_{i=1}^{k}\big{(}\mathbb{E}_{\mathbf{X}\sim P_{1}^{i}P_{2}^{n-i}}\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{E}_{\mathbf{X}\sim P_{1}^{i-1}P_{2}^{n-i+1}}\mathbb{P}\{M(\mathbf{X})\in A\}\big{)}\Big{|}\leq\sum_{i=1}^{k}\big{|}\mathbb{E}_{\mathbf{X}\sim P_{1}^{i}P_{2}^{n-i}}\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{E}_{\mathbf{X}\sim P_{1}^{i-1}P_{2}^{n-i+1}}\mathbb{P}\{M(\mathbf{X})\in A\}\big{|}.\] Let \(\mathbf{X}^{\prime i}\) denote another dataset which differs from \(\mathbf{X}\) at only the \(i\)-th index; this index can take an arbitrary value independent of \(\mathbf{X}\). 
Then, note that \[\mathbb{E}_{\mathbf{X}\sim P_{1}^{i}P_{2}^{n-i}}\mathbb{P}\{M(\mathbf{X}^{\prime i})\in A\}-\mathbb{E}_{\mathbf{X}\sim P_{1}^{i-1}P_{2}^{n-i+1}}\mathbb{P}\{M(\mathbf{X}^{\prime i})\in A\}=0\quad a.s.,\] since the two product distributions differ only in the \(i\)-th coordinate, on which \(\mathbf{X}^{\prime i}\) does not depend. Thus, \[|\tilde{Q}(A)-Q_{2}(A)|\leq\sum_{i=1}^{k}\Big{|}\mathbb{E}_{\mathbf{X}\sim P_{1}^{i}P_{2}^{n-i}}\big{[}\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{P}\{M(\mathbf{X}^{\prime i})\in A\}\big{]}-\mathbb{E}_{\mathbf{X}\sim P_{1}^{i-1}P_{2}^{n-i+1}}\big{[}\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{P}\{M(\mathbf{X}^{\prime i})\in A\}\big{]}\Big{|}.\] Let \(\mathbf{X}_{-i}\) denote the random vector \(\mathbf{X}\) except at the \(i\)-th position, i.e., \(\mathbf{X}_{-i}\sim P_{1}^{i-1}P_{2}^{n-i}\) means that the elements of \(\mathbf{X}\) in positions \(1\) to \(i-1\) are i.i.d. \(P_{1}\) while those in positions \(i+1\) to \(n\) are i.i.d. \(P_{2}\). Therefore, we get \[|\tilde{Q}(A)-Q_{2}(A)|\leq\sum_{i=1}^{k}\bigg{|}\mathbb{E}_{\mathbf{X}_{-i}\sim P_{1}^{i-1}P_{2}^{n-i}}\Big{[}\mathbb{E}_{X_{i}\sim P_{1}}[\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{P}\{M(\mathbf{X}^{\prime i})\in A\}]-\mathbb{E}_{X_{i}\sim P_{2}}[\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{P}\{M(\mathbf{X}^{\prime i})\in A\}]\Big{]}\bigg{|}\] \[\leq\sum_{i=1}^{k}\mathbb{E}_{\mathbf{X}_{-i}\sim P_{1}^{i-1}P_{2}^{n-i}}\Big{|}\mathbb{E}_{X_{i}\sim P_{1}}[\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{P}\{M(\mathbf{X}^{\prime i})\in A\}]-\mathbb{E}_{X_{i}\sim P_{2}}[\mathbb{P}\{M(\mathbf{X})\in A\}-\mathbb{P}\{M(\mathbf{X}^{\prime i})\in A\}]\Big{|}\] \[\leq\sum_{i=1}^{k}\mathbb{E}_{\mathbf{X}_{-i}\sim P_{1}^{i-1}P_{2}^{n-i}}\big{[}2(1-e^{-\epsilon_{i}})\|P_{1}-P_{2}\|_{TV}\big{]} \tag{26}\] \[=2\|P_{1}-P_{2}\|_{TV}\sum_{i=1}^{k}(1-e^{-\epsilon_{i}}).\] In (26), we used Lemma 13 (specifically (25), which bounds the inner bracket by \(1-e^{-\epsilon_{i}}\)) and the fact that for a bounded function \(|f(x)|\leq C\), we have \(|\mathbb{E}_{X\sim P_{1}}[f(X)]-\mathbb{E}_{X\sim P_{2}}[f(X)]|\leq 2\|P_{1}-P_{2}\|_{TV}C\). **Lemma 15**.: _We have \(\epsilon_{1}<4\epsilon_{1}f(\epsilon_{1})\); moreover, if \(\epsilon_{k-1}\leq\epsilon_{k}<4\|\epsilon_{1}^{k-1}\|_{1}f(\epsilon_{1}^{k-1})\), then \(\epsilon_{k}<4\|\epsilon_{1}^{k}\|_{1}f(\epsilon_{1}^{k})\)._ Proof.: Note that \(4\epsilon_{1}f(\epsilon_{1})=\epsilon_{1}+\frac{8}{\epsilon_{1}}>\epsilon_{1}\). We also have \[\epsilon_{k}<4\|\epsilon_{1}^{k-1}\|_{1}f(\epsilon_{1}^{k-1})\implies\epsilon_{k}\|\epsilon_{1}^{k-1}\|_{1}<\|\epsilon_{1}^{k-1}\|_{2}^{2}+8\implies\epsilon_{k}\|\epsilon_{1}^{k}\|_{1}<\|\epsilon_{1}^{k}\|_{2}^{2}+8\implies\epsilon_{k}<4\|\epsilon_{1}^{k}\|_{1}f(\epsilon_{1}^{k}),\] where the middle implication follows by adding \(\epsilon_{k}^{2}\) to both sides. 
2302.13326
Technical constraints on interstellar interferometry and spatially resolving the pulsar magnetosphere
Scintillation of pulsar radio signals caused by the interstellar medium can in principle be used for interstellar interferometry. Changes of the dynamic spectra as a function of pulsar longitude were in the past interpreted as having spatially resolved the pulsar magnetosphere. Guided by this prospect we used VLBI observations of PSR B1237+25 with the Arecibo and Green Bank radio telescopes at 324 MHz and analyzed such scintillation at separate longitudes of the pulse profile. We found that the fringe phase characteristics of the visibility function changed quasi-sinusoidally as a function of longitude. Also, the dynamic spectra from each of the telescopes shifted in frequency as a function of longitude. Similar effects were found for PSR B1133+16. However, we show that these effects are not signatures of having resolved the pulsar magnetosphere. Instead the changes can be related to the effect of low-level digitizing of the pulsar signal. After correcting for these effects the frequency shifts largely disappeared. Residual effects may be partly due to feed polarization impurities. Upper limits for the pulse emission altitudes of PSR B1237+25 would likely be well below the pulsar light cylinder radius. In view of our analysis we think that observations with the intent of spatially resolving the pulsar magnetosphere need to be critically evaluated in terms of these constraints on interstellar interferometry.
M. V. Popov, N. Bartel, A. S. Andrianov, M. S. Burgin, E. N. Fadeev, A. G. Rudnitskiy, T. V. Smirnova, V. A. Soglasnov, V. A. Zuga
2023-02-26T14:44:08Z
http://arxiv.org/abs/2302.13326v2
Technical constraints on interstellar interferometry and spatially resolving the pulsar magnetosphere ###### Abstract Scintillations of pulsar radio signals caused by the interstellar medium can in principle be used for interstellar interferometry. Changes of the dynamic spectra as a function of pulsar longitude were in the past interpreted as having spatially resolved the pulsar magnetosphere. Guided by this prospect we used VLBI observations of PSR B1237+25 with the Arecibo and Green Bank radio telescopes at 324 MHz and analyzed such scintillations at separate longitudes of the pulse profile. We found that the fringe phase characteristics of the visibility function changed quasi-sinusoidally as a function of longitude. Also, the dynamic spectra from each of the telescopes shifted in frequency as a function of longitude. Similar effects were found for PSR B1133+16. However, we show that these effects are not signatures of having resolved the pulsar magnetosphere. Instead the changes can be related to the effect of low-level digitizing of the pulsar signal. After correcting for these effects the frequency shifts largely disappeared. Residual effects may be partly due to feed polarization impurities. In view of our analysis we think that observations with the intent of spatially resolving the pulsar magnetosphere need to be critically evaluated in terms of these constraints on interstellar interferometry.

scattering -- pulsars: individual B1237+25 -- techniques

M. V. Popov, N. Bartel, A. S. Andrianov, M. S. Burgin, E. N. Fadeev, A. G. Rudnitskiy, T. V. Smirnova, V. A. Soglasnov, V. A. Zuga

## 1 Introduction

Scattering of radio waves by inhomogeneities of the interstellar plasma causes angular broadening of the source image, distortion of radio spectra, and scintillations. Refractive scintillations are slow and occur on time scales of \(t_{\rm ref}\), and diffractive scintillations are fast and occur on time scales of \(t_{\rm dif}\), with corresponding spatial scales of \(\rho_{\rm ref}\) and \(\rho_{\rm dif}\). Fluctuations in the density of free electrons in the interstellar plasma produce variations in the index of refraction. Pulsars observed through such a medium produce a diffractive pattern in the plane of the observer. The spatial scale of such a diffractive pattern treated as a lens is about \(\lambda/\theta_{\rm sc}\), where \(\lambda\) is the observing wavelength and \(\theta_{\rm sc}\) is the angular size of the scattering disk. If the angular size of the diffractive pattern at the Earth is less than the angular size of the pulsar, then emission regions separated by a light-cylinder radius, \(r_{\rm LC}\), will produce scintillations that are independent. Using interstellar scintillation it is possible in principle to measure not only the size of a radio source with a high angular resolution but even the spatial separation, or upper limits of it, of the emission sources. Early discussions and studies were reported by Scheuer (1968) and Lovelace (1970). The theory of interstellar plasma lensing was further developed by, e.g., Rickett (1977); Cordes et al. (1986); Gwinn et al. (1998). First attempts to resolve the pulsar magnetosphere by comparing scintillation patterns of different pulse profile components were made by Backer (1975) and Cordes et al. (1983).
No evidence of independent scintillation was found, resulting in upper limits of the corresponding emission regions of as small as \(1.2\times r_{\rm LC}\) and even \(10^{3}\,\) km. The first phase-sensitive attempts at resolving the pulsar magnetosphere were made by Bartel et al. (1985) with VLBI and gating. These authors discussed the effects of polarization impurities of the feeds on measurements of phase as a function of longitude and therefore on the separation of emission regions. They derived stringent limits for the feeds so that the bias due to the changing polarization characteristics across the pulsar's profile would be reduced for sensitive fringe phase measurements along pulsar longitude. Again, no evidence for having resolved the pulsar magnetosphere was found. The first apparently clear evidence for having resolved with interstellar interferometry the magnetosphere of a pulsar was reported by Wolszczan & Cordes (1987) by having made use of occasionally occurring refractive scintillation. A strong refraction event can split the image into two or multiple subimages. Wolszczan & Cordes (1987) observed periodic structure in the dynamic spectrum of PSR B1237+25 with the Arecibo telescope (AR) at 430 MHz. They interpreted that structure as an interference pattern formed when two radiation beams having different paths through the interstellar medium (ISM) intersect in the observer plane. They detected a quasi-sinusoidal smooth fringe phase variation of the periodic structure in the dynamic spectra as a function of pulse longitude and estimated a typical transverse separation between the emission regions of \(\sim 10^{3}\) km. Their measurements also have repercussions for the location of the emitter and the geometry of the magnetic field lines. Very similar results were reported for PSR B1133+16 by Gupta et al. (1999) using the Ooty radio telescope at 327 MHz. They also used multiple imaging during a refractive event and again found quasi-sinusoidal fringe phase variations across the pulse profile. They inferred a separation of the emission regions of the leading and trailing parts of a pulse of \(\geq 3\times 10^{7}\) m corresponding to an emission height of \(\geq 2.6\times 10^{3}\) km. Further apparently successful attempts to spatially resolve the emission region were reported by several authors using diffractive and refractive scintillation. Smirnova & Shishov (1989) analyzed dynamic diffractive scintillation spectra as a function of longitude and found through cross-correlation analysis lags in time and frequency indicating separations of the corresponding emission regions of \(3\times 10^{2}\,\)km, which for a dipole magnetic field corresponds to \(0.08\times r_{\rm LC}\). In a similar analysis, using however the cross-correlation peak degradation as a function of longitude separation, Smirnova (1992) and Smirnova et al. (1996) found for some other pulsars, including PSR B1237+25, in contrast, emission heights close to \(r_{\rm LC}\). Johnson et al. (2012) used diffractive scintillation of the Vela pulsar and measured the size of the emitter to be \(\leq 4\) km and its height to be \(\leq 3.4\times 10^{2}\) km. In contrast, much larger sizes and separations, of approximately \(r_{\rm LC}\), were found with diffractive scintillation for the pulse and interpulse of the Crab pulsar (Main et al., 2021). Also, Pen et al. (2014), following Brisken et al.
(2010) reduced observations of PSR B0834+06 at 327 MHz using VLBI imaging of its scattering speckle pattern to measure the changing phase response on the scattering screen as a function of pulse longitude. The authors found a phase change of \(\sim 0.005\) rad and interpreted it as an apparent motion of the emission region of \(\sim 1000\) km s\({}^{-1}\), effectively performing high-precision astrometry with a resolution of 50 picoarcseconds. ## 2 Observations and Primary Data Reduction Inspired by the results from interstellar interferometry we used VLBI observations of the multi-component PSR B1237+25 with AR and GB (Green Bank) at 324 MHz with a bandwidth, B=16 MHz at right (RCP) and left-circular (LCP) polarization. The observations were made in the context of the Radioastron space-VLBI science program AO-5. We used two observing sessions separated by two months, each 2 h long. General information about the sessions is given in Table 1. Both sessions consisted of six 1160 s long recording scans separated by 30 s gaps. During each scan, 840 pulsar periods, P, with P=1.38 s were used for our analysis. The data were processed at Astro Space Center with the ASC software correlator (Likhachev et al., 2017) with gating and incoherent dedispersion applied. We used 512 channels at the correlator, providing a frequency resolution of 31.25 kHz. First, we computed the average pulse profile and dynamic spectrum for AR for each of the two observing sessions. The pulse profiles are shown in Figures 1a and 1b. The width of the pulse window is \(\sim 65\) ms, corresponding to a longitude range of \(\sim 16\) deg. Note, the slight false decrease of intensity at the leading and trailing part of the profile. The dynamic spectra are shown in Figures 1c and 1d. The spectra are similar to what would be expected under regular, diffractive, scattering conditions. No quasi-periodic modulation as a consequence of multiple imaging due to refraction is observed. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{1}{c}{ Code} & Date & Time & Stations & \(\Delta f_{1/e}\) & \(t_{\rm dif}\) & \(\Delta\tau_{1/2}\) \\ & (yyyy mm dd) & (hh:mm -hh:mm) & & (MHz) & (s) & (ns) \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline rags29c & 2017 12 22 & 10:00 - 12:00 & AR, GB, WB & \(1.70-4.40\) & \(345\pm 20\) & \(65\pm\ 5\) \\ rags29j & 2018 02 26 & 05:30 - 07:30 & AR, GB & \(0.71-1.22\) & \(250\pm 10\) & \(260\pm 10\) \\ \hline \end{tabular} Note. – Columns are as follows: (1) Observing code. (2) Date of observations. (3) Time span of observations for six scans. Each scan is 1160 s long which corresponds to 840 pulsar periods. (4) Radio antennas scheduled for the observations, AR – Arecibo, GB – Robert C. Byrd Green Bank Telescope, WB – Westerbork. We report here only results for AR and GB in VLBI mode and single antenna mode. (5) Decorrelation bandwidth as half-width at 1/e of the maximum of the frequency section of the 2-dim autocorrelation function (ACF) at zero time lag measured with AR for window, w\({}_{4}\), of the pulse profile (see Figure 1). The ranges refer to the minimum and maximum values for the six scans for each of the two epochs. (6) Diffractive scintillation time. (7) Half-width at half maximum of the visibility magnitude obtained in sub-window w\({}_{1}\) for one scan. Uncertainties in (6) and (7) are statistical standard errors. \end{table} Table 1: List of observations. 
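For readers who want to reproduce the \(\Delta f_{1/e}\) estimates in Table 1, a minimal numpy sketch of the procedure is given below: take the frequency section of the ACF of a dynamic spectrum and read off the half-width at 1/e of the maximum. The FFT-based (circular) ACF and the toy dynamic spectrum are our own simplifications for illustration.

```python
import numpy as np

def decorrelation_bandwidth(dyn, df_khz):
    # dyn: dynamic spectrum S(f, t), shape (n_freq, n_time)
    d = dyn - dyn.mean()
    # frequency-lag ACF at zero time lag (circular, via FFT along frequency)
    acf = np.fft.irfft(np.abs(np.fft.rfft(d, axis=0)) ** 2, d.shape[0], axis=0)
    cut = acf.mean(axis=1)
    cut /= cut[0]
    lag = np.argmax(cut < 1.0 / np.e)   # first lag falling below 1/e
    return lag * df_khz                 # half-width at 1/e, in kHz

# toy spectrum: scintles ~ 30 channels wide, 512 channels x 210 time samples
rng = np.random.default_rng(2)
kernel = np.exp(-np.arange(512) / 30.0)
noise = rng.standard_normal((512, 210))
dyn = np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, noise) ** 2
print(f"Delta f_1/e ~ {decorrelation_bandwidth(dyn, 31.25):.0f} kHz")
```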
Second, we divided the pulse window into nine sub-windows, \(\rm w_{k}\), with \(0\leq k\leq 8\), each \(\sim 7\) ms wide (see Figures 1a and 1b). We also selected three off-pulse sub-windows, \(\rm w_{k}\), with \(9\leq k\leq 11\), of the same duration. Then we computed with the correlator for each window the complex cross-spectrum for the baseline AR-GB, \(S_{k}^{AR-GB}(f_{i},t_{j})\), and the auto-spectrum, \(S_{k}(f_{i},t_{j})\) (dynamic spectrum), for each of the two radio telescopes. Each dynamic spectrum consists of \(N_{\rm f}N_{\rm t}\) values, with \(1\leq i\leq N_{\rm f}\) and \(1\leq j\leq N_{\rm t}\), where \(N_{\rm f}\) is the number of frequency channels covering the frequency range in the bandpass from 316 to 332 MHz, and \(N_{\rm t}\) the number of spectra in a given set of observations. We corrected the dynamic spectrum \(S_{k}(f_{i},t_{j})\) in each sub-window, \(\rm w_{k}\), for the background baseline as follows: \[S_{k}(f_{i},t_{j})=S_{k}^{\rm on}(f_{i},t_{j})-S^{\rm off}(f_{i},t_{j}). \tag{1}\] Here, \(S_{k}^{\rm on}\) is the spectrum obtained in each of the sub-windows, \(\rm w_{0}\) to \(\rm w_{8}\), and \(S^{\rm off}\) is the spectrum averaged over the off-pulse windows, \(\rm w_{9}\) to \(\rm w_{11}\). The individual spectra were averaged over four pulse periods to smooth out the intensity fluctuations from pulse to pulse, reducing \(N_{\rm t}\) to 210 but keeping \(N_{\rm f}=512\).

Third, we computed the two-dimensional cross-correlation functions between the dynamic spectra of each of the sub-windows and the dynamic spectrum of the sub-window in the center of the pulse, \(\rm w_{4}\), which served as a reference: \[CCF_{4k}(\Delta f,\Delta t)=\frac{1}{(N_{\rm f}-i)(N_{\rm t}-j)}\sum_{i^{\prime}=1}^{N_{\rm f}-i}\sum_{j^{\prime}=1}^{N_{\rm t}-j}\Delta S_{4}(f_{i^{\prime}},t_{j^{\prime}})\Delta S_{k}(f_{i^{\prime}}+\Delta f,t_{j^{\prime}}+\Delta t). \tag{2}\] Here \(\Delta f=\frac{B}{N_{\rm f}}i\) and \(\Delta t=4Pj\) are frequency and time lags, and \(\Delta S_{k}(f,t)=S_{k}(f,t)-\langle S_{k}(f,t)\rangle\) with \[\langle S_{k}(f,t)\rangle=\frac{1}{N_{\rm f}N_{\rm t}}\sum_{i=1}^{N_{\rm f}}\sum_{j=1}^{N_{\rm t}}S_{k}(f_{i},t_{j})\,. \tag{3}\] After the primary data reduction we proceeded to the main analysis.

## 3 Phase and frequency shifts as a function of pulsar longitude

### Phase shifts of the VLBI visibility functions

For every single pulse and every sub-window, with \(0\leq k\leq 11\), we computed the complex visibility function \(V_{k}(\tau)\) as the inverse Fourier transform of \(S_{k}^{AR-GB}(f)\), with the sampling interval in interferometer delay, \(\tau\), being equal to 31.25 ns. We used only visibilities for the AR-GB baselines. As an example we give the average of the visibility function magnitude for one scan and for window \(\rm w_{1}\) in Figure 2. Then we investigated the phases, \(\varphi_{k}\), of \(V_{k}(\tau)\) relative to \(\varphi_{4}\) of \(V_{4}(\tau)\) as a function of longitude.
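A sketch of the \(V_{k}(\tau)\) computation in Python is shown below. With 512 channels of 31.25 kHz, a plain inverse FFT samples the delay at \(1/B=62.5\) ns; we assume a factor-of-two zero padding to reach the 31.25 ns sampling quoted above, which is our guess at the implementation detail, as is the sign convention of the toy spectrum.

```python
import numpy as np

DF = 31.25e3  # channel width, Hz

def visibility(cross_spec, pad=2):
    # V_k(tau) as the inverse FFT of the complex cross-spectrum S_k(f);
    # zero padding by `pad` halves the delay sampling interval
    n = pad * len(cross_spec)
    v = np.fft.fftshift(np.fft.ifft(cross_spec, n))
    tau = np.fft.fftshift(np.fft.fftfreq(n, d=DF))  # delay axis, seconds
    return tau, v

# toy cross-spectrum with a 100 ns delay (sign chosen so the peak is at +100 ns)
f = np.arange(512) * DF
rng = np.random.default_rng(3)
s = np.exp(-2j * np.pi * f * 100e-9)
s += 0.1 * (rng.standard_normal(512) + 1j * rng.standard_normal(512))
tau, v = visibility(s)
print(f"delay sampling: {(tau[1] - tau[0]) * 1e9:.2f} ns")    # 31.25 ns
print(f"peak delay    : {tau[np.argmax(np.abs(v))] * 1e9:.1f} ns")
```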
We selected only strong pulses with the signal-to-noise ratio (SNR) in both the selected sub-window, \(\rm w_{k}\), and sub-window \(\rm w_{4}\) being \(>\)5 with respect to the off-pulse window, \(\rm w_{11}\). For these pulses we computed for each sub-window in the pulse profile the phase relative to the phase in sub-window \(\rm w_{4}\), \((\varphi_{k}-\varphi_{4})\). We analyzed data from six scans and obtained for every scan and every window in general several hundreds of phase differences from our strong pulses. As an example, we present the average phase differences for the leading \((\varphi_{0}-\varphi_{4})\) and the trailing \((\varphi_{8}-\varphi_{4})\) sub-windows as a function of \(\tau\) in Figure 2. In the approximately \(\pm 125\) ns delay window the curves of phase differences can be approximated by straight lines with a fitted slope of \(\frac{d(\varphi_{k}-\varphi_{4})}{d\tau}\) for k=0 and 8. As can be seen in Figure 2, the slopes are very different for the leading and trailing windows relative to window 4. In general, these slopes vary across the pulse window.

Figure 1: Pulse profiles of PSR B1237+25 observed at RCP (panels a, b), dynamic spectra of diffractive scintillation at RCP (panels c, d), and rate of change of the visibility phases as a function of pulsar longitude for both senses of polarization (panels e, f), each averaged over six scans. Left and right panel columns are for our two observing epochs, 2017 December 22 and 2018 February 26, respectively. Pulse profiles in (a, b) are computed as the visibility magnitude obtained by the correlator at zero baseline for AR with the off-pulse levels subtracted. The vertical dashed lines indicate the nine on-pulse sub-windows, w\({}_{0}\) to w\({}_{8}\), used as gates for the correlation and for our analysis. Dynamic spectra in (c, d) are the averages for AR over the full on-pulse windows and shown on a normalized linear gray scale. The black and white regions represent the maxima and minima of the power density, respectively. Phase rates in (e, f) are the derivatives of the AR-GB VLBI visibility phases with respect to delay, \(\frac{d(\varphi_{k}-\varphi_{4})}{d\tau}\), for the nine pulse windows with \(0\leq k\leq 8\). Uncertainties are standard errors. They were derived from the 1\(\sigma\) uncertainty from the least-squares fit of the phase rates in the approximately \(\pm\) 125 ns central delay region for each of the six scans and then divided by \(\sqrt{6}\).

In Figures 1e and 1f we plot the values of the varying slopes and their standard errors from the fit for each sub-window with respect to the value for window \(\rm w_{4}\), for the two polarizations and the two days of observations. The variations are smooth, highly significant, and appear to be quasi-sinusoidal. The patterns in Figures 1e and 1f are very similar for the two polarizations and for the two days. We note that our pattern of the VLBI phase rate versus pulse longitude is also very similar to the comparable pattern of phase versus longitude presented for PSR B1237+25 by Wolszczan & Cordes (1987) and for PSR B1133+16 by Gupta et al. (1999). However, we will show that in our case this pattern is not an indication of having resolved the pulsar magnetosphere. The variation of the derivative of phase along longitude can be converted to a frequency shift through the relation \(|\Delta f|=\frac{1}{2\pi}|\frac{d\varphi}{d\tau}|\).
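The conversion from a fitted phase slope to a frequency shift can be sketched as follows; the unwrapping assumption and the \(\pm 125\) ns window mirror the procedure described above, while the synthetic input slope of 2.65 rad/\(\mu\)s is chosen to reproduce the \(\sim\)420 kHz shift quoted in the next paragraph.

```python
import numpy as np

def phase_rate_to_shift(tau_ns, dphase_rad, window_ns=125.0):
    # least-squares slope d(phi_k - phi_4)/d tau in the central delay window,
    # then |Delta f| = |slope| / (2 pi); phases assumed already unwrapped
    m = np.abs(tau_ns) <= window_ns
    slope = np.polyfit(tau_ns[m] * 1e-9, dphase_rad[m], 1)[0]   # rad / s
    return slope, abs(slope) / (2.0 * np.pi)                    # Hz

tau = np.arange(-10, 11) * 31.25     # delay samples, ns
dphi = 2.65e-3 * tau                 # a 2.65 rad/us slope, in rad
slope, df = phase_rate_to_shift(tau, dphi)
print(f"slope {slope / 1e6:.2f} rad/us -> |Delta f| {df / 1e3:.0f} kHz")  # ~422 kHz
```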
From, e.g., Figure 1e we obtain the total change of the LCP phase derivative across the profile from w\({}_{0}\) to w\({}_{8}\) of \(\sim 2.65\,\mathrm{rad}/\mu s\) which, with the shifting property of the Fourier transform, corresponds to a shift in frequency of the dynamic spectra to lower frequencies by an amount of \(\sim\)-420 kHz.

### Frequency shifts of the dynamic spectra

We can also see the frequency shift in our dynamic spectra as a function of longitude for each of the two telescopes separately. In Figure 3 we show on the left side the dynamic spectra in windows w\({}_{0}\) (top panel) and w\({}_{8}\) (bottom panel). The latter one is slightly but clearly shifted toward lower frequencies with respect to the former.

Figure 2: Upper panel: Average AR-GB visibility magnitude as a function of interferometer delay for PSR B1237+25 obtained in sub-window w\({}_{1}\) for one scan (10:00) on 2017 December 22. The solid line shows a Lorentzian fit to the data. It has a half-width-at-half-maximum (HWHM) of 65 ns. Lower panel: Corresponding average differences of the visibility phases between the leading (w\({}_{0}\)) and trailing (w\({}_{8}\)) sub-windows relative to the phases for sub-window w\({}_{4}\). The phases were corrected for 2\(\pi\)-ambiguities. Only RCP pulses with the highest SNR were used. The tilted lines are least-squares fits to the aligned phases in the central region of the visibility magnitude curve. Uncertainties are smaller than or approximately equal to the symbol sizes. Phases outside the central region have much larger errors.

A more detailed presentation of the frequency shift is obtained with the \(CCF_{4k}(\Delta f,\Delta t)\) of the dynamic spectra magnitudes \(S(f,t)\). First, we determined the decorrelation bandwidth, \(\Delta f_{1/e}\), as the lag at 1/e of the maximum from the frequency cross-section of the autocorrelation function, \(CCF_{44}\), of the dynamic spectra in w\({}_{4}\). We list the range of values for the six scans for each of the two observing epochs in Table 1 and list the individual values in Table 2. Second, we determined the frequency shift of the dynamic spectra in each of the sub-windows relative to the spectrum in w\({}_{4}\). For this purpose we used the frequency sections of \(CCF_{4k}(\Delta f,\Delta t)\) for \(\Delta t=0\). As an example we take the dynamic spectra from w\({}_{0}\) and w\({}_{8}\) from Figure 3 (left panels) and cross-correlate them with respect to that of w\({}_{4}\). The corresponding functions \(CCF_{4k}\) with k=0 and 8 are plotted for AR and GB in Figure 4. A shift of spectra in w\({}_{8}\) to lower frequencies with respect to spectra in w\({}_{0}\) is clearly visible. For an accurate determination of the frequency shift we accounted for the asymmetry of the functions by fitting to them the function \(Y(x)\), with \(x=\Delta f\) and \(x_{0}\) as the frequency shift: \[Y(x)=A\exp\left(-\frac{\left|x-x_{0}\right|^{\alpha}}{B}\right)+C+D(x-x_{0}). \tag{4}\] Here \(C\) compensates for a possible baseline offset, \(D\) accounts for the asymmetry of the CCF, and \(\alpha\) for the shape of the CCF. We determined the frequency shifts, \(x_{0}\), for the dynamic spectra in each window relative to that in w\({}_{4}\) for each of the six scans and each of the observing epochs. The fitted values of \(\alpha\) are in the range of 1.5 to 1.8. The frequency shift values, \(x_{0}\), correspond to shifts of the spectra from the leading part of the pulse profile in w\({}_{0}\) to the trailing part in w\({}_{8}\).
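A minimal scipy sketch of the Eq. (4) fit is given below; the simulated CCF section, its noise level, and the starting values are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def ccf_model(x, A, x0, B, alpha, C, D):
    # Eq. (4): stretched exponential with offset and asymmetry terms
    return A * np.exp(-np.abs(x - x0) ** alpha / B) + C + D * (x - x0)

x = np.linspace(-4.0, 4.0, 161)          # frequency lag, MHz
rng = np.random.default_rng(4)
y = ccf_model(x, 1.0, -0.4, 1.5, 1.6, 0.02, 0.01)
y += 0.01 * rng.standard_normal(x.size)  # toy CCF section shifted by -0.4 MHz

p0 = [1.0, 0.0, 1.0, 1.5, 0.0, 0.0]      # alpha start inside the 1.5-1.8 range
popt, pcov = curve_fit(ccf_model, x, y, p0=p0)
print(f"x0 = {popt[1]:.3f} +/- {np.sqrt(pcov[1, 1]):.3f} MHz")
```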
Figure 3: The dynamic spectra in longitude windows w\({}_{0}\) (top) and w\({}_{8}\) (bottom) observed at AR in LCP on 2017 December 22 (rags29c, scan 1). The left panels show the dynamic spectra before correction. A shift of about 1 MHz toward lower frequencies from w\({}_{0}\) to w\({}_{8}\) can be seen. The right panels show the corresponding spectra after correction as described in Section 4. The spectrum in w\({}_{0}\) is copied from the left side for better comparison. The spectrum in w\({}_{8}\) shows that the frequency shift has largely disappeared.

We plot the frequency shifts with dashed lines for AR and GB for each of the scans for the first epoch in Figure 5 and plot the averages from all six scans in Figure 6. The frequency shifts for our second epoch could be better inspected by averaging over all six scans and the two polarizations and telescopes. For comparison we plot these averages in the right panel of the same figure. Figure 6 shows that the quasi-sinusoidal modulation of the frequency shift as a function of longitude is very similar for AR and GB on 2017 December 22. It is also similar to the shape of the modulation on 2018 February 26. However, the modulation amplitude is about 20 times smaller. We found that this decrease in amplitude is related to a decrease in \(\Delta f_{1/e}\). In Table 2 we list the magnitudes of the maximum frequency shifts as a function of \(\Delta f_{1/e}\) for each of the six scans of the two epochs and plot them in Figure 7. A weighted least-squares fit of the maximum frequency shift, \(M_{f-shift}\), to a power law, \(M_{f-shift}=A(\frac{\Delta f_{1/e}}{MHz})^{b}\), gives A=0.04\(\pm\)0.01 MHz and b=2.7\(\pm\)0.2 with scaled uncertainties so that the chi-square per degree of freedom, \(\chi^{2}_{\nu}=1\).

These characteristics are similar to what could be expected for a pulsar signal sweeping down in frequency with different pulse components illuminating scintles in the spectrum at different frequencies. The average profile of PSR B1237+25 consists of 5 components, as shown in Figure 1. With \(DM=9.3\) cm\({}^{-3}\) pc, it takes about 36 ms for the pulsar signal to sweep across the 16 MHz band from 332 to 316 MHz. The spectrum \(S_{0}\) corresponds to the case where the strong leading component comes to the high-frequency part of the bandpass, dominating the illumination and amplification of the diffraction features in the averaged dynamic spectrum between 332 and 328 MHz, thus causing a false visible shift of the spectrum to higher frequencies. The weight of the illumination and the amplification changes and becomes more balanced while the pulse travels through the bandpass, minimizing the frequency shift in the central region of our profile. However, the shift continues clearly further to lower frequencies when the strong trailing component of the pulse profile dominates the illumination and amplification of the dynamic spectrum between 324 and 318 MHz, causing a false visible shift of \(S_{8}\) to lower frequencies. This effect is already indicated in Figure 3 (left panels) for the dynamic spectra of w\({}_{0}\) and w\({}_{8}\) but can be more clearly seen in Figure 8.

Figure 5: Frequency shifts of dynamic spectra in w\({}_{0}\) to w\({}_{8}\) relative to the spectrum in w\({}_{4}\). Panels (a-g) present results for the six scans obtained in LCP at AR (red lines) and GB (green lines) on 2017 December 22. For scan 6 the GB data were not usable and are omitted. Each scan is 840 pulse periods long. Dashed lines correspond to original, uncorrected frequency shifts. Solid lines correspond to remaining relatively small frequency shifts after correction for distortion discussed in subsection 4.1. Uncertainties are 1\(\sigma\) statistical standard errors determined from the fit of eq. 4.

\begin{table} \begin{tabular}{c c c c} \hline \hline Date & Scan & \(\Delta f_{1/e}\) & Max. freq. shift \\ (yyyy mm dd) & (number) & (\(MHz\)) & (\(MHz\)) \\ \hline 2017 12 22 & 1 & \(3.57\pm 1.80\) & \(1.05\pm 0.01\) \\ & 2 & \(3.53\pm 1.76\) & \(1.30\pm 0.03\) \\ & 3 & \(2.50\pm 1.10\) & \(0.37\pm 0.01\) \\ & 4 & \(1.70\pm 0.60\) & \(0.22\pm 0.02\) \\ & 5 & \(3.50\pm 1.60\) & \(1.25\pm 0.03\) \\ & 6 & \(4.40\pm 2.50\) & \(0.95\pm 0.03\) \\ 2018 02 26 & 1 & \(1.15\pm 0.26\) & \(0.044\pm 0.008\) \\ & 2 & \(0.70\pm 0.14\) & \(0.015\pm 0.005\) \\ & 3 & \(1.02\pm 0.26\) & \(0.042\pm 0.007\) \\ & 4 & \(1.03\pm 0.26\) & \(0.068\pm 0.007\) \\ & 5 & \(1.20\pm 0.30\) & \(0.058\pm 0.005\) \\ & 6 & \(1.23\pm 0.32\) & \(0.052\pm 0.005\) \\ \hline \end{tabular} Note. – The decorrelation bandwidth, \(\Delta f_{1/e}\), and the magnitude of the maximum frequency shift, with standard errors, along pulse longitude for scans 1 to 6 for each of the two observing dates. For the computation of the error of \(\Delta f_{1/e}\), see Bartel et al. (2022). For errors of the frequency shift, see the caption of Figure 5. \end{table} Table 2: Decorrelation bandwidth and freq. shift

Figure 6: Left panel: The averages of the frequency shifts for 2017 December 22 for windows w\({}_{0}\) to w\({}_{8}\) relative to w\({}_{4}\) for the six scans in Figure 5. Right panel: The corresponding frequency shifts for 2018 February 26 averaged over all scans, the two polarizations and the two telescopes. Uncertainties are standard errors.

Also, the effect is particularly strong for AR, for which the bandpass is almost flat across the 16 MHz bandwidth (see, Figure 9). In contrast, the bandpass for GB attenuates the high and low frequencies of the full bandwidth, which weakens the effect of the illumination of the diffraction features at the filter ends. The result is a smaller frequency shift, which can be seen in Figures 4, 6, and 8.

### Effect of signal digitization

Figure 7: Maximum frequency difference observed in AR LCP dynamic spectra in w\({}_{8}\) relative to w\({}_{0}\) as a function of decorrelation bandwidth, \(\Delta f_{1/e}\), taken from Table 2. The shift is to lower frequencies for the dynamic spectra from the leading part of the pulse profile in w\({}_{0}\) to the trailing part in w\({}_{8}\). The data from 2018 February 26 are all in the lower left corner.

Figure 8: Comparison of AR and GB time-averaged LCP spectra for leading (\(S_{0}\)) and trailing (\(S_{8}\)) windows, each with the spectrum for the off-pulse subtracted. Upper panels show spectra before correction and lower panels after correction. The spectra correspond to scan 1 of 1160 s duration on 2017 December 22. The spectra were normalized so that the frequency-averaged power spectral density \(\bar{S}=1\).

The ASC correlator corrects for the dispersion delay by composing the full spectrum for a given time from a sample of delayed spectra. In our case we used 1000 time bins per pulsar period, giving us 1000 corresponding spectra with 512 channels each. With this number of channels our correlator produces spectra every 32 \(\mu\)s. Each such spectrum is subject to a redistribution of harmonics between corresponding bin spectra.
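Two of the numbers above are easy to verify: the \(\sim\)36 ms dispersion sweep across the band and the 32 \(\mu\)s spectral cadence of the correlator. A quick check, using the usual dispersion constant (our choice of constant, not quoted in the text):

```python
DM = 9.3                    # cm^-3 pc, PSR B1237+25
K_DM = 4.149                # ms GHz^2 (cm^-3 pc)^-1, standard dispersion constant
f_lo, f_hi = 0.316, 0.332   # GHz, band edges

sweep_ms = K_DM * DM * (f_lo ** -2 - f_hi ** -2)
print(f"sweep across 16 MHz: {sweep_ms:.1f} ms")   # ~36 ms

n_chan = 512                # complex channels -> 1024-sample real FFT blocks
bandwidth_hz = 16e6         # Nyquist sampling rate is 2 * bandwidth
print(f"spectral cadence: {2 * n_chan / (2 * bandwidth_hz) * 1e6:.0f} us")  # 32 us
```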
Thus, for every pulsar period, the spectrum at every bin will be corrected for the dispersion delay. Our processing system is described by Likhachev et al. (2017). Nevertheless we still have, for spectra at different longitudes, distortions similar to those expected due to the influence of the dispersion delay. The reason for the distortions can be traced back to the effects of signal digitizing at the telescope (see, Jenet & Anderson, 1998). For one-bit digitizing (clipping), where negative values are recorded as -1 and positive values as +1, the signal variance \(\sigma^{2}=1.0\). For a signal with frequency components, \(1\leq k\leq N_{f}\), up to a maximum of \(N_{f}\), the variance of a signal is related to the spectral power density, \(s_{k}\), by \(\sigma^{2}=2\sum_{k=1}^{N_{f}}s_{k}\). If we assume that there is an excess of spectral power density at relatively high frequencies beyond component \(a\), with \(k>a\), then there will be a false deficiency of spectral power density at relatively low frequencies, \(k<a\). Let us assume that the current output spectrum from the correlator has such a high-frequency excess because the beginning of the pulse reached the receiver band. This single output spectrum would be redistributed between time bins with inadequate values, namely, underestimated low-frequency values of the spectral power density would go to the preceding time bins, causing a false decrease of total power. On the other hand, the high-frequency portion of the spectrum at the leading longitude of the average profile would be overestimated. Thus, the one-bit digitizing acts like a non-compensated dispersion delay. The same effect can be seen under saturation conditions for two-bit (four-level) digitizing used at both AR and GB. Under normal conditions with no saturation of the signal, four values are utilized for such digitizing: -3, -1, +1, and +3. A transition level, \(s_{0}\), between \(\pm 1\) and \(\pm 3\) values must be close to \(\sigma\), with \(s_{0}=0.995\sigma\) (Thompson et al., 2017).

Figure 9: Bandpasses for AR and GB from average spectra of off-pulse longitude windows, w\({}_{9}\) to w\({}_{11}\), from scan 1 on 2017 December 22.

An automatic gain control system (AGC) is used at each of our telescopes to keep the system at such a level during a VLBI observing session. The AGC will compensate for progressive slow signal variations caused by a change in elevation of the source or weather conditions. For pulsar observations the AGC would not work correctly due to its inertia. Therefore, we switched off the AGC system in our observations. With the AGC switched off, the digitizer was saturated for strong pulses at such sensitive radio telescopes as AR and GB. Under saturation conditions the digitizer acts like a one-bit sampler with only values of \(\pm 3\), generating false spectral distortions. One can see this effect in the average profiles shown in Figure 1. Such digitizing also acts like a non-compensated dispersion delay. For multilevel digitization of pulsar signals the non-compensated dispersion delay is less likely and decreases with the number of digitization levels.
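The spectral distortion from low-level digitizing can be illustrated with the classical van Vleck relation for one-bit sampling, \(\rho_{\rm clip}=\frac{2}{\pi}\arcsin(\rho)\): clipping compresses the autocorrelation and therefore biases the spectrum unless corrected, which is the kind of effect the Jenet & Anderson (1998) procedure addresses. The framing and the toy process below are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1 << 20
white = rng.standard_normal(n + 1)
x = (white[:-1] + white[1:]) / np.sqrt(2.0)   # Gaussian process, rho(1) = 0.5

xc = np.sign(x)                               # one-bit (clipped) samples
rho = np.corrcoef(x[:-1], x[1:])[0, 1]
rho_clip = np.corrcoef(xc[:-1], xc[1:])[0, 1]

print(f"true rho           : {rho:.4f}")
print(f"clipped rho        : {rho_clip:.4f}")
print(f"(2/pi) arcsin(rho) : {2.0 / np.pi * np.arcsin(rho):.4f}")  # ~ 1/3
```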
### Distortion observations of other pulsars

Can the effect of distortion also be found in other pulsars? As an example we present in Figure 10 results for PSR B1133+16, with an average profile with two components, \(DM=4.84\) cm\({}^{-3}\) pc, and a decorrelation bandwidth of \(\sim 250\) kHz. We analyzed the average profile and the phase rate between visibility functions for different longitude windows, as described in Section 3, and plot the results in Figure 10. As for PSR B1237+25, the average pulse profile is distorted with intensity decreases at both sides of the profile, and the phase rates vary non-monotonically as a function of pulse longitude. The maximum difference is \(\sim\)1.2 rad/\(\mu\)s which corresponds to a frequency shift of \(\sim\)190 kHz. The shift is again from higher frequencies at the leading part of the pulse profile to lower frequencies at the trailing part. However, the magnitude of the shift for this pulsar is much larger than what would be expected from Figure 7 for PSR B1237+25. Clearly, the maximum frequency shift across the pulse profile depends also on the pulsar under study.

Figure 10: Top: Average profile of PSR B1133+16 at LCP obtained at AR for one scan (10:40) on 2018 March 02 (session rags29g). Bottom: Phase rates as the derivatives of the AR-GB VLBI visibility phases with respect to delay as a function of pulse longitude for the same observing scan. For more information, compare with Figure 1.

In general we think that this distortion effect can be observed in any pulsar, independent of the complexity of its profile. However, while the frequency shift across a profile with two or more components is rather modulated, this frequency shift would likely be much smoother and generally monotonic for a single-component profile.

## 5 Significant Reduction of Distortion

As was indicated in the previous section, the observed distortions as a function of pulsar longitude can be explained in terms of effects of dispersion and low-level digitization. Since all VLBI observations of the RadioAstron mission were archived, we accessed the original VLBI data and applied the corrections of 2-bit sampling as outlined by Jenet & Anderson (1998). We also coherently dedispersed the data (see, Hankins, 1974) for improvement over incoherent dedispersion, although without any apparent effect on the correction itself. For details of our data reduction and correction process, see Girin et al. (2023). Figure 11 shows the average profile of PSR B1237+25 after correction, to be compared with the profile before correction in Figure 1a. The dips of intensity below the off-pulse level on each side of the profile have disappeared. As for the dynamic spectra in \(\rm w_{0}\) and \(\rm w_{8}\), we show in Figure 3 that after correction the frequency shift has also largely disappeared. The details of the frequency shift as a function of longitude after correction are displayed in Figure 5. The corrected frequency shifts (solid lines) for both AR and GB are significantly reduced in amplitude and are almost constant along longitude in comparison to the original data (dashed lines). The same characteristics can also be found in the averaged spectra in Figure 6. Further insight can be derived from the time-averaged spectra in Figure 8. Close inspection shows that the narrow spectral features in the original data are not shifted along longitude. Instead, the weight within the spectra is shifted. For the leading part of the pulse the spectrum, \(S_{0}\), has on average more power at higher frequencies, which shifts to lower frequencies in the spectrum for the trailing part of the profile, \(S_{8}\) (see also the interpretation in subsection 4.1). Again, the corrected spectra show almost no differences between those of the leading and trailing pulse parts.
## 6 Discussion

Our observations can be compared with those of PSR B1237+25 at 430 MHz at AR by Wolszczan & Cordes (1987) and PSR B1133+16 at 327 MHz at Ooty by Gupta et al. (1999). The authors observed the pulsars in a refractive scintillation state and found that the phase of the periodic pattern in their dynamic spectra increased quasi-sinusoidally or non-monotonically from the trailing to the leading part of the pulse profile. This phase shift corresponds to a shift of the scintillation spectrum from high frequencies at the leading part of the pulse profile to low frequencies at the trailing part within their receiver bandpasses.

Figure 11: Average profile of PSR B1237+25 for corrected RCP data obtained at AR for scan 1 on 2017 December 22 (rags29c), to be compared with the profile in Figure 1a.

In our observations these pulsars were in a diffractive scintillation state and no periodicities were produced in the dynamic spectra. Nevertheless we found a similar shift of the spectra from high to low frequencies in our VLBI observations with AR and GB as well as in observations with each single telescope. In particular, Wolszczan & Cordes (1987) reported for PSR B1237+25 a frequency shift of 39 kHz across the pulse profile. The decorrelation bandwidth for their two days of observations was 442 and 615 kHz. Although their observing frequency of 430 MHz was somewhat higher than ours at 324 MHz, their observed maximum frequency shift is comparable with our prediction of 10 kHz from Figure 7, which further indicates that the nature of their longitudinal frequency shift of the dynamic spectra is similar to ours. An important aspect is that both groups used, as we did, low-level digitizers, at AR 3-level (Wolszczan & Cordes, 1987) and at Ooty 1-bit samplers (Bhat et al., 1999). Through our analysis we can now trace our longitudinal frequency shift back to digitization effects. In view of interstellar interferometry we expect that most if not all pulsar observations with low-level digitization can be affected by longitudinal distortion if the effects discussed above are not addressed in detail. Despite having largely corrected for the low-level digitizing effects, some apparently significant discrepancies remain in our data. The cause is not clear. Perhaps our estimated uncertainties are too small and the discrepancies are not significant. The correction process itself may not be completely effective for our data. Lastly, the polarization impurities of the telescope feeds, although not the subject of this paper, need to be considered when computing the limiting effect of the changing polarization characteristics along pulsar longitude on the phase and frequency shifts (see, Bartel et al., 1985). In this respect, the frequency shifts along longitude in RCP and LCP for AR and GB (see, Figure 1) are of interest. While there is fair consistency between the curves, there are also slight differences between the RCP and LCP curves that vary with longitude differently for AR and GB. Such an effect may be conceivable for different polarization impurities of the feeds at the two telescopes. We think that the constraints discussed here need to be taken into account for observations with the goal of spatially resolving the pulsar magnetosphere. With appropriate considerations a distortionless use of interstellar interferometry could likely be achieved.

## 7 Summary and Conclusions

Here we summarize our observations and give our conclusions.
1. Inspired by earlier reports of having resolved the magnetosphere of pulsars with interstellar interferometry, we used VLBI observations of PSR B1237+25 conducted with AR and GB at 324 MHz and analyzed the interferometry as well as the single-telescope data. All observations were done in the context of the RadioAstron space-VLBI mission.
2. During a time of diffractive scintillation, the dynamic spectra changed as a function of pulsar longitude, with the spectrum of the leading part at higher frequencies shifting for the trailing part to lower frequencies.
3. In VLBI data as well as single-telescope data the frequency shift displayed a quasi-sinusoidal pattern as a function of pulsar longitude. Although the patterns were largely similar for AR and GB, differences could be due to differences in the bandpasses.
4. The maximum frequency shift between the spectra of the leading and trailing parts of the pulse profile is a steep function of decorrelation bandwidth.
5. The integrated pulse profile showed characteristic deficiencies in intensity at both sides of the profile.
6. Similar distortions of the scintillation spectra as a function of longitude and of the pulse profile were found for PSR B1133+16.
7. Despite having observed characteristic phase and frequency shifts in scintillation spectra as a function of pulsar longitude, we do not attribute the shifts to having resolved the pulsar magnetosphere.
8. We attribute the distortions and frequency shifts of the longitude-dependent dynamic spectra mostly to uncorrected low-level digitizing of the data.
9. With the data corrected (see, Jenet & Anderson, 1998), the quasi-sinusoidal frequency shift pattern of the dynamic spectra along pulse longitude largely disappeared.
10. Small remaining distortions could perhaps partly be caused by polarization impurities of the feeds.
11. In view of our analysis we think that observations with the intent to resolve the pulsar magnetosphere need to be critically evaluated in terms of these constraints on interstellar interferometry.

The RadioAstron project is led by the Astro Space Center of the Lebedev Physical Institute of the Russian Academy of Sciences and the Lavochkin Scientific and Production Association under a contract with the Russian Federal Space Agency, in collaboration with partner organizations in Russia and other countries. N.B. was supported by the Natural Sciences and Engineering Research Council of Canada. The Arecibo Observatory is a facility of the National Science Foundation operated under cooperative agreement by the University of Central Florida and in alliance with Universidad Ana G. Mendez, and Yang Enterprises, Inc. The Green Bank Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

Arecibo, Green Bank Telescope.
2308.06016
Integral closure and normality of edge ideals of some edge-weighted graphs
Let $G_\omega$ be an edge-weighted simple graph. In this paper, we give a complete characterization of the graph $G_\omega$ whose edge ideal $I(G_\omega)$ is integrally closed. We also show that if $G_\omega$ is an edge-weighted star graph, a path or a cycle, and $I(G_\omega)$ is integrally closed, then $I(G_\omega)$ is normal.
Shiya Duan, Guangjun Zhu, Yijun Cui, Jiaxin Li
2023-08-11T08:45:53Z
http://arxiv.org/abs/2308.06016v1
# Integral closure and normality of edge ideals of some edge-weighted graphs ###### Abstract. Let \(G_{\omega}\) be an edge-weighted simple graph. In this paper, we give a complete characterization of the graph \(G_{\omega}\) whose edge ideal \(I(G_{\omega})\) is integrally closed. We also show that if \(G_{\omega}\) is an edge-weighted star graph, a path or a cycle, and \(I(G_{\omega})\) is integrally closed, then \(I(G_{\omega})\) is normal. 2020 _Mathematics Subject Classification_. Primary 13B22, 13F20; Secondary 05C99, 05E40. Keywords: Integrally closed, normal, edge-weighted graph, edge ideal

## 1. Introduction

Let \(S=\mathbb{K}[x_{1},\ldots,x_{n}]\) be a polynomial ring in \(n\) variables over a field \(\mathbb{K}\). The class of monomial ideals of \(S\) has been intensively studied, and many problems arise when we study properties of monomial ideals such as integral closure and normality. Recall that an ideal \(I\subset S\) is called _integrally closed_ if \(I=\overline{I}\) (see Definition 2.1 for the exact definition of \(\overline{I}\)), and \(I\) is called _normal_ if \(I^{i}=\overline{I^{i}}\) for all \(i\geq 1\). This notion is related to the graded algebras arising from \(I\), such as the Rees algebra \(\mathcal{R}(I)=\oplus_{i\geq 0}I^{i}t^{i}\). It is known that \(I\) is normal if and only if \(\mathcal{R}(I)\) is normal, see [21, Theorem 4.3.17]. This highlights the importance of studying the normality of ideals. It is well known that every square-free monomial ideal is integrally closed, see [7, Theorem 1.4.6]. Appearing as edge and cover ideals of graphs, the square-free monomial ideals play a key role in the connection between commutative algebra and combinatorics, see [4, 19]. The normality of such ideals has been of interest to many authors, see [11, 19, 20]. For example, in [14] it is shown that the edge ideals of bipartite graphs are normal. It is also shown in [21, Corollary 14.6.25] that the cover ideals of perfect graphs are normal. In [1] it is shown that the cover ideals of odd cycles and wheel graphs are normal.

Let \(G\) be a simple graph with vertex set \(V(G)=[n]\) and edge set \(E(G)\), where \([n]\) is by convention the set \(\{1,2,\ldots,n\}\). Let \(G_{\omega}\) be an _edge-weighted_ (or simply weighted) graph whose underlying graph is \(G\); that is, \(G_{\omega}\) is a triplet \((V(G_{\omega}),E(G_{\omega}),\omega)\) where \(V(G_{\omega})=V(G)\), \(E(G_{\omega})=E(G)\) and \(\omega:E(G_{\omega})\to\mathbb{N}^{+}\) is a weight function. Here \(\mathbb{N}^{+}\) denotes the set of positive integers. We often write \(G_{\omega}\) for the triplet \(G_{\omega}=(V(G_{\omega}),E(G_{\omega}),\omega)\). In other words, \(G_{\omega}\) is obtained from \(G\) by assigning a weight to its edges. An edge-weighted graph is called a _non-trivially weighted_ graph if there is at least one edge with a weight greater than \(1\). Otherwise, it is called a _trivially weighted_ graph. We consider the polynomial ring \(S=\mathbb{K}[x_{1},\ldots,x_{n}]\) in \(n\) variables over a field \(\mathbb{K}\). The _edge-weighted ideal_ (or simply edge ideal) of \(G_{\omega}\), introduced in [12], is the ideal of \(S\) given by \[I(G_{\omega})=(x_{i}^{\omega(e)}x_{j}^{\omega(e)}\mid e:=\{i,j\}\in E(G_{\omega})).\] If \(G_{\omega}\) is trivially weighted, then \(I(G_{\omega})\) is the usual edge ideal of the underlying graph \(G\) of \(G_{\omega}\), which has been extensively studied in the literature [5, 7, 10, 16, 21, 22, 23].
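For experimentation, the generators of \(I(G_{\omega})\) are easy to produce from a weighted edge list; the sketch below encodes monomials as exponent vectors. The 0-indexed vertex convention is a hypothetical encoding chosen for illustration.

```python
def edge_ideal_generators(n, weighted_edges):
    # exponent vectors of the generators x_i^w(e) x_j^w(e) of I(G_w);
    # weighted_edges maps frozenset({i, j}) -> w(e), vertices 0..n-1
    gens = []
    for e, w in weighted_edges.items():
        i, j = sorted(e)
        b = [0] * n
        b[i] = b[j] = w
        gens.append(tuple(b))
    return gens

# P^3_w: the path 1-2-3 with weights 2 and 3 (0-indexed vertices here)
G = {frozenset({0, 1}): 2, frozenset({1, 2}): 3}
print(edge_ideal_generators(3, G))   # [(2, 2, 0), (0, 3, 3)]
```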
Paulsen and Sather-Wagstaff in [12] studied the primary decomposition of these ideals. They also studied the unmixedness and Cohen-Macaulayness of these ideals in the case where \(G_{\omega}\) is a cycle, a tree, or a complete graph. In [13], Seyed Fakhari et al. characterize the unmixedness and Cohen-Macaulayness of edge-weighted ideals of very well-covered graphs. Little is known about the integral closure and normality of edge ideals of edge-weighted graphs. In this paper, we aim to characterize a weighted graph \(G_{\omega}\) whose edge ideal \(I(G_{\omega})\) is integrally closed. Under the condition that \(I(G_{\omega})\) is integrally closed, we show that \(I(G_{\omega})\) is normal if \(G_{\omega}\) is a weighted star graph or a path or a cycle.

The paper is organized as follows. In Section 2, we recall some essential definitions and terminology that we will need later. In Section 3, we give a complete characterization of a weighted graph \(G_{\omega}\) whose edge ideal \(I(G_{\omega})\) is integrally closed. Under the condition that \(I(G_{\omega})\) is integrally closed, we show in Section 4 that \(I(G_{\omega})\) is normal if \(G_{\omega}\) is a weighted star graph or a path or a cycle.

## 2. Preliminary

In this section, we gather together the needed definitions and basic facts that will be used throughout this paper. However, for more details, we refer the reader to [2, 7, 9, 12, 18]. A weighted graph \(H_{\omega}=(V(H),E(H),\omega)\) is called an _induced subgraph_ of a weighted graph \(G=(V(G),E(G),\omega)\) if \(V(H)\subset V(G)\), for any \(u,v\in V(H)\), \(\{u,v\}\in E(H)\) if and only if \(\{u,v\}\in E(G)\), and its weight \(\omega_{H}(\{u,v\})\) in \(H\) is equal to its weight \(\omega_{G}(\{u,v\})\) in \(G\). For convenience, we call \(H_{\omega}\) an induced subgraph of \(G_{\omega}\). For \(A\subset V(G)\), let \(G[A]\) denote the _induced subgraph_ of \(G\) on the set \(A\). A connected weighted graph \(G_{\omega}\) is called a cycle if \(\deg_{G}(v)=2\) for all \(v\in V(G)\). A cycle with \(n\) vertices is called an \(n\)-cycle, denoted by \(C_{\omega}^{n}\). A connected weighted graph on the set \([n]\) is called a path if \(E(G)=\{\{i,i+1\}\mid 1\leq i\leq n-1\}\). Such a path is usually denoted by \(P_{\omega}^{n}\). A weighted simple graph \(G_{\omega}\) on vertex set \([n]\) is called a complete graph if \(\{i,j\}\in E(G)\) for all \(i,j\in[n]\). A complete graph with \(n\) vertices is usually denoted by \(K_{\omega}^{n}\). A weighted graph \(G_{\omega}\) is chordal if every induced cycle in \(G_{\omega}\) is a 3-cycle \(C_{\omega}^{3}\).

**Definition 2.1**.: ([7, Definition 1.4.1]) _Let \(R\) be a ring and \(I\) an ideal in \(R\). An element \(f\in R\) is said to be integral over \(I\) if there exists an equation_ \[f^{k}+c_{1}f^{k-1}+\cdots+c_{k-1}f+c_{k}=0\ \ \text{with}\ \ c_{i}\in I^{i}.\] _The set \(\overline{I}\) of elements in \(R\) which are integral over \(I\) is the integral closure of \(I\). The ideal \(I\) is integrally closed if \(I=\overline{I}\), and \(I\) is normal if all powers of \(I\) are integrally closed._

For an ideal \(I\) in \(R\), it is clear that \(I\subseteq\overline{I}\), so \(I\) is integrally closed if and only if \(\overline{I}\subseteq I\). Further, if \(I\) is a monomial ideal, then \(\overline{I}\) can be described as follows:

**Theorem 2.2**.: ([7, Theorem 1.4.2]) _Let \(I\subset S\) be a monomial ideal.
Then \(\overline{I}\) is a monomial ideal generated by all monomials \(f\in S\) for which there exists an integer \(k\) such that \(f^{k}\in I^{k}\)._ According to Theorem 2.2, we have another description of the integral closure of \(I\): \[\overline{I}=(f\in S\mid f\text{ is a monomial and }f^{i}\in I^{i}\text{ for some }i\geq 1).\] Let \(G\) be a simple graph with the vertex set \(V(G)=[n]\) and the edge set \(E(G)\), where for convention the notation \([n]\) denotes the set \(\{1,\ldots,n\}\). The _neighbourhood_ of a vertex \(v\) in \(G\) is defined as \(N_{G}(v)=\{u\in V(G):\{u,v\}\in E(G)\}\) and its degree, denoted by \(\deg_{G}(v)\), is \(|N_{G}(v)|\). For a monomial \(u=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in S\), we denote by \(\Gamma(u)\) the exponent vector \((a_{1},\ldots,a_{n})\) of \(u\). In this case, we can write \(u\) as \(u=\mathbf{x^{a}}\) with \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{Z}_{+}^{n}\). Observe that there exists a bijection which takes a monomial \(u=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\) into a vector \((a_{1},\ldots,a_{n})\) in \(\mathbb{Z}_{+}^{n}\), where \(\mathbb{Z}_{+}^{n}\) is the set of those vectors \((a_{1},\ldots,a_{n})\in\mathbb{Z}^{n}\) with each \(a_{i}\geq 0\). Similarly, if \(A\) is a set of monomials in \(S\) we set \(\Gamma(A)=\{\Gamma(u):u\in A\}\). For a monomial ideal \(I\subset S\), let \(\mathcal{G}(I)\) denote the minimal set of generators of its monomial. If \(\mathcal{G}(I)=\{\mathbf{x^{b_{1}}},\ldots,\mathbf{x^{b_{m}}}\}\), then we denote the convex hull of \(\Gamma(\mathcal{G}(I))\) by \(\mathcal{C}(I)\), i.e., \(\mathcal{C}(I)=\{\mathbf{a}\in\mathbb{Q}_{+}^{n}\mid\mathbf{a}\in conv( \mathbf{b}_{1},\ldots,\mathbf{b}_{m})\}=\{\mathbf{a}=\sum\limits_{i=1}^{m} \lambda_{i}\mathbf{b}_{i}\mid\sum\limits_{i=1}^{m}\lambda_{i}=1,\lambda_{i}\in \mathbb{Q}_{+}\}\), where \(\mathbb{Q}_{+}\) is the set of all nonnegative rational numbers. We call \(\mathcal{C}(I)\) the _Newton polyhedron_ of \(I\). **Lemma 2.3**.: ([21, Proposition 12.1.4]) _Let \(I\subset S\) be a monomial ideal with \(\mathcal{G}(I)=\{\mathbf{x^{b_{1}}},\ldots,\mathbf{x^{b_{m}}}\}\). Then \(\overline{I}\) is generated by the monomials \(\mathbf{x^{a}}\), where \(\mathbf{a}=(\lceil a_{1}\rceil,\ldots,\lceil a_{n}\rceil)\) with \((a_{1},\ldots,a_{n})\in\mathcal{C}(I)\) and each \(\lceil a_{i}\rceil\) is the smallest integer \(\geq a_{i}\)._ ## 3. Integral closure of edge ideals of edge-weighted graphs In this section, we will give a characterization of weighted graphs whose edge ideals are integrally closed. **Theorem 3.1**.: _Let \(G_{\omega}=(V(G_{\omega}),E(G_{\omega}))\) be a weighted graph with at most one edge having non-trivial weight, then \(I(G_{\omega})\) is integrally closed._ Proof.: Let \(E(G_{\omega})=\{e_{1},\ldots,e_{m}\}\), where \(e_{i}=\{u_{i},v_{i}\}\) and \(\omega_{i}=\omega(e_{i})\). Without loss of generality, we assume that \(\omega_{1}\geq 1\) and \(\omega_{i}=1\) for all \(i=2,\ldots,m\). Let \(\mathbf{x^{b_{i}}}=x_{u_{i}}^{\omega_{i}}x_{v_{i}}^{\omega_{i}}\) for \(i=1,\ldots,m\), then \(\mathbf{b}_{i}=(0,\ldots,0,\omega_{i},0,\ldots,0,\omega_{i},0,\ldots,0)\) where \(\omega_{i}\) are the \(u_{i}\)-th and \(v_{i}\)-th entries of \(\mathbf{b}_{i}\) respectively. 
By Lemma 2.3, we have \[\mathcal{G}(\overline{I(G_{\omega})})=\{\mathbf{x^{a}}\mid\mathbf{a}=(\lceil a_{1}\rceil,\ldots,\lceil a_{n}\rceil)\text{ with }(a_{1},\ldots,a_{n})\in\mathcal{C}(I(G_{\omega}))\}.\] Let \(\mathbf{x^{a}}\in\mathcal{G}(\overline{I(G_{\omega})})\) with \(\mathbf{a}=(\lceil a_{1}\rceil,\ldots,\lceil a_{n}\rceil)\) satisfying \((a_{1},\ldots,a_{n})=\sum\limits_{i=1}^{m}\lambda_{i}\mathbf{b}_{i}\) with \(\sum\limits_{i=1}^{m}\lambda_{i}=1\) and \(\lambda_{i}\in\mathbb{Q}_{+}\). If \(\lambda_{i}=1\) and \(\lambda_{j}=0\) for any \(j\in[m]\) with \(j\neq i\), then \(\mathbf{x^{a}}=\mathbf{x}^{\mathbf{b}_{i}}\in I(G_{\omega})\). Otherwise, there exist indices \(i_{1}<\cdots<i_{t}\) with \(t\geq 2\) such that \(\lambda_{i_{1}},\ldots,\lambda_{i_{t}}>0\); then \(i_{2}\geq 2\). Since \(\omega_{i}=1\) for each \(i=2,\ldots,m\), \(\mathbf{x^{b_{i_{2}}}}|\mathbf{x^{a}}\). It follows that \(\mathbf{x^{a}}\in I(G_{\omega})\). \(\Box\)

**Lemma 3.2**.: _Let \(G_{\omega}\) be a weighted graph and \(H_{\omega}\) its induced subgraph. If for some \(k\in\mathbb{N}\), \(I(G_{\omega})^{k}\) is integrally closed, then \(I(H_{\omega})^{k}\) is also integrally closed._

Proof.: For any monomial \(f\in\mathcal{G}(\overline{I(H_{\omega})^{k}})\), we first prove that if \(x_{u}\) divides \(f\) then \(u\) is in \(V(H_{\omega})\). Indeed, by the choice of \(f\) one has \(f^{s}\in I(H_{\omega})^{sk}\) for some integer \(s\geq 1\) by Theorem 2.2. So we can write \(f^{s}=h\prod\limits_{i=1}^{sk}(x_{u_{i}}x_{v_{i}})^{\omega(e_{i})}\) for some monomial \(h\) and \(e_{i}=\{u_{i},v_{i}\}\in E(H_{\omega})\) for \(i=1,\ldots,sk\). Since \(x_{u}|f\), we have \(x_{u}^{s}|f^{s}\). If \(u\notin V(H_{\omega})\), then \(u\notin e_{i}\) for \(i=1,\ldots,sk\). This forces \(x_{u}^{s}|h\). Let \(h=x_{u}^{s}h_{1}\); then \(f^{s}=x_{u}^{s}h_{1}\prod\limits_{i=1}^{sk}(x_{u_{i}}x_{v_{i}})^{\omega(e_{i})}\). So \((f/x_{u})^{s}=h_{1}\prod\limits_{i=1}^{sk}(x_{u_{i}}x_{v_{i}})^{\omega(e_{i})}\), which implies that \(f/x_{u}\in\overline{I(H_{\omega})^{k}}\) by Theorem 2.2. This contradicts the fact that \(f\in\mathcal{G}(\overline{I(H_{\omega})^{k}})\). Since \(f\in\mathcal{G}(\overline{I(H_{\omega})^{k}})\), one has \(f\in\overline{I(G_{\omega})^{k}}\) by [9, Remark 1.1.3]. So \(f\in I(G_{\omega})^{k}\), since \(I(G_{\omega})^{k}\) is integrally closed. It follows that \(f=h\prod\limits_{i=1}^{k}(x_{u_{i}}x_{v_{i}})^{\omega(e_{i})}\) for some monomial \(h\) and \(e_{i}=\{u_{i},v_{i}\}\in E(G_{\omega})\) for \(i=1,\ldots,k\). By the above proof, we get \(u_{i},v_{i}\in V(H_{\omega})\) and \(e_{i}=\{u_{i},v_{i}\}\in E(H_{\omega})\). Consequently, \(f\in I(H_{\omega})^{k}\). This completes our proof. \(\Box\)

**Remark 3.3**.: _Let \(G_{\omega}\) be a weighted graph and \(H_{\omega}\) be its induced subgraph. If \(I(G_{\omega})\) is normal then \(I(H_{\omega})\) is also normal._

The next lemma gives a list of weighted graphs whose edge ideals are not integrally closed.

**Lemma 3.4**.: _Let \(G_{\omega}\) be a non-trivially weighted graph, such that all of its edges have non-trivial weights._

1. _If_ \(G_{\omega}=P_{\omega}^{3}\) _is a path of length_ \(2\)_, then_ \(I(G_{\omega})\neq\overline{I(G_{\omega})}\)_._
2. _If_ \(G_{\omega}=P_{\omega}^{2}\sqcup P_{\omega}^{2}\) _is a disjoint union of two paths_ \(P_{\omega}^{2}\)_, then_ \(I(G_{\omega})\neq\overline{I(G_{\omega})}\)_._
_If_ \(G_{\omega}=C_{\omega}^{3}\) _is a_ \(3\)_-cycle, then_ \(I(G_{\omega})\neq\overline{I(G_{\omega})}\)_._

Proof.: (1) Let \(V(G_{\omega})=[3]\) and \(E(G_{\omega})=\{\{1,2\},\{2,3\}\}\), then \(\mathcal{G}(I(G_{\omega}))=\{x_{1}^{\omega_{1}}x_{2}^{\omega_{1}},\)\(x_{2}^{\omega_{2}}x_{3}^{\omega_{2}}\}\) with each \(\omega_{i}=\omega(\{i,i+1\})\). Choose \(f=x_{1}^{\omega_{1}-1}x_{2}^{\omega_{1}+\omega_{2}}x_{3}^{\omega_{2}-1}\), then \(f\notin I(G_{\omega})\), but \(f^{2}=x_{1}^{2\omega_{1}-2}x_{2}^{2\omega_{1}+2\omega_{2}}x_{3}^{2\omega_{2}-2}=(x_{1}^{\omega_{1}}x_{2}^{\omega_{1}})(x_{2}^{\omega_{2}}x_{3}^{\omega_{2}})x_{2}^{\omega_{1}+\omega_{2}}x_{1}^{\omega_{1}-2}x_{3}^{\omega_{2}-2}\in I(G_{\omega})^{2}\). This means that \(f\in\overline{I(G_{\omega})}\) by Theorem 2.2, so \(I(G_{\omega})\neq\overline{I(G_{\omega})}\).

(2) Let \(V(G_{\omega})=[4]\) and \(E(G_{\omega})=\{\{1,2\},\{3,4\}\}\), then \(\mathcal{G}(I(G_{\omega}))=\{x_{1}^{\omega_{1}}x_{2}^{\omega_{1}},x_{3}^{\omega_{3}}x_{4}^{\omega_{3}}\}\), where \(\omega_{1}=\omega(\{1,2\})\) and \(\omega_{3}=\omega(\{3,4\})\). Choose \(g=x_{1}^{\omega_{1}-1}x_{2}^{\omega_{1}-1}x_{3}^{\omega_{3}-1}x_{4}^{\omega_{3}-1}\), then \(g\notin I(G_{\omega})\), but \(g^{2}=x_{1}^{2\omega_{1}-2}x_{2}^{2\omega_{1}-2}x_{3}^{2\omega_{3}-2}x_{4}^{2\omega_{3}-2}=(x_{1}^{\omega_{1}}x_{2}^{\omega_{1}})(x_{3}^{\omega_{3}}x_{4}^{\omega_{3}})(x_{1}x_{2})^{\omega_{1}-2}(x_{3}x_{4})^{\omega_{3}-2}\in I(G_{\omega})^{2}\). This implies that \(g\in\overline{I(G_{\omega})}\) by Theorem 2.2, so \(I(G_{\omega})\neq\overline{I(G_{\omega})}\).

(3) Let \(V(G_{\omega})=[3]\) and \(E(G_{\omega})=\{\{1,2\},\{2,3\},\{3,1\}\}\), then \(\mathcal{G}(I(G_{\omega}))=\{x_{1}^{\omega_{1}}x_{2}^{\omega_{1}},x_{2}^{\omega_{2}}x_{3}^{\omega_{2}},x_{3}^{\omega_{3}}x_{1}^{\omega_{3}}\}\), where \(\omega_{i}=\omega(\{i,i+1\})\) and \(i+1\equiv j\mod 3\) with \(0<j\leq 3\) for \(i=1,2,3\). If \(\omega_{1}>\omega_{2}-1\), we choose \(h=x_{1}^{\omega_{3}-1}x_{2}^{\omega_{2}-1}x_{3}^{\omega_{2}+\omega_{3}}\). It is clear that \(h\notin I(G_{\omega})\), but \(h^{2}=(x_{1}^{\omega_{3}}x_{3}^{\omega_{3}})(x_{2}^{\omega_{2}}x_{3}^{\omega_{2}})x_{1}^{\omega_{3}-2}x_{2}^{\omega_{2}-2}x_{3}^{\omega_{3}+\omega_{2}}\in I(G_{\omega})^{2}\). Otherwise, we choose \(h=x_{1}^{\omega_{1}+\omega_{3}}x_{2}^{\omega_{1}-1}x_{3}^{\omega_{3}-1}\). In this case, we get that \(h\notin I(G_{\omega})\), but \(h^{2}=(x_{1}^{\omega_{1}}x_{2}^{\omega_{1}})(x_{1}^{\omega_{3}}x_{3}^{\omega_{3}})x_{1}^{\omega_{1}+\omega_{3}}x_{2}^{\omega_{1}-2}x_{3}^{\omega_{3}-2}\in I(G_{\omega})^{2}\). Therefore, by Theorem 2.2, we get that \(h\in\overline{I(G_{\omega})}\setminus I(G_{\omega})\), so \(I(G_{\omega})\neq\overline{I(G_{\omega})}\).

**Corollary 3.5**.: _Let \(G_{\omega}\) be a weighted graph. If \(G_{\omega}\) contains one of the three graphs described in Lemma 3.4 as an induced subgraph, then \(I(G_{\omega})\) is not integrally closed._

Proof.: Let \(H_{\omega}\) be an induced subgraph of \(G_{\omega}\) as described in Lemma 3.4, then \(I(H_{\omega})\neq\overline{I(H_{\omega})}\) by Lemma 3.4. The desired result follows from Lemma 3.2.

**Theorem 3.6**.: _Let \(G_{\omega}\) be a weighted graph. Then \(I(G_{\omega})\) is integrally closed if and only if \(G_{\omega}\) does not contain one of the three graphs described in Lemma 3.4 as an induced subgraph._

Proof.: Necessity follows from Corollary 3.5. For sufficiency, suppose that \(G_{\omega}\) does not contain any of the three graphs described in Lemma 3.4 as its induced subgraph. Let \(E(G_{\omega})=\{e_{1},\ldots,e_{m}\}\) with each \(e_{i}=\{u_{i},v_{i}\}\) and \(\omega_{i}=\omega(e_{i})\).
If \(G_{\omega}\) has at most one edge with non-trivial weight, then \(I(G_{\omega})\) is integrally closed by Theorem 3.1. Now suppose that \(G_{\omega}\) has \(p\) edges with non-trivial weights, where \(p\geq 2\). Without loss of generality, we assume that \(\omega_{i}\geq 2\) for \(i=1,\ldots,p\) and \(\omega_{i}=1\) for \(i=p+1,\ldots,m\). Set \(\mathbf{x}^{\mathbf{b}_{i}}=x_{u_{i}}^{\omega_{i}}x_{v_{i}}^{\omega_{i}}\) for \(i=1,\ldots,m\), then the exponent vector \(\mathbf{b}_{i}=(0,\ldots,0,\omega_{i},0,\ldots,0,\omega_{i},0,\ldots,0)\), where \(\omega_{i}\) are the \(u_{i}\)-th and \(v_{i}\)-th entries of \(\mathbf{b}_{i}\), respectively. By Lemma 2.3, we have \[\mathcal{G}(\overline{I(G_{\omega})})=\{\mathbf{x}^{\mathbf{a}}\mid\mathbf{a}=(\lceil a_{1}\rceil,\ldots,\lceil a_{n}\rceil)\text{ with }(a_{1},\ldots,a_{n})\in\mathcal{C}(I(G_{\omega}))\}.\] Let \(\mathbf{x}^{\mathbf{a}}\in\mathcal{G}(\overline{I(G_{\omega})})\) with \(\mathbf{a}=(\lceil a_{1}\rceil,\ldots,\lceil a_{n}\rceil)\) satisfying \[(a_{1},\ldots,a_{n})=\sum_{i=1}^{m}\lambda_{i}\mathbf{b}_{i}\ \text{ with }\ \sum_{i=1}^{m}\lambda_{i}=1\ \text{ and }\ \lambda_{i}\in\mathbb{Q}_{+}. \tag{1}\] We will prove that \(\mathbf{x}^{\mathbf{a}}\in I(G_{\omega})\). We distinguish between the following two cases:

(i) If \(\lambda_{i}=1\) and \(\lambda_{\ell}=0\) for any \(\ell\in[m]\) with \(\ell\neq i\) in the above expression (1) of \((a_{1},\ldots,a_{n})\), then \(\mathbf{x}^{\mathbf{a}}=\mathbf{x}^{\mathbf{b}_{i}}\in I(G_{\omega})\).

(ii) If \(\lambda_{i_{1}},\ldots,\lambda_{i_{t}}>0\) with \(t\geq 2\) in the expression (1) of \((a_{1},\ldots,a_{n})\). In this case, we consider the following two subcases:

(a) If \(\lambda_{i_{\ell}}>0\) for some \(i_{\ell}>p\), then \(\mathbf{x}^{\mathbf{b}_{i_{\ell}}}|\mathbf{x}^{\mathbf{a}}\), since \(\omega_{i}=1\) for \(i=p+1,\ldots,m\). This implies that \(\mathbf{x}^{\mathbf{a}}\in I(G_{\omega})\).

(b) If \(\{i_{1},\ldots,i_{t}\}\subseteq\{1,\ldots,p\}\). Without loss of generality, we can assume that \(\lambda_{i}>0\) for all \(i\in[t]\). In this case, let \(H_{\omega}\) be the induced subgraph of \(G_{\omega}\) on the set \(A\), where \(A=\{u_{1},v_{1}\}\cup\{u_{2},v_{2}\}\). If \(|E(H_{\omega})|=2\), then \(H_{\omega}=P_{\omega}^{3}\) or \(H_{\omega}=P_{\omega}^{2}\sqcup P_{\omega}^{2}\) is a disjoint union of two paths \(P_{\omega}^{2}\). In both cases, every edge of \(H_{\omega}\) has non-trivial weight, which contradicts the assumption that \(G_{\omega}\) does not contain \(P_{\omega}^{3}\) or \(P_{\omega}^{2}\sqcup P_{\omega}^{2}\) as its induced subgraph. Consequently, \(|E(H_{\omega})|\geq 3\), and \(H_{\omega}\) must be one of six graphs: (1) \(C_{\omega}^{3}\), (2) \(P_{\omega}^{4}\), (3) \(C_{\omega}^{4}\), or (4)-(6) one of three further graphs on \(A\) containing \(e_{1}\), \(e_{2}\) and additional edges, in each case with \(\omega_{i}\geq 2\) for \(i=1,2\). (The figures for the six cases are omitted here.)

Claim: In each of the six cases above, there exists some \(e\in E(H_{\omega})\) such that \(\omega(e)=1\). If \(H_{\omega}\) is an induced subgraph of \(G_{\omega}\) as shown in case (1) or case (6), and \(\omega(e)\geq 2\) for all \(e\in E(H_{\omega})\), then \(G_{\omega}\) has an induced subgraph \(C_{\omega}^{3}\) all of whose edges have non-trivial weights, which contradicts the hypothesis. Hence there exists some \(e\in E(H_{\omega})\) such that \(\omega(e)=1\).
If \(H_{\omega}\) is an induced subgraph of \(G_{\omega}\) as shown in one of the cases (2)-(5), and \(\omega(e)\geq 2\) for all \(e\in E(H_{\omega})\), then \(G_{\omega}\) has an induced path \(P_{\omega}^{3}\) such that all of its edges have non-trivial weights, a contradiction. Without loss of generality, we can assume that \(\omega(\{u_{1},u_{2}\})=1\). Since \((a_{1},\ldots,a_{n})\) satisfies the expression (1), we have \(a_{u_{1}}\geq\lambda_{1}\omega_{1}>0\) and \(a_{u_{2}}\geq\lambda_{2}\omega_{2}>0\), where \(a_{u_{1}}\) and \(a_{u_{2}}\) are the \(u_{1}\)-th and \(u_{2}\)-th entries of \((a_{1},\ldots,a_{n})\), respectively. It follows that \(\lceil a_{u_{1}}\rceil\geq 1\) and \(\lceil a_{u_{2}}\rceil\geq 1\), so \(x_{u_{1}}x_{u_{2}}\) divides \(\mathbf{x^{a}}\). So \(\mathbf{x^{a}}\in I(G_{\omega})\). This completes the proof. \(\Box\)

## 4. Normality of edge ideals of some edge-weighted graphs

In this section, we show that for a weighted star graph, a weighted path, or a weighted cycle with edge ideal \(I\), if \(I\) is integrally closed, then \(I\) is normal. First, we recall a key notion from [6], which will be helpful in understanding the integral closure of ideals. Let \(S=\mathbb{K}[x_{1},\ldots,x_{n}]\) be a polynomial ring in \(n\) variables over a field \(\mathbb{K}\) and \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in S\) be a monomial with an exponent vector \(\mathbf{a}=(a_{1},\ldots,a_{n})\). Let \(I\subset S\) be a monomial ideal with \(\mathcal{G}(I)=\{\mathbf{x^{b_{1}}},\ldots,\mathbf{x^{b_{m}}}\}\). We call the \(n\times m\) matrix \(M\), whose columns are the exponent vectors \(\mathbf{b}_{1},\ldots,\mathbf{b}_{m}\), the _exponent matrix_ of \(I\). We set \[\begin{aligned} v_{\mathbf{a}}(I)&=\max\,\{\mathbf{1}^{m}\cdot\mathbf{y}\mid M\cdot\mathbf{y}\leq\mathbf{a}\text{ with }\mathbf{y}\in\mathbb{Z}_{+}^{m}\},\\ v_{\mathbf{a}}^{*}(I)&=\max\,\{\mathbf{1}^{m}\cdot\mathbf{y}\mid M\cdot\mathbf{y}\leq\mathbf{a}\text{ with }\mathbf{y}\in\mathbb{R}_{\geq 0}^{m}\},\end{aligned}\] where \(\mathbb{R}_{\geq 0}\) is the set of all non-negative real numbers.

**Lemma 4.1**.: ([17, Proposition 3.1]) _Let \(I\subset S\) be a monomial ideal. Then_
1. \(\mathbf{x^{a}}\in I^{k}\) _if and only if_ \(v_{\mathbf{a}}(I)\geq k\)_,_
2. \(\mathbf{x^{a}}\in\overline{I^{k}}\) _if and only if_ \(v_{\mathbf{a}}^{*}(I)\geq k\)_._

**Remark 4.2**.: _Let \(k\) be a positive integer._
1. _If_ \(x+y\geq k\) _with_ \(x,y\in\mathbb{R}_{\geq 0}\)_, then_ \(\lceil x\rceil+\lfloor y\rfloor\geq k\)_;_
2. _If_ \(x+y\leq k\) _with_ \(x,y\in\mathbb{R}_{\geq 0}\)_, then_ \(\lceil x\rceil+\lfloor y\rfloor\leq k\)_,_
_where \(\lfloor y\rfloor\) is the largest integer \(\leq y\)._

Proof.: (1) If \(x+y\geq k\), then \(\lceil x\rceil+y\geq k\), i.e., \(y\geq k-\lceil x\rceil\). Since \(k-\lceil x\rceil\) is an integer, we have \(\lfloor y\rfloor\geq k-\lceil x\rceil\), i.e., \(\lceil x\rceil+\lfloor y\rfloor\geq k\). (2) If \(x+y\leq k\), then \(x+\lfloor y\rfloor\leq k\), i.e., \(x\leq k-\lfloor y\rfloor\). It follows that \(\lceil x\rceil\leq k-\lfloor y\rfloor\), i.e., \(\lceil x\rceil+\lfloor y\rfloor\leq k\).

We now prove some of the main results of this section.

**Theorem 4.3**.: _Let \(G_{\omega}\) be a weighted star graph with \(n\) vertices, and let \(I=I(G_{\omega})\) be its edge ideal. If \(I\) is integrally closed, then \(I\) is normal._

Proof.: Let \(E(G_{\omega})=\{e_{1},\ldots,e_{n-1}\}\), where \(e_{i}=\{i,n\}\) and \(\omega_{i}=\omega(e_{i})\) for \(i\in[n-1]\).
Since \(I\) is integrally closed, \(G_{\omega}\) has at most one edge with non-trivial weight by Theorem 3.6. If \(G_{\omega}\) is trivially weighted, then \(I\) is normal by [15, Proposition 2.1 and Corollary 2.8] and [8, Proposition 2.1.2]. Now we assume that \(G_{\omega}\) has an edge with non-trivial weight. In this case, we can assume by symmetry that \(\omega_{1}\geq 2\) and \(\omega_{i}=1\) for \(i\in[n-1]\) with \(i\neq 1\). We will prove that \(\overline{I^{k}}=I^{k}\) for all \(k\geq 2\). Since \(I^{k}\subseteq\overline{I^{k}}\) is always valid, it suffices to prove that \(\overline{I^{k}}\subseteq I^{k}\). Let \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in\mathcal{G}(\overline{I^{k}})\), then \(v_{\mathbf{a}}^{*}(I)\geq k\) by Lemma 4.1(2). From the definition of \(v_{\mathbf{a}}^{*}(I)\) it follows that there exists a vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n-1})^{T}\in\mathbb{R}_{\geq 0}^{n-1}\) which satisfies the following system of inequalities \[(1)\qquad\left\{\begin{aligned} y_{1}+\cdots+y_{n-1}&\geq k,&&\hbox{\textcircled{1}}\\ \omega_{1}y_{1}&\leq a_{1},&&\hbox{\textcircled{2}}\\ \omega_{1}y_{1}+y_{2}+\cdots+y_{n-1}&\leq a_{n},&&\hbox{\textcircled{3}}\\ y_{i}&\leq a_{i}\ \text{ for }i=2,\ldots,n-1.&&\hbox{\textcircled{4}}\end{aligned}\right.\] Note first that \(a_{n}\geq\omega_{1}y_{1}+y_{2}+\cdots+y_{n-1}\geq y_{1}+\cdots+y_{n-1}\geq k\) by \(\hbox{\textcircled{1}}\) and \(\hbox{\textcircled{3}}\). We distinguish between the following two cases:

1. If \(a_{i}\geq k\) for some \(i\in[n-1]\backslash\{1\}\), then \((x_{i}x_{n})^{k}\) divides \(\mathbf{x^{a}}\), so \(\mathbf{x^{a}}\in I^{k}\).

2. If \(a_{i}<k\) for all \(i\in[n-1]\backslash\{1\}\). We consider the following three subcases: 1. If there exists some \(j\in[n-2]\backslash\{1\}\) such that \(a_{2}+\cdots+a_{j+1}=k\), then \(\mathbf{x^{a}}\) is divisible by \((x_{2}x_{n})^{a_{2}}(x_{3}x_{n})^{a_{3}}\cdots(x_{j}x_{n})^{a_{j}}(x_{j+1}x_{n})^{b_{j+1}}\) where \(b_{j+1}=k-(a_{2}+\cdots+a_{j})\), so that \(\mathbf{x^{a}}\in I^{k}\). 2. If there exists some \(j\in[n-2]\backslash\{1\}\) such that \(a_{2}+\cdots+a_{j+1}>k\), then we choose the maximum \(\ell\) such that \(a_{2}+\cdots+a_{\ell}\leq k\). In this case, \(\mathbf{x^{a}}\) is divisible by \((x_{2}x_{n})^{a_{2}}(x_{3}x_{n})^{a_{3}}\cdots(x_{\ell}x_{n})^{a_{\ell}}(x_{\ell+1}x_{n})^{b_{\ell+1}}\) where \(b_{\ell+1}=k-(a_{2}+\cdots+a_{\ell})\), so \(\mathbf{x^{a}}\in I^{k}\). 3. If \(a_{2}+\cdots+a_{n-1}<k\). In this case, let \(b=a_{2}+\cdots+a_{n-1}\), then \(y_{2}+\cdots+y_{n-1}\leq b<k\) by \(\hbox{\textcircled{4}}\). It follows from \(\hbox{\textcircled{1}}\) in system (1) that \[y_{1}\geq k-b. \tag{2}\] Therefore \(a_{1}\geq\omega_{1}y_{1}\geq\omega_{1}(k-b)\) by \(\hbox{\textcircled{2}}\) in system (1). By \(\hbox{\textcircled{3}}\) in system (1) and the inequality (2), we get \[\begin{aligned} a_{n}&\geq\omega_{1}y_{1}+y_{2}+\cdots+y_{n-1}\\ &=y_{1}+\cdots+y_{n-1}+(\omega_{1}-1)y_{1}\\ &\geq k+(\omega_{1}-1)(k-b)\\ &=b+\omega_{1}(k-b).\end{aligned}\] It follows that \(\mathbf{x^{a}}\) is divisible by \((x_{2}x_{n})^{a_{2}}\cdots(x_{n-1}x_{n})^{a_{n-1}}(x_{1}^{\omega_{1}}x_{n}^{\omega_{1}})^{k-b}\), so \(\mathbf{x^{a}}\in I^{k}\), since \((k-b)+a_{2}+\cdots+a_{n-1}=(k-b)+b=k\).

**Lemma 4.4**.: _Let \(n\geq 2\) be an integer and let \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in S\) be a monomial whose exponent vector \(\mathbf{a}=(a_{1},\ldots,a_{n})\) satisfies one of the following two conditions:_

1. \(n\geq 3\)_,_ \(a_{j}\geq a_{j-1}-a_{j-2}+\cdots+(-1)^{i-1}a_{j-i}+\cdots+(-1)^{j-2}a_{1}\) _for each_ \(j=2,\ldots,n-1\) _and_ \(a_{n}\leq a_{n-1}-a_{n-2}+\cdots+(-1)^{i-1}a_{n-i}+\cdots+(-1)^{n-2}a_{1}\)_._

2.
\(n=2r\) _and_ \(a_{2i-1}\geq a_{2i}\) _for each_ \(i=1,\ldots,r\)_._

_Suppose that a vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n})^{T}\in\mathbb{R}_{\geq 0}^{n}\) satisfies the following inequality system_ \[(2)\qquad\left\{\begin{aligned} y_{1}&\leq a_{1},\\ y_{1}+y_{2}&\leq a_{2},\\ y_{2}+y_{3}&\leq a_{3},\\ &\ \vdots\\ y_{n-2}+y_{n-1}&\leq a_{n-1},\\ y_{n-1}+y_{n}&\leq a_{n}.\end{aligned}\right.\] _Let \(h=\lceil y_{1}+\cdots+y_{n}\rceil\), then there exist at least \(h\) monomials \(e_{1},\ldots,e_{h}\in\{x_{1}x_{2},x_{2}x_{3},\ldots,x_{n-1}x_{n}\}\) such that \(\mathbf{x^{a}}\) is divisible by \(\prod_{i=1}^{h}e_{i}\)._

Proof.: (1) Let \(b_{j-1}=a_{j-1}-a_{j-2}+\cdots+(-1)^{i-1}a_{j-i}+\cdots+(-1)^{j-2}a_{1}\) for \(j=2,\ldots,n\), then by the assumption we have \(a_{j}\geq b_{j-1}\) for \(j=2,\ldots,n-1\), and \(a_{n}\leq b_{n-1}\). Meanwhile, we also get that \(b_{1}=a_{1}\), \(b_{j}+b_{j-1}=a_{j}\) for \(j=2,\ldots,n-1\), and \(a_{n}\leq a_{n-1}-b_{n-2}\). It follows that \(a_{n-1}\geq a_{n}+b_{n-2}\), and \(b_{j}\geq 0\) since \(a_{j}\geq b_{j-1}\). By comparing the exponents of each variable we find that \(\mathbf{x^{a}}\) is divisible by \((x_{1}x_{2})^{b_{1}}(x_{2}x_{3})^{b_{2}}\cdots(x_{n-2}x_{n-1})^{b_{n-2}}(x_{n-1}x_{n})^{a_{n}}\). From the inequality system (2) above, we see that if \(n=2r\) then \(b_{1}+\cdots+b_{n-2}+a_{n}=\sum\limits_{i=1}^{r}a_{2i}\geq y_{1}+\cdots+y_{n}\); if \(n=2r-1\) then \(b_{1}+\cdots+b_{n-2}+a_{n}=\sum\limits_{i=1}^{r}a_{2i-1}\geq y_{1}+\cdots+y_{n}\). In both cases, we always have \(b_{1}+\cdots+b_{n-2}+a_{n}\geq h\), as desired.

(2) If \(n=2r\) and \(a_{2i}\leq a_{2i-1}\) for each \(i=1,\ldots,r\), then it is clear that \(\mathbf{x^{a}}\) is divisible by \(\prod\limits_{i=1}^{r}(x_{2i-1}x_{2i})^{a_{2i}}\) and \(\sum\limits_{s=1}^{r}a_{2s}\geq y_{1}+\cdots+y_{n}\) by the system (2), thus \(\sum\limits_{s=1}^{r}a_{2s}\geq h\), as desired.

Applying similar techniques, we obtain the following lemma.

**Lemma 4.5**.: _Let \(n\geq 2\) be an integer and let \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in S\) be a monomial whose exponent vector \(\mathbf{a}=(a_{1},\ldots,a_{n})\) satisfies one of the following four conditions:_

1. \(a_{j}\geq a_{j-1}-a_{j-2}+\cdots+(-1)^{i-1}a_{j-i}+\cdots+(-1)^{j-2}a_{1}\) _for each_ \(j=2,\ldots,n\)_._
2. \(n\geq 3\)_,_ \(a_{j}\geq a_{j-1}-a_{j-2}+\cdots+(-1)^{i-1}a_{j-i}+\cdots+(-1)^{j-2}a_{1}\) _for each_ \(j=2,\ldots,n-1\) _and_ \(a_{n}\leq a_{n-1}-a_{n-2}+\cdots+(-1)^{i-1}a_{n-i}+\cdots+(-1)^{n-2}a_{1}\)_._
3. \(n=2r\) _and_ \(a_{2i-1}\geq a_{2i}\) _for each_ \(i\in[r]\)_._
4. \(n=2r-1\) _with_ \(r\geq 2\) _and_ \(a_{2i-1}\geq a_{2i}\) _for each_ \(i\in[r-1]\)_._

_Suppose that a vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n-1})^{T}\in\mathbb{R}_{\geq 0}^{n-1}\) satisfies the following inequality system_ \[\left\{\begin{aligned} y_{1}&\leq a_{1},\\ y_{1}+y_{2}&\leq a_{2},\\ y_{2}+y_{3}&\leq a_{3},\\ &\ \vdots\\ y_{n-2}+y_{n-1}&\leq a_{n-1},\\ y_{n-1}&\leq a_{n}.\end{aligned}\right. \tag{3}\] _Let \(h=\lceil y_{1}+\cdots+y_{n-1}\rceil\), then there exist at least \(h\) monomials \(e_{1},\ldots,e_{h}\in\{x_{1}x_{2},x_{2}x_{3},\ldots,x_{n-1}x_{n}\}\) such that \(\mathbf{x^{a}}\) is divisible by \(\prod_{i=1}^{h}e_{i}\)._

Proof.: (1) For each \(j=2,\ldots,n\), let \(b_{j-1}=a_{j-1}-a_{j-2}+\cdots+(-1)^{i-1}a_{j-i}+\cdots+(-1)^{j-2}a_{1}\), then \(b_{1}=a_{1}\) and \(b_{j}+b_{j-1}=a_{j}\) for all \(j=2,\ldots,n\). It follows that \(b_{j}\geq 0\) from the assumption \(a_{j}\geq b_{j-1}\).
By comparing the exponents of each variable we see that \(\mathbf{x^{a}}\) is divisible by \((x_{1}x_{2})^{b_{1}}(x_{2}x_{3})^{b_{2}}\cdots(x_{n-2}x_{n-1})^{b_{n-2}}(x_{n-1}x_{n})^{b_{n-1}}\). Note that \(\sum\limits_{i=1}^{n-1}b_{i}=b_{1}+(b_{2}+b_{3})+\cdots+(b_{n-2}+b_{n-1})=\sum\limits_{i=1}^{r}a_{2i-1}\geq y_{1}+\cdots+y_{n-1}\) when \(n=2r\), and \(\sum\limits_{i=1}^{n-1}b_{i}=(b_{1}+b_{2})+\cdots+(b_{n-2}+b_{n-1})=\sum\limits_{i=1}^{r-1}a_{2i}\geq y_{1}+\cdots+y_{n-1}\) when \(n=2r-1\). In both cases, we always have \(\sum\limits_{i=1}^{n-1}b_{i}\geq h\), as desired.

(2) and (3) can be shown by arguments similar to Lemma 4.4.

(4) If \(n=2r-1\) with \(r\geq 2\) and \(a_{2i-1}\geq a_{2i}\) for each \(i\in[r-1]\), then it is clear that \(\mathbf{x}^{\mathbf{a}}\) is divisible by \((x_{1}x_{2})^{a_{2}}(x_{3}x_{4})^{a_{4}}(x_{5}x_{6})^{a_{6}}\cdots(x_{n-4}x_{n-3})^{a_{n-3}}(x_{n-2}x_{n-1})^{a_{n-1}}\), and \(\sum\limits_{i=1}^{r-1}a_{2i}\geq y_{1}+\cdots+y_{n-1}\) by the system (3), which implies that \(\sum\limits_{i=1}^{r-1}a_{2i}\geq h\), as desired.

**Theorem 4.6**.: _Let \(n\geq 2\) be an integer and let \(\mathbf{x}^{\mathbf{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in S\) be a monomial with exponent vector \(\mathbf{a}=(a_{1},\ldots,a_{n})\). Suppose that a vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n-1})^{T}\in\mathbb{R}_{\geq 0}^{n-1}\) satisfies the following system of inequalities_ \[\left\{\begin{aligned} y_{1}&\leq a_{1},\\ y_{1}+y_{2}&\leq a_{2},\\ y_{2}+y_{3}&\leq a_{3},\\ &\ \vdots\\ y_{n-2}+y_{n-1}&\leq a_{n-1},\\ y_{n-1}&\leq a_{n}.\end{aligned}\right. \tag{4}\] _Let \(h=\lceil y_{1}+\cdots+y_{n-1}\rceil\), then there exist at least \(h\) monomials \(e_{1},\ldots,e_{h}\in\{x_{1}x_{2},x_{2}x_{3},\ldots,x_{n-1}x_{n}\}\) such that \(\mathbf{x}^{\mathbf{a}}\) is divisible by \(\prod_{i=1}^{h}e_{i}\)._

Proof.: If \(n=2\), then it is trivial. If \(n=3\), then by comparing the sizes of \(a_{1}\), \(a_{2}\) and \(a_{3}\), we see that \(\mathbf{a}=(a_{1},a_{2},a_{3})\) satisfies Lemma 4.5, so the desired result follows from Lemma 4.5. Now we assume that \(n\geq 4\). If \(\mathbf{a}=(a_{1},\ldots,a_{n})\) satisfies Lemma 4.5, then the desired result follows from Lemma 4.5. Otherwise, there are two subcases:

1. When \(a_{1}>a_{2}\). If \(n=2r\), then there exists some \(t\in[r-1]\) such that \(a_{2i-1}\geq a_{2i}\) for each \(i\in[t]\) and \(a_{2t+1}<a_{2t+2}\). In this case, the vector \((a_{1},\ldots,a_{2t})\) satisfies the assumption (2) of Lemma 4.4. Otherwise, if \(n=2r-1\), then there exists some \(t^{\prime}\in[r-2]\) such that \(a_{2i-1}\geq a_{2i}\) for each \(i\in[t^{\prime}]\) and \(a_{2t^{\prime}+1}<a_{2t^{\prime}+2}\). In this case, the vector \((a_{1},\ldots,a_{2t^{\prime}})\) satisfies the assumption (2) of Lemma 4.4.

2. If \(a_{1}\leq a_{2}\), then there exists some \(s\in[n-1]\) such that \(a_{j}\geq a_{j-1}-a_{j-2}+\cdots+(-1)^{i-1}a_{j-i}+\cdots+(-1)^{j-2}a_{1}\) for each \(j=2,\ldots,s-1\) and \(a_{s}\leq a_{s-1}-a_{s-2}+\cdots+(-1)^{i-1}a_{s-i}+\cdots+(-1)^{s-2}a_{1}\). In this case, the vector \((a_{1},\ldots,a_{s})\) satisfies the assumption (1) of Lemma 4.4. When \(a_{1}>a_{2}\), we let \(s=2t\) if \(n=2r\), and we set \(s=2t^{\prime}\) if \(n=2r-1\).
Thus the first \(s\) components of the vector \((a_{1},\ldots,a_{s},0,\ldots,0)\) satisfy the system (5) of inequalities below, and the last \(n-s\) components of the vector \((0,\ldots,0,a_{s+1},\ldots,a_{n})\) will be matched against the system (6) displayed alongside: \[(5)\quad\left\{\begin{aligned} y_{1}&\leq a_{1},\\ y_{1}+y_{2}&\leq a_{2},\\ y_{2}+y_{3}&\leq a_{3},\\ &\ \vdots\\ y_{s-2}+y_{s-1}&\leq a_{s-1},\\ y_{s-1}+y_{s}&\leq a_{s}.\end{aligned}\right.\qquad\qquad(6)\quad\left\{\begin{aligned} y_{s+1}&\leq a_{s+1},\\ y_{s+1}+y_{s+2}&\leq a_{s+2},\\ y_{s+2}+y_{s+3}&\leq a_{s+3},\\ &\ \vdots\\ y_{n-2}+y_{n-1}&\leq a_{n-1},\\ y_{n-1}&\leq a_{n}.\end{aligned}\right.\] Let \(\mathbf{c}=(a_{1},\ldots,a_{s})\), then the vector \(\mathbf{c}\) satisfies Lemma 4.4, so there exist at least \(h^{\prime}\) monomials \(e_{1},\ldots,e_{h^{\prime}}\in\{x_{1}x_{2},x_{2}x_{3},\ldots,x_{s-1}x_{s}\}\) such that \(\mathbf{x^{c}}\) is divisible by \(\prod_{i=1}^{h^{\prime}}e_{i}\), where \(h^{\prime}=\lceil y_{1}+\cdots+y_{s}\rceil\). Now we consider two subcases depending on whether \(s+1=n\) or not:

(a) If \(s+1=n\), then in this case we choose \(h=h^{\prime}\) and the result follows.

(b) If \(s+1<n\), then the last \(n-s\) components of the vector \((0,\ldots,0,a_{s+1},\ldots,a_{n})\) satisfy the above system (6). In this case, let \(\mathbf{d}=(a_{s+1},\ldots,a_{n})\). If the vector \(\mathbf{d}\) satisfies Lemma 4.5, then it follows from Lemma 4.5 that there exist at least \(h^{\prime\prime}\) monomials \(f_{1},\ldots,f_{h^{\prime\prime}}\in\{x_{s+1}x_{s+2},x_{s+2}x_{s+3},\ldots,x_{n-1}x_{n}\}\) such that \(\mathbf{x^{d}}\) is divisible by \(\prod_{i=1}^{h^{\prime\prime}}f_{i}\), where \(h^{\prime\prime}=\lceil y_{s+1}+\cdots+y_{n-1}\rceil\). Therefore \(\mathbf{x^{a}}\) is divisible by \((\prod_{i=1}^{h^{\prime}}e_{i})(\prod_{j=1}^{h^{\prime\prime}}f_{j})\) and \(h^{\prime}+h^{\prime\prime}\geq h\). Otherwise, by repeating the above discussion, we can decompose the set \(\{a_{s+1},a_{s+2},\ldots,a_{n}\}\) into a disjoint union of finitely many continuous segments, say \(t\) of them, such that for each \(i\in[t-1]\), the \(i\)-th continuous segment satisfies the assumption (1) or (2) in Lemma 4.4 and the \(t\)-th segment satisfies one of the four conditions in Lemma 4.5. Thus we can write \(\{a_{s+1},\ldots,a_{n}\}\) as \(\{a_{s+1},\ldots,a_{n}\}=\bigsqcup\limits_{i=1}^{t}C_{i}\), where \(C_{i}=\{a_{p_{i}+1},a_{p_{i}+2},\ldots,a_{p_{i+1}}\}\) for each \(i\in[t]\) with \(p_{1}=s\) and \(p_{t+1}=n\). Note that for each \(i\in[t-1]\), \(C_{i}\) satisfies the following system (7) of inequalities, and \(C_{t}\) satisfies the following system (8) of inequalities when \(|C_{t}|\geq 2\): \[(7)\quad\left\{\begin{aligned} y_{p_{i}+1}&\leq a_{p_{i}+1},\\ y_{p_{i}+1}+y_{p_{i}+2}&\leq a_{p_{i}+2},\\ y_{p_{i}+2}+y_{p_{i}+3}&\leq a_{p_{i}+3},\\ &\ \vdots\\ y_{p_{i+1}-2}+y_{p_{i+1}-1}&\leq a_{p_{i+1}-1},\\ y_{p_{i+1}-1}+y_{p_{i+1}}&\leq a_{p_{i+1}}.\end{aligned}\right.\qquad(8)\quad\left\{\begin{aligned} y_{p_{t}+1}&\leq a_{p_{t}+1},\\ y_{p_{t}+1}+y_{p_{t}+2}&\leq a_{p_{t}+2},\\ y_{p_{t}+2}+y_{p_{t}+3}&\leq a_{p_{t}+3},\\ &\ \vdots\\ y_{p_{t+1}-2}+y_{p_{t+1}-1}&\leq a_{p_{t+1}-1},\\ y_{p_{t+1}-1}&\leq a_{p_{t+1}}.\end{aligned}\right.\]
It follows from Lemma 4.4 that for each \(i\in[t-1]\), there exist at least \(h_{i}\) monomials \(u_{i1},\ldots,u_{ih_{i}}\in\{x_{p_{i}+1}x_{p_{i}+2},x_{p_{i}+2}x_{p_{i}+3},\ldots,x_{p_{i+1}-1}x_{p_{i+1}}\}\) such that \(\mathbf{x^{c_{i}}}\) is divisible by \(\prod_{j=1}^{h_{i}}u_{ij}\), where the exponent vector \(\mathbf{c_{i}}=(0,\ldots,0,a_{p_{i}+1},a_{p_{i}+2},\ldots,a_{p_{i+1}},0,\ldots,0)\) and each \(h_{i}=\lceil y_{p_{i}+1}+\cdots+y_{p_{i+1}}\rceil\). If \(|C_{t}|=1\), then in this case we have \(p_{t}+1=n\), which implies that \(h^{\prime}+\sum\limits_{i=1}^{t-1}h_{i}=\lceil y_{1}+\cdots+y_{s}\rceil+\sum\limits_{i=1}^{t-1}\lceil y_{p_{i}+1}+\cdots+y_{p_{i+1}}\rceil\geq h\) and \(\mathbf{x^{a}}\) is divisible by \((\prod_{i=1}^{h^{\prime}}e_{i})(\prod_{i=1}^{t-1}\prod_{j=1}^{h_{i}}u_{ij})\), since \(\mathbf{x^{a}}=\mathbf{x^{c}}(\prod_{i=1}^{t-1}\mathbf{x^{c_{i}}})x_{n}^{a_{n}}\). If \(|C_{t}|\geq 2\), then by Lemma 4.5, there exist at least \(h_{t}\) monomials \(u_{t1},\ldots,u_{th_{t}}\in\{x_{p_{t}+1}x_{p_{t}+2},x_{p_{t}+2}x_{p_{t}+3},\ldots,x_{n-1}x_{n}\}\) such that \(\mathbf{x^{c_{t}}}\) is divisible by \(\prod_{j=1}^{h_{t}}u_{tj}\), where \(h_{t}=\lceil y_{p_{t}+1}+\cdots+y_{n-1}\rceil\) and the exponent vector \(\mathbf{c_{t}}=(0,\ldots,0,a_{p_{t}+1},a_{p_{t}+2},\ldots,a_{n})\). In this case, \(h^{\prime}+\sum\limits_{i=1}^{t}h_{i}=\lceil y_{1}+\cdots+y_{s}\rceil+\sum\limits_{i=1}^{t-1}\lceil y_{p_{i}+1}+\cdots+y_{p_{i+1}}\rceil+\lceil y_{p_{t}+1}+\cdots+y_{n-1}\rceil\geq h\) and \(\mathbf{x^{a}}\) is divisible by \((\prod_{i=1}^{h^{\prime}}e_{i})(\prod_{i=1}^{t}\prod_{j=1}^{h_{i}}u_{ij})\). This completes the proof.

**Theorem 4.7**.: _Let \(C_{\omega}^{n}\) be a weighted cycle on the set \([n]\), where exactly three edges have non-trivial weights. Let \(I=I(C_{\omega}^{n})\) be the edge ideal of the cycle \(C_{\omega}^{n}\). If \(I\) is integrally closed, then \(I\) is normal._

Proof.: If \(C_{\omega}^{n}\) has exactly three edges with non-trivial weights, then by Theorem 3.6 we have \(n=6\), and no two of these non-trivially weighted edges share a common vertex. By symmetry, let \(E(C_{\omega}^{n})=\{e_{1},\ldots,e_{6}\}\) with each \(\omega_{i}=\omega(e_{i})\), where \(e_{i}=\{i,i+1\}\) for \(i\in[5]\) and \(e_{6}=\{6,1\}\). Then we can assume that \(\omega_{1},\omega_{3},\omega_{5}\geq 2\) and \(\omega_{2}=\omega_{4}=\omega_{6}=1\). We will prove that \(\overline{I^{k}}=I^{k}\) for all \(k\geq 2\). Since \(I^{k}\subseteq\overline{I^{k}}\) is always valid, it suffices to prove that \(\overline{I^{k}}\subseteq I^{k}\). Let \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{6}^{a_{6}}\in\mathcal{G}(\overline{I^{k}})\), then \(v_{\mathbf{a}}^{*}(I)\geq k\) by Lemma 4.1(2). It follows from the definition of \(v_{\mathbf{a}}^{*}(I)\) that there exists a vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{6})^{T}\in\mathbb{R}_{\geq 0}^{6}\) satisfying the following system (9) of inequalities \[\left\{\begin{aligned} y_{1}+\cdots+y_{6}&\geq k,&&\hbox{\textcircled{1}}\\ \omega_{1}y_{1}+y_{6}&\leq a_{1},&&\hbox{\textcircled{2}}\\ \omega_{1}y_{1}+y_{2}&\leq a_{2},&&\hbox{\textcircled{3}}\\ y_{2}+\omega_{3}y_{3}&\leq a_{3},&&\hbox{\textcircled{4}}\\ \omega_{3}y_{3}+y_{4}&\leq a_{4},&&\hbox{\textcircled{5}}\\ y_{4}+\omega_{5}y_{5}&\leq a_{5},&&\hbox{\textcircled{6}}\\ \omega_{5}y_{5}+y_{6}&\leq a_{6}.&&\hbox{\textcircled{7}}\end{aligned}\right.\tag{9}\]
In this case, we get \(a_{2}+a_{3}\geq\omega_{1}y_{1}+2y_{2}+\omega_{3}y_{3}\geq 2(y_{1}+y_{2}+y_{3})\) by \(\hbox{\textcircled{3}}\) and \(\hbox{\textcircled{4}}\) in system (9). Similarly, \(a_{4}+a_{5}\geq 2(y_{3}+y_{4}+y_{5})\) and \(a_{1}+a_{6}\geq 2(y_{1}+y_{5}+y_{6})\). Due to symmetry, we only need to prove that \(\mathbf{x^{a}}\in I^{k}\) provided that \(a_{2}+a_{3}\geq 2(y_{1}+y_{2}+y_{3})\). We distinguish between the following two cases:

(I) If \(a_{2},a_{3}\geq y_{1}+y_{2}+y_{3}\), then \(a_{2},a_{3}\geq\lceil y_{1}+y_{2}+y_{3}\rceil\). By the system (9), we have \(a_{4}\geq\lceil y_{4}\rceil\), \(a_{5}\geq\omega_{5}\lfloor y_{5}\rfloor+\lceil y_{4}\rceil\), \(a_{6}\geq\omega_{5}\lfloor y_{5}\rfloor+\lceil y_{6}\rceil\) and \(a_{1}\geq\lceil y_{6}\rceil\), so \(\mathbf{x^{a}}\) is divisible by \((x_{2}x_{3})^{\lceil y_{1}+y_{2}+y_{3}\rceil}(x_{4}x_{5})^{\lceil y_{4}\rceil}(x_{5}^{\omega_{5}}x_{6}^{\omega_{5}})^{\lfloor y_{5}\rfloor}(x_{6}x_{1})^{\lceil y_{6}\rceil}\). Note that \(\lceil y_{1}+y_{2}+y_{3}\rceil+\lceil y_{4}\rceil+y_{5}+\lceil y_{6}\rceil\geq k\), since \(y_{1}+\cdots+y_{6}\geq k\). It follows that \(y_{5}\geq k-(\lceil y_{1}+y_{2}+y_{3}\rceil+\lceil y_{4}\rceil+\lceil y_{6}\rceil)\), which forces that \(\lfloor y_{5}\rfloor\geq k-(\lceil y_{1}+y_{2}+y_{3}\rceil+\lceil y_{4}\rceil+\lceil y_{6}\rceil)\), since \(k-(\lceil y_{1}+y_{2}+y_{3}\rceil+\lceil y_{4}\rceil+\lceil y_{6}\rceil)\) is an integer. Therefore, \(\lceil y_{1}+y_{2}+y_{3}\rceil+\lceil y_{4}\rceil+\lfloor y_{5}\rfloor+\lceil y_{6}\rceil\geq k\), which gives the desired result.

(II) If \(a_{2}\geq y_{1}+y_{2}+y_{3}>a_{3}\), or \(a_{3}\geq y_{1}+y_{2}+y_{3}>a_{2}\). By symmetry, it suffices to prove that \(\mathbf{x^{a}}\in I^{k}\) provided that \(a_{2}\geq y_{1}+y_{2}+y_{3}>a_{3}\). In this case, we get \(y_{1}=y_{1}+y_{2}+y_{3}-(y_{2}+y_{3})\geq(y_{1}+y_{2}+y_{3})-a_{3}\) by \(\hbox{\textcircled{4}}\) in system (9). It follows from \(\hbox{\textcircled{2}}\) in system (9) that \(a_{1}\geq\omega_{1}y_{1}+y_{6}\geq\omega_{1}(y_{1}+y_{2}+y_{3}-a_{3})+y_{6}\). So by Remark 4.2(2), we get \[a_{1}\geq\omega_{1}\lfloor y_{1}+y_{2}+y_{3}-a_{3}\rfloor+\lceil y_{6}\rceil. \tag{3}\] On the other hand, from \(\hbox{\textcircled{3}}\) and \(\hbox{\textcircled{4}}\) in system (9), we have \[\begin{aligned} a_{2}+a_{3}&\geq\omega_{1}y_{1}+2y_{2}+\omega_{3}y_{3}\\ &=\omega_{1}(y_{1}+y_{2}+y_{3}-a_{3})+\omega_{1}(a_{3}-y_{2}-y_{3})+2y_{2}+\omega_{3}y_{3}\\ &\geq\omega_{1}(y_{1}+y_{2}+y_{3}-a_{3})+2(a_{3}-y_{2}-y_{3})+2y_{2}+2y_{3}\\ &\geq\omega_{1}(y_{1}+y_{2}+y_{3}-a_{3})+2a_{3},\end{aligned}\] which forces that \(a_{2}\geq\omega_{1}(y_{1}+y_{2}+y_{3}-a_{3})+a_{3}\), so we have \[a_{2}\geq\omega_{1}\lfloor y_{1}+y_{2}+y_{3}-a_{3}\rfloor+a_{3}. \tag{4}\] We consider the following two subcases:

(a) If \(a_{4}\geq a_{5}\), then \(\mathbf{x^{a}}\) is divisible by \((x_{1}x_{2})^{\omega_{1}\lfloor y_{1}+y_{2}+y_{3}-a_{3}\rfloor}(x_{2}x_{3})^{a_{3}}(x_{4}x_{5})^{a_{5}}(x_{6}x_{1})^{\lceil y_{6}\rceil}\) by system (9). By \(\hbox{\textcircled{6}}\) in system (9), we have \(a_{5}+\lceil y_{6}\rceil\geq y_{4}+y_{5}+y_{6}\). Thus \((y_{1}+y_{2}+y_{3}-a_{3})+a_{3}+a_{5}+\lceil y_{6}\rceil\geq y_{1}+y_{2}+y_{3}+y_{4}+y_{5}+y_{6}\geq k\), i.e., \(y_{1}+y_{2}+y_{3}-a_{3}\geq k-(a_{3}+a_{5}+\lceil y_{6}\rceil)\). This yields that \(\lfloor y_{1}+y_{2}+y_{3}-a_{3}\rfloor\geq k-(a_{3}+a_{5}+\lceil y_{6}\rceil)\), i.e., \(\lfloor y_{1}+y_{2}+y_{3}-a_{3}\rfloor+a_{3}+a_{5}+\lceil y_{6}\rceil\geq k\), so \(\mathbf{x^{a}}\in I^{k}\).

(b) If \(a_{4}<a_{5}\).
We will prove that \(\mathbf{x^{a}}\in I^{k}\) in the following two scenarios:

(i) If \(a_{6}\geq y_{5}+y_{6}+y_{1}>a_{1}\), then by arguments similar to those for the case \(a_{2}\geq y_{1}+y_{2}+y_{3}>a_{3}\) and \(a_{4}\geq a_{5}\), it follows that \(\mathbf{x^{a}}\) is divisible by \((x_{5}x_{6})^{\omega_{5}\lfloor y_{1}+y_{5}+y_{6}-a_{1}\rfloor}(x_{1}x_{6})^{a_{1}}(x_{2}x_{3})^{a_{3}}(x_{4}x_{5})^{\lceil y_{4}\rceil}\) and \(\lfloor y_{1}+y_{5}+y_{6}-a_{1}\rfloor+a_{1}+a_{3}+\lceil y_{4}\rceil\geq k\).

(ii) If \(a_{1}\geq y_{5}+y_{6}+y_{1}>a_{6}\), then again by arguments similar to those for the case \(a_{2}\geq y_{1}+y_{2}+y_{3}>a_{3}\) and \(a_{4}\geq a_{5}\), we get that \(\mathbf{x^{a}}\) is divisible by \((x_{1}x_{2})^{\omega_{1}\lfloor y_{1}+y_{5}+y_{6}-a_{6}\rfloor}(x_{1}x_{6})^{a_{6}}(x_{4}x_{5})^{a_{4}}(x_{2}x_{3})^{\lceil y_{2}\rceil}\) and \(\lfloor y_{1}+y_{5}+y_{6}-a_{6}\rfloor+a_{6}+a_{4}+\lceil y_{2}\rceil\geq k\).

In both cases, we always have \(\mathbf{x^{a}}\in I^{k}\). \(\square\)

**Theorem 4.8**.: _Let \(C_{\omega}^{n}\) be a weighted cycle on the vertex set \([n]\), where exactly two edges have non-trivial weights. Let \(I=I(C_{\omega}^{n})\) be the edge ideal of the cycle \(C_{\omega}^{n}\). If \(I\) is integrally closed, then \(I\) is normal._

Proof.: Let \(E(C_{\omega}^{n})=\{e_{1},\ldots,e_{n}\}\) with each \(\omega_{i}=\omega(e_{i})\) and \(e_{i}=\{i,i+1\}\), where \(i\) is identified with the integer \(0<j\leq n\) such that \(j\equiv i\pmod{n}\). Since \(C_{\omega}^{n}\) has exactly two edges with non-trivial weights, we can assume by symmetry that \(\omega_{1},\omega_{3}\geq 2\) and \(\omega_{i}=1\) for \(i\in[n]\) with \(i\neq 1,3\). In this case, we will prove that \(\overline{I^{k}}=I^{k}\) for all \(k\geq 2\). Since \(I^{k}\subseteq\overline{I^{k}}\) is always valid, it suffices to prove that \(\overline{I^{k}}\subseteq I^{k}\). Let \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in\mathcal{G}(\overline{I^{k}})\), then \(v_{\mathbf{a}}^{*}(I)\geq k\) by Lemma 4.1(2). It follows from the definition of \(v_{\mathbf{a}}^{*}(I)\) that there exists a vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n})^{T}\in\mathbb{R}_{\geq 0}^{n}\) which satisfies the system (10) of inequalities below; the auxiliary system (11), displayed alongside, will be used in what follows: \[(10)\quad\left\{\begin{aligned} y_{1}+\cdots+y_{n}&\geq k,&&\hbox{\textcircled{1}}\\ \omega_{1}y_{1}+y_{n}&\leq a_{1},&&\hbox{\textcircled{2}}\\ \omega_{1}y_{1}+y_{2}&\leq a_{2},&&\hbox{\textcircled{3}}\\ y_{2}+\omega_{3}y_{3}&\leq a_{3},&&\hbox{\textcircled{4}}\\ \omega_{3}y_{3}+y_{4}&\leq a_{4},&&\hbox{\textcircled{5}}\\ y_{4}+y_{5}&\leq a_{5},&&\hbox{\textcircled{6}}\\ &\ \vdots\\ y_{n-2}+y_{n-1}&\leq a_{n-1},\\ y_{n-1}+y_{n}&\leq a_{n}.\end{aligned}\right.\qquad(11)\quad\left\{\begin{aligned} y_{4}&\leq a_{4},\\ y_{4}+y_{5}&\leq a_{5},\\ y_{5}+y_{6}&\leq a_{6},\\ &\ \vdots\\ y_{n-1}+y_{n}&\leq a_{n},\\ y_{n}&\leq a_{1}.\end{aligned}\right.\] Let \(h=\lceil y_{4}+\cdots+y_{n}\rceil\). Since the vector \((y_{4},y_{5},\ldots,y_{n})^{T}\) satisfies the above system (11) of inequalities (by \(\hbox{\textcircled{2}}\) and \(\hbox{\textcircled{5}}\) in system (10)), it follows from Theorem 4.6 that there are at least \(h\) monomials \(f_{i1},\ldots,f_{ih}\in\{x_{4}x_{5},\ldots,x_{n-1}x_{n},x_{n}x_{1}\}\) such that \(\mathbf{x^{b}}\) is divisible by \(\prod_{j=1}^{h}f_{ij}\), where \(\mathbf{b}=(a_{1},0,0,a_{4},\ldots,a_{n})\). If \(h\geq k\), then \(\mathbf{x^{b}}\in I^{k}\), which forces \(\mathbf{x^{a}}\in I^{k}\). In the following, we assume that \(h<k\).
One has \[a_{2}+a_{3}\geq\omega_{1}y_{1}+2y_{2}+\omega_{3}y_{3}\geq 2(y_{1}+y_{2}+y_{3})\geq 2(k-h).\] We distinguish between the two cases:

(i) If \(a_{2},a_{3}\geq k-h\), then \((x_{2}x_{3})^{k-h}|x_{2}^{a_{2}}x_{3}^{a_{3}}\). Since \(\mathbf{x^{a}}=(x_{2}^{a_{2}}x_{3}^{a_{3}})\mathbf{x^{b}}\) and \(\mathbf{x^{b}}\) is divisible by \(\prod_{j=1}^{h}f_{ij}\), it follows that \(\mathbf{x^{a}}\) is divisible by \((x_{2}x_{3})^{k-h}\prod_{j=1}^{h}f_{ij}\), which implies that \(\mathbf{x^{a}}\in I^{k}\).

(ii) If \(a_{2}\geq k-h>a_{3}\) or \(a_{3}\geq k-h>a_{2}\). We can assume that \(a_{2}\geq k-h>a_{3}\) by symmetry. In this case, we have \(k-h>a_{3}\geq y_{2}+\omega_{3}y_{3}\geq y_{2}+y_{3}\). Hence \(y_{1}\geq k-(y_{2}+y_{3})-(y_{4}+\cdots+y_{n})\geq k-a_{3}-(y_{4}+\cdots+y_{n})\geq k-a_{3}-h\). It follows that \(a_{1}\geq\omega_{1}y_{1}+y_{n}\geq\omega_{1}(k-a_{3}-h)+y_{n}\). On the other hand, \(a_{2}+a_{3}\geq\omega_{1}y_{1}+2y_{2}+\omega_{3}y_{3}\geq\omega_{1}y_{1}+2y_{2}+2y_{3}\geq\omega_{1}(k-a_{3}-h)+\omega_{1}(y_{1}+a_{3}+h-k)+2(y_{2}+y_{3})\geq\omega_{1}(k-h-a_{3})+2a_{3}\). This implies that \(a_{2}\geq\omega_{1}(k-h-a_{3})+a_{3}\). Note that the vector \((y_{4},y_{5},\ldots,y_{n})^{T}\) also satisfies the above system (11) after replacing \(a_{1}\) by \(a_{1}-\omega_{1}(k-h-a_{3})\). It follows from Theorem 4.6 that there are at least \(h\) monomials \(g_{i1},\ldots,g_{ih}\in\{x_{4}x_{5},\ldots,x_{n-1}x_{n},x_{n}x_{1}\}\) such that \(\mathbf{x^{c}}\) is divisible by \(\prod_{j=1}^{h}g_{ij}\), where \(\mathbf{c}=(a_{1}-\omega_{1}(k-h-a_{3}),0,0,a_{4},\ldots,a_{n})\). Therefore \(\mathbf{x^{a}}\) is divisible by \((x_{1}^{\omega_{1}}x_{2}^{\omega_{1}})^{k-h-a_{3}}(x_{2}x_{3})^{a_{3}}(\prod_{j=1}^{h}g_{ij})\), so \(\mathbf{x^{a}}\in I^{k}\).

**Theorem 4.9**.: _Let \(C_{\omega}^{n}\) be a weighted cycle on the vertex set \([n]\), and let \(I=I(C_{\omega}^{n})\) be its edge ideal. If \(I\) is integrally closed, then \(I\) is normal._

Proof.: Let \(E(C_{\omega}^{n})=\{e_{1},\ldots,e_{n}\}\) with each \(\omega_{i}=\omega(e_{i})\), where \(e_{i}=\{i,i+1\}\) for \(i\in[n-1]\) and \(e_{n}=\{1,n\}\). Since \(I\) is integrally closed, \(C_{\omega}^{n}\) has at most three edges with non-trivial weights by Theorem 3.6. If \(C_{\omega}^{n}\) is a trivially weighted cycle, then \(I\) is normal by [15, Proposition 2.1 and Corollary 2.8] and [8, Proposition 2.1.2]. Now we assume that \(C_{\omega}^{n}\) has at least one edge with non-trivial weight. In this case, we will prove that \(\overline{I^{k}}=I^{k}\) for all \(k\geq 2\). Since \(I^{k}\subseteq\overline{I^{k}}\) is always valid, it suffices to prove that \(\overline{I^{k}}\subseteq I^{k}\). We divide this into the following two cases:

(1) If \(C_{\omega}^{n}\) has exactly three edges or exactly two edges with non-trivial weights, then \(\overline{I^{k}}\subseteq I^{k}\) by Theorem 4.7 and Theorem 4.8, respectively.

(2) If \(C_{\omega}^{n}\) has only one edge with non-trivial weight, then we can assume by symmetry that \(\omega_{1}\geq 2\) and \(\omega_{i}=1\) for any \(i=2,\ldots,n\). In this case, let \(\mathbf{x^{a}}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\in\mathcal{G}(\overline{I^{k}})\), then \(v_{\mathbf{a}}^{*}(I)\geq k\) by Lemma 4.1(2).
It follows from the definition of \(v_{\mathbf{a}}^{*}(I)\) that there exists a vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n})^{T}\in\mathbb{R}_{\geq 0}^{n}\) which satisfies the system (12) of inequalities below; the auxiliary system (13), displayed alongside, will be used in what follows: \[(12)\quad\left\{\begin{aligned} y_{1}+\cdots+y_{n}&\geq k,&&\hbox{\textcircled{1}}\\ \omega_{1}y_{1}+y_{n}&\leq a_{1},&&\hbox{\textcircled{2}}\\ \omega_{1}y_{1}+y_{2}&\leq a_{2},&&\hbox{\textcircled{3}}\\ y_{2}+y_{3}&\leq a_{3},&&\hbox{\textcircled{4}}\\ y_{3}+y_{4}&\leq a_{4},&&\hbox{\textcircled{5}}\\ y_{4}+y_{5}&\leq a_{5},&&\hbox{\textcircled{6}}\\ &\ \vdots\\ y_{n-2}+y_{n-1}&\leq a_{n-1},\\ y_{n-1}+y_{n}&\leq a_{n}.\end{aligned}\right.\qquad(13)\quad\left\{\begin{aligned} y_{2}&\leq a_{2},\\ y_{2}+y_{3}&\leq a_{3},\\ y_{3}+y_{4}&\leq a_{4},\\ y_{4}+y_{5}&\leq a_{5},\\ &\ \vdots\\ y_{n-2}+y_{n-1}&\leq a_{n-1},\\ y_{n-1}+y_{n}&\leq a_{n},\\ y_{n}&\leq a_{1}.\end{aligned}\right.\] Let \(h=\lceil y_{2}+\cdots+y_{n}\rceil\). We distinguish between the following two cases:

(1) If \(h\geq k\), then the vector \(\mathbf{y}^{\prime}=(y_{2},\ldots,y_{n})^{T}\in\mathbb{R}_{\geq 0}^{n-1}\) satisfies the system (13) of inequalities. It follows from Theorem 4.6 that there exist at least \(h\) monomials \(f_{1},\ldots,f_{h}\in\{x_{2}x_{3},\ldots,x_{n-1}x_{n},x_{n}x_{1}\}\) such that \(\mathbf{x^{a}}\) is divisible by \(\prod_{i=1}^{h}f_{i}\), so \(\mathbf{x^{a}}\in I^{k}\).

(2) If \(h<k\), then \(y_{1}\geq k-h\) by \(\hbox{\textcircled{1}}\) in system (12). This implies that \[a_{1}\geq\omega_{1}(k-h)+y_{n}\ \text{ and }\ a_{2}\geq\omega_{1}(k-h)+y_{2}.\] Thus the vector \(\mathbf{y}^{\prime}=(y_{2},\ldots,y_{n})^{T}\in\mathbb{R}_{\geq 0}^{n-1}\) satisfies the system (13) after replacing \(a_{1}\) by \(a_{1}-\omega_{1}(k-h)\) and \(a_{2}\) by \(a_{2}-\omega_{1}(k-h)\). By Theorem 4.6, there exist at least \(h\) monomials \(g_{1},\ldots,g_{h}\in\{x_{2}x_{3},\ldots,x_{n-1}x_{n},x_{n}x_{1}\}\) such that \(\mathbf{x^{b}}\) is divisible by \(\prod_{i=1}^{h}g_{i}\), where \(\mathbf{b}=(a_{1}-\omega_{1}(k-h),a_{2}-\omega_{1}(k-h),a_{3},\ldots,a_{n})\). So \(\mathbf{x^{a}}\) is divisible by \((x_{1}^{\omega_{1}}x_{2}^{\omega_{1}})^{k-h}\prod_{i=1}^{h}g_{i}\). Therefore, \(\mathbf{x^{a}}\in I^{k}\). \(\square\)

**Theorem 4.10**.: _Let \(L_{\omega}^{n}\) be a weighted path on the set \([n]\), and let \(I=I(L_{\omega}^{n})\) be its edge ideal. If \(I\) is integrally closed, then \(I\) is normal._

Proof.: Let \(E(L_{\omega}^{n})=\{e_{1},\ldots,e_{n-1}\}\) with each \(\omega_{i}=\omega(e_{i})\) and \(e_{i}=\{i,i+1\}\). Note that \(L_{\omega}^{n}\) is an induced subgraph of the cycle \(C_{\omega}^{n+1}\), where \(E(C_{\omega}^{n+1})=E(L_{\omega}^{n})\cup\{e_{n},e_{n+1}\}\) with \(e_{n}=\{n,n+1\}\), \(e_{n+1}=\{1,n+1\}\) and \(\omega(e_{n})=\omega(e_{n+1})=1\). Since \(I\) is integrally closed, \(I(C_{\omega}^{n+1})\) is also integrally closed by Theorem 3.6. Then by Theorem 4.9, we get that \(I(C_{\omega}^{n+1})\) is normal, which implies that \(I\) is normal by Remark 3.3. \(\square\)

**Acknowledgments** This research is supported by the Natural Science Foundation of Jiangsu Province (No. BK20221353). The authors are grateful to the computer algebra system Normaliz [3], which provided a large number of examples.

**Data availability statement** The data used to support the findings of this study are included within the article.

**Conflict of interest statement** All authors declare that they have no conflicts of interest related to this work.
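The membership criteria in Lemma 4.1 are also easy to experiment with on small instances. The following is a brute-force computational sketch of ours; it is not the Normaliz computation acknowledged above, it assumes the scipy library is available, and the integer search in `v_int` is only meant for tiny examples.

```python
# Brute-force sketch (ours, not the Normaliz computation) of the
# membership tests in Lemma 4.1:
#   x^a in I^k              iff  v_a(I)  >= k   (integer program),
#   x^a in closure of I^k   iff  v*_a(I) >= k   (rational relaxation).
from itertools import product
from scipy.optimize import linprog

def v_star(M, a):
    """v*_a(I): max 1.y subject to M y <= a, y >= 0 over the reals."""
    m = len(M[0])
    # linprog minimizes, so maximize 1.y by minimizing -1.y
    res = linprog(c=[-1.0] * m, A_ub=M, b_ub=a, bounds=[(0, None)] * m)
    return -res.fun if res.success else 0.0

def v_int(M, a):
    """v_a(I): the same program over nonnegative integers (brute force)."""
    n, m = len(M), len(M[0])
    best = 0
    for y in product(range(max(a) + 1), repeat=m):
        if all(sum(M[i][j] * y[j] for j in range(m)) <= a[i] for i in range(n)):
            best = max(best, sum(y))
    return best

# Example: the path P^3 of Lemma 3.4(1) with w1 = w2 = 2, so that
# G(I) = {x1^2 x2^2, x2^2 x3^2} and f = x1 x2^4 x3.
M = [[2, 0],   # exponents of x1 in the two generators
     [2, 2],   # exponents of x2
     [0, 2]]   # exponents of x3
a = [1, 4, 1]
print(v_int(M, a))   # 0   -> f is not in I
print(v_star(M, a))  # 1.0 -> f lies in the integral closure of I
```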
2301.12568
Schema-Guided Semantic Accuracy: Faithfulness in Task-Oriented Dialogue Response Generation
Ensuring that generated utterances are faithful to dialogue actions is crucial for Task-Oriented Dialogue Response Generation. Slot Error Rate (SER) only partially measures generation quality in that it solely assesses utterances generated from non-categorical slots whose values are expected to be reproduced exactly. Utterances generated from categorical slots, which are more variable, are not assessed by SER. We propose Schema-Guided Semantic Accuracy (SGSAcc) to evaluate utterances generated from both categorical and non-categorical slots by recognizing textual entailment. We show that SGSAcc can be applied to evaluate utterances generated from a wide range of dialogue actions in the Schema Guided Dialogue (SGD) dataset with good agreement with human judgment. We also identify a previously overlooked weakness in generating faithful utterances from categorical slots in unseen domains. We show that prefix tuning applied to T5 generation can address this problem. We further build an ensemble of prefix-tuning and fine-tuning models that achieves the lowest SER reported and high SGSAcc on the SGD dataset.
Jinghong Chen, Weizhe Lin, Bill Byrne
2023-01-29T22:32:48Z
http://arxiv.org/abs/2301.12568v1
# Schema-Guided Semantic Accuracy: Faithfulness in Task-Oriented Dialogue Response Generation ###### Abstract Ensuring that generated utterances are faithful to dialogue actions is crucial for Task-Oriented Dialogue Response Generation. Slot Error Rate (SER) only partially measures generation quality in that it solely assesses utterances generated from non-categorical slots whose values are expected to be reproduced exactly. Utterances generated from categorical slots, which are more variable, are not assessed by SER. We propose Schema-Guided Semantic Accuracy (SGSAcc) to evaluate utterances generated from both categorical and non-categorical slots by recognizing textual entailment. We show that SGSAcc can be applied to evaluate utterances generated from a wide range of dialogue actions in the Schema Guided Dialogue (SGD) dataset with good agreement with human judgment. We also identify a previously overlooked weakness in generating faithful utterances from categorical slots in unseen domains. We show that prefix tuning applied to T5 generation can address this problem. We further build an ensemble of prefix-tuning and fine-tuning models that achieves the lowest SER reported and high SGSAcc on the SGD dataset. ## 1 Introduction Task-oriented dialogue response generation aims to generate accurate and fluent utterances from triplets of intent, slot, and values known as dialogue actions (See Fig.2). Ensuring that the generated utterances faithfully realize dialogue actions is crucial because misinformation can be costly in real-life applications. However, the lack of a complete, automatic faithfulness metric for task-oriented dialogue NLG has made assessing faithfulness difficult. Slot Error Rate (SER) Luong et al. (2015), the most widely used faithfulness metric currently, can only assess utterances generated from non-categorical slots whose values are expected to be reproduced exactly in generations by string matching, omitting utterances generated from categorical slots such as "(kids_friendly, True)". However, dialogue actions with categorical slots are present in 28% of the test instances in the Schema Guided Dialogue (SGD) dataset Rastogi et al. (2020), which is the largest dataset for multi-domain task-oriented dialogue system so far. Therefore, it is essential to include them for complete faithfulness evaluation. To cover categorical slots in evaluation, we propose Schema-Guided Semantic Accuracy (SGSAcc) that examines semantic consistency rather than string overlap. We build upon Semantic Accuracy Dusek and Kasner (2020), which evaluates faithfulness in table-to-text tasks by recognizing textual entailment (RTE). A natural language inference (NLI) model is used to check whether the premise (generated utterances) entails, contradicts, or is neutral to the hypothesis (dialogue actions). Following their design, we first convert the dialogue actions into fluent sentences to serve as the hypothesis, so that the NLI model trained on free-running texts can perform RTE correctly without further fine-tuning. We name the converted sentences _entailment reference_ to emphasize their role in entailment checking. We find that the original Semantic Accuracy cannot be directly applied on task-oriented response generation because it requires handcrafted templates for each dialogue action to produce entailment references. Although Kale and Rastogi (2020) have published templates for the SGD, they were designed for generation rather than evaluation. 
We find that Semantic Accuracy using these templates marks 25% of the ground-truth utterances as unfaithful. To cover the 45 services in the SGD and the 225 service variations in the SGD-X without prohibitive labor, we propose a rule-based algorithm that constructs entailment references based on slot descriptions from service schema, which are provided in the SGD Kale and Rastogi (2020) and other popular task-oriented dialogue datasets such as MultiWOZ 2.2 (Zang et al., 2020). We were able to design the rules efficiently within 20 working hours for the SGD (Appendix C). In addition, to help resolve co-references, which are prevalent in dialogues, we augment the premise (generation to be assessed) with previous dialogue turns and slot descriptions when needed. We verified that SGSAcc has good agreement with human judgments of faithfulness. We applied SGSAcc to evaluate the best-performing published model in terms of SER on the SGD. We found a previously overlooked weakness in generating faithful utterances in domains not seen in training. To address this, we experimented with prefix-tuning (PT) (Li and Liang, 2021) which was reported to generalize better than fine-tuning (FT) on unseen data (Li and Liang, 2021; Clive et al., 2021). We found that PT significantly improved SGSAcc in unseen domains, whereas the FT model achieved lower SER in comparison. Noting their complementary advantages, we used SGSAcc to implement a fidelity reranker similar to that from Harkous et al. (2020) to select faithful generations from an ensemble of PT and FT models, which further improved SER and SGSAcc on the SGD. Our contributions are summarised as follows: (1) We propose SGSAcc, a faithfulness metric tailored to task-oriented dialogue systems that evaluate both categorical and non-categorical slots. (2) We empirically show that prefix-tuning significantly improves faithfulness in unseen domains on the Schema Guided Dialogue dataset. (3) We build an ensemble of fine-tuning and prefix-tuning models using a SGSAcc-powered fidelity reranker, which significantly improves faithfulness in the NLG task of the SGD dataset. ## 2 Schema-Guided Semantic Accuracy A generation is considered faithful if all dialogue actions are faithfully realized. For each dialogue action, faithfulness is evaluated through an NLI model, which checks whether the generated utterance entails the hypothesis text constructed from this dialogue action. The generation is considered faithful if "entailment" attains the highest probability amongst the three NLI output classes -- "entailment", "neutral", and "contradiction" (Fig.2 3). We use RoBERTa (Liu et al., 2019) without further fine-tuning as our NLI model (Appendix E). To obtain the entailment reference from the dialogue action, we first construct multiple candidate references using a set of rules designed for each type of dialogue action (Fig.2 1). For example, two candidate references are constructed from the dialogue action "INFORM(kids_friendly=True)" by directly expanding the slot name into "_Is kids friendly._" and rephrasing the slot description from schema to an answered question "_Whether the place is kids friendly? Yes._" (See Appendix B for detailed rules). Then, using the ground-truth utterance as the premise, we select the candidate with the highest entailment score as the entailment reference. 
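To make the selection step concrete, the following is a minimal sketch of how a candidate reference might be chosen with an off-the-shelf NLI model. The checkpoint name `roberta-large-mnli`, its label order, and the helper names are our assumptions for illustration only; the exact NLI configuration used in this work is specified in Appendix E.

```python
# Minimal sketch (ours) of entailment-reference selection.  Assumption:
# the off-the-shelf `roberta-large-mnli` checkpoint with label order
# {0: contradiction, 1: neutral, 2: entailment}.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(MODEL)
nli = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()
ENTAILMENT = 2  # index of the entailment class for this checkpoint

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis`."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, ENTAILMENT].item()

def select_reference(ground_truth: str, candidates: list) -> str:
    """Keep the candidate the ground-truth utterance entails most strongly."""
    return max(candidates, key=lambda c: entailment_prob(ground_truth, c))

# Example for the dialogue action INFORM(kids_friendly=True):
candidates = ["Is kids friendly.",
              "Whether the place is kids friendly? Yes."]
reference = select_reference("It is a nice, kid-friendly place.", candidates)
```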
Optionally, we further check that the NLI model yields entailment for at least one of the candidate references and yields non-entailment for all negative references constructed from "tampered" dialogue actions with incorrect slot values (Fig.2 2). Instances that fail the check are excluded from evaluation. We call this optional checking _validation_. Note that the validation step is agnostic to different generation models as it uses the ground-truth utterance as the premise, not the generated utterance. This ensures all models are evaluated equally on the same test set. In our experiment on the SGD, only 3% of the test instances fail the validation. We find that SGSAcc has better agreement with human on validated instances (see section 2.2). We report SGSAcc with and without validation in our results (Table 1). ### Supplying Dialogue Context to NLI We found that ground-truth utterances frequently refer to subjects that appeared in previous dialogue turns, which hinders proper entailment recognition. In the example in Figure 1, without dialogue context, the NLI model outputs neutral with high confidence because it is unclear that "Queens" in the ground-truth utterance refers to a "hair stylist". Figure 1: An example where the lack of dialogue context hinders faithfulness evaluation. To help capture the dialogue context, when the generated utterance alone fails to entail the entailment reference, we add the previous dialogue turn and the slot description to the premise and check for entailment again. This optional supplement is applied in the validation step and the final evaluation. ### SGSAec Evaluation We conducted human evaluations with 3 crowdsourced evaluators to verify that SGSAcc agrees with human judgment in faithfulness. We use the generations of the fine-tuned T5 model detailed in Sec.3 for evaluation. With the validation step enabled, we randomly sample 200 utterance-dialogue action pairs which SGSAcc marks as faithful, and another 200 pairs which SGSAcc marks as unfaithful from the 9703 validated instances. Note that we use the ground-truth utterances rather than the generated utterances in the validation step, so the choice of generation model does not affect the set of validated instances. We asked the human evaluators to annotate whether the generated utterance faithfully verbalizes the dialogue action. The majority judgment was compared with the evaluation results of SGSAcc. Inter-annotator agreement is near-perfect, as indicated by a Fleiss' kappa of 0.93. SGSAcc with validation agrees with human judgments on 90.0% of the examples, showing good agreement. We also randomly sample 25 examples from instances that fail the validation step and find that human agreement degrades to 76%, suggesting that SGSAcc evaluation is more reliable on validated instances. Compared with SER, SGSAcc provides significantly more complete evaluations by examining both non-categorical and categorical slots, increasing coverage from 66% to 97% (with validation) of the test set in the SGD. In addition, we also find that SGSAcc is consistent with SER in examining non-categorical slots, as SGSAcc yields the same result on 99.7% of the test instances that can be evaluated by SER. 
Since SGSAcc uses schema information to construct candidate references, we also validated that SGSAcc is robust to different schema writing styles, as shown by the consistently high F1-score (>0.95) on distinguishing faithful and unfaithful utterances with the rephrased schema in the SGD-X extension (Lee et al., 2021) of SGD (see Table 2 in Appendix A).

## 3 Faithful NLG with SGSAcc Ensembling

We now describe a Template-Guided (T2G2) (Kale and Rastogi, 2020) NLG approach for the SGD dataset based on T5 (Raffel et al., 2020), Prefix-Tuning (Li and Liang, 2021), and SGSAcc ensembling, which shows that SGSAcc can also be used to improve the faithfulness of generation.

Figure 2: SGSAcc evaluation (Sec. 2). (1) Construct multiple candidate references using predefined rules taking slot names and descriptions from schema; (2) The candidate with the highest entailment score is selected as the entailment reference. Optionally, the validation step checks that the ground-truth utterance entails at least one candidate reference and does not entail any negative references constructed with substituted slot values. Otherwise, the instance is excluded from the evaluation; (3) RTE between the premise (the generated utterance, supplemented with the previous dialogue turn and slot descriptions when needed) and the hypothesis (entailment reference). The same NLI model is used throughout SGSAcc evaluation. Here, the model outputs an entailment probability of 0.64, so the generation is considered faithful to the dialogue action.

**Data Preprocessing.** We follow the T2G2 approach by Kale and Rastogi (2020) as shown in Fig.2. We first use predefined templates to turn each dialogue action into a natural sentence by substituting placeholders with the slot values. Then the templated sentences from each dialogue action are concatenated together to form the semantic representation (SR), which is fed into a generative model to generate utterances.

**Fine-Tuning T5 (FT-T5).** We fine-tuned a T5-small model Raffel et al. (2020) as our baseline generative model (details in Appendix E).

**Prefix-Tuning T5 (PT-T5).** We also experimented with prefix-tuning, a technique that inserts trainable key-value pairs at each attention layer while fixing the other parameters of the language model during training Li and Liang (2021). PT is reported to enhance generalization Li and Liang (2021); Clive et al. (2021). Details are given in Appendix E.

**SGSAcc Ensemble (PT+FT-T5).** To further improve faithfulness, we adapt SGSAcc to implement a fidelity reranker Harkous et al. (2020) that helps select faithful utterances from an ensemble of FT and PT models. We check whether the generated utterance entails any of the candidate references constructed from the dialogue action. If the NLI model outputs entailment for any of the candidates, the generation is considered faithful to the dialogue action. No labels are leaked, as the NLI model has not seen the dataset and no ground-truth utterances are used to select the entailment reference. For this SGSAcc ensemble, we first use beam search to decode the FT and PT models in parallel to obtain their respective best generations. Then, we use SGSAcc to assign fidelity scores to each generation, adding a score of 1 for each faithfully realized dialogue action and 0 otherwise. Finally, the generation that attains the highest fidelity score (or the highest likelihood in case of a tie) is selected as the final generated utterance. The reranker targets higher SGSAcc, and we find that it also reduces SER; a sketch of this reranking step is given below.
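As a concrete illustration, the reranking just described can be sketched as follows. This is our reconstruction for illustration, not the paper's implementation; it reuses `tok`, `nli`, and `ENTAILMENT` from the sketch in Section 2, and the data shapes (a list of candidate references per dialogue action, beam candidates paired with log-likelihoods) are assumptions.

```python
# Sketch (ours) of the SGSAcc fidelity reranker.  A generation earns one
# point per dialogue action for which "entailment" is the argmax NLI
# class against at least one candidate reference; ties between FT and PT
# beam candidates are broken by likelihood.  tok, nli, and ENTAILMENT
# are as in the earlier sketch; the data layout is an assumption.
import torch

def entails(premise: str, hypothesis: str) -> bool:
    """True iff "entailment" attains the highest NLI probability (Sec. 2)."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli(**inputs).logits, dim=-1)[0]
    return int(probs.argmax()) == ENTAILMENT

def fidelity_score(utterance: str, actions: list) -> int:
    """actions: one list of candidate reference strings per dialogue action."""
    return sum(1 for refs in actions
               if any(entails(utterance, r) for r in refs))

def rerank(candidates: list, actions: list) -> str:
    """candidates: (utterance, log_likelihood) pairs from the FT/PT beams."""
    best = max(candidates,
               key=lambda c: (fidelity_score(c[0], actions), c[1]))
    return best[0]
```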
## 4 Experiments and Results

We use the preprocessed Schema Guided Dialogue (SGD) dataset as part of the GEM benchmark Su et al. (2021) for NLG tasks. There are 164,982 turns for training and 10,000 turns for testing. There are 16 domains in total, 4 of which are only present in the test split. We use SER, SGSAcc, and BLEU Papineni et al. (2002) as evaluation metrics. We compare our system with previous work on the SGD dataset focusing on faithfulness in Table 1. In terms of SER, our FT model already surpasses the previous best-performing model, T2G2-T5 from Kale and Rastogi (2020) (0.04% vs. 0.40%). Compared with its near-perfect SER, FT-T5 scores relatively low on SGSAcc, especially in domains not seen in training (89.1%). This suggests that the model struggles to realize categorical slots in unseen domains. In comparison, PT-T5 yields significantly higher SGSAcc (89.1% \(\rightarrow\) 98.5%) than FT-T5, faithfully realizing unseen categorical slots. Finally, the FT+PT ensemble with the SGSAcc fidelity reranker combines the advantages of FT-T5 and PT-T5, achieving a substantial SGSAcc improvement in both seen and unseen domains (99.4% and 98.8%) while keeping SER close to zero (0.02%).

## 5 Conclusion

We present SGSAcc, a faithfulness metric for Task-Oriented Dialogue Response Generation. We applied SGSAcc to the SGD dataset and showed that it significantly increases coverage relative to SER and agrees well with human judgments. We also showed that prefix-tuning (PT) can improve faithfulness of generation in unseen domains compared with fine-tuning (FT). Finally, our PT+FT ensemble with the SGSAcc fidelity reranker establishes a new faithfulness benchmark on the SGD dataset.1

\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{BLEU} & \multicolumn{3}{c}{SER} & \multicolumn{3}{c}{SGSAcc(validated)} & \multicolumn{3}{c}{SGSAcc(all)} \\ & & all & seen & unseen & all & seen & unseen & all & seen & unseen \\ \hline CVAE Du et al. (2020) & **43.0** & 24.0 & 9.00 & 27.0 & - & - & - & - & - & - \\ GPT2 Tsai et al. (2021) & 20.5 & 0.90 & - & - & - & - & - & - & - & - \\ T2G2-T5 Tsai et al. (2021) & 28.6 & 0.40 & 0.40 & 0.40 & - & - & - & - & - & - \\ \hline FT-T5 (Ours) & 32.3 & 0.04 & 0.05 & 0.00 & 97.3 & 98.6 & 88.7 & 96.5 & 97.8 & 87.7 \\ PT-T5 (Ours) & 31.8 & 0.82 & 0.91 & 0.23 & 98.5 & 98.7 & 97.0 & 97.9 & 98.2 & 96.4 \\ PT+FT-T5 (Ours) & 32.1 & **0.02** & **0.02** & **0.00** & **98.9** & **99.5** & **97.7** & **98.5** & **99.1** & **97.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison between previous work on the SGD and ours. Best performance is in **bold**. SGSAcc scores with and without the validation step are close (\(\pm 1\%\)) and system rankings are preserved.

## 6 Limitation

We note that since SGSAcc uses the slot descriptions in the schema to construct reference candidates, writing slot descriptions manually may be required to apply SGSAcc to datasets that do not come with schema. Although an increasing amount of research in Task-Oriented Dialogue Response Generation Du et al. (2020); Kale and Rastogi (2020); Tsai et al. (2021) has shown that introducing schema information can help improve NLG performance, only recent Task-Oriented Dialogue datasets such as the SGD and MultiWOZ 2.2 provide service schema. Although we have shown that SGSAcc is relatively robust to different schema writing styles (Appendix A), certain phrasings of slot names and slot descriptions can affect NLI entailment recognition.
For example, the use of double negatives can make it difficult for the NLI model to recognize entailment, as analyzed in Appendix D. To address this issue, dialogue system designers may consider using alias slot names and rephrasing descriptions in practice so that a fluent entailment reference can be readily constructed by pre-designed rules to facilitate entailment recognition.

## Ethics Statement

We recognize that neural-based Task-Oriented Dialogue Response Generation can potentially convey misinformation that may result in loss or cause harm in real-life applications. Mitigating such risks is the primary goal of this research and our future work. However, extra care must be taken when neural-based response generation is deployed for mission-critical tasks such as ambulance services or crime reports to ensure that information is communicated clearly and accurately. We also note that using pretrained Natural Language Inference models without further fine-tuning can reduce the environmental impact compared with training on specific datasets for each task. The reduction is proportional to the size of the NLI model and the dataset, which can be quite significant in corporate-level applications.

## References

* Y. Du, S. Oraby, V. Perera, M. Shen, A. Narayan-Chen, T. Chung, A. Venkatesh, and D. Hakkani-Tur (2020) Schema-guided natural language generation. In Proceedings of the 13th International Conference on Natural Language Generation, INLG 2020, pp. 283-295.
* O. Dusek and Z. Kasner (2020) Evaluating semantic accuracy of data-to-text generation with natural language inference. In Proceedings of the 13th International Conference on Natural Language Generation, INLG 2020, pp. 131-137.
* H. Harkous, I. Groves, and A. Saffari (2020) Have your text and use it too! End-to-end neural data-to-text generation with semantic fidelity. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, pp. 2410-2424.
* M. Kale and A. Rastogi (2020) Template guided text generation for task-oriented dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, pp. 6505-6520.
* H. Lee, R. Gupta, A. Rastogi, Y. Cao, B. Zhang, and Y. Wu (2021) SGD-X: a benchmark for robust generalization in schema-guided dialogue systems. CoRR abs/2110.06800.
* X. L. Li and P. Liang (2021) Prefix-tuning: optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021 (Volume 1: Long Papers), pp. 4582-4597.
* Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: a robustly optimized BERT pretraining approach. CoRR abs/1907.11692.
* T. Luong, H. Pham, and C. D. Manning (2015) Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, pp. 1412-1421.
* K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL '02, pp. 311-318.
* C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21:140:1-140:67.
* A. Rastogi, X. Zang, S. Sunkara, R. Gupta, and P. Khaitan (2020) Towards scalable multi-domain conversational agents: the schema-guided dialogue dataset. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pp. 8689-8696.
* L. Su, N. Duan, E. Cui, L. Ji, C. Wu, H. Luo, Y. Liu, M. Zhong, T. Bharti, and A. Sacheti (2021) GEM: a general evaluation benchmark for multimodal tasks. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, pp. 2594-2603.
* A. Y. Tsai, S. Oraby, V. Perera, J.-Y. Kao, Y. Du, A. Narayan-Chen, T. Chung, and D. Hakkani-Tur (2021) Style control for schema-guided natural language generation. CoRR abs/2109.12211.
* T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew (2019) HuggingFace's Transformers: state-of-the-art natural language processing. CoRR abs/1910.03771.
* X. Zang, A. Rastogi, S. Sunkara, R. Gupta, J. Zhang, and J. Chen (2020) MultiWOZ 2.2: a dialogue dataset with additional annotation corrections and state tracking baselines. CoRR abs/2007.12720.

## Appendix A Robustness Against Schema Writing Styles

Since SGSAcc uses the slot descriptions in the service schema to construct entailment references, we check its robustness to different schema writing styles so that it can be used to evaluate a variety of services with heterogeneous interfaces. We use the SGD-X dataset (Lee et al., 2021), which contains five versions of schema rephrased from the original SGD, to test whether SGSAcc is sensitive to writing styles. For each version, we run the validation step and assess, with F1-score, how well SGSAcc classifies entailment references and negative references.
Table 2 shows that SGSAcc effectively recognizes faithfulness for all versions of the schema, demonstrating its robustness against different schema writing styles.
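For readers who want to reproduce the RTE step in Fig. 2, the following is a minimal sketch using a generic MNLI checkpoint from the HuggingFace hub. The checkpoint name, its label ordering, and the example strings are assumptions about an off-the-shelf model, not a specification of the exact NLI model used in this paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NAME = "roberta-large-mnli"  # assumed off-the-shelf MNLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME).eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli orders its labels (contradiction, neutral, entailment).
    return logits.softmax(dim=-1)[0, 2].item()

# Pick the candidate reference with the highest entailment score, as in Fig. 2.
premise = "Your table for 2 people at 8 pm has been booked."
candidates = ["The reservation time is 8 pm.", "The number of seats is 2."]
reference = max(candidates, key=lambda h: entailment_prob(premise, h))
```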
2303.12311
Frozen Language Model Helps ECG Zero-Shot Learning
The electrocardiogram (ECG) is one of the most commonly used non-invasive, convenient medical monitoring tools that assist in the clinical diagnosis of heart diseases. Recently, deep learning (DL) techniques, particularly self-supervised learning (SSL), have demonstrated great potential in the classification of ECG. SSL pre-training has achieved competitive performance with only a small amount of annotated data after fine-tuning. However, current SSL methods rely on the availability of annotated data and are unable to predict labels not existing in fine-tuning datasets. To address this challenge, we propose Multimodal ECG-Text Self-supervised pre-training (METS), the first work to utilize the auto-generated clinical reports to guide ECG SSL pre-training. We use a trainable ECG encoder and a frozen language model to embed paired ECG and automatically machine-generated clinical reports separately. The SSL aims to maximize the similarity between paired ECG and auto-generated report while minimizing the similarity between ECG and other reports. In downstream classification tasks, METS achieves around 10% improvement in performance without using any annotated data via zero-shot classification, compared to other supervised and SSL baselines that rely on annotated data. Furthermore, METS achieves the highest recall and F1 scores on the MIT-BIH dataset, despite MIT-BIH containing different classes of ECG compared to the pre-training dataset. The extensive experiments have demonstrated the advantages of using ECG-Text multimodal self-supervised learning in terms of generalizability, effectiveness, and efficiency.
Jun Li, Che Liu, Sibo Cheng, Rossella Arcucci, Shenda Hong
2023-03-22T05:01:14Z
http://arxiv.org/abs/2303.12311v1
# Frozen Language Model Helps ECG Zero-Shot Learning

###### Abstract

The electrocardiogram (ECG) is one of the most commonly used non-invasive, convenient medical monitoring tools that assist in the clinical diagnosis of heart diseases. Recently, deep learning (DL) techniques, particularly self-supervised learning (SSL), have demonstrated great potential in the classification of ECG. SSL pre-training has achieved competitive performance with only a small amount of annotated data after fine-tuning. However, current SSL methods rely on the availability of annotated data and are unable to predict labels not existing in fine-tuning datasets. To address this challenge, we propose **M**ultimodal **ECG**-**T**ext **S**elf-supervised pre-training (METS), **the first work** to utilize the auto-generated clinical reports to guide ECG SSL pre-training. We use a trainable ECG encoder and a frozen language model to embed paired ECG and automatically machine-generated clinical reports separately. The SSL aims to maximize the similarity between paired ECG and auto-generated report while minimizing the similarity between ECG and other reports. In downstream classification tasks, METS achieves around 10% improvement in performance without using any annotated data via zero-shot classification, compared to other supervised and SSL baselines that rely on annotated data. Furthermore, METS achieves the highest recall and F1 scores on the MIT-BIH dataset, despite MIT-BIH containing different classes of ECG compared to the pre-training dataset. The extensive experiments have demonstrated the advantages of using ECG-Text multimodal self-supervised learning in terms of generalizability, effectiveness, and efficiency.

**Keywords:** Multimodal self-supervised learning, Zero-shot learning, Language model, ECG, Signal processing

## 1 Introduction

The electrocardiogram (ECG) is a diagnostic tool that is widely used in clinical practice (Addison, 2005). In practice, the ECG is used to detect a wide range of cardiac conditions, including arrhythmias, heart attacks, and heart failure (Berkaya et al., 2018). Recently, deep learning (DL) methods have shown promising results in classifying ECG data (Ebrahimi et al., 2020; Tripathy et al., 2019). DL models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been shown to be highly accurate in classifying ECG for a variety of cardiac conditions (Baloglu et al., 2019; Singh et al., 2018; Xu et al., 2020). However, training DL models in a supervised manner (see Figure 1 (a)) often requires a large number of high-quality labels to obtain strong generalization performance (Ebrahimi et al., 2020). In addition, some forms of ECG, such as ST-elevation myocardial infarction, are difficult to detect and often require manual interpretation of the ECG by trained cardiologists (Ayer and Terkelsen, 2014). This work requires a huge effort that is costly and laborious. Currently, self-supervised learning (SSL) has achieved impressive performance on datasets containing a small number of annotations, which provides a promising solution for unannotated ECG data (Jaiswal et al., 2020; Chou et al., 2020). It allows models to mine useful representations of ECG and can be widely used for various downstream tasks such as abnormality detection and arrhythmia classification (Lan et al., 2022; Mehari and Strodthoff, 2022).
Nevertheless, existing ECG SSL methods still require a large amount of annotated data in order to fine-tune them for downstream tasks (see Figure 1 (b)). This requirement hinders the real-world application of ECG methods, as some heart diseases are rare, which leads to problems with zero-shot learning.

Figure 1: (a) denotes a supervised learning method for ECG. (b) denotes a common self-supervised learning method for ECG, i.e., pre-training followed by fine-tuning. (c) denotes a self-supervised learning method for multimodal ECG-Text; zero-shot classification is performed after pre-training is completed.

Zero-shot learning means that the model does not need any annotated samples for _unseen_ categories (Socher et al., 2013). This is achieved by explicitly learning shared features from seen samples, and then generalizing them to unseen samples based on "descriptions" of the unseen categories' features (Xian et al., 2018; Pourpanah et al., 2020). Specifically, such "descriptions" are usually borrowed from external medical domain knowledge, for example textual ECG reports (see Figure 1 (c)).

Zero-shot learning for ECG faces a number of challenges. The first challenge is the semantic gap: ECG and text (automatically machine-generated ECG reports) are two heterogeneous modalities. An ECG is a long, continuous numerical signal, while text consists of relatively short sequences of discrete clinical terminology (Krishnan and Sowmya Kamath, 2018). They are difficult to align and to characterize in terms of each other (Liang et al., 2022). The second challenge is domain adaptation. A zero-shot learning model may be sensitive to unknown domains, making it difficult to adapt to new domains or unseen categories and to perform well on downstream tasks in the zero-shot setting. The third challenge is scalability. Zero-shot learning models need to learn a large number of representations and apply them to downstream tasks, which leads to a large computational cost for the model (Wang et al., 2019). Recently, Yamac et al. (2022) and Bhaskarpandit et al. (2022) have achieved considerable results on ECG zero-shot classification tasks. However, they pre-trained their models with supervised learning, which indicates that their methods still require large-scale annotated ECG for the pre-training stage. To fully utilize the unannotated data, CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) were the first to implement multimodal SSL with two individual encoders and to use zero-shot classification as the downstream task to evaluate SSL pre-trained model performance (Radford et al., 2021; Jia et al., 2021). Florence (Yuan et al., 2021), LiT (Zhai et al., 2022), and ALBEF (Li et al., 2021) explore the potential of multimodal SSL on large-scale pre-training tasks (Yuan et al., 2021; Li et al., 2021; Zhai et al., 2022). Although recent works have achieved substantial progress on image-text tasks, medical signal-text pairs, such as ECG and reports, have not yet leveraged the benefits of multimodal SSL. To take advantage of multimodal SSL, we propose a novel method for **M**ultimodal **ECG**-**T**ext **S**SL pre-training (METS). The METS model takes the ECG and the corresponding report text as input and feeds them into a multimodal contrastive learning framework. The multimodal framework contains a language component and an ECG encoder to obtain embedding representations of the text and the ECG, respectively.
To make full use of the a priori clinical knowledge in the reports, the report text is fed into a large frozen language model, while ResNet1d-18 serves as the backbone of the ECG encoder; both have a linear projection head that embeds the text and the ECG into the same dimension. Then, the similarity between ECG embedding and text embedding is computed to minimize the contrastive learning loss and obtain a pre-trained model with rich medical knowledge.

The main contributions of this paper are summarised as follows:

* Our proposed METS is **the first work** to apply a large language model for ECG SSL. The a priori clinical knowledge from the language model can be fully exploited to help generate ECG medical reports.
* METS is independent of the annotated categories. Even if the external dataset categories are unseen, the classification can be done directly in a zero-shot manner, unlike other SSL methods, which require fine-tuning.
* Experiments demonstrate that METS can be adapted to any of the downstream tasks, e.g., form, rhythm, and superclass, without the need to fine-tune on different tasks. Moreover, METS does not require any annotated data, yet it can exceed supervised methods and SSL methods fine-tuned with small-scale annotated data.

## 2 Methods

In this section, we describe the details of METS. The framework is shown in Figure 2. METS consists of two components: multimodal self-supervised pre-training (Section 2.1) and the zero-shot classification downstream task (Section 2.2).

Figure 2: A framework for the METS approach. (a) shows the self-supervised pre-training approach: ECG-text pairs are fed into the model, and after contrastive learning, the ECG encoder learns the parameters. (b) shows the zero-shot classification task: the corresponding labels are found by computing ECG-text similarity. (c) shows the visualization of the results of zero-shot classification.

### Multimodal self-supervised pre-training

#### 2.1.1 Frozen Pre-trained Language Models

Our approach starts with a large pre-trained language model based on the transformer architecture. In order for the large language model to fully understand the report text, we expand the report into a complete sentence as input to the language model (Radford et al., 2021). Specifically, we construct a prompt template for the report: **"The report of the ECG is that {text}"**. We use a large clinical language model as the backbone of the text component. ClinicalBert has been pre-trained on all text from the MIMIC-III dataset (Alsentzer et al., 2019).

#### 2.1.2 ECG Encoder

Our ECG encoder \(E_{ecg}\) is based on ResNet1d-18, which modifies the 2D kernels of ResNet-18 into 1D kernels, in order to obtain the deep ECG embeddings \(\mathbf{e}\) (He et al., 2016; Hong et al., 2020). This process can be represented as follows: \(\mathbf{e}=E_{ecg}\left(\mathbf{y}\right)\), where \(\mathbf{y}\) is the input ECG. Then, a linear projection head \(f_{e}\) maps raw embeddings to \(\mathbf{e}_{d}\in\mathbb{R}^{D}\). The embedding dimension of the ECG encoder is set to be the same as the language model embedding dimension \(d\) for contrastive learning. Inspired by (Tsimpoukelli et al., 2021), we freeze the parameters of the language model (LM) and use only paired ECG-text data from the PTB-XL dataset to update the parameters of the ECG encoder during SSL pre-training.
This has the advantage of allowing the ECG encoder to learn rich prior clinical knowledge from the medical corpus, thus improving the generalization ability of the model. In addition, the parameters of the language model are frozen to reduce the significant computational cost of LM parameter updates.

#### 2.1.3 Multimodal Contrastive Learning

Following the multimodal contrastive learning framework, we treat a pair of report text and ECG belonging to the same patient as a positive sample pair, while treating pairs of other patients' report texts and that ECG as negative sample pairs. We maximize the contrastive loss of different pairs (\(\mathbf{t_{i}}\), \(\mathbf{e_{j}}\)) and minimize the contrastive loss of the same pair (\(\mathbf{t_{i}}\), \(\mathbf{e_{i}}\)) to improve the similarity of the same pair of samples. We first define the similarity between the representations \(\mathbf{t}\) and \(\mathbf{e}\) of the two modalities in terms of cosine similarity, as shown in Equation 1. \[\text{sim}\left(\mathbf{t},\mathbf{e}\right)=\frac{t^{\top}\cdot e}{\left\|t \right\|\left\|e\right\|} \tag{1}\] Then, we train with two contrastive loss functions. The first is the ECG-to-text contrastive loss for the \(i^{th}\) pair, as shown in Equation 2. \[\ell_{\mathbf{i}}^{\left(e\to t\right)}=-\log\frac{\exp\left(\text{sim} \left(\mathbf{t_{i}},\mathbf{e_{i}}\right)/\tau\right)}{\sum_{j=1}^{N}\exp \left(\text{sim}\left(\mathbf{t_{i}},\mathbf{e_{j}}\right)/\tau\right)} \tag{2}\] The temperature \(\tau\) is initialized to 0.07. Similarly, the text-to-ECG contrastive loss in Equation 3 is represented as follows. \[\ell_{\mathbf{i}}^{\left(t\to e\right)}=-\log\frac{\exp\left(\text{sim} \left(\mathbf{e_{i}},\mathbf{t_{i}}\right)/\tau\right)}{\sum_{j=1}^{N}\exp \left(\text{sim}\left(\mathbf{e_{i}},\mathbf{t_{j}}\right)/\tau\right)} \tag{3}\] Finally, our training loss is calculated as the average combination of the two losses over all positive ECG-text pairs in each minibatch, as shown in Equation 4. \[\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\frac{\ell_{\mathbf{i}}^{\left(e\to t \right)}+\ell_{\mathbf{i}}^{\left(t\to e\right)}}{2} \tag{4}\]

### Zero-Shot ECG Classification

In zero-shot classification, a segment of the ECG is used as input. To evaluate the zero-shot performance of the model on a multi-label classification task, we expand the discrete labels into full medical diagnostic statements and feed them into the language model to obtain embedding representations. Finally, the similarity between the ECG embedding and each text embedding is computed to obtain probabilities, which can be used to classify the various categories of ECG.

## 3 Experiments

### Datasets

**PTB-XL.** We use the PTB-XL dataset to train the METS model (Wagner et al., 2020). The PTB-XL dataset contains 21,837 clinical 12-lead ECGs of 10 seconds duration from 18,885 patients, where each ECG segment is paired with the corresponding ECG report. The reports are generated by machine and only describe the ECG without a final diagnosis. The original ECG reports were written 70.89% in German, 27.9% in English, and 1.21% in Swedish, and were converted into structured SCP-ECG statements. The statements were assigned to three non-mutually exclusive categories: diagnostic, form, and rhythm. Specifically, the dataset consists of 71 different statements, broken down into 44 diagnostic, 12 rhythm, and 15 form statements. The diagnostic labels were further divided into 5 superclasses and 24 subclasses.
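As a concrete illustration of Section 2, the following is a minimal PyTorch-style sketch of the symmetric contrastive objective in Equations (1)-(4) and of the zero-shot step in Section 2.2. It is an editorial sketch with illustrative names, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def mets_contrastive_loss(ecg_emb: torch.Tensor,  # (N, D) projected ECG embeddings
                          txt_emb: torch.Tensor,  # (N, D) projected report embeddings
                          tau: float = 0.07) -> torch.Tensor:
    # Cosine similarity (Eq. 1) via normalized dot products.
    e = F.normalize(ecg_emb, dim=-1)
    t = F.normalize(txt_emb, dim=-1)
    logits = t @ e.T / tau                          # logits[i, j] = sim(t_i, e_j) / tau
    labels = torch.arange(len(e), device=e.device)  # i-th report pairs with i-th ECG
    loss_e2t = F.cross_entropy(logits, labels)      # Eq. (2), averaged over i
    loss_t2e = F.cross_entropy(logits.T, labels)    # Eq. (3), averaged over i
    return 0.5 * (loss_e2t + loss_t2e)              # Eq. (4)

def zero_shot_probs(ecg_emb: torch.Tensor,          # (N, D) ECG embeddings
                    label_emb: torch.Tensor) -> torch.Tensor:  # (C, D) label statements
    # Sec. 2.2: similarity between each ECG and the expanded label statements,
    # converted into class probabilities.
    sims = F.normalize(ecg_emb, dim=-1) @ F.normalize(label_emb, dim=-1).T
    return sims.softmax(dim=-1)                      # (N, C)
```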
In the current experiments, we focused on investigating ECG-text pairs without using any other labels. Following the experimental setup in (Huang et al., 2021; Wang et al., 2022), we extracted a multiclass classification dataset, the PTB-XL test set, from the test split.

**PTB-XL Test Set.** The original ECGs in the PTB-XL dataset are multi-labeled with diagnostic, form, and rhythm statements. In zero-shot downstream classification, we need to calculate the similarity of ECG and text to find the most similar target, and multiple labels for a target can confuse the categories. Therefore, we produce diagnostic superclass, form, and rhythm test sets to complete the corresponding zero-shot downstream tasks. There are 1,000 samples in each test set. Details of the specific test-set splits are shown in Figure 4.

**MIT-BIH Test Set.** We use the MIT-BIH dataset for testing to evaluate the performance of our pre-trained representation framework for classification on external datasets (Moody and Mark, 2001). Please note that we do not pre-train on the MIT-BIH dataset. Similarly, we produced an MIT-BIH test set following the segmentation method above; details of the specific split are also shown in Figure 4.

### Implementation Details

The transformer models were taken from the Transformers library (Wolf et al., 2020). We use a linear projection head with an output dimension of 128 and a temperature \(\tau\) initialized to 0.07. The ECG encoder is optimized using the Adam optimizer with a learning rate of 1e-3 and weight decay of 1e-3. We use 50 epochs and a batch size of 32 for pre-training and downstream tasks. The experiments were conducted using PyTorch 1.7 on an NVIDIA GeForce RTX-3090 GPU, which took about 8 hours.

### Baselines

To demonstrate the performance of the METS method, our approach is compared with the following baselines. (1) **ResNet-18** (He et al., 2016): We choose ResNet-18 to show the performance of fine-tuning on a small fraction of the data. (2) **SimCLR** (Chen et al., 2020): A self-supervised contrastive learning model that achieves good performance in SSL; we compare it with ECG SSL. The temperature parameter of SimCLR is set to 0.1. For all SSL methods above, we use 5% of the data for fine-tuning. (3) **Supervised** (He et al., 2016): We train ResNet1d-18 in a supervised manner, in order to compare the learning performance of our method with that of a fully supervised model.

### Results and Discussion

In this experiment, we assessed ECG classification using the commonly used metrics: Accuracy, Precision, Recall, and F1. We first performed zero-shot classification on the PTB-XL test set for the diagnostic superclasses. We illustrate the classification results for the diagnostic superclasses in Table 1. Our method outperforms all other SSL methods, with performance comparable to supervised training. For example, compared to SimCLR, accuracy and F1 are improved by 11% and 4%, respectively. Moreover, METS far outperforms other SSL methods in the form classification results presented in Table 2, and achieves better performance than supervised learning in accuracy, precision, and F1.
As shown in Table 3, METS also achieves good performance for the rhythm classification task.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Methods & Accuracy & Precision & Recall & F1 \\ \hline \multicolumn{5}{c}{Self-supervised} \\ \hline random - 5\% & 0.581 & 0.438 & 0.421 & 0.429 \\ SimCLR - 5\% & 0.648 & 0.545 & 0.443 & 0.485 \\ **METS - 0\%** & **0.842** & **0.694** & **0.626** & **0.657** \\ \hline \multicolumn{5}{c}{Supervised} \\ \hline Resnet18 - 100\% & 0.894 & 0.811 & 0.745 & 0.776 \\ \hline \hline \end{tabular} \end{table} Table 1: _PTB-XL_ result on superclass. % refers to fractions of labels used in the training data.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Methods & Accuracy & Precision & Recall & F1 \\ \hline \multicolumn{5}{c}{Self-supervised} \\ \hline random - 5\% & 0.603 & 0.364 & 0.342 & 0.351 \\ SimCLR - 5\% & 0.660 & 0.446 & 0.471 & 0.456 \\ **METS - 0\%** & **0.734** & **0.537** & **0.503** & **0.518** \\ \hline \multicolumn{5}{c}{Supervised} \\ \hline Resnet18 - 100\% & 0.724 & 0.520 & 0.508 & 0.509 \\ \hline \hline \end{tabular} \end{table} Table 2: _PTB-XL_ result on form. % refers to fractions of labels used in the training data.

Overall, our results on PTB-XL show that the representations learned by METS are more informative than those of other state-of-the-art SSL methods. This also demonstrates that reports containing a priori knowledge can improve performance on the metrics. We evaluated the performance of METS in transfer learning. Table 4 shows the performance comparison under cross-dataset testing. In general, METS outperforms other state-of-the-art methods and even outperforms supervised learning. Compared to Table 1, there is a significant improvement in F1 for METS. This suggests that the features learned by METS are robust and have the potential to generalize to other data sources.

## 4 Conclusion

In this paper, we present METS, which uses automatically generated clinical reports to guide ECG pre-training. We pre-train the ECG encoder by applying the rich medical knowledge from the frozen large language model to the report text. As a result, our approach is independent of the classes of the annotated data and can be directly transferred to any unseen database. We can also perform classification directly in a zero-shot manner, unlike other SSL methods that require fine-tuning. Our experiments demonstrate that METS can be adapted to various downstream tasks, e.g., form, rhythm, disease, and abnormality classification. This means that the METS approach is more effective and efficient. \begin{table} \begin{tabular}{c c c c c} \hline Methods & Accuracy & Precision & Recall & F1 \\ \hline \multicolumn{5}{c}{Self-supervised} \\ \hline random - 5\% & 0.565 & 0.468 & 0.499 & 0.483 \\ SimCLR - 5\% & 0.749 & 0.642 & 0.610 & 0.624 \\ **METS - 0\%** & **0.794** & **0.680** & **0.735** & **0.706** \\ \hline \multicolumn{5}{c}{Supervised} \\ \hline Resnet18 - 100\% & 0.836 & 0.697 & 0.712 & 0.704 \\ \hline \end{tabular} \end{table} Table 4: _MIT-BIH result.
% refers to fractions of labels used in the training data._ \begin{table} \begin{tabular}{c c c c c} \hline Methods & Accuracy & Precision & Recall & F1 \\ \hline \multicolumn{5}{c}{Self-supervised} \\ \hline random - 5\% & 0.627 & 0.435 & 0.442 & 0.438 \\ SimCLR - 5\% & 0.697 & 0.516 & 0.565 & 0.549 \\ **METS - 0\%** & **0.746** & **0.576** & **0.612** & **0.593** \\ \hline \multicolumn{5}{c}{Supervised} \\ \hline Resnet18 - 100\% & 0.790 & 0.664 & 0.607 & 0.633 \\ \hline \end{tabular} \end{table} Table 3: _PTB-XL result on rhythm. % refers to fractions of labels used in the training data._ **Acknowledgement** This work was supported by the National Natural Science Foundation of China (No.62102008).
2302.05544
Direct measurement of hexacontatetrapole, $\textbf{E6}$ γ decay from $^{\textbf{53m}}$Fe
The only proposed observation of a discrete, hexacontatetrapole ($E6$) transition in nature occurs from the T$_{1/2}$ = 2.54(2)-minute decay of $^{53m}$Fe. However, there are conflicting claims concerning its $\gamma$-decay branching ratio, and a rigorous interrogation of $\gamma$-ray sum contributions is lacking. Experiments performed at the Australian Heavy Ion Accelerator Facility were used to study the decay of $^{53m}$Fe. For the first time, sum-coincidence contributions to the weak $E6$ and $M5$ decay branches have been firmly quantified using complementary experimental and computational methods. Agreement across the different approaches confirms the existence of the real $E6$ transition; the $M5$ branching ratio and transition rate have also been revised. Shell model calculations performed in the full $pf$ model space suggest that the effective proton charge for high-multipole, $E4$ and $E6$, transitions is quenched to approximately two-thirds of the collective $E2$ value. Correlations between nucleons may offer an explanation of this unexpected phenomenon, which is in stark contrast to the collective nature of lower-multipole, electric transitions observed in atomic nuclei.
T. Palazzo, A. J. Mitchell, G. J. Lane, A. E. Stuchbery, B. A. Brown, M. W. Reed, A. Akber, B. J. Coombes, J. T. H. Dowie, T. K. Eriksen, M. S. M. Gerathy, T. Kibédi, T. Tornyi, M. O. de Vries
2023-02-10T23:17:23Z
http://arxiv.org/abs/2302.05544v1
# Direct measurement of hexacontatetrapole, \(E6\)\(\gamma\) decay from \({}^{53m}\)Fe ###### Abstract The only proposed observation of a discrete, hexacontatetrapole (\(E6\)) transition in nature occurs from the T\({}_{1/2}=2.54(2)\)-minute decay of \({}^{53m}\)Fe. However, there are conflicting claims concerning its \(\gamma\)-decay branching ratio, and a rigorous interrogation of \(\gamma\)-ray sum contributions is lacking. Experiments performed at the Australian Heavy Ion Accelerator Facility were used to study the decay of \({}^{53m}\)Fe. For the first time, sum-coincidence contributions to the weak \(E6\) and \(M5\) decay branches have been firmly quantified using complementary experimental and computational methods. Agreement across the different approaches confirms the existence of the real \(E6\) transition; the \(M5\) branching ratio and transition rate have also been revised. Shell model calculations performed in the full \(pf\) model space suggest that the effective proton charge for high-multipole, \(E4\) and \(E6\), transitions is quenched to approximately two-thirds of the collective \(E2\) value. Correlations between nucleons may offer an explanation of this unexpected phenomenon, which is in stark contrast to the collective nature of lower-multipole, electric transitions observed in atomic nuclei. pacs: 23.40.-s, 21.60.Fw, 23.20.Lv First-order electromagnetic processes are the primary mechanism by which excited states in atomic nuclei relax, most often via single \(\gamma\)-ray emission. Since both initial- and final-state wave functions possess a well-defined spin (\(J\)) and parity (\(\pi\)), conservation laws impose a characteristic multipolarity (\(\sigma\lambda\)) for each discrete transition. Nature favours pathways that proceed via the lowest available multipole order; as such, \(\Delta J=1,2\) transitions are prevalent in atomic and nuclear systems. However, situations arise in which the only available decay pathway is hindered by a larger angular-momentum-change requirement [1]. As the multipole order increases, the number of known cases decreases rapidly. For example, there are \(\approx\) 1100 pure or mixed \(\Delta J=3\) (\(E3\) or \(M3\)), \(\approx\) 170 \(\Delta J=4\) (\(E4\) or \(M4\)), and \(\approx\) 25 \(\Delta J=5\) (\(E5\) or \(M5\)) transitions reported in atomic nuclei. Despite discovery of over 3,000 different nuclides, only one claim of \(\Delta J=6\), or hexacontatetrapole, decay has been reported: the \(J^{\pi}=19/2^{-}\to J^{\pi}=7/2^{-}\), \(E6\)\(\gamma\) decay from \({}^{53m}\)Fe [2; 3; 4; 5] (see Fig. 1 for details). Low-lying states in this nucleus can be understood in the (\(f_{7/2}\)) model space with an effective interaction derived from the energy-level spectra of \({}^{54}\)Co (\({}^{53}\)Fe plus a proton) and \({}^{54}\)Fe (\({}^{53}\)Fe plus a neutron) [4]. Isomerism of the 19/2\({}^{-}\) level occurs due to its location relative to the other yrast states i.e., those with the lowest excitation energy for a given spin and parity. The only alternate decay pathways to the \(E6\) transition are the strongly hindered \(M5\), \(J^{\pi}=19/2^{-}\to 9/2^{-}\) and \(E4\), \(J^{\pi}=19/2^{-}\to 11/2^{-}\) transitions. However, inconsistencies in \(\gamma\)-ray branching ratios and reduced transition rates are reported in the literature [2; 3]. 
Although they are relatively rare, \(\gamma\)-ray 'summing' events could be mistaken for the very weak \(E6\) decay; these occur when multiple \(\gamma\) rays are incident on the same detector within an unresolvable time window. It is even possible that no real \(E6\) transition was observed in the prior work, and the feature at 3041 keV reported in the energy spectrum of Ref. [2] consists entirely of sum events. Despite their importance, a thorough and quantitative understanding of sum contributions was lacking [2; 3].

This Letter reports the first direct confirmation of \(E6\)\(\gamma\) decay in \({}^{53m}\)Fe using a novel combination of experimental, computational and Monte Carlo techniques that fully quantify the sum contributions; this confirms the highest multipole order ever observed. With a now-well-defined \(E6\) transition strength, and revised values for the \(M5\) and \(E4\)\(\gamma\) decay, \({}^{53m}\)Fe provides a unique test of the nuclear shell model and our present understanding of high-multipolarity transitions within a single nuclear system. Comparison with theoretical shell model calculations performed in the full \(fp\)-model space shows, surprisingly, that low- and high-multipolarity transitions in atomic nuclei are fundamentally different in nature.

The experiments were performed at the Heavy Ion Accelerator Facility at the Australian National University. A 2-pnA beam of 50-MeV, \({}^{6}\)Li ions delivered by the 14UD Pelletron accelerator was incident on self-supporting targets of natural vanadium. Three separate, 10-mg/cm\({}^{2}\) thick targets were used; these were replaced periodically to suppress build-up of long-lived activity. Excited states in \({}^{53}\)Fe were populated via the \({}^{51}\)V(\({}^{6}\)Li,4\(n\))\({}^{53}\)Fe reaction. Other fusion-evaporation channels led to production of neighbouring isotopes of iron, manganese, chromium, vanadium, titanium and scandium. Since many of these nuclides are stable against \(\beta\) decay, their prompt \(\gamma\) rays were easily separated from delayed decay of \({}^{53m}\)Fe via subtraction of suitable sections of the time-correlated data discussed below. Relaxation of \({}^{53m}\)Fe was studied via \(\gamma\)-ray spectroscopy using the CAESAR array of Compton-suppressed High-Purity Germanium (HPGe) detectors [7]. Of the nine detectors used, six were fixed in the vertical plane, perpendicular to the beam axis and \(\approx\) 12 cm from the target. The remaining three, in the horizontal plane, were on rail systems allowing their radial position to be moved. The detector-suppressor assemblies were retracted such that the front collimator that defines the detector illumination was moved from \(\approx\) 8.5 cm to \(\approx\) 12 cm from the target between measurements, reducing the exposed solid angle by approximately a factor of two. These are referred to as the 'near' and 'far' geometries, respectively, and discussed quantitatively in the text below. Standard \(\gamma\)-ray sources of \({}^{152}\)Eu and \({}^{56}\)Co were used for energy and absolute detection-efficiency calibrations. A continuous \({}^{6}\)Li beam irradiated the target for 7.5 minutes (approximately three half-lives of \({}^{53m}\)Fe), after which the beam was intercepted and decay of the isomer was observed for 20 minutes (approximately eight half-lives).
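As a back-of-envelope consistency check of these cycle times (an editorial sketch using only the quoted half-life, not the original analysis code), the parenthetical half-life counts, and the \(\approx\) 12% data sacrifice from the 10-minute subtraction quoted below, can be reproduced directly:

```python
import math

T_HALF = 2.54                      # minutes, adopted half-life of 53mFe [6]
LAM = math.log(2) / T_HALF         # decay constant (1/min)

t_irr, t_count = 7.5, 20.0         # minutes of beam-on and beam-off per cycle
print(t_irr / T_HALF)              # ~2.95 half-lives of irradiation
print(t_count / T_HALF)            # ~7.9 half-lives of counting
print(1 - math.exp(-LAM * t_irr))  # ~0.87 of saturation activity reached in-beam

# Fraction of 53mFe counts sacrificed by subtracting the 10-20 min window
# from the 0-10 min window of the counting period (quoted as ~12% below):
n1 = 1 - math.exp(-LAM * 10.0)                      # decays in the first 10 min
n2 = math.exp(-LAM * 10.0) - math.exp(-LAM * 20.0)  # decays in the second 10 min
print(1 - (n1 - n2) / (n1 + n2))                    # ~0.12
```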
A custom-made counter, with an oscillator that can be driven at various well-defined frequencies, was used in conjunction with the CAESAR data acquisition system to time-stamp individual \(\gamma\)-decay events across many repeating irradiation-decay cycles. Observation of intense 701-, 1011-, 1328- and 2338-keV \(\gamma\) rays confirmed production of \({}^{53m}\)Fe. The bulk of nuclei produced in the reactions have much longer lifetimes than \({}^{53m}\)Fe. Subtracting the second 10 minutes of the collection cycle from the first 10 minutes resulted in a much cleaner energy spectrum that strongly enhances the peak-to-total ratio for \({}^{53m}\)Fe decay, while only sacrificing \(\approx\) 12% of the total \({}^{53m}\)Fe data collected. The time spectrum of collected events, as well as the total \(\gamma\)-ray and time-subtracted \(\gamma\)-ray energy spectra, are presented in Fig. 2. Gamma rays from the decay of \({}^{53m}\)Fe have been labeled by their energy in keV. The remaining \(\gamma\) rays have been identified as arising from decay of \({}^{75m}\)Ge (T\({}_{1/2}~{}=~{}\)48 s), and \(\beta\) decay of \({}^{51}\)Ti (T\({}_{1/2}~{}=~{}\)346 s), \({}^{53}\)Fe (ground state, T\({}_{1/2}~{}=~{}\)510 s), \({}^{52}\)V (T\({}_{1/2}~{}=~{}\)208 s), \({}^{20}\)F (T\({}_{1/2}~{}=~{}\)11 s) and \({}^{28}\)Al (T\({}_{1/2}~{}=~{}\)134 s). Total yields of \(\gamma\) rays from \({}^{53m}\)Fe decay, measured in both geometries, are provided in Table 1 of Ref. [8].

Figure 1: Level scheme showing the energies (in keV) of excited states and \(\gamma\)-ray transitions observed in the decay of \({}^{53m}\)Fe [6], together with nucleon configurations that couple to form the 19/2\({}^{-}\) isomer. The \(\gamma\)-ray intensities were determined in this work. Proton (neutron) particles are depicted by red (blue) solid spheres; proton (neutron) holes are shown as faded spheres. Coupling of the proton- and neutron-hole configurations leads to formation of the 19/2\({}^{-}\) isomeric state at 3040 keV.

In addition to the real \(E6\) transition reported in this Letter, \({}^{53m}\)Fe exhibits three alternate decay pathways to the ground state (refer to Fig. 1 for details). Each individual cascade presents a potential summing contribution (\(S_{i}\)) to the true 3041-keV \(\gamma\)-ray intensity (\(I_{\gamma}\)) that requires careful consideration. The observed full-energy peak yield (\(Y_{\gamma}\)) is given by: \[Y_{\gamma}~{}=~{}I_{\gamma}~{}+~{}\Sigma S_{i}, \tag{1}\] where the sum is over each possible multi-transition cascade that connects the level to the ground state. While the real 1713-, 2338- and 3041-keV full-energy peaks are all expected to contain individual sum contributions, an additional peak observed at 2029 keV in Fig. 2 is entirely composed of sum events (701 keV + 1328 keV). Experimental and computational methods were adopted to quantify the sum-coincidence component in each of these measured full-energy peak yields. Full details of the methods and their results are described in Refs. [8; 9]; a brief explanation of each method is provided here:

• _Experimental_: The measured yield of the 2029-keV full-energy sum peak, which can \(only\) occur through summing, can be scaled to estimate the sum-coincidence components of the other transitions while accounting for detection efficiencies and angular correlations.
• _Geometric_: Sum-coincidence events can be directly inferred by considering changes in counting efficiency between the 'near' and 'far' detector geometries.
• _Computational_: The sum contribution to \(Y_{\gamma}(3041\) keV) can be estimated from measured \(\gamma\)-ray intensities, detection efficiencies and angular correlations by solving the set of equations that govern the different sum contributions.
• _Monte Carlo_: A Monte Carlo simulation was developed to model the \(\gamma\) decay of \({}^{53m}\)Fe and evaluate summing contributions expected with the CAESAR array.

Consistency between the various approaches across both detector geometries gives confidence in the deduced branching ratios. Therefore, the analysis confirms that the \(E6\) transition is real, and enables a firm measurement of its decay branching ratio for the first time. Transition strengths for the \(E4\), \(M5\) and \(E6\) decays were calculated using the new branching ratios derived from results of the Experimental method; they are presented in Table 1. These have been determined using the adopted \(19/2^{-}\) state lifetime of T\({}_{1/2}\) = 2.54(2) min [6] and theoretical internal conversion coefficients; values for \(L=1-5\) were calculated using BRICC [10], while that for \(L=6\) was calculated directly using the RAINE code [11]. Intensities reported by Black _et al._ [2; 3], and transition strengths determined using the relative intensities of Ref. [3], are included for comparison. We confirm the reported values for \(E4\) decay; however, the competing \(M5\) branching ratio and transition strength were found to be \(\approx\)20% lower. Notably, the branching ratios of transitions depopulating the state at 2339 keV were also found to be significantly different to those of Black _et al._ [3].

To gain microscopic understanding of the high-multipolarity transitions in \({}^{53m}\)Fe, shell model calculations were performed with the NuShellX code [12]. For comparisons between theory and experiment, it is useful to consider the reduced matrix element, \(\mathcal{M}_{p}\), which is related to the reduced transition strength by: \[B(E\lambda;J_{i}\to J_{f})=\frac{\mathcal{M}_{p}^{2}}{(2J_{i}\ +\ 1)}, \tag{2}\] where \(\mathcal{M}_{p}\) is further separated into its proton (\(\mathcal{A}_{p}\)) and neutron (\(\mathcal{A}_{n}\)) contributions: \[\mathcal{M}_{p}\ =\ \mathcal{A}_{p}\cdot\varepsilon_{p}\ +\ \mathcal{A}_{n}\cdot \varepsilon_{n}. \tag{3}\] Typically, \(\mathcal{A}_{p}\) and \(\mathcal{A}_{n}\) are calculated to account for configuration mixing within the major shell, while effective nucleon charges are introduced to account for cross-shell mixing. Thus \(\varepsilon_{p,n}=e_{p,n}+\delta_{p,n}\), where \(e_{p,n}\) are bare nucleon charges and \(\delta_{p,n}\) are core-polarization charges. Calculations were performed within a restricted (\(f_{7/2}\))\({}^{13}\), and full \(fp\), model space with two commonly used Hamiltonians, GXPF1A [13] and KB3G [14]. Excited-state energies were in good agreement with the adopted values [6]; for example, the energies of the \(19/2^{-}\), \(11/2^{-}\) and \(9/2^{-}\) states calculated with the GXPF1A interaction have a root-mean-squared (rms) deviation of 169 keV. Matrix elements for the electromagnetic transitions are sensitive to the rms radius of the \(0f_{7/2}\) orbit, and with harmonic oscillator radial wavefunctions they scale approximately with \(b^{\lambda}\), where \(b\) is the oscillator length parameter. Spherical Skyrme Hartree-Fock calculations, with Skx [15] and SLy4 [16] interactions, were used to determine the \(0f_{7/2}\) orbital rms radius.
The Skx \(0f_{7/2}\) rms radius was reproduced by the harmonic oscillator model with \(b=1.937\) fm. This parameter is approximately 3% larger for SLy4, which represents the theoretical uncertainty in the rms radius. The matrix elements, therefore, have uncertainties of 18%, 15%, and 12% for the calculated \(\lambda=6,5,4\) matrix elements, respectively. The full set of results is provided in Table 2 of Ref. [8], and average values of both \(fp\)-shell calculations are summarised and compared to experiment in Table 2 in this paper.

Figure 2: (a) Time spectrum from the ADC clock recorded with each \(\gamma\)-ray event illustrating the irradiation and out-of-beam collection period split into two parts, gates A and B. Lower panels show (b) the total \(\gamma\)-ray spectrum recorded (gate A plus gate B) and (c) the subtracted spectrum (gate A minus gate B) described in the text. The inset spectrum is on a linear scale and expands the region near the 3041-keV, \(E6\) transition.

Results of the (\(f_{7/2}\))\({}^{13}\) calculations are similar to those in Ref. [4]. Surprisingly, matrix elements obtained in the full \(fp\) model space are almost a factor of two smaller than the restricted-basis values. This is unusual, since strong \(\lambda=2\) transitions are generally enhanced in the full \(fp\) space with respect to the restricted one. This behavior comes about because the high-\(\lambda\) transitions are dominated by the \(0f_{7/2}\) orbital; in the larger space, the matrix elements are diluted by mixing of the \(0f_{7/2}\) component with \(1p\) orbitals, which cannot contribute to the high-multipolarity transitions; in contrast, the \(1p\) orbitals contribute to and enhance \(\lambda=2\) transition strength. A remarkable aspect of these high-multipolarity transitions is that they are dominated by their proton component. This, again, is in contrast to strong \(B(E2)\) transitions, in which the proton and neutron components are typically observed to be similar. For this reason, the isoscalar \(E2\) effective charge is best determined with, for example, the empirical value of \(\varepsilon_{p}+\varepsilon_{n}=2.0\) obtained in Ref. [18]. The separate proton and neutron \(E2\) effective charges can only be obtained in special cases. An example is the \(A~{}=~{}51\) mirror nuclei system [19], where values of \(\varepsilon_{p}\approx 1.15\) and \(\varepsilon_{n}\approx 0.80\) were obtained from the measured \(E2\) transition data. The calculated proton and neutron contributions and experimental matrix elements, presented in Table 2, can be used with Equation (3) to obtain effective proton charges for the high-multipolarity electric transitions. For the small neutron component, \(\varepsilon_{n}=0.5\) is adopted [20]. The results obtained are: \(\varepsilon_{p}=0.62(13)\) for \(\lambda=6\); and \(\varepsilon_{p}=0.64(6)\) for \(\lambda=4\); if a value of \(\varepsilon_{n}=0\) is used instead, \(\varepsilon_{p}=0.65(13)\) and \(\varepsilon_{p}=0.80(7)\) are found for \(\lambda=6\) and \(\lambda=4\), respectively. These results are presented in Fig. 3, along with the value of \(\varepsilon_{p}=1.15\) for \(\lambda=2\) from Ref. [19], which has an assumed uncertainty of \(5\%\).
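These extracted values can be checked numerically from Eqs. (2)-(3) and the matrix elements of Table 2; the following minimal sketch is an editorial cross-check, not the authors' analysis code:

```python
J_I = 19 / 2  # spin of the isomeric initial state

def eps_p(m_expt: float, a_p: float, a_n: float, eps_n: float = 0.5) -> float:
    # Eq. (3): M_p = A_p * eps_p + A_n * eps_n, solved for the proton effective charge.
    return (m_expt - a_n * eps_n) / a_p

# Table 2 values (all quoted in units of 10^3):
print(eps_p(2.29, 3.52, 0.22))           # E6: ~0.62
print(eps_p(0.1137, 0.142, 0.045))       # E4: ~0.64
print(eps_p(2.29, 3.52, 0.22, 0.0))      # E6 with eps_n = 0: ~0.65
print(eps_p(0.1137, 0.142, 0.045, 0.0))  # E4 with eps_n = 0: ~0.80

# Eq. (2) cross-check: B(E6) = M^2 / (2J_i + 1)
m_e6 = 2.29e3                            # e fm^6, Table 2
print(m_e6**2 / (2 * J_I + 1))           # ~2.6e5 e^2 fm^12, consistent with Table 1
```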
Effective charges are evaluated by considering the coupling of valence nucleons to particle-hole excitations of the core. Whether based on perturbation theory or the particle-vibration concepts of Bohr and Mottelson [21], there is a choice of, and sensitivity to, the residual particle-hole interaction adopted in the calculation. Core-polarization contributions for all \(\lambda\) values were calculated for seven different interactions in Ref. [20]. The results of these calculations, summarized in Table 1 of Ref. [20], are compared to empirical values for \(\lambda=2,4,6\) in Fig. 3. The one that adopts Wigner-type interactions, shown in red, has the trend most closely matched to experiment. However, while there is excellent agreement for \(\lambda=2\), all of the theoretical results are too large for \(\lambda=4\) and \(\lambda=6\).

The \(E6\) matrix element within the \((0f_{7/2})^{13}\) configuration can be written as a product of two \(0f_{7/2}\) spectroscopic amplitudes for one-proton removal times the single-particle \(E6\) matrix element. Cross sections from \((e,e^{\prime}p)\) data are also proportional to the product of two \(0f_{7/2}\) spectroscopic amplitudes; these are quenched by about a factor of two compared to those calculated in the \(fp\) model space (see e.g., Ref. [22] for \({}^{51}\)V\((e,e^{\prime}p)^{50}\)Ti). This is interpreted as a "dilution" of the \(fp\) part of the wavefunction due to short- [23; 24] and long-range [25] correlations that go beyond the \(fp\) model space.

\begin{table} \begin{tabular}{c c c c c} \(\sigma L\) & \(\mathcal{A}_{p}\times 10^{3}\) & \(\mathcal{A}_{n}\times 10^{3}\) & \(\mathcal{M}\times 10^{3}\) & \(\mathcal{M}_{p}^{\text{expt.}}\times 10^{3}\) \\ \hline \(E4\) & 0.142(17) & 0.045(7) & - & 0.1137(5) \\ \(M5\) & 5.09(76) & -0.11(2) & 4.98(76) & 2.57(6) \\ \(E6\) & 3.52(63) & 0.22(4) & - & 2.29(35) \\ \end{tabular} \end{table} Table 2: Theoretical values of proton and neutron contributions to the \(E4\), \(M5\) and \(E6\) matrix elements (\(\mathcal{A}_{p,n}\)) calculated in the full \(fp\) model space, discussed in the text. Uncertainties in the calculated matrix elements are \(\pm\)(18,15,12)% for \(\lambda=(6,5,4)\), respectively. For the \(M5\) transition, \(\mathcal{M}=(\mathcal{A}_{p}~{}+~{}\mathcal{A}_{n})\). Experimental matrix elements (\(\mathcal{M}_{p}^{\text{expt.}}\)) are determined from this work.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \(E_{\text{Level}}\) & \(E_{\gamma}\) & \(\sigma L\) & \multicolumn{3}{c}{\(I_{\gamma}\)} & \multicolumn{2}{c}{\(B(\sigma\lambda)\) (W.u.)} & \multicolumn{2}{c}{\(B(\sigma\lambda)\) (e\({}^{2}\)fm\({}^{2\lambda}\), \(\mu_{N}^{2}\) fm\({}^{2\lambda-2}\))} \\ Ref. [6] & Ref. [6] & Ref. [6] & This work & Ref. [2] &
Ref. [3] & This work & \(I_{\gamma}\)([3]) & This work & \(I_{\gamma}\)([3]) \\ \hline \hline 3040.4 & 701.1(1) & \(E4\) & \(\equiv\)100 & \(\equiv\)100 & \(\equiv\)100 & 0.2593(21) & 0.2587(21) & 6.46(5)\(\times 10^{2}\) & 6.44(6)\(\times 10^{2}\) \\ & 1712.6(3) & \(M5\) & 1.05(5) & 0.7(1) & 1.3(1) & 4.34(21) & 5.4(4) & 3.31(16)\(\times 10^{5}\) & 4.1(3)\(\times 10^{5}\) \\ & 3040.6(5) & \(E6\) & 0.056(17) & 0.020(5) & 0.06(1) & 0.42(12) & 0.45(8) & 2.61(81)\(\times 10^{5}\) & 2.8(5)\(\times 10^{5}\) \\ 2339.24 & 1011.2(2) & \(M1(+E2)\) & 79.4(3) & 86(9) & 86(9) & & & & \\ & 2338.3(5) & \(M1\)+\(E2\) & 22.3(2) & 13(2) & 13(2) & & & & \\ \end{tabular} \end{table} Table 1: Summary of adopted level and \(\gamma\)-ray energies, transition multipolarities, newly measured relative intensities (taking sum-coincidence events into account), and deduced transition strengths for the \(E4\), \(M5\), and \(E6\) transitions measured in this work, quoted in Weisskopf units (W.u.) as well as e\({}^{2}\)fm\({}^{2\lambda}\) for the \(E4\) and \(E6\) transitions and \(\mu_{N}^{2}\) fm\({}^{2\lambda-2}\) for the \(M5\). The half-life of the \(J^{\pi}=19/2^{-}\) isomer is 2.54(2) minutes [6]. Conflicting relative intensities quoted in Table 1 of Ref. [2] and Table 3 of Ref. [3] are provided for reference. Transition strengths calculated using the branching ratios of Ref. [3] are also provided for comparison with those of the present work.

This phenomenon is observed more broadly across the nuclear landscape [26; 27], and cross sections extracted from nucleon transfer-reaction data are also known to be quenched by a similar magnitude [28]. The similarities suggest that quenching of the \(E6\) matrix element observed in this work and quenching of \((e,e^{\prime}p)\) cross sections are connected. Ultimately, any model that is used to understand the quenching of nucleon-removal cross sections should be extended to include calculations of electromagnetic matrix elements. Since matrix elements of single-particle operators can be expanded in terms of the overlap integrals between eigenstates of a system with \(A\) nucleons and one of mass \((A-1)\) [29], high-multipole transitions appear to provide a sensitive probe of single-particle features of atomic nuclei. Further theoretical investigation into the high-multipolarity matrix elements that includes such correlations is therefore necessary.

In summary, experimental observation of an \(E6\) transition in \({}^{53}\)Fe is unambiguously confirmed by identifying and removing sum-coincidence contributions with three distinct methods that are in mutual agreement. Transition strengths for the high-multipolarity transitions from the 2.54(2)-minute, \(J=19/2^{-}\) isomer have been determined from the newly measured branching ratios. In the \(fp\) model space, the \(E6\) strength comes mainly from the dominant \((0f_{7/2})^{13}\) configuration. When this mixes with the many other \(fp\) configurations, the \((0f_{7/2})^{13}\) configuration becomes 'diluted' and the total \(E6\) matrix element decreases by about a factor of two in our calculations. The negative effective charge obtained for the full \(fp\) model space for \(E6\) could be connected to a further dilution relative to the 'exact' wavefunction that goes beyond the \(fp\) model space. The connection to the reduction of \((e,e^{\prime}p)\) cross sections compared to those calculated in the \(fp\) model space was also discussed.
The authors are grateful for excellent support from technical staff of the Department of Nuclear Physics and Accelerator Applications, ANU and the Australian Heavy Ion Accelerator Facility. We thank J. Heighway for preparing targets for these experiments. This work was supported by the Australian Research Council Grants No. DP170101673 and No. DP170101675, the International Technology Center Pacific (ITC-PAC) under Contract No. FA520919PA138, and NSF Grant PHY-2110365. A.A., B.J.C., J.T.H.D., M.S.M.G., and T.P. acknowledge support of the Australian Government Research Training Program. Support for the ANU Heavy Ion Accelerator Facility operations through the Australian National Collaborative Research Infrastructure Strategy program is acknowledged. Figure 1 in this letter was created using the LevelScheme scientific figure preparation system [30].

Figure 3: Proton effective charges calculated for \(\lambda=2,4,\) and \(6\) with seven different interactions (red and blue lines) [20] compared to experimental values for \(\lambda=2\) (open circle) [19] and \(\lambda=4,6\) (closed circles) from this work.

## References

* (1) P. Walker and G. D. Dracoulis, Nature **399**, 35 (1999).
* (2) J. N. Black, W. C. McHarris, and W. H. Kelly, Phys. Rev. Lett. **26**, 451 (1971).
* (3) J. N. Black, W. C. McHarris, W. H. Kelly, and B. H. Wildenthal, Phys. Rev. C **11**, 939 (1975).
* (4) D. Geesaman, _Spin gap isomers in \({}^{52}\)Fe, \({}^{53}\)Fe, and \({}^{54}\)Co_, Ph.D. thesis, State University of New York, Stony Brook (USA) (1976).
* (5) D. F. Geesaman, R. L. McGrath, J. W. Noe, and R. E. Malmin, Phys. Rev. C **19**, 1938 (1979).
* (6) H. Junde, Nucl. Data Sheets **110**, 2689 (2009).
* (7) G. D. Dracoulis and A. P. Byrne, Annual report ANU-P/1052 (1989).
* (8) See Supplemental Material for details pertaining to the sum-event evaluation methods.
* (9) T. Palazzo, _Spectroscopy and characterisation of high multipolarity transitions depopulating the metastable state in \({}^{53}\)Fe_, Master's thesis, The Australian National University, Canberra (Australia) (2017).
* (10) T. Kibedi, T. W. Burrows, M. B. Trzhaskovskaya, P. M. Davidson, and C. W. Nestor Jr., Nucl. Instrum. Meth. A **589**, 202 (2008).
* (11) I. Band, M. Trzhaskovskaya, C. Nestor, P. Tikkanen, and S. Raman, At. Data Nucl. Data Tables **81**, 1 (2002).
* (12) B. Brown and W. Rae, Nucl. Data Sheets **120**, 115 (2014).
* (13) M. Honma, T. Otsuka, B. A. Brown, and T. Mizusaki, Euro Phys. J. A **25**, 499 (2005).
* (14) A. Poves, J. Sanchez-Solano, E. Caurier, and F. Nowacki, Nucl. Phys. A **694**, 157 (2001).
* (15) B. A. Brown, Phys. Rev. C **58**, 220 (1998).
* (16) B. Brown, R. Radhi, and B. Wildenthal, Phys. Rep. **101**, 313 (1983).
* (17) D. H. Gloeckner and R. D. Lawson, Phys. Rev. C **11**, 1832 (1975).
* (18) M. Honma, T. Otsuka, B. A. Brown, and T. Mizusaki, Phys. Rev. C **69**, 034335 (2004).
* (19) R. du Rietz, J. Ekman, D. Rudolph, C. Fahlander, A. Dewald, O. Moller, B. Saha, M. Axiotis, M. A. Bentley, C. Chandler, G. de Angelis, F. Della Vedova, A. Gadea, G. Hammond, S. M. Lenzi, N. Marginean, D. R. Napoli, M. Nespolo, C. Rusu, and D. Tonev, Phys. Rev. Lett. **93**, 222501 (2004).
* (20) H. Sagawa, Phys. Rev. C **19**, 506 (1979).
* (21) A. Bohr and B. R.
* (22) J. W. A. den Herder, J. A. Hendriks, E. Jans, P. H. M. Keizer, G. J. Kramer, L. Lapikas, E. N. M. Quint, P. K. A. de Witt Huberts, H. P. Blok, and G. van der Steenhoven, Phys. Rev. Lett. **57**, 1843 (1986). * (23) H. Muther, A. Polls, and W. H. Dickhoff, Phys. Rev. C **51**, 3040 (1995). * (24) W. H. Dickhoff, J. Phys. G **37**, 064007 (2010). * (25) C. Barbieri, Phys. Rev. Lett. **103**, 202502 (2009). * (26) L. Lapikas, Nucl. Phys. A **553**, 297 (1993). * (27) G. Kramer, H. Blok, and L. Lapikas, Nucl. Phys. A **679**, 267 (2001). * (28) B. P. Kay, J. P. Schiffer, and S. J. Freeman, Phys. Rev. Lett. **111**, 042502 (2013). * (29) T. Berggren, Nucl. Phys. **72**, 337 (1965). * (30) M. Caprio, Comput. Phys. Commun. **171**, 107 (2005). **Supplemental Material**: Reference [1] presents the first experimental confirmation of hexacontatetrapole (\(E6\)) \(\gamma\) decay to include a detailed consideration of sum contributions to the measured \(\gamma\)-ray yields. The different approaches used to evaluate the sum contributions are described below, with results summarised in Table 3. The sum-component fractions of the total 3041-keV \(\gamma\)-ray yield determined by each method were 48(11)% (\(Experimental\)), 47(25)% (\(Geometric\)), 46(6)% (\(Computational\)) and 42(11)% (\(Monte\ Carlo\)). \(Experimental\): In the example shown in Fig. 4, an excited state, E\({}_{3}\), relaxes via two distinct pathways: a single \(\gamma\) decay (\(\gamma_{3}\)) direct to E\({}_{1}\) and a two-photon cascade (\(\gamma_{2}\) and \(\gamma_{1}\)) via E\({}_{2}\). The relative probability of each pathway taking place is given by the branching ratios of the transitions that depopulate E\({}_{3}\) (b\({}_{3}\) and b\({}_{2}\)). The measured yield of \(\gamma_{3}\) (\(Y_{3}\)) includes the number of \(\gamma_{3}\) decays (\(I_{3}\)), corrected for detection efficiency (\(\varepsilon_{3}\)), and the additional sum component from \(\gamma_{2}\) and \(\gamma_{1}\) (\(S_{2,1}\)), such that: \[Y_{3}\ =\ I_{3}\cdot\varepsilon_{3}+S_{2,1}, \tag{4}\] where: \[S_{2,1}\ =\ I_{2}\cdot\varepsilon_{2}\cdot b_{1}\cdot\varepsilon_{1}\cdot\overline{W}_{2,1}(0), \tag{5}\] and \(\overline{W}_{2,1}(0)\) is the angular correlation of \(\gamma_{2}\) and \(\gamma_{1}\) at \(\approx 0^{\circ}\) averaged over the solid angle subtended by the detector. If more than one cascade pathway exists, then a sum over all combinations of sum contributions must be considered. The yield of the 2029-keV full-energy sum peak observed from the \(\gamma\) decay of \({}^{53m}\)Fe, which can \(only\) occur through summing of the 701-keV and 1328-keV \(\gamma\) rays, can be directly measured and scaled to estimate the sum-coincidence components of the other transitions. Using the notation of Equation (5): \[Y_{2029}\ =\ S_{2029}\ =\ I_{701}\cdot\varepsilon_{701}\cdot b_{1011}\cdot b_{1328}\cdot\varepsilon_{1328}\times\overline{W}_{701,1328}(0). \tag{6}\] Expressions that connect the two- and three-fold sum components of each transition to \(S_{2029}\), other measured \(\gamma\)-ray yields and branching ratios, detection efficiencies and calculated angular correlations between pairs of \(\gamma\) rays can then be deduced.
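As an illustration of how Equations (4)-(6) are applied in practice, the minimal Python sketch below evaluates a two-fold sum component \(S_{2,1}\) via Equation (5) and the 2029-keV sum-peak yield via Equation (6). All numerical inputs (intensities, branching ratios, efficiencies, angular-correlation factors) are placeholder values, not the measured quantities of this work.

```python
def sum_component(I2, eps2, b1, eps1, W21_0):
    """Two-fold sum component S_{2,1} of Eq. (5): both gammas of a cascade
    are detected in the same crystal, weighted by the 0-degree correlation."""
    return I2 * eps2 * b1 * eps1 * W21_0

# Eq. (6): the 2029-keV peak yield arises *only* from 701 + 1328 keV summing.
I_701, eps_701 = 1.0e6, 0.05       # placeholder intensity and efficiency
b_1011, b_1328 = 0.79, 1.0         # placeholder branching ratios
eps_1328, W_701_1328 = 0.03, 1.1   # placeholder efficiency and correlation

Y_2029 = I_701 * eps_701 * b_1011 * b_1328 * eps_1328 * W_701_1328
print(f"S_2029 = {Y_2029:.1f} counts")
```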
For example, the contributions to the 3041-keV full-energy peak are given by: \[S_{701,2338}\ =\ S_{2029}\cdot\left(\frac{Y_{2338}-Y_{1011}\cdot\varepsilon_{1328}\cdot\overline{W}_{1011,2338}(0)}{Y_{1011}}\right)\times\left(\frac{\varepsilon_{1011}}{\varepsilon_{1328}}\right)\cdot\left(\frac{\overline{W}_{701,2338}(0)}{\overline{W}_{701,1328}(0)}\right), \tag{7}\] \[S_{1713,1328}\ =\ S_{2029}\cdot\left(\frac{Y_{1713}-Y_{1011}\cdot\varepsilon_{701}\cdot\overline{W}_{701,1011}(0)}{Y_{1011}}\right)\times\left(\frac{\varepsilon_{1011}}{\varepsilon_{701}}\right)\cdot\left(\frac{\overline{W}_{1713,1328}(0)}{\overline{W}_{701,1328}(0)}\right), \tag{8}\] \[S_{701,1011,1328}\ =\ S_{2029}\cdot\varepsilon_{1011}\cdot\left(\frac{\overline{W}_{701,1011,1328}(0)}{\overline{W}_{701,1328}(0)}\right). \tag{9}\] Similar expressions can be defined for the sum components of the 1713-keV and 2338-keV transitions; these corrections are \(\approx 10\%\) and \(\approx 1\%\), respectively. _Geometric_: The sum contributions can be directly inferred by considering the change in counting efficiency between different detector geometries. The three \(\gamma\)-ray detectors mounted on adjustable rails were moved radially outwards by \(\approx 3.5\) cm to reduce their solid-angle coverage. This decreased the expected full-energy peak yields by a reduction factor, \(r_{\varepsilon}\), for a 'real', single \(\gamma\)-ray event, \(r_{\varepsilon}^{2}\) for a sum-coincidence event and \(r_{\varepsilon}^{3}\) for a triple-sum-coincidence event. The measured total yields for each \(\gamma\) ray, in both the 'near' and 'far' geometries, can be reduced to a single expression connecting the real component to the total measured yields and respective detection efficiencies. In this work, \(r_{\varepsilon}=2.01(6)\) was deduced from measurement of the absolute detection efficiency for both geometries. Gamma-ray intensities determined from this geometric approach were consistent with the Experimental method, giving further confidence in the results. However, these suffered from larger experimental uncertainties and are not included in the final branching-ratio analysis. Figure 4: Example level scheme and \(\gamma\)-ray transitions used to explain the methods of determining sum contributions of measured \(\gamma\)-ray yields described in the text. An excited state, \(E_{3}\), has two relaxation pathways: one direct to \(E_{1}\), and the other through a cascade of \(\gamma\) rays via an intermediate level, \(E_{2}\). Each transition yield is characterised by its intensity (\(I\)), branching ratio (\(b\)) and \(\gamma\)-ray detection efficiency (\(\varepsilon\)).
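The near/far comparison of the Geometric method can be written as a small linear system: if a real single-\(\gamma\) yield scales by \(1/r_{\varepsilon}\) between geometries while a two-fold sum scales by \(1/r_{\varepsilon}^{2}\), the real and sum components follow directly from the two measured yields. The Python sketch below solves this simplified two-component case (triple sums neglected); the yields are placeholder numbers used only to illustrate the algebra, while \(r_{\varepsilon}=2.01\) follows the value deduced in this work.

```python
def split_real_and_sum(Y_near, Y_far, r):
    """Separate a peak yield into real (R) and two-fold sum (S) parts, using
    that R scales as 1/r and S as 1/r**2 between near and far geometries:
        Y_near = R + S,   Y_far = R/r + S/r**2.
    Triple-sum terms are neglected in this simplified sketch."""
    S = r * (Y_near - r * Y_far) / (r - 1.0)
    R = Y_near - S
    return R, S

# Placeholder yields; r_eps = 2.01(6) was deduced in this work.
R, S = split_real_and_sum(Y_near=10000.0, Y_far=4400.0, r=2.01)
print(f"real = {R:.0f}, sum = {S:.0f}, sum fraction = {S / (R + S):.2%}")
```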
_Computational_: Following Equation (4), a general expression for \(Y_{3041}\) can be defined as follows: \[Y_{3041} = I_{3041}\cdot\varepsilon_{3041}+I_{701}\cdot b_{2338}\cdot\varepsilon_{701}\cdot\varepsilon_{2338}\cdot\overline{W}_{701,2338}(\theta)+I_{1713}\cdot b_{1328}\cdot\varepsilon_{1713}\cdot\varepsilon_{1328}\cdot\overline{W}_{1713,1328}(\theta)+I_{701}\cdot b_{1011}\cdot b_{1328}\cdot\varepsilon_{701}\cdot\varepsilon_{1011}\cdot\varepsilon_{1328}\times\overline{W}_{701,1011,1328}(\theta). \tag{10}\] Since \(Y_{i}^{real}=I_{i}\cdot\varepsilon_{i}\) for non-sum events and \(Y_{1}=I_{2}\cdot b_{2}\cdot\varepsilon_{1}\) for sequential cascades of \(\gamma\) rays (such as the \(\gamma_{2}\) to \(\gamma_{1}\) cascade in Fig. 4), Equation (10) can be further reduced to a single expression that only includes quantities that were measured directly in the experiment--\(\gamma\)-ray yields and detection efficiencies--and calculated angular correlation coefficients, such that: \[Y_{3041} = Y_{3041}^{real}+a+b+c, \tag{11}\] where: \[a =(Y_{2338}-Y_{1011}\cdot\varepsilon_{1328}\cdot\overline{W}_{1011,1328}(\theta))\cdot\varepsilon_{701}\times\overline{W}_{701,2338}(\theta), \tag{12}\] \[b =(Y_{1713}-Y_{1011}\cdot\varepsilon_{701}\cdot\overline{W}_{701,1011}(\theta))\cdot\varepsilon_{1328}\times\overline{W}_{1713,1328}(\theta),\] \[c =\left(Y_{1328}\cdot\varepsilon_{701}\cdot\varepsilon_{1011}+(Y_{1011}\cdot\varepsilon_{701}\cdot\overline{W}_{701,1011}(\theta)-Y_{1713})\times\frac{\varepsilon_{701}\cdot\varepsilon_{1011}\cdot\varepsilon_{1328}}{\varepsilon_{1713}}\right)\cdot\overline{W}_{701,1011,1328}(\theta),\] and \(S_{3041}=a+b+c\). Following this methodology, similar expressions can be determined for the sum contributions to the other \(\gamma\) rays and the 2029-keV sum peak: \[Y_{1713} = I_{1713}\cdot\varepsilon_{1713}+I_{701}\cdot b_{1011}\cdot\varepsilon_{701}\cdot\varepsilon_{1011}\cdot\overline{W}_{701,1011}(\theta) = Y_{1713}^{real}+Y_{1011}\cdot\varepsilon_{701}\cdot\overline{W}_{701,1011}(\theta). \tag{13}\] \[Y_{2338} = I_{2338}\cdot\varepsilon_{2338}+I_{1011}\cdot b_{1328}\cdot\varepsilon_{1011}\cdot\varepsilon_{1328}\cdot\overline{W}_{1011,1328}(\theta) = Y_{2338}^{real}+Y_{1011}\cdot\varepsilon_{1328}\cdot\overline{W}_{1011,1328}(\theta). \tag{14}\] \[Y_{2029} = I_{701}\cdot b_{1011}\cdot b_{1328}\cdot\varepsilon_{701}\cdot\varepsilon_{1328}\cdot\overline{W}_{701,1328}(\theta) = Y_{1328}\cdot\varepsilon_{701}\cdot\overline{W}_{701,1328}(\theta). \tag{15}\] Equations (11)-(15) now define the sum contributions to each peak in terms of the experimentally observed peak yields, which are inclusive of both the real and sum components. These observed peak yields and their uncertainties can then be used to determine the summing contributions. Uncertainties in the sum components were evaluated using a Monte Carlo methodology: the \(\gamma\)-ray yields were randomly sampled using Gaussian distributions centred on the measured values with widths defined by their uncertainties. This resulted in distributions of real and sum-coincidence yields, from which the mean and standard deviation were used in the subsequent analysis. \(Monte\ Carlo\): From the experimental and computational methods, a set of branching ratios was deduced from the sum-corrected \(\gamma\)-ray yields. As an additional consistency check, a Monte Carlo simulation was developed to model the \(\gamma\) decay of \({}^{53m}\)Fe and evaluate the associated summing contributions. In the model, decay of \({}^{53m}\)Fe proceeds via randomised pathways that are weighted by the measured transition branching ratios of this work. The simulation considers each individual detector efficiency and takes account of angular-correlation effects. A sum event is recorded when two or more \(\gamma\) rays from a cascade are recorded in the same detector.
The number of simulated decays was fixed as the number of \({}^{53m}\)Fe decay events that occurred in the experiments for the near (\(\sim\)208M) and far (\(\sim\)214M) geometries, respectively. Separate simulations were used to estimate statistical errors: five iterations of 10 million \({}^{53m}\)Fe decay events; one iteration of 100 million events, and two iterations of one billion events. Results of the MC simulation for the total yields and sum components are consistent with the experimental value for the 3041-keV transition. MC-simulated yields of the other real \(\gamma\) rays are within 5% of the corresponding experimental values. Results obtained from the different approaches are summarised in Table 3. Importantly, evaluations of the sum contributions to each of the \(\gamma\) rays agree between the experimental and computational methods, and they are also in agreement with the Monte Carlo predictions. Table 4 shows the complete set of calculated matrix elements, as described in the text of Ref. [1]. Theoretical values of proton and neutron components (\(\mathcal{A}_{p,n}\)) of the \(E6\), \(M5\) and \(E4\) matrix elements are provided.
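The core of such a decay Monte Carlo can be captured in a few lines: pick a decay pathway according to the branching ratios, decide whether each emitted \(\gamma\) ray fires a given detector with probability \(\varepsilon\), and count a sum event when two or more \(\gamma\) rays land in the same crystal. The sketch below is a deliberately simplified version, assuming a single detector, no angular correlations, and placeholder branching ratios and efficiencies; it illustrates the logic rather than reproducing the simulation of this work.

```python
import random

# Placeholder decay pathways of the isomer: (gamma energies in keV, probability).
PATHWAYS = [
    ([701, 1011, 1328], 0.79),  # three-step cascade
    ([701, 2338], 0.21),        # two-step cascade
]
EFF = {701: 0.05, 1011: 0.04, 1328: 0.03, 2338: 0.02}  # placeholder efficiencies

def simulate(n_decays: int, rng=random.Random(1)):
    full_energy, sum_events = {}, 0
    for _ in range(n_decays):
        cascade = rng.choices([p for p, _ in PATHWAYS],
                              weights=[w for _, w in PATHWAYS])[0]
        detected = [e for e in cascade if rng.random() < EFF[e]]
        if len(detected) >= 2:   # two or more gammas in the same detector
            sum_events += 1      # recorded at the summed energy
        elif len(detected) == 1:
            full_energy[detected[0]] = full_energy.get(detected[0], 0) + 1
    return full_energy, sum_events

peaks, sums = simulate(1_000_000)
print(peaks, "sum events:", sums)
```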
2310.14880
Can ChatGPT Perform Reasoning Using the IRAC Method in Analyzing Legal Scenarios Like a Lawyer?
Large Language Models (LLMs), such as ChatGPT, have drawn a lot of attention recently in the legal domain due to their emergent ability to tackle a variety of legal tasks. However, it is still unknown if LLMs are able to analyze a legal case and perform reasoning in the same manner as lawyers. Therefore, we constructed a novel corpus consisting of scenarios pertaining to the Contract Act Malaysia and the Australian Social Act for Dependent Child. ChatGPT is applied to perform analysis on the corpus using the IRAC method, which is a framework widely used by legal professionals for organizing legal analysis. Each scenario in the corpus is annotated with a complete IRAC analysis in a semi-structured format so that both machines and legal professionals are able to interpret and understand the annotations. In addition, we conducted the first empirical assessment of ChatGPT for IRAC analysis in order to understand how well it aligns with the analysis of legal professionals. Our experimental results shed light on possible future research directions to improve alignment between LLMs and legal experts in terms of legal reasoning.
Xiaoxi Kang, Lizhen Qu, Lay-Ki Soon, Adnan Trakic, Terry Yue Zhuo, Patrick Charles Emerton, Genevieve Grant
2023-10-23T12:51:49Z
http://arxiv.org/abs/2310.14880v2
# Can ChatGPT Perform Reasoning Using the IRAC Method in Analyzing Legal Scenarios Like a Lawyer? ###### Abstract Large Language Models (LLMs), such as ChatGPT, have drawn a lot of attention recently in the legal domain due to their emergent ability to tackle a variety of legal tasks. However, it is still unknown if LLMs are able to analyze a legal case and perform reasoning in the same manner as lawyers. Therefore, we constructed a novel corpus consisting of scenarios pertaining to the Contract Act Malaysia and the Australian Social Act for Dependent Child. ChatGPT is applied to perform analysis on the corpus using the IRAC method, which is a framework widely used by legal professionals for organizing legal analysis. Each scenario in the corpus is annotated with a complete IRAC analysis in a semi-structured format so that both machines and legal professionals are able to interpret and understand the annotations. In addition, we conducted the first empirical assessment of ChatGPT for IRAC analysis in order to understand how well it aligns with the analysis of legal professionals. Our experimental results shed light on possible future research directions to improve alignment between LLMs and legal experts in terms of legal reasoning. ## 1 Introduction Since ChatGPT was released by OpenAI in November 2022, there has been fast-growing interest in applying LLMs to analyzing legal documents (Katz et al., 2023) and to reasoning tasks (Huang and Chang, 2022). Although the recently released LLMs demonstrate strong abilities to solve challenging tasks requiring reasoning, people find that they often follow different, or even wrong, reasoning paths to obtain correct answers (Paul et al., 2023; Tang et al., 2023). This issue is also referred to as a _misalignment_ problem between LLMs and humans. As this problem has not been investigated in the legal domain, this work focuses on understanding to what degree ChatGPT is able to perform reasoning for legal scenario analysis in the same way as legal professionals. Herein, we chose IRAC (Alsagoff, 1996), standing for Issue, Rule, Application, and Conclusion, as the framework for legal analysis because it is the most popular analysis methodology used by legal professionals and law schools. A typical example of IRAC analysis is depicted in Fig. 1. Given a scenario regarding the Contract Act Malaysia (CAM), the task is to answer the legal question recognized as an issue in I. In this example, the rules (R) are the relevant statutes in CAM. In a common law system, rules may also include precedents. The analysis or application (A) consists of the reasoning paths leading to the answer in the conclusion (C). Herein, a reasoning path is a sequence of reasoning steps, where each step involves taking a statement from the available facts in a given scenario and then applying a relevant rule to it. Section 2 describes IRAC in detail and its importance in legal reasoning. Figure 1: An example IRAC analysis conducted by a legal professional. Interpretability of model outputs is crucial for legal professionals in real-world legal applications (Wu et al., 2022). For scenario analysis using IRAC, it is helpful for models to produce individual reasoning paths and their associated rules or precedents so that legal professionals are able to understand why models draw certain conclusions. Moreover, it is noteworthy that legal reasoning is _defeasible_ such that conclusions can be overturned by considering new evidence or making different assumptions due to missing information (Sartor, 1995).
However, prior legal datasets do not include intermediate reasoning paths understandable by legal professionals, and they neglect the aspect of defeasible reasoning. To address the above issues, we constructed the _first_ semi-structured IRAC corpus, coined SIRAC1, which includes sets of legal scenarios pertinent to CAM and the Australian Social Act (ASA), respectively. To make IRAC analysis understandable by both LLMs and legal professionals, we proposed a semi-structured language and used it to codify an IRAC analysis for each scenario. As all scenarios are analyzed by senior law students with IRAC, we conducted detailed comparative studies between their results and the ones produced by ChatGPT, by applying different prompting and in-context learning techniques (Dong et al., 2022). As our scenarios are complex and involve reasoning paths with an average length of 7.05 steps, we also decomposed the legal question of a scenario into multiple simpler questions and instructed ChatGPT to address them separately. Footnote 1: Github link: [https://github.com/chrisitnakang/SIRAC.git](https://github.com/chrisitnakang/SIRAC.git) As a result, we obtained the following novel findings via extensive analysis conducted by the law students: * Without IRAC analysis from legal professionals, ChatGPT achieves an F1 of 0.49 on average for answering the legal questions of scenarios. However, ChatGPT fails to produce complete and correct reasoning paths toward the answers for any evaluated scenario, even though some of the answers are correct. * We demonstrated the importance of providing correct intermediate reasoning paths to ChatGPT. The average F1 score of the final answers estimated by ChatGPT improved to more than 0.86 when the complete human-written reasoning paths, except the final answers, were fed to the model. * ChatGPT benefits from adding similar example scenarios with IRAC analysis during in-context learning, but only if similar scenarios can be found. * Decomposing legal questions into simpler ones consistently improves the accuracy of identifying legal concepts, such as "invitation to treat". However, this approach does not always improve the correctness of the produced reasoning paths. ## 2 Background A legal problem could be as simple as ascertaining the amount of money due from a tenant to the landlord under a contract affected by a certain unseen event like COVID-19, or a more complex one involving contracts between large corporate entities negotiating cross-border sales and related matters. Legal problems require unique reasoning skills to be solved. These unique skills are applied for solving legal problems in a rather systematic manner using the IRAC methodology (Alsagoff, 1996). Before one begins to solve a legal problem using IRAC's legal reasoning process, it is essential to acquire factual details from the client. These facts will lead to the correct identification of the legal issue. Given the identified issue, the next step is to determine the law that governs that aspect of the legal problem. The application of the law, or the analysis, in the third stage is perhaps the most detailed stage of IRAC. In this stage, the law is practically applied to the facts and the issues in question. As no two legal problems would be identical, one needs to be aware of possible variations. This stage is particularly important because it is here that legal reasoning skills are truly tested. The final stage of IRAC is the conclusion, which is a natural outcome of applying the law to the facts and issues.
However, applying the same law to the same facts and issues could result in different conclusions, and the correct answer depends on the application of the appropriate law to the facts, which could be interpreted differently. ## 3 Dataset Construction We constructed SIRAC to evaluate to what degree LLMs and other AI models are able to conduct IRAC analysis in the same manner as legal professionals. Our goals are to promote research on devising novel legal AI models that are i) equipped with strong defeasible reasoning capability, and ii) interpretable by both legal and computer science professionals. We start with selecting scenarios pertaining to CAM and ASA, followed by conducting full-fledged IRAC analysis on all scenarios using a proposed semi-structured language. The data quality is ensured by our strict procedures. ### 3.1 Selection of Scenarios We have selected CAM and ASA scenarios in common-law systems, which are under-explored by legal AI research communities, so that LLMs are less likely to have memorized similar scenarios, relevant statutes and precedents during pre-training. The IRAC analysis of a single legal scenario may take several hours or days, even for law professors and the best law students. At this stage, we do not consider scenarios pertaining to more than one area of law, involving multiple topics and complex legal issues, in order to keep the analysis time within our budget. We started with selecting a set of statutes in ASA before creating scenarios. In particular, Section 2, Section 3 and Section 4 in ASA are selected for the purpose of this study. Based on the statutes, law students were instructed to write 30 scenarios inspired by real-world cases so that i) each scenario is designed to answer the legal question "Is the child a dependent child"; and ii) all relevant legal rules are covered by the selected sections. As a result, LLMs will be given complete information to perform IRAC analysis. For the scenarios pertinent to CAM, we increased the complexity of IRAC analysis by considering scenarios with more diverse legal issues, requiring the application of both statutes and precedents - decisions made in previous similar cases. Herein, 20 scenarios were selected from tutorials, textbooks and examination questions related to CAM. Each scenario is associated with one topic and there are 10 topics in total. Each scenario pertains to a different yes-no question as the issue. For example, "Whether Bob and Jane can change their mind and demand repayment of the whole debt?", as shown in Fig. 1. ### 3.2 Annotation of IRAC Analysis Issue. As issues have already been determined for creating the scenarios, law students focused on analyzing relevant statutes and precedents, in order to obtain correct conclusions to the issues. Rule. We follow the format that is widely used by legal professionals to reference relevant statutes and prior cases. An example is depicted in Fig. 1. Application. Law students were instructed to annotate complete reasoning paths toward conclusions. A reasoning path is a sequence of reasoning steps, and the last step is the answer to the issue of a scenario. In addition, we borrowed key ideas from _default logic_ (Reiter, 1980) when performing defeasible reasoning. It differs from well-known monotonic logics, such as first-order logic, largely by introducing defaults (assumptions) and default rules. Both defaults and default rules are applied when coping with missing information.
For example, if the marriage status is unknown for a twenty-year-old defendant, it is reasonable to assume that he or she is single during reasoning. Default reasoning allows changes of intermediate conclusions if new facts or evidence are included. As a reasoning path may involve arguments and assumptions articulated in natural language, logical operators, and references to statutes or precedents, we introduce a semi-structured language to precisely describe the reasoning paths, so that they are easy to interpret by legal professionals and easy to process by computer programs. Inspired by neuro-symbolic approaches (Hitzler, 2022), we design the language using a mix of the language used in statutes and symbols for logical operators. We take into account the language used to formulate statutes because it is unambiguous for legal professionals, and the same words are consistently used for the same concepts. The introduction of logical operators, such as AND and OR, is expected to facilitate future research on incorporating symbolic or neuro-symbolic reasoning into legal reasoning. In addition, we annotate mentions of legal concepts with brackets because they often play an important role in legal analysis. Conclusion. We apply the same semi-structured language to formulate conclusions in the corresponding section of the IRAC structure. For each scenario, each conclusion is essentially the answer to the corresponding issue. To facilitate the manual annotation work, we built an online annotation tool (Appendix A.1) for law students, which supports annotations of legal concepts, rules, logical operators, and analysis using the semi-structured language in the IRAC framework. ### 3.3 Quality Assurance To ensure the quality of the scenarios and annotations, we applied a strict procedure in selecting the annotators, as well as in checking their progress. _Annotator Selection and Training._ We recruited five senior law students from different universities after interviewing all the candidates. The recruited students were expected to be familiar with ASA and CAM. Before they joined the annotation project, we provided test cases to them, in order to get them familiar with the semi-structured language and the annotation tool. After they submitted the test cases, we asked the law professors to verify their answers before they started the actual annotation work. _Scenario Quality._ To ensure quality, the scenarios of CAM were selected from reliable sources, such as law textbooks and examination questions. Each scenario related to ASA was checked by another student besides the one who created it. All scenarios were double-checked by one expert in the area to ensure that they are reasonable and meet all criteria mentioned above. In case of disagreements on the criteria of the scenarios, the expert discussed with all the law students and revised the corresponding guidelines, if necessary, to ensure consistency. _IRAC Analysis Quality._ We created detailed guidelines for performing and annotating IRAC analysis, as well as for the usage of the annotation tool. We had regular meetings with the law students to check on their progress and to discuss any potential problems. The solved problems were incrementally reflected in the guidelines. In addition, we regularly assigned the same scenarios to different law students to let them work independently, such that we can calculate inter-annotator agreement (IAA) and use the IAA scores to check the consistency of their annotations.
Moreover, the answers to the issues of the CAM scenarios were checked against the sample answers provided by the textbooks, examination questions and tutorials, so that correctness is guaranteed. ### 3.4 Data Statistics _Basic Statistics._ We have 50 annotated legal scenarios in total. Table 1 summarizes the statistics of the dataset. \begin{table} \begin{tabular}{l l l l l} \hline \hline & **Scenarios** & **Issues** & **Rules** & **Ave Length** \\ \hline SIRAC\_ASA & 30 & 1 & 3 & 4.8 \\ SIRAC\_CAM & 20 & 20 & 55 & 9.3 \\ \hline SIRAC & 50 & 21 & 58 & 7.05 \\ \hline \hline \end{tabular} \end{table} Table 1: Basic statistics of SIRAC. For CAM and ASA, the rules include references to precedents. In a reasoning path, each reasoning step applies a rule to a statement taken from the facts of the scenario. The average length of the reasoning paths is 7.05. For CAM, the average reasoning-path length is 9.3 because these scenarios are more complex compared to those of ASA. _Comparison with Related Datasets._ We compare SIRAC with current datasets for legal QA tasks along six dimensions: i) if the tasks require legal reasoning; ii) if the data is easy to process by machines; iii) if the reasoning paths are interpretable by legal professionals; iv) if IRAC analysis is applied; v) if there are complete and detailed reasoning paths annotated; vi) if the reasoning requires both statutes and prior cases. As summarized in Table 2, LegalBench (Guha et al., 2022) is the only one applying the IRAC methodology. However, they do not perform full IRAC analysis on scenarios. Hence, the corresponding reasoning paths are incomplete. The annotated paths are also fairly short, containing fewer than three steps. The other similar dataset is SARA (Holzenberger and Van Durme, 2021), which contains both question answering tasks and reasoning paths. However, they employed Prolog to codify reasoning paths, which is rather challenging for legal professionals to understand. ## 4 Assessment of ChatGPT Our preliminary results show that ChatGPT is one of the best performing LLMs in legal reasoning. Therefore, it was chosen to assess its ability in performing IRAC analysis. Instead of evaluating only the final results of statutory reasoning by GPT-3 (Blair-Stanek et al., 2023), we employed ChatGPT to perform full IRAC analysis, and the quality of its outputs was assessed by the law students in the respective sections: Rule, Application, and Conclusion. Issues were not evaluated because they are part of the inputs. Moreover, we also carried out in-context learning and question decomposition, which have been found useful in other reasoning tasks (Mialon et al., 2023), to improve the performance of ChatGPT. ### 4.1 Evaluation Measures In this work, we focus on understanding the pros and cons of ChatGPT on this task, as well as its alignment to the reasoning behavior of humans. Therefore, the human evaluation measures are based on the marking rubrics used by law schools, and on the measures for evaluating language models. Throughout all experiments of human evaluation, we formulate the majority of the measures as a statement and score each of them with either -1, 0 or 1, indicating _disagree_, _neutral_, and _agree_ respectively. The statements are formulated such that higher scores are better. All measures are assessed first for each selected scenario and their results are summarized for later analysis. Appendix A.3.2 shows the questions we asked the annotators.
Rule. The rules and precedents are evaluated with regard to _information relevance_. The corresponding statement is formulated as "The referenced statutes and precedents are correct and complete." Application. We apply multiple measures to assess the legal analysis in this section. _Correctness of Concept Identification._ This measures whether all important legal concepts are correctly identified, and is formulated as "All key concepts are correctly identified." _Articulation of Reasoning._ This concerns the articulation of reasoning steps and whether the logical structures between steps are correct. The corresponding statement is "ChatGPT performs and articulates reasoning correctly and consistently." _Evaluation of Assumptions._ As legal reasoning is defeasible, we evaluate the generated assumptions by using the following questions: * How many assumptions are made? * Among the assumptions made by ChatGPT, how many assumptions are correct? * Comparing the assumptions from ChatGPT and humans, how many of them are matched? _General Comment._ We also ask annotators to comment on the reasoning performance of ChatGPT, in particular its strengths and weaknesses. Conclusion. _Correctness_ is a commonly evaluated feature (Hamalainen and Alnajjar, 2021). Humans evaluate the answer to identify whether it addresses the question and whether it is consistent with the source material. For all IRAC analyses, we also evaluate the _fluency_ of the texts generated by ChatGPT, in order to assess if they are understandable by humans. To evaluate the quality of the assessment results, we reassigned 18 finished scenarios to three different annotators and asked them to evaluate the scenarios in terms of the above measures. We chose Cohen's Kappa score (Cohen, 1960; Zhan et al., 2023) to compute the agreement. The Kappa score over all the evaluation measures is 0.55, while the Kappa score without the assumption evaluation is 0.75. Our investigation shows that the questions about the assumptions are subjective for law students, which however is a common problem in law education, as confirmed by a lecturer who has taught law for several years. ### 4.2 Results and Discussions We started with evaluating the final answers generated by ChatGPT, followed by analyzing its reasoning steps and references. Evaluation of Conclusion. We fed a scenario and its issue as the input to ChatGPT. As the questions in all scenarios are yes-no questions, we appended "Answer in one word." to the last line of the inputs so that we could automatically extract the answers from ChatGPT. An example input is depicted in Fig. 2.
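This answer-extraction setup can be reproduced with a few lines of code. The sketch below is a minimal illustration of querying a chat model with a scenario, an issue, and the "Answer in one word." suffix; it assumes the legacy `openai` Python client (pre-1.0 interface), and the scenario text is a placeholder rather than an actual SIRAC scenario.

```python
import openai  # assumes openai < 1.0; set openai.api_key beforehand

def ask_yes_no(scenario: str, issue: str) -> str:
    # Build the prompt exactly as described: scenario, issue, one-word answer.
    prompt = f"{scenario}\n\nQuestion: {issue}\nAnswer in one word."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.0,  # deterministic decoding for evaluation
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"].strip().lower()

# Placeholder scenario; the real inputs are the SIRAC scenarios.
answer = ask_yes_no("Bob lent Jane RM10,000 ...",
                    "Can Bob demand repayment of the whole debt?")
print("yes" if answer.startswith("yes") else "no")
```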
In addition to the scenarios pertaining to ASA and CAM, we included the 276 scenarios pertinent to the Internal Revenue Code (IRC) from the SARA dataset, where IRC is the domestic portion of federal tax law in the United States. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Dataset name** & **Reasoning?** & **Friendly for AI systems?** & **Friendly for legal professionals?** & **IRAC applied?** & **Detailed reasoning paths?** & **Analysis requiring statutes and precedents?** \\ \hline SARA (Holzenberger and Van Durme, 2021) & Yes & Yes & No & No & No & No \\ COLIEE (Rabelo et al., 2022) & Yes & Yes & No & No & No & No \\ CUAD (Hendrycks et al., 2021) & No & Yes & No & No & No & No \\ MAUD (Wang et al., 2023) & No & Yes & Maybe & No & No & No \\ LEGALBENCH (Guha et al., 2022) & Yes & Maybe & Maybe & Yes & No & No \\ Semi-structured IRAC (SIRAC) & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the datasets for legal QA and legal scenario analysis. As shown in Table 3, ChatGPT's performance varied across these domains, with the highest precision on ASA (0.75) and the lowest on IRC (0.29). The F1 score is likewise lowest on IRC (0.35) and highest on ASA (0.67). The average F1 score of ChatGPT is 0.49, so there is still a lot of room for improvement in the following steps. Evaluation of Application and Rule. Apart from conclusions, we evaluated the ability of ChatGPT by letting it produce intermediate reasoning paths. This can be achieved by removing "Answer in one word." from the last line. As SARA does not contain human-annotated reasoning paths, we evaluated the Application sections only on the scenarios of CAM and ASA. We selected 20 scenarios per area for human evaluation. Among the 40 scenarios we evaluated, only two produced high-quality reasoning paths, rated as _agree_ on the question about _articulation of reasoning_. Further details can be found in the example depicted in Fig. 3. Table 5 summarizes ChatGPT's performance on CAM and ASA in terms of all evaluation measures, including fluency, information relevance, and articulation of reasoning. From the results, we can see that ChatGPT receives far fewer _agree_ (1) ratings than it did for the final answers. In contrast, annotators agree that the fluency of ChatGPT outputs is high, in line with observations from related work. However, ChatGPT does not provide enough information, such as references to statutes or precedents, in the involved reasoning paths. Out of 40 examples, there is only one example from SIRAC-CAM whose analysis has correct references to statutes and precedents.
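For reference, the precision, recall and F1 values of the kind reported in Table 3 can be computed with standard tooling. The snippet below is a minimal sketch using scikit-learn with placeholder yes/no labels, not the actual evaluation data of this work.

```python
from sklearn.metrics import precision_recall_fscore_support

# Placeholder gold and predicted yes/no conclusions (1 = yes, 0 = no).
gold = [1, 0, 1, 1, 0, 1]
pred = [1, 0, 0, 1, 1, 1]

p, r, f1, _ = precision_recall_fscore_support(gold, pred, average="binary")
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```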
The performance is also poor on the articulation of reasoning. We notice that the formulation of the analysis from ChatGPT is sometimes very confusing and logically inconsistent. The evaluation results for assumptions are listed in Table 4. We evaluated the results from two different perspectives: assumptions and analysis. The assumption perspective measures whether the answer from ChatGPT makes reasonable assumptions. For example: if my son stays at my home all day and does nothing, then we can assume that my son is wholly dependent on me. However, we also noticed that, although some of the assumptions were identified by ChatGPT, they were not analysed correctly. For example, ChatGPT answered "my son is **not** wholly dependent on me", which is a wrong assumption. We compared the results across the different areas of law and the different methods. The results indicate that, after using in-context learning and decomposed questions, more assumptions were identified and discussed correctly. Nonetheless, the improvement in analysis is still smaller than that for assumptions. Overall, although ChatGPT could produce correct conclusions in IRAC analysis, its analysis in the Application section is mostly not aligned with that of legal professionals. The references to law and precedents are often missing or incorrect. Figure 2: An example prompt for answer evaluation. Figure 3: An example reasoning path. \begin{table} \begin{tabular}{l c c c} \hline \hline & **Precision** & **Recall** & **F1** \\ \hline IRC & 0.29 & 0.43 & 0.35 \\ ASA & 0.75 & 0.60 & 0.67 \\ CAM & 0.50 & 0.40 & 0.44 \\ \hline Average & 0.51 & 0.48 & 0.49 \\ \hline \hline \end{tabular} \end{table} Table 3: Result of answers produced by ChatGPT. Impact of Adding Reasoning Paths. LLMs are known to rely on "shortcuts" for predictions occasionally (Du et al., 2022), hence their robustness is questionable. In this experiment, we verify whether ChatGPT's performance can be improved by progressively providing it with human-authored parts of reasoning paths, in order to shed light on future research directions for legal reasoning. Herein, we add 20%, 40%, and 80% of the reasoning paths annotated by law students to the inputs after the legal question. The final outcome is progressively improved. Figure 4 shows that the more analysis we provide in a prompt, the higher the F1 scores gained on the final answers. The F1 score reaches 0.89/1.0, starting from the lowest 0.10/0.0, for CAM/ASA respectively. This observation suggests that LLMs, e.g., ChatGPT, should focus on generating correct intermediate steps. We also found that this observation is consistent across both areas of law that we evaluated. Effects of Adding Examples to Prompts. To understand the performance of ChatGPT using in-context learning, we added the most similar example in terms of topic or related statutes to the prompt for each scenario. Fig. 5 gives an example of in-context learning. We evaluated the same 40 scenarios as before. The quality of the reasoning paths is improved by 27.5%, especially for ASA, because the scenarios in this area are similar to each other. From the statistics of our data, all ASA scenarios are related to one topic and two sections. The following are the main findings according to our human evaluation: * The analysis parts are improved in 50% of the given scenarios after adding examples. * The analysis parts are improved for the scenarios that are similar to the provided examples. Table 4 displays the F1 score, changing from 0.34 to 0.66 on the analysis part. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{**Assumptions**} & \multicolumn{3}{c}{**Analysis**} \\ \cline{2-7} & **Precision** & **Recall** & **F1 Score** & **Precision** & **Recall** & **F1 Score** \\ \hline **Overall** & 0.45 & 0.68 & 0.54 & 0.28 & 0.43 & 0.34 \\ **CAM** & 0.56 & 0.66 & 0.61 & 0.35 & 0.41 & 0.38 \\ **ASA** & 0.32 & 0.73 & 0.45 & 0.30 & 0.46 & 0.28 \\ **In-context Overall** & 0.73 & 0.96 & 0.83 & 0.58 & 0.76 & 0.66 \\ **In-context CAM** & 0.54 & 0.95 & 0.69 & 0.32 & 0.57 & 0.41 \\ **In-context ASA** & 0.94 & 0.97 & 0.96 & 0.85 & 0.88 & 0.87 \\ \hline **Decomposition Overall** & 0.75 & 0.88 & 0.81 & 0.48 & 0.37 & 0.52 \\ **Decomposition CAM** & 0.28 & 0.90 & 0.89 & 0.49 & 0.50 & 0.49 \\ **Decomposition ASA** & 0.66 & 0.87 & 0.75 & 0.48 & 0.63 & 0.54 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of ChatGPT in terms of Precision, Recall, and F1 Score. Table 5: Result for reasoning paths. Figure 4: Results of adding reasoning paths. Figure 5: An example of adding examples to prompts.
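A minimal sketch of the prompt assembly used for in-context learning could look as follows; the retrieval rule, the field names, and the `ask_chatgpt` helper are illustrative placeholders rather than the exact pipeline of this work.

```python
def build_icl_prompt(example: dict, scenario: str, issue: str) -> str:
    """Prepend one worked IRAC example (scenario + annotated analysis) to the
    new scenario, following the in-context learning setup described above."""
    return (
        "Example scenario:\n" + example["scenario"] + "\n"
        "Example IRAC analysis:\n" + example["analysis"] + "\n\n"
        "Now analyse the following scenario using the IRAC method.\n"
        + scenario + "\nQuestion: " + issue
    )

def most_similar(annotated: list, topic: str) -> dict:
    # Placeholder retrieval: pick the annotated scenario sharing the same topic.
    return next(ex for ex in annotated if ex["topic"] == topic)
```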
For ASA, more legal points are covered and F1 scores improve up to 0.96. Referring to the provided example, ChatGPT is able to answer in a manner similar to the one shown in the example question. Effects of Decomposed Questions. Since ChatGPT is often unable to capture the main legal concepts for the reasoning analysis, we investigated how it can be improved by decomposing a question into smaller questions. Inspired by Chain-of-Thought (Wei et al., 2022), we attempted to dissect the issue questions into smaller questions and thereby guide ChatGPT towards more accurate answers. Fig. 6 shows examples of decomposed questions. The list of all decomposed questions can be found in Appendix A.3.1. Our annotators decomposed the issue questions based on the legal concepts that need to be mentioned in the reasoning paths. From the sample results, we can observe that with decomposed questions, ChatGPT is able to apply the analysis to the facts identified from the scenario, followed by matching and aligning them with the legal concepts. Table 7 shows the results for the decomposed questions. We can see an improvement over the previous answers. From Table 4, the overall legal concept identification maintains high performance with a precision of 0.75, recall of 0.88, and an F1-score of 0.81. The results show that decomposed questions help in identifying legal concepts related to the scenario more effectively. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{Constructions} & Fluency & Improvement & \multicolumn{2}{c}{Information} & Category & Attribution \\ & & & & & & & \\ \hline -1 & 5 & 0 & 3 & 5 & 5 & 5 \\ ASA & 0 & 7 & 3 & 2 & 9 & 6 & 5 \\ & 1 & 8 & 17 & 15 & 6 & 9 & 10 \\ \hline +1 & 11 & 0 & 9 & 1 & 10 & 11 \\ CAM & 0 & 6 & 3 & 7 & 6 & 7 & 4 \\ & 1 & 3 & 17 & 4 & 3 & 2 & 5 \\ \hline +1 & 16 & 0 & 12 & 16 & 15 & 16 \\ Total & 0 & 13 & 6 & 9 & 15 & 13 & 9 \\ 1 & 11 & 34 & 19 & 9 & 11 & 15 \\ \hline \hline \end{tabular} \end{table} Table 6: Result for adding examples to prompts. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Constructions**} & \multicolumn{2}{c}{**Improvement**} & \multicolumn{2}{c}{**Phases**} & Information & Category & Attribution \\ & & & & & & & \\ & & & & & & & \\ \hline -1 & 10 & 12 & 0 & 20 & 10 & 9 \\ ASA & 0 & 7 & 5 & 0 & 0 & 7 & 10 \\ & 1 & 3 & 5 & 20 & 0 & 3 & 1 \\ \hline +1 & 10 & 11 & 0 & 18 & 9 & 12 \\ CAM & 0 & 6 & 4 & 3 & 2 & 7 & 5 \\ & 1 & 4 & 5 & 17 & 0 & 4 & 3 \\ \hline +1 & 20 & 23 & 0 & 30 & 19 & 20 \\ Total & 0 & 13 & 9 & 3 & 2 & 14 & 18 \\ & 1 & 7 & 8 & 37 & 0 & 7 & 4 \\ \hline \hline \end{tabular} \end{table} Table 7: Result for decomposed questions. Figure 6: Example of decomposed questions. ## 5 Related Work In this section, we present related work, highlighting the similarities as well as the differences between prior studies and ours. Previous work related to our paper can be categorized into the following subsections.
### LegalQA with Reasoning Task There have been several research efforts related to legal reasoning. Holzenberger and Van Durme (2021) presented reasoning steps in Prolog (De Raedt et al., 2007). Their research focused on identifying the rules applicable in the scenarios. They also attempted to perform the reasoning by rewriting the rules in Prolog. However, this approach is not applicable to other areas of law, and Prolog is not easily comprehensible by legal professionals. Yu et al. (2022) showed that the best accuracy can be obtained by applying the IRAC methodology. However, their work did not evaluate the reasoning path, which is the analysis part of IRAC. The accuracy was only based on the conclusion, which is not a comprehensive evaluation of the IRAC methodology. Guha et al. (2022) separated IRAC into four individual tasks. They compiled different data corpora and fit them into the IRAC analysis. However, they did not apply the complete IRAC on a single case. In comparison to all these related works, our work presented in this paper covers the complete IRAC methodology. More importantly, the reasoning traces are presented in a comprehensible manner, for both legal and IT professionals. ### Reviewing ChatGPT Performance Given the proliferation of ChatGPT since November 2022, many research works have been carried out to review the performance of ChatGPT in specific domains (Zhuo et al., 2023). Kung et al. (2022) found that ChatGPT was able to pass all three examinations without any specialized training or reinforcement. In addition, ChatGPT provided a high level of concordance and insight in its explanations. Their work paves the way for domain-specific evaluations of ChatGPT. Blair-Stanek et al. (2023) and Savelka (2023) reviewed ChatGPT performance based on different input prompts. From their results, GPT-3 had 78% accuracy, raising doubts about GPT-3's ability to handle basic legal work. Similar to Guha et al. (2022), there is no evaluation of the reasoning traces. Although it was mentioned that ChatGPT performs poorly, no specific measures were used to support the claim objectively.
### Alignment Problems between Large Language Models (LLMs) and Humans Large language models (LLMs) have achieved success at a range of tasks such as question answering, summarisation, and dialogue. Kirk et al. (2023) proposed a three-tiered policy framework that allows users to experience the benefits of personalised alignment. Similarly, Mokander et al. (2023) used a three-layered approach for auditing LLMs, which includes governance audits, model audits, and application audits. Kasirzadeh and Gabriel (2023) developed a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. Bakker et al. (2022) studied LLM alignment from different perspectives; their work highlights that there is still a gap between the alignment of LLMs and humans. However, there is a need to strike a balance between providing more information to LLMs and not burdening the human experts involved unnecessarily. This observation inspires us to investigate the right amount of information LLMs require from humans in order to achieve satisfactory performance. ## 6 Conclusion We constructed a novel dataset, coined SIRAC, to evaluate the ability of LLMs for IRAC analysis. SIRAC contains 50 legal scenarios pertinent to ASA and CAM. Every scenario was annotated with reasoning paths with an average length of 7.05 steps, described by our proposed semi-structured language. SIRAC is not only useful for studies and analysis by legal professionals, but also offers a fertile pool of resources for legal NLP. Our evaluation results on ChatGPT show that powerful LLMs can produce reasonable answers but mostly fail to yield correct reasoning paths aligned with those of legal experts. Performance can be further improved by providing parts of the annotated reasoning paths, by including similar annotated scenarios for in-context learning, and by decomposing complex issues into simpler questions. As those techniques are not specific to the investigated areas of law, it is desirable to understand, in future work, the extent to which such empirical findings hold in other legal areas. ## 7 Limitations While SIRAC offers a comprehensive resource for IRAC analysis, both by legal professionals and machine learning models, we need to acknowledge the existing constraints. _Lack of legal domain coverage._ It is challenging to engage legal professionals who understand the law of different countries. Hence, there is a limit to the extent of analysis that could be performed on some published legal datasets. At this stage, SIRAC is limited to only two areas of law, ASA and CAM. However, the methodology proposed in this paper is applicable to other laws, and SIRAC already covers two different laws from different countries. In the future, we plan to engage more legal professionals who can contribute to expanding the dataset to other areas of law. _Lack of data resources for ChatGPT revision._ Due to limited resources, we were only able to engage a small group of annotators to assist us in evaluating the outcomes produced by ChatGPT. While the analysis is sufficient for us to obtain concrete insights, we hope to engage more annotators to further strengthen our research contribution. ## 8 Ethics All the tasks carried out in this paper aim to assess the reasoning traces for legal scenario analysis by ChatGPT. The entities mentioned in the legal scenarios are anonymized by the annotators. The court cases included in our dataset do not reveal the real cases.
We only include the court case names and corresponding paragraph numbers. In fact, court cases are accessible by the public and often used for further analysis by commentators or for law studies. As such, the court cases used do not require additional ethics approval. ## 9 Acknowledgements This research is made possible through the generous support of the Monash Inter-Faculty Seeding Grant. We wish to extend our sincere gratitude to all the dedicated annotators who have made invaluable contributions to this project. We would like to extend our special thanks to Dr. Sia Chin Chin from Taylor's University, whose provision of the examination questions as a reliable source for the legal scenarios, along with assistance in recruiting qualified annotators, has been instrumental in advancing our research endeavors.
2306.09851
Joint multi-modal Self-Supervised pre-training in Remote Sensing: Application to Methane Source Classification
With the current ubiquity of deep learning methods to solve computer vision and remote sensing specific tasks, the need for labelled data is growing constantly. However, in many cases, the annotation process can be long and tedious depending on the expertise needed to perform reliable annotations. In order to alleviate this need for annotations, several self-supervised methods have recently been proposed in the literature. The core principle behind these methods is to learn an image encoder using solely unlabelled data samples. In earth observation, there are opportunities to exploit domain-specific remote sensing image data in order to improve these methods. Specifically, by leveraging the geographical position associated with each image, it is possible to cross reference a location captured from multiple sensors, leading to multiple views of the same locations. In this paper, we briefly review the core principles behind so-called joint-embeddings methods and investigate the usage of multiple remote sensing modalities in self-supervised pre-training. We evaluate the final performance of the resulting encoders on the task of methane source classification.
Paul Berg, Minh-Tan Pham, Nicolas Courty
2023-06-16T14:01:57Z
http://arxiv.org/abs/2306.09851v1
# Joint Multi-Modal Self-Supervised Pre-Training in Remote Sensing: Application to Methane Source Classification ###### Abstract With the current ubiquity of deep learning methods to solve computer vision and remote sensing specific tasks, the need for labelled data is growing constantly. However, in many cases, the annotation process can be long and tedious depending on the expertise needed to perform reliable annotations. In order to alleviate this need for annotations, several self-supervised methods have recently been proposed in the literature. The core principle behind these methods is to learn an image encoder using solely unlabelled data samples. In earth observation, there are opportunities to exploit domain-specific remote sensing image data in order to improve these methods. Specifically, by leveraging the geographical position associated with each image, it is possible to cross-reference a location captured from multiple sensors, leading to multiple views of the same locations. In this paper, we briefly review the core principles behind so-called joint-embedding methods and investigate the usage of multiple remote sensing modalities in self-supervised pre-training. We evaluate the final performance of the resulting encoders on the task of methane source classification. Paul Berg, Minh-Tan Pham, Nicolas Courty (IRISA, Universite de Bretagne Sud, UMR 6074, F-56000 Vannes, France) Footnote: This work is funded by the ANR AI chair OTTOPIA under reference ANR-20-CHIA-0030. This work was performed using HPC resources from GENCI-IDRIS (Grant 2022-AD011013514). _Index terms_: Remote sensing, Self-supervised learning, Multi-modal fusion, Methane source classification. ## 1 Introduction By considering the large amount of remote sensing data collected daily by different satellite sensors, there is a growing interest in leveraging those large-scale data to perform downstream tasks such as scene classification. Deep learning methods have imposed themselves as the de-facto tools for most tasks operating on geospatial data [1]. However, supervised neural networks require large quantities of annotations in order to perform at full capability. Therefore, recent methods based on self-supervised learning (SSL) have been proposed to reduce the need for labels during training, in computer vision [2] as well as in remote sensing applications [3, 4]. Among these methods, joint-embedding SSL frameworks have shown impressive results, even learning image representations on par with or better than state-of-the-art supervised models [5]. In remote sensing, early works have been proposed toward learning across modalities in a self-supervised manner by exploiting the multi-modal nature of various remote sensing datasets [6, 7, 8]. Indeed, the geographical location associated with each sample allows the creation of pairs of samples capturing the same geographical location. These pairs can then be seen as different views of the same location and incorporated in a self-supervised learning pipeline. In this paper, we build upon these works, which focus on learning on only two modalities, to develop a framework for self-supervised pre-training across several image modalities. We work with up to three, but there could be an arbitrary number of modalities if available. The proposed method is then tested on the task of methane source classification using the Meter-ML dataset [9]. Our experiments show that scaling the number of modalities in SSL pre-training can improve the performance on downstream tasks.
Interestingly, these results remain true when the downstream task inputs only a single modality, highlighting the potential of leveraging new modalities in self-supervised learning applied to remote sensing tasks. We also investigate the impact of artificial augmentations in this pre-training pipeline compared to using only the geographical cross references. ## 2 Methodology Joint-embedding methods enable the pre-training of models in a self-supervised manner by using multiple views from the same data point [10, 11, 12, 13]. In most cases, where the unlabeled data used for pre-training come from a single modality, a common technique is to create different views using random artificial augmentations such as cropping, color jittering, and rotations. Given a sample view \(x\), we call views from the same sample positives \(x^{+}\) and we refer to views from a different sample as negatives \(x^{-}\). The main objective of joint-embedding methods is to maximize the alignment between the representation of a view and the representations of its corresponding positives. However, this objective is not sufficient on its own, since the model can trivially converge to a constant representation minimizing the distance between all representations and producing un-discriminative representations (i.e., the collapse problem). To circumvent this problem, two families of solutions have been proposed. Firstly, contrastive methods [10, 14] leverage other samples in the batch as so-called negatives, so that the learning objective not only minimizes the distance of each sample to its positive views but also maximizes its distance to negatives, thus preventing the latent space from collapsing to a single point. Secondly, several regularization-based or architectural methods rely on either additional loss terms [15, 16] or on adding network asymmetry [12, 13] for the different augmented views. In this paper, we focus on contrastive methods due to their popularity in SSL. Also, they have been until now the most widely used SSL approaches in remote sensing [3]. The most common version of the contrastive loss is called InfoNCE [10]. It maximizes the alignment between positive views from the same sample while minimizing the alignment with views from other samples present in the same batch, using a softmax cross-entropy. Given an encoder model \(f_{\theta}(\cdot)\) parameterized by \(\theta\), the loss for a single view is: \[\mathcal{L}_{\text{NCE}}(x;\theta)=-\sum_{\hat{x}^{+}\in x^{+}}\log\frac{\exp(\langle f_{\theta}(x),f_{\theta}(\hat{x}^{+})\rangle/\tau)}{\sum_{\hat{x}\in\Omega(x)}\exp(\langle f_{\theta}(x),f_{\theta}(\hat{x})\rangle/\tau)} \tag{1}\] where \(\tau>0\) is a temperature parameter used to control the sharpness of the distribution generated by the softmax operator and \(\langle x,y\rangle\) refers to the dot product between \(x\) and \(y\), both normalized to unit vectors, i.e., the cosine similarity. \(\Omega(x)\) refers to the set of augmented views present in the batch excluding \(x\), namely \(\Omega(x)=x^{+}\cup x^{-}\). During self-supervised pre-training, the loss is applied to every generated view. Therefore, for a batch of \(N\) original samples, each augmented with \(T\) views, the following loss function is computed: \[\mathcal{L}_{\theta}=\frac{1}{T\times N}\sum_{i=1}^{T\times N}\mathcal{L}_{\text{NCE}}(x_{i};\theta). \tag{2}\] As each modality contains a different number of channels, we pre-train a backbone for each one.
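A compact PyTorch rendering of Equation (1) is sketched below: given L2-normalized embeddings of all views in a batch and a boolean mask marking which pairs are positives (same location, possibly seen by different sensors), it computes the softmax cross-entropy over the remaining views. This is an illustrative sketch (the positive terms are averaged rather than summed per view), not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(z: torch.Tensor, pos_mask: torch.Tensor, tau: float = 0.1):
    """z: (V, d) embeddings of all views in the batch (V = T * N).
    pos_mask: (V, V) boolean, True where views i and j come from the same
    sample (e.g. the same geographical location captured by another sensor)."""
    z = F.normalize(z, dim=1)                 # cosine similarity via dot product
    sim = z @ z.t() / tau                     # (V, V) scaled similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim.masked_fill_(eye, float("-inf"))      # a view is never its own positive
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average positive log-probabilities per view, then average over views.
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```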
These backbones can therefore be used on their own as a discriminative initialization after self-supervised pre-training. During the downstream tasks, a single feature vector can be used for a geographical location by fusing representations from each input modality. ## 3 Experimental Study The Meter-ML [9] dataset used in our experiments for methane source classification contains multiple modalities for each geographical location. Each methane-emitting facility present in the dataset therefore has corresponding Sentinel-1, Sentinel-2 and NAIP sensor captures. The dataset contains facilities from six different classes: concentrated animal feed operations (CAFOs), coal mines, landfills, natural gas processing plants (Proc Plants), oil refineries and petroleum terminals (R&Ts), and wastewater treatment plants (WWTPs). To experiment with both optical and SAR data as well as both low and high resolution, we pick sensor views from Sentinel-1 (VH and VV) and Sentinel-2 (RGB and NIR) at 10-m resolution as well as NAIP (RGB and NIR) at 1-m resolution. The proposed architecture is composed of a backbone for each modality used during pre-training (see Figure 1). We use an AlexNet [17] for Sentinel-1 (S1) and Sentinel-2 (S2) and a ResNet18 [18] model for NAIP. To evaluate the impact of artificial augmentations, we compare self-supervised pre-training with and without artificial augmentations. When artificial augmentations are used, we generate two versions of the same image for each modality using random augmentations. Figure 1: Architecture of our multi-modal pre-training and finetuning processes. Black arrows represent the forward data flow during pre-training while dashed gray arrows represent the forward data flow during finetuning. To ease the reading, we illustrate a specific use-case of methane source classification using the Meter-ML dataset [9] with three modalities: NAIP, Sentinel-2 and Sentinel-1 images. In the downstream classification task, methane-emitting sources are divided into six classes. Our set of augmentations includes random horizontal and vertical flips and a conservative random resized crop with a scale of at least 90% of the original image. With artificial augmentations, each augmented view has a randomly augmented positive in the same modality as well as multiple augmented positives in other available modalities, whereas without artificial augmentations, the view from a modality only has corresponding positives in other modalities. The Negative class samples present in the Meter-ML dataset are only used as negatives in the contrastive loss (see equation 1). Models are pre-trained for 120 epochs. When using multiple modalities during finetuning, representations from each backbone are concatenated to produce a single feature vector for the sample, which is then fed to the classifier. For methane source classification, we finetune models for 100 epochs. The training and validation sets are set following the official split of the Meter-ML dataset. Results can be seen in Table 1. From the table, self-supervised pre-training consistently improves the performance compared to randomly initialized models. The multi-backbone architecture also scales with the number of modalities, even when certain modalities are removed for the downstream task. The best overall performance is obtained when combining all modalities for the downstream classification task. 
Each modality therefore contains information relevant to the classification task that the methane source classifier is able to exploit. It is interesting to highlight that using fusion only during the downstream task with a random initialization leads to worse performance than using only Sentinel-2 data during pre-training and finetuning. This means that this modality could provide the most important information for classification and that self-supervised pre-training provides a discriminative initialization, which gives a better performance on the scene classification downstream task. Our experiments without artificial augmentations show interesting results. Mainly, when pre-training with modalities which are different in nature, like SAR and optical data (S1+S2, S1+NAIP), the results are better with artificial augmentations. This phenomenon suggests that the single positive from the other modality is hard to align to. Indeed, adding artificial augmentations improves the pre-training performance by also offering in-modality positives. With only optical pairs from different modalities (when pre-training with Sentinel-2 and NAIP, for example), the drop in performance is less severe. Therefore, in this case, artificial augmentations have less impact on the downstream performance. In any case, pre-training with only a single modality and random augmentations performs worse than pre-training with multiple modalities, but remains better than random initialization for our chosen downstream task of scene classification with finetuning. ## 4 Conclusion We performed evaluations on a multi-modal self-supervised pre-training pipeline. By leveraging the geographical pairs of sensor captures as multiple views, the performance of self-supervised pre-training can be improved compared to using a single modality pre-training with only artificial augmentations. In order to improve the performance of those pre-trained models, we leave to future work the evaluation of different fusion methods for the downstream task. Another interesting direction for improvement is the sharing of model weights between modalities. Indeed, our proposed approach requires having an entire set of encoder weights for each modality. Finally, we would like to explore in-depth the impact of the Negative class on the final performance of the model and how other unrelated datasets can be used as negatives only during the pre-training phase. \begin{table} \begin{tabular}{l|c c c c c c c} Pre-training & \multicolumn{7}{c}{Downstream} \\ & S1 & S2 & NAIP & S1 + S2 & S1 + NAIP & S2 + NAIP & S1 + S2 + NAIP \\ \hline None & 47.37\% & 64.29\% & 62.03\% & 65.04\% & 63.16\% & 68.42\% & 65.79\% \\ \hline S1 & 51.13\% & - & - & - & - & - & - \\ S2 & - & 70.30\% & - & - & - & - & - \\ NAIP & - & - & 66.92\% & - & - & - & - \\ S1 + S2 & 53.76\% & 71.80\% & - & 71.80\% & - & - & - \\ S1 + NAIP & 56.39\% & - & 70.68\% & - & **72.18\%** & - & - \\ S2 + NAIP & - & 71.43\% & 68.42\% & - & - & 72.18\% & - \\ S1 + S2 + NAIP & **58.65\%** & **72.56\%** & **72.93\%** & **72.18\%** & 68.80\% & **73.31\%** & **73.68\%** \\ \hline S1 + S2 \({}^{*}\) & 50.00\% & 69.55\% & - & 65.04\% & - & - & - \\ S1 + NAIP \({}^{*}\) & 55.26\% & - & 70.30\% & - & 60.90\% & - & - \\ S2 + NAIP \({}^{*}\) & - & **72.56\%** & 69.92\% & - & - & 72.93\% & - \\ S1 + S2 + NAIP \({}^{*}\) & 52.63\% & 71.05\% & 69.92\% & 65.41\% & 65.79\% & 69.92\% & 69.17\% \\ \end{tabular} \end{table} Table 1: Accuracy using different pre-training and finetuning scenarios. When pre-training is None, it refers to randomly initialized baseline models. Pre-trainings annotated with \({}^{*}\) denote using no random augmentations and only one view (the original image) per modality during pre-training. In this setting, self-supervised pre-training cannot be done on a single modality. 
Overall, multi-modal self-supervised learning provides a better initialization than single-modality self-supervised learning for methane source classification. Future work could also investigate whether or not these results generalize to the more general downstream task of remote sensing scene classification. Hopefully, this shows the interest of using datasets during pre-training beyond those directly involved in the downstream task. There are also other opportunities to include remote sensing specific data in the pre-training phase in the form of domain-specific augmentations; these should continue to be investigated, as they often have more impact than the specific type of self-supervised loss used.
2310.05755
Deep Concept Removal
We address the problem of concept removal in deep neural networks, aiming to learn representations that do not encode certain specified concepts (e.g., gender). We propose a novel method based on adversarial linear classifiers trained on a concept dataset, which helps to remove the targeted attribute while maintaining model performance. Our approach Deep Concept Removal incorporates adversarial probing classifiers at various layers of the network, effectively addressing concept entanglement and improving out-of-distribution generalization. We also introduce an implicit gradient-based technique to tackle the challenges associated with adversarial training using linear classifiers. We evaluate the ability to remove a concept on a set of popular distributionally robust optimization (DRO) benchmarks with spurious correlations, as well as out-of-distribution (OOD) generalization tasks.
Yegor Klochkov, Jean-Francois Ton, Ruocheng Guo, Yang Liu, Hang Li
2023-10-09T14:31:03Z
http://arxiv.org/abs/2310.05755v1
# Deep Concept Removal ###### Abstract We address the problem of concept removal in deep neural networks, aiming to learn representations that do not encode certain specified concepts (e.g., gender). We propose a novel method based on adversarial linear classifiers trained on a concept dataset, which helps remove the targeted attribute while maintaining model performance. Our approach Deep Concept Removal incorporates adversarial probing classifiers at various layers of the network, effectively addressing concept entanglement and improving out-of-distribution generalization. We also introduce an implicit gradient-based technique to tackle the challenges associated with adversarial training using linear classifiers. We evaluate the ability to remove a concept on a set of popular distributionally robust optimization (DRO) benchmarks with spurious correlations, as well as out-of-distribution (OOD) generalization tasks. ## 1 Introduction It is well known that deep neural networks encode the information of various concepts in the latent representation space Bau et al. (2017). The ability to remove a specified concept (e.g., by the model trainer or user) from the learned representation and the model is crucial in many ways. For example, some concepts may represent detrimental features, such as ones that are not relevant to the downstream task but are nevertheless spuriously correlated with the target variable, e.g., the background for classifying the type of animal Beery et al. (2018); some of the attributes might represent information that was once informative but no longer is; others may represent sensitive features, such as gender or race, which are undesirable for the model to correlate with. Removing these features will produce more robust, generalizable, and fair models that are oblivious to them. In this paper, we consider the problem of _concept removal_ -- we want to learn deep neural representations that do not encode a certain specified attribute. A large body of existing literature is concerned with adversarial concept removal Elazar and Goldberg (2018); Ye et al. (2021); Moyer et al. (2018). These methods seek to learn representations that are statistically independent of sensitive attributes, thus preventing any potential inference, with an adversarial classifier serving as a proxy to measure the mutual information. While these methods help to mitigate the bias, they may be limited in generalizing to out-of-distribution (OOD) data, as the independence is subject to the specific distribution of the input data. Furthermore, these methods require labeled sensitive attributes in the training dataset. In this paper, we propose an adversarial method that relies on a _concept dataset_ instead. We borrow this notion from the interpretability literature Kim et al. (2018); Chen et al. (2020); Crabbe and van der Schaar (2022), where a concept dataset refers to a set of examples that are chosen or curated to represent a specific concept of interest. For instance, to determine whether the classification of a "zebra" relies on the concept "stripes," they collect a set of images with striped patterns. These images are used to construct a linear concept classifier by separating them from some random images in the latent space of a pretrained neural network. A concept dataset is also potentially cheaper to obtain since it can be composed of publicly available or synthetic data. 
While interpretability methods are primarily concerned with detecting whether a classifier relies on a certain concept, our goal is to mitigate this effect through adversarial training. ### Adversarial Concept Removal in Representation Learning Following Elazar and Goldberg (2018), adversarial concept removal consists of training simultaneously a downstream task classifier and an adversarial concept classifier by alternating between two objectives, \[\min_{g_{adv}}\frac{1}{N}\sum_{i=1}^{N}\ell(g_{adv}(h(X_{i})),Y_{i}^{C})\] \[\min_{h,f}\frac{1}{N}\sum_{i=1}^{N}\ell(f(h(X_{i})),Y_{i})-\frac{1}{N}\sum_{i=1}^{N}\ell(g_{adv}(h(X_{i})),Y_{i}^{C}) \tag{2.2}\] In the first equation, we fit the classifier \(g_{adv}\) to predict the _attributed_ concept indicators \(Y_{i}^{C}\) given a fixed representation \(h\). In the second part, given a fixed probing classifier \(g_{adv}\), we simultaneously minimize the loss of the downstream task and maximize the loss of the adversarial classifier. In the ideal situation where \(h(X)\) and \(Y^{C}\) are independent, we should have the loss of the adversarial classifier close to the _chance_ level. In other words, the negative loss of the adversarial classifier is a proxy for the mutual information between \(h(X)\) and \(Y^{C}\). We emphasize that this approach requires the concept indicators \(Y^{C}\in\{0,1\}\) to be attributed in the training dataset (or at least a subset of it) Elazar and Goldberg (2018); Madras et al. (2018). These methods are designed to _censor_ the concept within a particular distribution, which does not guarantee that the concept is not entangled with other features. For example, there are no guarantees that we can learn features that are transferable between instances belonging to the concept and instances not belonging to it. Instead, we rely on the notion of concept sensitivity in the sense that Kim et al. (2018) propose. There, a linear classifier \(g_{adv}(h)=\sigma(v^{\top}h)\) is trained on the concept dataset \((X_{i}^{C},Y_{i}^{C})_{i=1}^{N_{C}}\) instead. The choice of a linear classifier is mainly motivated by its application in the concept-based interpretation literature. It has also become widely accepted in the deep learning literature to consider concepts as directions in the latent space Louizos et al. (2015). Furthermore, a linear classifier also has a better chance of generalizing to the original training data \(X\) when we train it on the concept data \(X_{i}^{C},Y_{i}^{C}\). We note that there are many difficulties associated with training adversarial classifiers, such as vanishing and unstable gradients Goodfellow (2016); Arjovsky and Bottou (2017). Although we do not know if these problems can be caused particularly by our choice of the discriminator, to the best of our knowledge linear adversarial classifiers have not been considered in the literature before. We introduce a new method to train them by modifying the loss function1 and employing an _implicit differentiation technique_ Rajeswaran et al. (2019); Borsos et al. (2020); Lorraine et al. (2020). Footnote 1: We remark that Arjovsky and Bottou (2017) has a whole section dedicated to modified loss functions as ways of mitigating the problem with vanishing and unstable gradients. 
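For reference, the standard alternating procedure of Eq. (2.2) can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the authors' implementation; `h`, `f`, and `g_adv` are assumed to be `nn.Module`s and `loss_fn` a cross-entropy-style loss.

```python
import torch

def alternating_step(h, f, g_adv, opt_main, opt_adv, x, y, y_c, loss_fn):
    """One round of the two alternating objectives in Eq. (2.2)."""
    # Objective 1: fit the adversary g_adv on the frozen representation h(X).
    with torch.no_grad():
        z = h(x)
    adv_loss = loss_fn(g_adv(z), y_c)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Objective 2: update (h, f) to solve the task while fooling the adversary;
    # g_adv's parameters are not stepped here (its stale gradients are cleared
    # by opt_adv.zero_grad() at the start of the next round).
    z = h(x)
    main_loss = loss_fn(f(z), y) - loss_fn(g_adv(z), y_c)
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
    return main_loss.item(), adv_loss.item()
```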
## 3 Methodology Recall that Kim et al. (2018) do not specify how exactly the linear classifier is obtained. Let us consider CAV as a penalized logistic regression estimator (for simplicity, we assume that the concept dataset is **balanced**, so that we do not have to include the bias) \[v_{C,k,\lambda}^{*}(W)=\operatorname*{arg\,min}_{v}\frac{1}{N_{C}}\sum_{i=1}^{N_{C}}\ell_{BCE}\left(\sigma(v^{\top}h_{k}(X_{i}^{C};W)),Y_{i}^{C}\right)+\frac{\lambda}{2}\|v\|^{2}\,, \tag{3.1}\] where \(\ell_{BCE}(p,y)=-y\log p-(1-y)\log(1-p)\) is the binary cross-entropy loss, and \(\sigma(x)=1/(1+e^{-x})\) is the sigmoid function. Here, \(v_{C,k,\lambda}^{*}(W)\) is viewed as a function of \(W\), and although we do not have an explicit analytical form, we can differentiate implicitly, see Section A. What can be a good measure of the concept information encoded in the representation? If we use the loss function in (3.1), it goes back to the traditional adversarial approach. The effect of using the gradients of \(v^{*}_{C,k,\lambda}\) vanishes thanks to the envelope theorem, and the optimization problem reduces to the standard alternating procedure (for the sake of completeness, we show this in detail in Section D). Instead, we look at the parameter \(v=v^{*}_{C,k,\lambda}(W)\) from the perspective of feature importance. If \(v[i]\neq 0\), then the \(i\)th component of the representation \(h_{k}(\cdot;W)[i]\) is important for making a prediction of the concept label. In other words, the bigger the absolute value of \(v[i]\), the more information the corresponding neuron contains about the concept. In the ideal situation where \(h_{k}(\cdot;W)\) does not encode concept information at all, we expect that \(v=0\). We propose to penalize the norm of the CAV vector in order to encourage less concept information, i.e., we introduce the following _adversarial CAV penalty_ to the objective: \[\mathsf{adv}_{C,k,\lambda}(W)=\|v^{*}_{C,k,\lambda}(W)\|^{2}. \tag{3.2}\] We emphasize that this choice is purely heuristic; intuitively, we expect it to "push" the concept activation vector towards the origin. Mini-batch optimization. For stochastic gradient descent, we evaluate the terms in the objective \[L_{DS}(W,\theta)+\gamma\mathsf{adv}_{C,k,\lambda}(W)\] by averaging over a mini-batch. Here, \(L_{DS}(W,\theta)\) is the downstream task loss, for instance, as in the first term of (2.2) (\(\frac{1}{N}\sum_{i=1}^{N}\ell(f(h(X_{i})),Y_{i})\)). For the adversarial penalty, we replace the sample averages in Eq. (A.1) and (A.2) with batch averages. However, we require a larger batch size when evaluating \(\mathcal{D}_{0,\lambda}\), since it needs to be inverted in Eq. (A.1). We also notice that the inversion of a large matrix is not computationally tractable in practice. For example, the output of some intermediate layers of ResNet50 can reach up to 800K dimensions. In this case, even storing the square matrix \(\mathcal{D}_{0,\lambda}\) in memory is not possible. Using a batch average significantly speeds up the computation, and we propose a simplified procedure that does not require computing \(\mathcal{D}_{0,\lambda}\) directly and replaces the matrix inverse with a linear solve operation, see Section B. Batch normalization. Notice that the concept activation vector \(v^{*}_{C,k,\lambda}\) can be shrunk by simple scaling of the representation output \(h_{k}\to\alpha h_{k}\) (for instance, if the last layer is a convolution with ReLU activation). One way to avoid this is to equip \(h_{k}(\cdot;W)\) with a batch normalization layer at the top. In the experiments, we make sure that this is the case. 
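The penalty (3.2) can be prototyped without the paper's implicit-gradient machinery (Sections A and B are not included in this excerpt) by unrolling a few inner gradient steps on (3.1) and differentiating through them. The sketch below is a simplification under that assumption; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def adv_cav_penalty(h_k, x_c, y_c, lam=0.1, inner_steps=50, lr=0.5):
    """||v*(W)||^2 of Eq. (3.2), with the inner ridge-logistic problem (3.1)
    solved by unrolled gradient descent (a stand-in for the paper's implicit
    differentiation). y_c: float tensor of 0/1 concept labels."""
    z = h_k(x_c)                                   # (N_C, d) concept-set representations
    v = torch.zeros(z.shape[1], requires_grad=True)
    for _ in range(inner_steps):
        inner = F.binary_cross_entropy_with_logits(z @ v, y_c) + 0.5 * lam * v.dot(v)
        (g,) = torch.autograd.grad(inner, v, create_graph=True)
        v = v - lr * g            # keep the graph so the penalty stays differentiable in W
    return v.dot(v)

# Deep Concept Removal (Section 4 below) then sums such penalties over several layers:
# penalty = sum(adv_cav_penalty(h, x_c, y_c) for h in probed_layers)
```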
## 4 Deep Concept Removal By default, the penultimate layer of the neural network is considered as the representation. However, in Section 4.2.2 of Kim et al. (2018), they demonstrate that a concept can be encoded in deeper layers as well. Motivated by this observation, we propose to apply adversarial CAVs simultaneously to a set of deep layers, rather than only to the penultimate one. That is, we fix some number of layers \(k_{1},\dots,k_{m}\) and the hyperparameters \(\lambda_{1},\dots,\lambda_{m}\), and we add up the total adversarial penalty \[\mathsf{adv}_{C,k_{1},\lambda_{1}}(W)+\dots+\mathsf{adv}_{C,k_{m},\lambda_{m}}(W).\] We call this approach _Deep Concept Removal_. This, however, leaves us with the difficult choice of exactly which layers to select, since there can be many in a deep network. We hope to find an _interpretable_ way of choosing these layers, without the need to select them through expensive search. To find an answer, we take an experimental approach by conducting a simple case study. Specifically, we want to answer the following research questions: * RQ1. _Does applying adversarial CAVs to not only the penultimate layer but also deeper and wider layers lead to more effective concept removal? What should be the choice?_ * RQ2. _Can the concept dataset be defined on out-of-distribution images, while maintaining the effectiveness of the deep concept removal method?_ Following the running example of the "stripes" concept in Kim et al. (2018), we generate a modified MNIST dataset by injecting a striped pattern into the background of each digit, with the angle of the stripes dependent on the class label, Figure 1(a). This dataset allows us to test the effectiveness of our deep concept removal methods by evaluating the models on the original MNIST test split. We first consider the most straightforward way to define the concept dataset -- by using the same MNIST digits with and without stripes, see Figure 1(b)2. Footnote 2: Such a set-up is equivalent to domain adaptation, where we have labeled instances from the training domain (in our case, striped digits) and unlabeled images from the test domain (the original MNIST digits) Ganin and Lempitsky (2015). For experimenting, we consider a variety of convolutional neural networks (CNNs) of different shapes, see Figure 2. The first three networks 2a, 2b, 2c are simple feedforward CNNs which we borrow from An et al. (2020). Network 2d is a ResNet-type network with 14 layers He et al. (2016). For each of them, we show the dimension of the output of each layer, with the width of each layer proportional to that dimension. **To address RQ1,** we apply adversarial CAVs to different combinations of layers within these four networks. We first conduct an experiment where we only use the adversarial CAV at the penultimate layer with networks 2a, 2b, 2c. While the test performance for 2a and 2b can go up to 98%, it drops down to 84% for network 2c. What makes the latter different from the former two? 
We observe that the contraction of intermediate representations is more pronounced in model 2c, in the sense that the dimensionality of the output of the middle layers is much larger than that of the penultimate layer. We show that we can fix the performance of network (c) by including the widest layer that lies in the middle, which brings it also to around 98%. See Figures 3(a), 3(b). Figure 1: (a) Training dataset based on MNIST digits, where the angle of injected stripes is determined by the label: for a digit \(j\in\{0,...,9\}\), the striped pattern is rotated by \(\pi j/5\) radians; (b) Concept dataset based on MNIST digits: the concept examples contain stripes (upper half) and the outer examples do not (bottom half); (c) Concept dataset based on EMNIST letters, where we introduce stripes at random angles, with \(j\) in \(\pi j/5\) uniformly drawn from \(0,\ldots,9\). Figure 2: Comparison of layer widths for CNN models M3, M5, and M7. Each horizontal line in the chart represents the width of a layer in the respective model, with the bottom layer corresponding to the MNIST input dimension of 784 (28 x 28). Note that the final fully connected (FC) layer is not depicted in the figures. We note that these results partially conform with the observations made by Bau et al. (2017). They argue that wider networks encourage more disentangled concepts. We can speculate that contraction in the middle layers results in entanglement of the concept with other features, so that when we remove the concept information, we may also remove useful information and harm the performance. Based on our observations and the observations of Bau et al. (2017), we propose to use the following rule to determine which layers we need to include. **Rule**.: _Apply the adversarial CAVs to (most) layers that precede contraction, which are the layers whose output has total dimension significantly larger than that of the consequent layer._ We extend our analysis to a residual-type network (ResNet, He et al. (2016)), known for its structure of bottleneck blocks that intertwine contracting and expanding layers. This network has 6 layers that precede contraction: 4, 7, 8, 10, 11, 13 (we include the penultimate layer 13 since it precedes the contracting softmax layer). To further confirm our rule, we consider a large number of subsets of layers and train such a ResNet with Deep Concept Removal. The results are reported below in Table 1. Although the test accuracy varies significantly, we notice that a good indication of successful concept removal is that the training accuracy is comparable with the test accuracy, that is, we generalize to out-of-distribution images without the stripes. And although there are some "outliers" to our statement, we can confidently say that including at least four layers from the list 4, 7, 8, 10, 11, 13 guarantees that we will remove the concept successfully and achieve acceptable test performance. **To address RQ2,** we additionally consider a concept dataset for the "stripes" concept based on EMNIST letters, as shown in Figure 1(c). That is, we look at a case where the concept is defined through instances that are not directly related to the original downstream task, which is closer to the set-up in Kim et al. (2018). We compare the performance for the two concept datasets described above. The test performance is presented in Figure 5. 
Although the performance drops for the EMNIST concept dataset, it remains adequate: we can see that the performance with the EMNIST concept drops by only around 3%, with slightly increased fluctuation in the last epoch. In addition, we can use a trained decoder to visually inspect the removal of the stripes, see Figure 3. It is important to note that, with this concept dataset, the training algorithm never encounters the original MNIST digits without stripes, whether labeled or unlabeled. To the best of our understanding, this is a novel problem setup that has not been previously explored in the invariant representation learning literature. We postpone all technical details of the experiments described above to Section C.1 in the appendix. \begin{table} \begin{tabular}{|c|c|c||c|c|c||c|c|c|} \hline Layers (High) & train & test & Layers (Medium) & train & test & Layers (Low) & train & test \\ \hline \hline 1, 7, 8, 11, 13 & 95.6 & 94.4 & 2, 8, 10, 11 & 79.8 & 98.9 & 1, 2, 9, 12 & 47.0 & 100.0 \\ 4, 7, 8, 10, 11, 13 & 93.5 & 90.9 & 5, 6, 7, 13 & 74.5 & 97.4 & 5, 6, 9, 12 & 35.5 & 100.0 \\ 4, 7, 10, 13 & 90.6 & 90.9 & 5, 7, 8, 9, 11 & 57.7 & 100.0 & 1, 6, 11 & 34.8 & 100.0 \\ 4, 7, 8, 10, 13 & 89.4 & 93.2 & 5, 7, 10 & 50.9 & 100.0 & 2, 4, 12 & 32.6 & 100.0 \\ 4, 7, 10, 11, 13 & 86.1 & 90.5 & 3, 7, 8 & 49.8 & 100.0 & 3, 5, 9 & 19.7 & 100.0 \\ \hline \end{tabular} \end{table} Table 1: MNIST experiment using a small ResNet. Each experiment corresponds to our adversarial CAV applied to a different set of layers. Layers preceding contraction are highlighted in cyan, including the penultimate layer. We report train and test accuracies after training for 100 epochs, averaged over three seeds. ## 5 Robustness to distributional shifts and out-of-distribution generalization Distributionally robust optimization (DRO) is concerned with classification problems where the correlation between the target variable \(Y\) and an attribute \(A\) changes during test. We focus on optimizing the _worst group accuracy_, which corresponds to the worst-case correlation shift in test Sagawa et al. (2019). Here, a group is a sub-population of the data, which corresponds to a particular realization (\(Y=y,A=a\)). The worst group accuracy of a classifier \(\hat{Y}(X)\) reads as follows \[\min_{(a,y)}\Pr\left(\hat{Y}(X)=Y|A=a,Y=y\right).\] Usually, DRO tackles the situation where the variables \(Y\) and \(A\) are spuriously correlated. When this occurs, some subgroups have low observation probabilities, making generalization difficult. An additional challenge is that the attributes are often not annotated, making it difficult to optimize the worst-group error. Our concept removal approach can be used in cases where \(A\) is binary, i.e., the attribute is an indicator of concept class membership. By using our approach, we hope to learn a representation that is transferable between the subgroups (\(A=0,Y=y\)) and (\(A=1,Y=y\)). This can improve performance on the subgroup that has fewer observations. In addition, our approach does not require attribute annotation; rather, it relies on an additional concept dataset to infer the concept directions. We use two popular DRO benchmark datasets for evaluation: * **Celeb-A** Liu et al. (2015) features aligned celebrity faces with annotated attributes (e.g., hair color, gender, wearing makeup, glasses, etc.) Our task is to classify blond hair. The spurious attribute is gender (male or not male). For the concept dataset, we select only celebrities with brown hair. * **Waterbirds** Sagawa et al. 
(2019) contains bird images on different backgrounds. The task is to classify the birds as waterbirds or landbirds. The spurious attribute is the background (water or land). The benchmark includes training, validation, and test splits. The validation split is used for the concept dataset, allowing us to "decorrelate" the bird type and background. * **CMNIST** Arjovsky et al. (2019) is an MNIST variant incorporating a spurious color feature. Labels are binarized (0 for digits 0-4, 1 for digits 5-9), with 25% label noise introduced via random flipping. Digits are colored red or green, with the color 90% correlated with the label, making color a stronger predictor than digit shape. For the concept dataset, we used EMNIST letters with randomly added colors. * **Striped MNIST** is our own adaptation of the original MNIST considered in Section 4, see Figure 1(a). For the DRO experiment, we add the label-dependent stripes with probability 95%. This modification creates a dataset with 20 distinct groups, each corresponding to a combination of the 10 classes and the presence or absence of stripes. For the concept dataset, we used EMNIST letters with and without stripes, Figure 1(c). We compare our method with the following popular baselines: * **Empirical Risk Minimization (ERM)** optimizes the average error. * **Just Train Twice (JTT)** Liu et al. (2021) is a two-phase method for handling spurious factors without annotation. It trains the model with ERM in the first phase to recognize the "short-cut" feature \(A\); in the second phase, the model is fine-tuned by upweighting the error set to improve the accuracy of the smaller groups, all without requiring attribute annotations. * **Group-DRO (gDRO)** is a method introduced by Sagawa et al. (2019). It requires group annotations and is designed to optimize the worst-group error directly. It is typically used as an upper bound for methods that do not use this information. Similar to JTT, our method does not require attribute annotations during training. However, we additionally require a concept dataset to specify what information needs to be removed. In all cases, we use validation data with labeled groups for model selection. Table 2 compares our method's test performance with the baseline methods. We include training and model selection details in the Appendix, Section C.2. \begin{table} \begin{tabular}{c c|c c c c} \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Groups**} & \multicolumn{4}{c}{**Worst-group accuracy**} \\ \cline{3-6} & & Celeb-A & Waterbirds & CMNIST & StripedMNIST \\ \hline \hline ERM & no & 79.7(3.7) & 85.5(1.0) & 0.2(0.3) & 93.0(1.2) \\ JTT & no & 81.1 & **86.0** & **71.5(0.7)** & 91.1(1.5) \\ \hline Ours & no\({}^{*}\) & **83.9(0.7)** & 68.2(2.6) & 57.4 (16.8) & **96.5(0.8)** \\ \hline gDRO & yes & **86.9(1.1)** & **87.1(3.4)** & **72.7(0.5)** & 96.0(0.9) \\ \hline \end{tabular} \end{table} Table 2: Test accuracies of different methods averaged over 5 seeds. Results for JTT are from Liu et al. (2021) and for ERM and gDRO from Idrissi et al. (2022). The Groups column indicates whether the method uses group attributes. Unlike JTT and ERM, our method uses additional information in the form of a concept dataset. The experiments demonstrate that our concept removal approach achieves comparable results with state-of-the-art DRO methods that do not use group labels on the Celeb-A dataset. However, on the Waterbirds dataset, our approach does not achieve competitive accuracy. Our results suggest that our concept removal 
approach is more effective at removing higher-level features, while lower-level features are deeply ingrained in the representations and harder to remove. This could explain why our method struggles to achieve comparable accuracy on the Waterbirds dataset. A similar comparison can be made between CMNIST and Striped MNIST, where stripes represent a more high-level concept. ### Out-of-distribution generalization Our experiments further show that removing concepts leads to features that are transferable not just to small sub-populations, but even to out-of-distribution data. To demonstrate this, we make a modification to the Celeb-A dataset by removing all blond males from the training set. Due to space constraints, we postpone these results to the Appendix, Section C.3. ## 6 Fair representation learning In fair representation learning, one wants to train a representation that is oblivious to a sensitive attribute. Whether it is oblivious is typically measured through statistical dependence, due to its rigorous formulation Madras et al. (2018). However, with our method, we can train a classifier that is oblivious to the sensitive attribute in terms of the interpretability method of Kim et al. (2018). To demonstrate this, let us consider the problem of classifying professors against primary school teachers, Figure 6. Figure 6: Examples of primary school teachers (top row) and professors (bottom row). We evaluate the importance of the gender concept (sensitive), as well as two other concepts that may be useful for distinguishing professors and primary school teachers, namely, eyeglasses and young. Using TCAV (Kim et al., 2018), we compare the results of a model trained with and without concept removal, i.e., the standard ERM. In the Appendix, Section C, we provide a detailed description of how we selected examples for each of the concepts thanks to the annotations "Male", "Eyeglasses", "Young" in the Celeb-A dataset. We also describe in detail how we obtain the images of professors and teachers, and how we carry out the training process. Additionally, we show how the importance of these three concepts changes with the removal of the gender concept. We compare our method with standard Empirical Risk Minimization (ERM). The results are presented in Figure 7, where the leftmost graph displays the TCAV score for all three concepts, including the scores for ERM (represented by a dashed line). The TCAV score indicates that all three concepts are close to being 100% important (a value of 0 for the young concept corresponds to negative importance). It is worth noting that the TCAV score does not measure the extent to which a concept is important. In the middle plot of Figure 7, we show the sensitivity score (Eq. 2.1), which decreases with the application of concept removal for each of the three concepts. However, the score of the concepts "Young" and "Eyeglasses" is significantly higher than that of gender. On the other hand, for ERM, gender is the most important concept according to this score. The rightmost graph also shows the accuracy of the concept classifier for each of the three concepts. It can be observed that for ERM, gender (red) can be classified with high accuracy, whereas for the concept removal model, the accuracy drops to the chance level when measured at the last layer. Additionally, we note that the accuracy for the other two concepts has decreased, but only slightly. 
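As a reference for how the reported scores can be computed, the sketch below follows the standard TCAV definition of Kim et al. (2018): the score is the fraction of examples whose target logit increases along the concept direction \(v\). Eq. (2.1) itself falls on a page missing from this excerpt, so this is an assumed, illustrative reconstruction; `h_k` and `model_head` are hypothetical names.

```python
import torch

def tcav_score(h_k, model_head, v, xs):
    """Fraction of inputs with positive directional derivative of the target
    logit along the concept activation vector v (TCAV, Kim et al., 2018)."""
    z = h_k(xs).detach().requires_grad_(True)   # representations at the probed layer
    logit = model_head(z).sum()                 # sum -> per-sample grads in one backward pass
    (grad_z,) = torch.autograd.grad(logit, z)
    sensitivities = grad_z @ v                  # per-sample directional derivatives
    return (sensitivities > 0).float().mean().item()
```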
If we replace the concept dataset with the population data, which contains the same correlations between concepts as in the training data, we end up learning a fair representation in the traditional sense (Madras et al., 2018). We show that this allows us to learn a fair representation (in an unsupervised manner), so that linear classifiers fitted on top (without any restrictions) satisfy group fairness measures. Notice that in this case, we do not require deep concept removal and it is sufficient to apply an adversarial classifier to the penultimate layer. We defer the details to Section C.4 in the Appendix, where we conduct additional experiments with CelebA. ## 7 Conclusion and Limitations This work introduced a novel method to remove detrimental concepts when learning representations. We demonstrated that our method improves robustness to distributional shifts as well as out-of-distribution generalization. We also find applications in learning fair representations. Among the limitations, we mention training time, which is a common issue in adversarial learning. Furthermore, although deep concept removal is not tied to a particular adversarial training algorithm, our current approach does not scale well with bigger models. We also leave the exploration of vision transformers and NLP applications for future research. Finally, we emphasize that our method still requires validation when dealing with distributionally robust benchmarks. However, our approach requires a less aggressive model selection process; in particular, we always report the results for the model obtained after the last training epoch, unlike most of the existing DRO methods (Idrissi et al., 2022).
2310.15546
Robust and Deterministic Preparation of Bosonic Logical States in a Trapped Ion
Encoding logical qubits in bosonic modes provides a potentially hardware-efficient implementation of fault-tolerant quantum information processing. Here, we demonstrate high-fidelity and deterministic preparation of highly non-classical bosonic states in the mechanical motion of a trapped ion. Our approach implements error-suppressing pulses through optimized dynamical modulation of laser-driven spin-motion interactions to generate the target state in a single step. We demonstrate logical fidelities for the Gottesman-Kitaev-Preskill (GKP) state as high as $\bar{\mathcal{F}}=0.940(8)$, a distance-3 binomial state with an average fidelity of $\mathcal{F}=0.807(7)$, and a 12.91(5) dB squeezed vacuum state.
V. G. Matsos, C. H. Valahu, T. Navickas, A. D. Rao, M. J. Millican, X. C. Kolesnikow, M. J. Biercuk, T. R. Tan
2023-10-24T06:30:06Z
http://arxiv.org/abs/2310.15546v3
# Robust and Deterministic Preparation of Bosonic Logical States in a Trapped Ion ###### Abstract Encoding logical qubits in bosonic modes provides a potentially hardware-efficient implementation of fault-tolerant quantum information processing. Recent advancements in trapped ions and superconducting microwave cavities have led to experimental realizations of high-quality bosonic states and demonstrations of error-corrected logical qubits encoded in bosonic modes. However, current protocols for preparing bosonic code words lack robustness to common noise sources and can be experimentally challenging to implement, limiting the quality and breadth of codes that have been realized to date. Here, we combine concepts of error suppression via robust control with quantum error correction encoding and experimentally demonstrate high-fidelity, deterministic preparation of highly non-classical target bosonic states in the mechanical motion of a trapped ion. Our approach implements numerically optimized dynamical modulation of laser-driven spin-motion interactions to generate the target state in a single step. The optimized control pulses are tailored towards experimental constraints and are designed to be robust against the dominant source of error. Using these protocols, we demonstrate logical fidelities for the Gottesman-Kitaev-Preskill (GKP) state as high as \(\bar{\mathcal{F}}=0.940(8)\), achieve the first realization of a distance-3 binomial logical state with an average fidelity of \(\mathcal{F}=0.807(7)\), and demonstrate a 12.91(5) dB squeezed vacuum state. Fault-tolerant quantum error correction (QEC) for quantum information processing (QIP) necessitates the implementation of a redundant encoding within a sufficiently large Hilbert space to yield protection against local hardware errors [1]. At present, the predominant strategy focuses on encoding a logical qubit with multiple discrete-variable physical qubits manipulated with ultra-low operational errors [2; 3]. This approach is resource-intensive, and, despite many impressive demonstrations [4; 5; 6; 7; 8; 9], the viability of using QEC in this way to deliver net improvements in hardware error rates remains challenging. Many analyses indicate that a large ratio of physical-to-logical qubits is necessary for fault-tolerant operation in target algorithms, posing a substantial resource penalty and far outstripping device sizes available in the near future [10]. An alternative approach involves encoding logical qubits within continuous-variable systems [11; 12]. In particular, the infinite-dimensional Hilbert space spanned by the bosonic mode of a harmonic oscillator offers a highly symmetrical physical system that lends itself to logical encodings including Gottesman-Kitaev-Preskill (GKP) [13], binomial [14], and cat [15] codes. This approach demands fewer individual physical devices at the cost of increased complexity in preparing and controlling the logical code words. Several experimental works have successfully created different classes of bosonic states [16; 17; 18; 19; 20], implemented logical gate sets [21], and demonstrated QEC protocols [22; 23; 24; 25; 26; 27; 28]. However, the ability to prepare bosonic codes for QEC with sufficient fidelity remains a limiting challenge. For example, achieving fault tolerance by concatenating the GKP code with discrete-variable error-correcting codes requires a squeezing parameter currently estimated to be \(\sim\)10 dB [29; 30]; this threshold has yet to be experimentally realized. Figure 1: **State preparation of non-classical bosonic states in an ion trap.** **a)** A pair of orthogonal bichromatic Raman beams couples the spin and motion of a trapped ion and enacts spin-dependent forces. **b)** The experimental pulse sequence consists of (i) state preparation and (ii) state reconstruction. (i) After initializing the qubit and bosonic mode to their ground state, the control pulse evolves the system to the target bosonic state under the time-dependent Hamiltonian \(H(t)\) (Eq. 1). (ii) The characteristic function, \(\chi(\beta)\), is reconstructed by applying a displacement, \(\hat{\mathcal{D}}(\pm\beta/2)\), conditioned on the qubit state in the \(\hat{\sigma}_{x}\) basis. Internal state readout of the ancilla qubit in the \(\hat{\sigma}_{z}\) basis measures \(\mathrm{Re}[\chi(\beta)]\). **c)** Targeted bosonic states are prepared by modulating the phases \(\phi_{r,\mathrm{b}}(t)\) of the bichromatic fields. Insets show the evolution of the Wigner function of the \((|0\rangle+\sqrt{3}|6\rangle)/2\) binomial state at different times.
Moreover, thus far, only the lowest-order binomial code words (distance-2) have been realized experimentally. In contrast, a minimum distance of 3 is required to correct all types of bosonic errors [14]. Figure 1: **State preparation of non-classical bosonic states in an ion trap.****a)** A pair of orthogonal bichromatic Raman beams couples the spin and motion of a trapped ion and enacts spin-dependent forces. **b)** The experimental pulse sequence consists of (i) state preparation and (ii) state reconstruction. (i) After initializing the qubit and bosonic mode to their ground state, the control pulse evolves the system to the target bosonic state under the time-dependent Hamiltonian \(H(t)\) (Eq. 1). (ii) The characteristic function, \(\chi(\beta)\), is reconstructed by applying a displacement, \(\hat{\mathcal{D}}(\pm\beta/2)\), conditioned on the qubit state in the \(\hat{\sigma}_{x}\) basis. Internal state readout of the ancilla qubit in the \(\hat{\sigma}_{z}\) basis measures \(\mathrm{Re}[\chi(\beta)]\). **c)** Targeted bosonic states are prepared by modulating the phases \(\phi_{r,\mathrm{b}}(t)\) of the bichromatic fields. Insets show the evolution of the Wigner function of the \((|0\rangle+\sqrt{3}|6\rangle)/2\) binomial state at different times.
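The \((|0\rangle+\sqrt{3}|6\rangle)/2\) code word shown in Fig. 1c, and the characteristic function \(\chi(\beta)=\mathrm{Tr}[\hat{\rho}\hat{\mathcal{D}}(\beta)]\) whose real part is measured in the experiment, can be reproduced numerically. The following is a minimal QuTiP sketch; the Fock-space truncation `N` is a numerical assumption, not an experimental parameter.

```python
import numpy as np
import qutip as qt

N = 40  # Fock-space truncation (numerical assumption)
# Binomial code word shown in Fig. 1c (the 1/2 prefactor already normalizes it):
psi = (qt.basis(N, 0) + np.sqrt(3) * qt.basis(N, 6)) / 2
rho = qt.ket2dm(psi)

def chi(beta: complex) -> complex:
    """Characteristic function chi(beta) = Tr[rho D(beta)]."""
    return (rho * qt.displace(N, beta)).tr()

print(chi(0))                      # Tr[rho] = 1 by normalization
print(qt.expect(qt.num(N), psi))   # mean phonon number of the code word
```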
2303.16177
Control Barrier Function-based Predictive Control for Close Proximity operation of UAVs inside a Tunnel
This paper introduces a method for effectively controlling the movement of an Unmanned Aerial Vehicle (UAV) within a tunnel. The primary challenge of this problem lies in the UAV's exposure to nonlinear distance-dependent torques and forces generated by the tunnel walls, along with the need to operate safely within a defined region while in close proximity to these walls. To address this problem, the paper proposes the implementation of a Model Predictive Control (MPC) framework with constraints based on Control Barrier Function (CBF). The paper approaches the issue in two distinct ways; first, by maintaining a safe distance from the tunnel walls to avoid the effects of both the walls and ceiling, and second, by minimizing the distance from the walls to effectively manage the nonlinear forces associated with close proximity tasks. Finally, the paper demonstrates the effectiveness of its approach through testing on simulation for various close proximity trajectories with the realistic model of aerodynamic disturbances due to the proximity of the ceiling and boundary walls.
Vedant Mundheda, Damodar Datta K, Harikumar Kandath
2023-03-28T17:43:32Z
http://arxiv.org/abs/2303.16177v1
Control Barrier Function-based Predictive Control for Close Proximity operation of UAVs inside a Tunnel ###### Abstract This paper introduces a method for effectively controlling the movement of an Unmanned Aerial Vehicle (UAV) within a tunnel. The primary challenge of this problem lies in the UAV's exposure to nonlinear distance-dependent torques and forces generated by the tunnel walls, along with the need to operate safely within a defined region while in close proximity to these walls. To address this problem, the paper proposes the implementation of a Model Predictive Control (MPC) framework with constraints based on Control Barrier Function (CBF). The paper approaches the issue in two distinct ways; first, by maintaining a safe distance from the tunnel walls to avoid the effects of both the walls and ceiling, and second, by minimizing the distance from the walls to effectively manage the nonlinear forces associated with close proximity tasks. Finally, the paper demonstrates the effectiveness of its approach through testing in simulation for various close proximity trajectories with a realistic model of the aerodynamic disturbances due to the proximity of the ceiling and boundary walls. ## I Introduction Recent years have witnessed the widespread deployment of Unmanned Aerial Vehicles (UAVs) in a variety of domains, ranging from delivery, search, and rescue to monitoring [1]. Certain civil inspection and delivery tasks necessitate close-range operations near stationary obstructions, such as bridges and buildings [2]. Furthermore, UAV-based indoor missions involving inspection of tunnels, rooms, aircraft fuel tanks, coal mines and AC ducts offer significant advantages over traditional manual methods by reducing the time and effort required while also minimizing risks to human safety. Nonetheless, when conducting inspection tasks in close proximity to obstacles or walls, the UAV's aerial dynamics are subject to various force and torque disturbances, leading to potential instability and safety concerns. To account for such disturbances from all directions, we demonstrate our controller for operating inside a tunnel. The behavior of a UAV as it approaches the walls of a tunnel is characterized by nonlinear variation in its thrust, attributable to the intricate aerodynamic interactions at play [3]. As a result, a region of operation that is deemed unsafe can be identified in the vicinity of the wall or obstacle, necessitating the confinement of the UAV to the remaining safe region. Nonetheless, certain inspection tasks may require the UAV to operate in close proximity to the wall. Consequently, the controller must be designed to facilitate stability in the presence of such nonlinear disturbances. [4] demonstrates that the safe distance for operation is beyond \(2\times\)_Radius of Propeller_ from the obstruction or wall. ## II Related Work The literature is sufficiently populated with efforts to model ceiling and ground effects [3, 4, 5, 6], but there is a clear gap in formulating control algorithms to tackle these effects in a combined fashion. Nonlinear Model Predictive Control (MPC) [7] has been used for navigation and obstacle avoidance of UAVs in real-time applications. MPC provides the predictive ability [8] which aids in performing agile maneuvers with high precision and smooth control actions. [9] tries to limit the risk of unsafe operation by formulating a probabilistic guarantee, but fails to provide a rigid safety guarantee to avoid obstacles. 
[10] utilizes partial sensor information to navigate through unknown environments by providing partial safety guarantees. Control Barrier Function (CBF) [11] is used to guarantee safety-critical control for various domains, including dynamic robotic systems. [11] introduces safety, safety sets, and describes using CBF to enforce safety in a minimally invasive fashion by not increasing the control effort or trajectory cost. CBF has been used as a constraint in MPC [12] to provide safety guarantees while addressing the case of conflict between safety and performance. This provides improved performance to MPC while providing safety guarantees. [13] shows collision avoidance for a multi-UAV swarm reaching desired locations while providing safety guarantees. [14] utilizes MPC while handling external wind disturbances. Although nonlinear controllers have been tried separately for ground and ceiling effects, no effort has been made to minimize the impacts of these disruptions using a disturbance-rejecting barrier function. ## III Contributions The paper contributes in the following ways: Fig. 1: Depicts operation of the UAV in a safe region with minimal aerodynamic effects from the wall. If the UAV goes closer than \(2\times\) R to the walls, it experiences turbulent forces, which tend to destabilize the UAV and cause collision. 1. To the best of the authors' knowledge, this paper marks the initial endeavor to address the challenges of ground, ceiling, and wall effects simultaneously for a UAV in a closed space using a model predictive controller. 2. This paper also proposes the use of CBF as a bounding function to confine the UAV to the safe region (in Fig. 1) to prevent interaction with the aerodynamic forces of the tunnel effect. 3. The conventional CBF is modified to tackle disturbances and provide safety guarantees in the presence of bounded external disturbances. The paper is structured as follows: Section IV describes the UAV dynamics, a conventional CBF, and the different aerodynamic effects acting on the UAV. Section V provides the problem formulation, and Section VI describes the outer loop MPC with CBF constraints and the inner loop PID. Section VII explains the simulation results for different cases, and Section VIII concludes the paper. ## IV Preliminaries ### _UAV Dynamics_ The UAV translational dynamics are given in (1). \[\ddot{\mathbf{p}}=\mathbf{g}+{}^{I}R_{B}\mathbf{T}/m \tag{1}\] where \(\mathbf{p}\) is the position of the center of the UAV in the inertial frame, \(m\) is the mass of the UAV, and \(\mathbf{T}\) is the thrust vector acting on the UAV in the body frame. \({}^{I}R_{B}\) denotes the standard rotation matrix in 3D for transformation from frame \(B\) to frame \(I\) [15]. The rotational dynamics are given in (2), where \(\omega\) is the angular velocity in the body frame. \[\dot{\omega}=\mathbf{I}^{-1}(\tau-\omega\times\mathbf{I}\omega) \tag{2}\] where \(\tau\) and \(\mathbf{I}\) are, respectively, the torque acting on the UAV and the inertia matrix defined in the body frame. The combined UAV dynamics using (1) and (2) are presented below in matrix form. \[\begin{bmatrix}m\mathbf{I}_{3\times 3}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}&\mathbf{I}\end{bmatrix}\begin{bmatrix}\ddot{\mathbf{p}} \\ \dot{\omega}\end{bmatrix}+\begin{bmatrix}\mathbf{0}_{3\times 1}\\ \omega\times\mathbf{I}\omega\end{bmatrix}=\begin{bmatrix}m\mathbf{g}+{}^{I}R_{B}\mathbf{T}\\ \tau\end{bmatrix} \tag{3}\] where \(\mathbf{I}\) denotes the identity and \(\mathbf{0}\) denotes a null matrix. 
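For reference, the rigid-body model (1)-(3) can be integrated numerically with a simple explicit Euler step. This is an illustrative sketch only; the attitude kinematics (the update of the rotation matrix from \(\omega\)) are omitted for brevity.

```python
import numpy as np

GRAV = np.array([0.0, 0.0, -9.81])  # inertial-frame gravity vector

def euler_step(p, v, omega, R_ib, T_body, tau, m, I, dt):
    """One explicit-Euler step of Eqs. (1)-(2).
    p, v: position/velocity (inertial frame); omega: body angular velocity;
    R_ib: body-to-inertial rotation matrix; T_body: body-frame thrust vector."""
    a = GRAV + R_ib @ T_body / m                                   # Eq. (1)
    domega = np.linalg.solve(I, tau - np.cross(omega, I @ omega))  # Eq. (2)
    return p + dt * v, v + dt * a, omega + dt * domega
```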
### _Control Barrier Function_ The dynamics of a UAV in control affine form are given in (4): \[\dot{\mathbf{x}}=\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{u} \tag{4}\] where \(A\) is a square matrix of dimension \(4\times 4\) and \(B\) is a matrix of dimension \(4\times 2\). \(h(\mathbf{x})\) is a valid CBF if it is differentiable and follows the conditions in (5). \[\begin{cases}h(\mathbf{x})>0,\ \forall\ \mathbf{x}\in\zeta\\ h(\mathbf{x})=0,\ \forall\ \mathbf{x}\in\delta\zeta\end{cases} \tag{5}\] where \(\zeta\) is the set of all states of the UAV which lie in the safe region and \(\delta\zeta\) is the set of states of the UAV on the boundary of the safe region. If the UAV is initially located within the secure area \(\zeta\), the principle of forward invariance can be applied by verifying that \(\dot{h}(\mathbf{x})\geq 0\). This principle ensures that the UAV remains within the safe region if it commences within it. To enhance optimization for ideal trajectory tracking while also providing safety guarantees, this principle can be extended to an invariance condition where \(\dot{h}(\mathbf{x})\geq-\gamma h(\mathbf{x})\). This invariance condition induces the asymptotic convergence of \(h(\mathbf{x})\) to 0. The condition for invariance is presented in equation (6). \[\frac{\partial h(\mathbf{x})}{\partial\mathbf{x}}(\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{u})+\gamma h^{z}(\mathbf{x})\geq 0 \tag{6}\] where \(\gamma>0\) is the relaxation coefficient and \(z>0\) is the exponential limit of convergence for the CBF. We define the CBF to avoid point obstacles as \(h\) in (7). \[h(\mathbf{x})=\sqrt{2a_{max}(||\mathbf{p}||-d_{s})}+\frac{\mathbf{p}^{T}}{||\mathbf{p}||}\dot{\mathbf{p}} \tag{7}\] The expression \(a_{max}\) represents the highest possible acceleration value of the UAV, whereas \(d_{s}\) is the secure distance that separates the obstacle from the UAV. Additionally, \(\mathbf{p}\) denotes the vector from the obstacle's location to the UAV center, while \(\dot{\mathbf{p}}\) represents the velocity of the UAV at a particular time instant \(k\). Similarly, \(h(\mathbf{x})\) can be defined as a discrete-time control barrier function (CBF). The final invariance condition can be found in equation (8). \[\begin{split}\frac{a_{max}\ \dot{\mathbf{p}}^{T}\mathbf{p}}{\sqrt{2a_{max}(||\mathbf{p}||-d_{s})}}-\left(\frac{\mathbf{p}^{T}}{||\mathbf{p}||}\dot{\mathbf{p}}\right)^{2}+||\dot{\mathbf{p}}||^{2}+\mathbf{p}^{T}\mathbf{u}\\ +\gamma h^{z}(\mathbf{x})||\mathbf{p}||\ \geq\ 0\end{split} \tag{8}\] ### _Aerodynamic Ceiling, Ground and Wall effect_ When a UAV's rotors start rotating, different aerodynamic forces start acting on it depending on whether it is near a vertical or horizontal surface. These aerodynamic forces affect the UAV by pulling or pushing it away from the expected trajectory. It is crucial to understand where these forces originate and how they act in order to tackle their effects. #### IV-C1 Ground Effect The ground effect (GE) occurs when a UAV flies over a horizontal surface. GE is an aerodynamic effect that has been studied extensively and is seen to push UAVs away from the ground [4, 5]. The theoretical model of GE presented by Cheeseman and Bennet [16] is a widely accepted thrust-ratio approximation of GE, as given in (9). \[\text{Ground Effect:}\left[\frac{T_{GE}}{T_{\infty}}\right]=\frac{1}{1-(\frac{R}{4z})^{2}} \tag{9}\] where \(T_{GE}\) is the thrust near the ground, \(T_{\infty}\) is the baseline thrust, \(R\) is the radius of the propeller and \(z\) is the distance from the ground. 
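As a quick numerical check of Eq. (9), the Cheeseman-Bennett thrust ratio can be evaluated directly; the propeller radius and height below are illustrative values, not taken from the paper.

```python
def ground_effect_ratio(z: float, R: float) -> float:
    """Cheeseman-Bennett thrust ratio T_GE / T_inf of Eq. (9).
    Only meaningful away from the singular region z <= R/4."""
    return 1.0 / (1.0 - (R / (4.0 * z)) ** 2)

# Illustrative: a 0.12 m propeller hovering 0.15 m above the ground.
print(ground_effect_ratio(z=0.15, R=0.12))  # ~1.04, i.e., a thrust gain near the ground
```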
#### IV-C2 Ceiling Effect When a UAV flies underneath a horizontal surface, nonlinear disturbances in the form of the ceiling effect (CE) act on the UAV. Contrary to GE, CE pulls the UAV towards the surface [4]. The mathematical approximation is given by the fitted curve in (10). \[\text{Ceiling Effect:}\left[\frac{T_{CE}}{T_{\infty}}\right]=\frac{1}{1-(\frac{1}{a_{1}})(\frac{R}{a_{2}+z})^{2}} \tag{10}\] where \(T_{CE}\) is the thrust near the ceiling, and \(a_{1}\) and \(a_{2}\) are coefficients obtained through an experimental least squares approach. #### IV-C3 Sidewall Effect When a UAV flies close to a vertical surface, it experiences a pull toward the wall. This force is smaller than the GE and CE forces and acts on the rotors randomly while pulling toward the wall, destabilizing the UAV. The authors of [17] attempted to model this effect and found it to be yaw-invariant, but could not model it fully, as the wall effect could not be detected reliably. According to their experiments, the force along the X-Y axes varied by up to 0.052 N with a standard deviation of up to 0.022 N, and along the Z-axis varied by up to 0.062 N with a standard deviation of 0.065 N. Hence, these forces act randomly with these parameters. #### IV-C4 Combined Tunnel Effect The combined tunnel effect refers to the two possible combinations of aerodynamic forces acting in corners: near the ceiling (ceiling effect and sidewall effect) and near the ground (ground effect and sidewall effect). These effects were studied in [18] as the In Low Corner Effect (ILoCE) and the In Upper Corner Effect (IUpCE). That work analyses the effects and concludes that source and drain vortices representing the combined forces form in the corners. These vortices are shown in particle image velocimetry (PIV) images, and the force diagrams show a higher combined force in the corners than from the individual effects. ## V Problem Formulation The aim of the paper is to provide a control strategy to avoid tunnel effects (combined, ceiling, sidewall and ground effects) and tackle their disturbances to provide safety guarantees when the UAV is in close proximity to the tunnel walls, as shown in Fig. 1 and Fig. 2. These tasks are defined in three cases. We consider the UAV center to be the same as the UAV center of gravity. #### V-1 **Case I** To follow a trajectory inside a tunnel while maintaining a minimum distance of 2 \(\times\) Radius of Propeller (Fig. 1) to avoid aerodynamic interactions with the wall. The UAV will be bound inside a safe region of operation (negligible aerodynamic interactions) in the presence of external disturbances in the form of wind. #### V-2 **Case II** Minimize the safe distance of operation (\(z_{d}\), \(y_{d}\), \(h-z_{d}\) and \(b-y_{d}\)) (Fig. 2) from the tunnel walls for close proximity operations. We minimize the safe hovering distance from the walls even in the presence of external disturbances. #### V-3 **Case III** To follow a trajectory in close proximity to the wall, ceiling and ground and tackle the combined tunnel aerodynamic effect. The primary objective in trajectory tracking and hovering tasks is defined as the error (\(e(\mathbf{x})\)) in (11). \[\underset{\mathbf{u}}{\text{min}}\ e(\mathbf{x})=||\mathbf{p}(\mathbf{x})-\mathbf{p}^{d}||\ \ \forall\ k>0 \tag{11}\] where \(\mathbf{p}(\mathbf{x})\) is the position of the UAV center, \(\mathbf{p}^{d}\) is the desired position of the UAV center and \(k\) is the discrete time step. ## VI Proposed Controller The control architecture of the proposed controller is presented in Fig. 3. 
The control loop consists of an outer loop Model Predictive Control (MPC) with safety constraints derived from a modified Control Barrier Function (CBF). The modifications to the CBF are made to restrict the UAV inside a desired safe region, contrary to its earlier collision-avoidance utility. A disturbance rejection term is also introduced to the conventional CBF to handle the tunnel effect and other wind disturbances in the tunnel. The inner loop control is comprised of thrust and attitude PID control. We present our main contributions in this section.

We begin by writing the discrete-time dynamics of the UAV for calculating the cost inside the MPC outer loop. The state vector of the UAV is defined as \(\mathbf{x}_{k}=[\mathbf{p}_{k},\dot{\mathbf{p}}_{k},\Psi_{k}]\), where \(\mathbf{p}_{k}\) is the position of the UAV center in the inertial frame and \(\Psi_{k}\) is the yaw angle of the UAV. The control input is \(\mathbf{u}_{k}=[\ddot{\mathbf{p}}_{k},\dot{\Psi}_{k}]\) at time interval \(k\). The state space model for the UAV utilized by the MPC is given in (4).

### _Model Predictive Control (Outer loop)_

The optimal control problem for each time step \(k\) is given in (12) for the UAV dynamics.

\[u_{k}^{opt}=\underset{\mathbf{u}}{\text{min}}\ g(\mathbf{x}_{k},\mathbf{u}_{k},t_{k}) \tag{12a}\]
\[\text{s.t.}\ \dot{\mathbf{x}}_{k}=\mathbf{A}\mathbf{x}_{k}+\mathbf{B}\mathbf{u}_{k}\] (12b)
\[\mathbf{x}_{min}\leq\mathbf{x}_{k}\leq\mathbf{x}_{max}\] (12c)
\[\mathbf{u}_{min}\leq\mathbf{u}_{k}\leq\mathbf{u}_{max} \tag{12d}\]

Fig. 2: Trajectory tracking of a UAV inside a tunnel while handling the tunnel aerodynamic effects.

\(u_{k}^{opt}\) denotes the optimal input from the optimizer, which is then given to the inner loop controller for tracking. The cost \(g\) is the weighted sum of \(N_{g}\) cost functions, \(g=\sum_{i=1}^{N_{g}}g_{i}\), given below. \(N_{g}\) is the number of cost functions and \(N\) denotes the prediction horizon of the MPC.

#### Iii-B1 UAV center tracking error

To penalize drift from the desired position or trajectory, we add a cost to the MPC optimizer as in (13)

\[g_{1}=\sum_{i=0}^{N-1}\left(\left|\left|\mathbf{p}(\mathbf{x}_{k+i})-\mathbf{p}_{k+i}^{d}\right|\right|_{W_{1}}^{2}\right)+\left|\left|\mathbf{p}(\mathbf{x}_{k+N})-\mathbf{p}_{k+N}^{d}\right|\right|_{W_{s_{1}}}^{2} \tag{13}\]

where \(W_{1}\) and \(W_{s_{1}}\) are weight matrices.

#### Iii-B2 UAV center velocity error

To penalize a high velocity of the UAV, we add a cost to the MPC optimizer as in (14).

\[g_{2}=\sum_{i=0}^{N-1}\left(||\dot{\mathbf{p}}(\mathbf{x}_{k+i})||_{W_{2}}^{2}\right)+||\dot{\mathbf{p}}(\mathbf{x}_{k+N})||_{W_{s_{2}}}^{2} \tag{14}\]

where \(W_{2}\) and \(W_{s_{2}}\) are weight matrices.

The controller with only MPC as the outer loop and PID as the inner loop is referred to as **Naive MPC** in the following sections. **MPC-HC** denotes Naive MPC with hard constraints on the optimizer, not in the form of a CBF. These algorithms are used to compare the performance of the proposed controller. The additional constraints for **MPC-HC** are: for **Case I**, \(||\mathbf{p}-d^{s}||\leq r\), and for **Case III**, \(||\mathbf{d}||\geq d_{s}\), which are explained in Section VI part C.

### _PID (Inner loop)_

The inner loop PID receives \(u_{k}^{opt}\) as the optimal \(u_{k}\) from the MPC optimizer.
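Before turning to the inner loop in detail, the outer-loop optimization of Eqs. (12)-(14) can be sketched as follows. This is a minimal single-shooting version that assumes a point-mass double integrator in place of the full state-space model (4); the weights follow Table II (\(w_{1}=10\), \(w_{s_{1}}=50\), \(w_{2}=2\), \(w_{s_{2}}=10\)), the horizon length is our choice, and the CBF constraints are omitted here.

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.1, 8     # sampling step (Table II) and an assumed prediction horizon

def rollout(p0, v0, u_flat):
    """Roll a point-mass double integrator forward; u is an (N, 3) plan."""
    p, v, traj = p0.copy(), v0.copy(), []
    for a in u_flat.reshape(N, 3):
        v = v + a * DT
        p = p + v * DT
        traj.append(p.copy())
    return np.array(traj)

def cost(u_flat, p0, v0, p_des, w1=10.0, ws1=50.0, w2=2.0, ws2=10.0):
    """Weighted sum of the tracking cost (13) and the velocity cost (14)."""
    traj = rollout(p0, v0, u_flat)
    g1 = w1 * np.sum((traj[:-1] - p_des[:-1]) ** 2) \
       + ws1 * np.sum((traj[-1] - p_des[-1]) ** 2)       # terminal term, W_s1
    vel = np.diff(np.vstack([p0[None, :], traj]), axis=0) / DT
    g2 = w2 * np.sum(vel[:-1] ** 2) + ws2 * np.sum(vel[-1] ** 2)
    return g1 + g2

p0, v0 = np.zeros(3), np.zeros(3)
p_des = np.tile([1.0, 0.0, 1.0], (N, 1))                 # step target
res = minimize(cost, np.zeros(3 * N), args=(p0, v0, p_des), method="SLSQP")
u_first = res.x[:3]     # apply only the first input (receding horizon)
```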
From this optimal input, the desired roll \(\Theta\) and pitch \(\Phi\) angles are calculated using small-angle analysis, and the desired thrust and attitude are tracked by the PID thrust and attitude controllers.

### _CBF Constraints_

#### Iii-C1 Bounding UAV in safe region (Bounding condition)

For **Case I**, we present our primary contribution: bounding the UAV inside the safe region where aerodynamic effects do not hamper the stability of the UAV. To obtain a continuously differentiable bounding area, we choose the safe region to be a spherical boundary similar to Fig. 1, as the tunnel effect and the other effects together form a region whose safe interior can be simplified to a sphere. The UAV can only leave the safe region in a radial direction. We constrain the movement of the UAV under high-velocity motion using the CBF. The CBF for one direction is given in (15), and we replace \(\mathbf{p}\) with \(-\mathbf{p}\) to get the CBF in the opposite direction.

\[h_{1}(\mathbf{x}_{k})=\sqrt{\frac{2(\mathbf{p}_{k+i})^{T}a_{max}}{||\mathbf{p}_{k+i}||}(||\mathbf{p}_{k+i}||-r)}+\frac{\mathbf{p}_{k+i}^{T}}{||\mathbf{p}_{k+i}||}(\dot{\mathbf{p}}_{k+i}-\dot{\mathbf{p}}_{k+i}^{\prime}) \tag{15}\]

where \(r\) is the radius of the safe region.

#### Iii-C2 Minimize safe distance of operation from tunnel Walls (Disturbance Rejection)

For **Case II**, we can tighten the bound of the CBF using an additional disturbance rejection parameter \(\lambda\) to tackle aerodynamic disturbances from the various effects. Hence we change the earlier invariance condition in (6) to the condition in (16).

\[\dot{h}(\mathbf{x})+\gamma(h^{z}(\mathbf{x})-\lambda)\geq 0 \tag{16}\]

#### Iii-C3 Trajectory tracking for close proximity flights

For **Case III**, the CBF is modified to avoid walls, and the CBF condition for this task is given in (17).

Fig. 3: **Control Architecture:** The outer loop control for the UAV is a Model Predictive Controller which provides the optimal control input to the inner loop control (PID) to track, while additional constraints to the MPC are derived from the CBF. \(x_{k}^{d}\) is the desired state of the UAV.

\[h_{2}(\mathbf{x}_{k})=\sqrt{\frac{2\mathbf{d}(\mathbf{x}_{k+i})^{T}a_{max}}{||\mathbf{d}(\mathbf{x}_{k+i})||}(||\mathbf{d}(\mathbf{x}_{k+i})||-d^{s})}+\frac{\mathbf{d}(\mathbf{x}_{k+i})^{T}}{||\mathbf{d}(\mathbf{x}_{k+i})||}\hat{\mathbf{p}}_{k+i} \tag{17}\]

where \(\mathbf{d}(\mathbf{x}_{k})\) is the perpendicular distance from the wall at time instance \(k\) and \(d^{s}\) is the minimum safe distance from the wall. The combination of these CBF conditions with the MPC optimizer constitutes the proposed **MPC-CBF** controller.

## VII Simulation Results

This section presents the results of the performance of the algorithm in simulation. Python 3 was used to run the scenario on a desktop with an Intel(R) Core(TM) i7-8550U CPU operating at 1.80 GHz. The optimizer used is the 'SLSQP' method provided in the scipy library [19]. The specifications of the UAV model used are given in Table I and the parameters used in **MPC-CBF** are given in Table II.

### _Metric for performance comparison_

We measure the performance of the algorithm with the following metrics (a minimal computation sketch follows below):

* Bounding inside Safe region
* Trajectory Tracking error, \(T_{e}=\sqrt{\frac{1}{N}\sum_{k=0}^{N-1}(\mathbf{p}(\mathbf{x}_{k})-p_{k}^{d})^{2}}\)
* Control effort, \(c_{e}=\sum_{k=0}^{N-1}||\mathbf{u}_{k}||^{2}\)
* Control Smoothness, \(c_{s}=\sum_{k=0}^{N-1}||\mathbf{u}_{k}||\)

### _Results for **Case I**_

For bounding the UAV inside the safe region, **Naive MPC** is unable to maintain the bounds and shows a very high trajectory tracking error in the presence of wind disturbances.
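Here is the sketch promised above: it computes the three quantitative metrics from logged trajectories. The array names and shapes are our assumptions for illustration.

```python
import numpy as np

def performance_metrics(p, p_des, u):
    """Metrics of Section VII-A, evaluated over an executed run.

    p, p_des : (N, 3) actual and desired UAV-center positions
    u        : (N, m) applied control inputs
    """
    T_e = np.sqrt(np.mean(np.sum((p - p_des) ** 2, axis=1)))  # RMS tracking error
    norms = np.linalg.norm(u, axis=1)
    c_e = np.sum(norms ** 2)                                  # control effort
    c_s = np.sum(norms)                                       # control smoothness
    return T_e, c_e, c_s

# usage: T_e, c_e, c_s = performance_metrics(p_log, p_ref, u_log)
```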
**MPC-HC** is unable to maintain the bound when the UAV gets a high velocity input. **MPC-CBF** performs best compared to the other algorithms because it incorporates obstacle avoidance and disturbance rejection using the CBF. It shows a 30% decrease in the trajectory error and maintains the safe region's bound. Trajectory tracking results are shown in Fig. 6, with Fig. 4 depicting the trajectory in 3D. The performance metrics are reported in Table III.

### _Results for **Case II**_

The shortest distance between the walls and the UAV depicts the extended stability zone of the UAV when deploying a new control algorithm. **Naive MPC** gives the minimal distance as 2\(\times\) R, while **MPC-CBF** decreases this distance by 45%, as shown in Table III.

### _Results for **Case III**_

When the UAV trajectory passes through the unsafe region, **Naive MPC** and **MPC-HC** are unable to maintain the trajectory and subsequently collide with the wall. Only **MPC-CBF** is able to maintain the trajectory while reducing the control effort by 15%, thus reducing the power consumed by the UAV.

\begin{table} \begin{tabular}{|c|c|} \hline **Parameter** & **Value** \\ \hline MPC Weights & \(w_{1}=10\times\mathbf{I}_{3\times 3}\), \(w_{s_{1}}=50\times\mathbf{I}_{3\times 3}\), \\ & \(w_{2}=2\times\mathbf{I}_{3\times 3}\), \(w_{s_{2}}=10\times\mathbf{I}_{3\times 3}\) \\ \hline \(u_{k}\) Initialization & \(\mathbf{0}_{1\times d\pi}\) \\ \hline \(\gamma\) & 3 \\ \hline \(\lambda\) & 8 \\ \hline \(z\) & 3 \\ \hline Sampling step (\(t_{s}\)) & 0.1 s \\ \hline Total time (\(t\)) & 100 s \\ \hline Max wind disturbance & \(d_{m}=0.8\ m/s^{2}\) \\ \hline \end{tabular} \end{table} TABLE II: Weights and Parameters for MPC and CBF

Fig. 4: Position of the UAV center and the safe region boundary (**Case I**) for trajectory tracking in Fig. 1: (a) Naive MPC, (b) MPC-HC, (c) MPC-CBF. <green> -> safe region boundary, <blue> -> UAV center, <orange> -> desired trajectory. The UAV center should remain bound inside the safe region.

\begin{table} \begin{tabular}{|c|c|} \hline **Parameter** & **Value** \\ \hline Mass & 1.5 \(kg\) \\ \hline Arm length & 0.20 \(m\) \\ \hline Propeller Diameter & 0.24 \(m\) \\ \hline Moment of Inertia - UAV & \(I_{x}=0.1\ kg\ m^{2}\), \(I_{y}=0.1\ kg\ m^{2}\), \(I_{z}=0.2\ kg\ m^{2}\) \\ \hline UAV attitude constraints & \(|\theta|\leq\pi/10\ rad\), \(|\phi|\leq\pi/10\ rad\) \\ \hline \end{tabular} \end{table} TABLE I: Specifications of the UAV: these parameters have been taken from the UAV used to define the aerodynamic effects

## VIII Conclusion

The paper shows that a Model Predictive Controller, when combined with constraints derived from a Control Barrier Function, can provide safety guarantees when flying inside a tunnel. The controller also reduces the safe hovering distance from the wall by 37% and incorporates high disturbance tolerance. It is also shown that flying near the ground and ceiling can reduce the UAV's power consumption (control effort) by 15%. The algorithm provides safety guarantees while travelling inside a tunnel, using parameters from a real UAV model. Future work will include using vision-based learning models to detect obstacles and to construct barrier functions from their output.
2306.02904
Light Curve Analysis of the AP Dor Binary System using Ground-Based and TESS Observations
The short-period AP Dor eclipsing binary's first in-depth and multiband photometric solutions are presented. We made use of eight nights of ground-based observations at a southern hemisphere observatory, and twelve sectors of TESS observations. We extracted eight and 1322 minima from our observations and TESS, respectively. We suggested a new linear ephemeris based on the trend of orbital period variations using the Markov chain Monte Carlo (MCMC) approach. The PHysics Of Eclipsing BinariEs (PHOEBE) Python code and the MCMC approach were used for the light curve analysis. This system did not require a starspot for the light curve solutions. We calculated the absolute parameters of the system using the Gaia DR3 parallax method. The orbital angular momentum (J_0) of AP Dor indicates that this system is located in a region of contact binaries. According to our results, this system is an overcontact binary system with a mass ratio of 0.584, a fillout factor of 48%, and an inclination of 53°. The positions of the AP Dor stars on the Hertzsprung-Russell (HR) diagram are presented.
Atila Poro, Eduardo Fernández Lajús, Mohammad Madani, Golshan Sabbaghian, Farshid Nasrollahzadeh, Faezeh Jahediparizi
2023-06-05T14:11:43Z
http://arxiv.org/abs/2306.02904v2
# Light Curve Analysis of the AP Dor Binary System using Ground-Based and TESS Observations

###### Abstract

The short-period AP Dor eclipsing binary's first in-depth and multiband photometric solutions are presented. We made use of eight nights of ground-based observations at a southern hemisphere observatory, and twelve sectors of TESS observations. We extracted eight and 1322 minima from our observations and TESS, respectively. We suggested a new linear ephemeris based on the trend of orbital period variations using the Markov chain Monte Carlo (MCMC) approach. The PHysics Of Eclipsing BinariEs (PHOEBE) Python code and the MCMC approach were used for the light curve analysis. This system did not require a starspot for the light curve solutions. We calculated the absolute parameters of the system using the \(Gaia\) DR3 parallax method. The orbital angular momentum (\(J_{0}\)) of AP Dor indicates that this system is located in a region of contact binaries. According to our results, this system is an overcontact binary system with a mass ratio of 0.584, a fillout factor of 48%, and an inclination of 53\({}^{\circ}\). The positions of the AP Dor stars on the Hertzsprung-Russell (HR) diagram are presented.

keywords: binaries: eclipsing - method: photometric - individual (AP Dor)

## 1 Introduction

Eclipsing binaries are a significant astrophysical tool for investigating star formation, stellar structure, and the physical properties of stars and their evolution. Both stars in a binary system known as an overcontact binary have exceeded their Roche lobes. Due to the tidally distorted forms of the stars, the light curve of an overcontact system varies continuously and is typically categorised as being of the W UMa type. Mass transfer through the Lagrange points is likely to happen in such a system. Other features are that the temperatures of the components are roughly equal because they share a common envelope with the same entropy (Paczynski et al., 2006). W UMa stars, also known as low-mass eclipsing binaries, consist of ellipsoidal components with orbital periods less than \(1^{day}\), usually \(P<0.7^{day}\) (Poro et al., 2022). The AP Dor (HIP 023793) binary system has an apparent magnitude of \(9.37\)1 and is located in the southern hemisphere with coordinates R.A.: \(05^{h}\) 06\({}^{m}\) 45.09188\({}^{s}\) and Dec.: \(-59^{\circ}\) 03\({}^{\prime}\) 03.45465\({}^{\prime\prime}\) (J2000).

Footnote 1: [http://simbad.cds.unistra.fr/simbad](http://simbad.cds.unistra.fr/simbad)

Footnote 2: W Ursae Majoris-type eclipsing variables

This system is introduced as an EW\({}^{2}\) type in the VSX3 database with an orbital period of 0.427187 days, but its orbital period is unknown in the ASAS-SN4 catalog.

Footnote 3: [https://www.aavso.org/vsx/](https://www.aavso.org/vsx/)

For the first time, this system was classified as a W UMa-type system, or possibly an RR Lyrae star, in the \(HIPPARCOS\) catalog. In a subsequent study, Eggen (1980) introduced it as a contact system. Then, in the Selam (2004) study, three main geometric parameters (\(q\), \(f\), and \(i\)) were estimated for 64 \(HIPPARCOS\) catalog contact systems, including AP Dor. The structure of the paper is as follows: Section 2 provides details on photometric observations and a data reduction method. Section 3 presents the minima and the new ephemeris of the AP Dor system. The photometric light curve solutions for the system are discussed in Section 4.
Section 5 provides a description of the method used to determine the absolute parameters. At the end, Section 6 includes the summary and conclusion.

## 2 Observation and Data Reduction

The photometric observations of AP Dor were carried out on October 24-31, 2017, and a total of 2897 images were taken over eight nights. These observations were made using the 0.60m "Helen Sawyer Hogg" (HSH) telescope at the Complejo Astronómico El Leoncito (CASLEO) Observatory, Argentina (69\({}^{\circ}\)18\({}^{\prime}\) W, 31\({}^{\circ}\)48\({}^{\prime}\) S, 2552m above sea level). A CCD SBIG STL1001E and \(BVRI\) standard filters were employed. The average temperature of the CCD during the observation nights was \(-30^{\circ}\)C. Each of the frames was \(1\times 1\) binned, with average exposure times of 50s for the \(B\) filter, 45s for the \(V\) filter, 40s for the \(R\) filter, and 30s for the \(I\) filter. UCAC4 156-005107 was selected as a comparison star and TYC 8517-653-1 was chosen as a check star. The comparison star is found at R.A. \(05^{h}\) 07\({}^{m}\) 13.49\({}^{s}\), Dec. \(-58^{\circ}\) 59\({}^{\prime}\) 58.59\({}^{\prime\prime}\) (J2000) with a \(V=12.203(30)\) magnitude, while the check star is located at R.A. \(05^{h}\) 06\({}^{m}\) 35.85\({}^{s}\), Dec. \(-58^{\circ}\) 57\({}^{\prime}\) 53.24\({}^{\prime\prime}\) (J2000) with a \(V=11.57(12)\) magnitude, according to the Simbad astronomical database. The APPHOT photometry package of the Image Reduction and Analysis Facility5 (IRAF) was used for CCD reduction and aperture photometry.

Footnote 5: [http://iraf.noao.edu](http://iraf.noao.edu)

Footnote 6: [http://archive.stsci.edu/tess/allproducts.html](http://archive.stsci.edu/tess/allproducts.html)

The Transiting Exoplanet Survey Satellite (TESS) mission observed the AP Dor system in sectors 1, 4, 6, 8, 10, 13, 27, 30, 31, 32, 34, and 39. TESS data are available at the Mikulski Archive for Space Telescopes (MAST)6. The LightKurve code7 was used to extract the TESS light curves from the MAST, which had been detrended using the TESS Science Processing Operations Center (SPOC) pipeline (Jenkins et al., 2016).

Footnote 7: [https://docs.lightkurve.org](https://docs.lightkurve.org)

## 3 Orbital period variations

We used a Python code based on a Gaussian function and the MCMC method to extract the new mid-eclipse times and their uncertainties. The code is implemented in Python using the PyMC3 package (Salvatier et al., 2016). We extracted eight mid-eclipse times, including four primary and four secondary minima, from our observations, with two times recorded for each of the \(BVRI\) filters (Table 1). In addition, we extracted a total of 1322 minima from different sectors of the TESS observations (Appendix Table A1). We found two minima in the Juryšek et al. (2017) study, and we added them as literature. Barycentric Julian Date in Barycentric Dynamical Time (\(BJD_{TDB}\)) was used to express all minimum times. We used the OSU Online Astronomy Utilities8 to convert the literature minimum times to \(BJD_{TDB}\).

Footnote 8: [http://www.astroutils.astronomy.osu.edu/time/hjd2bjd.html](http://www.astroutils.astronomy.osu.edu/time/hjd2bjd.html)

There are two different orbital periods for this system in the catalogs: in the AAVSO catalog, the value is 0.427187\({}^{d}\), and in the ASAS3 catalog, the value is 0.213593\({}^{d}\).
Based on a Fourier analysis of our data, we conclude that the AAVSO catalog value is more valid. Therefore, we used the orbital period of 0.427187\({}^{d}\) along with the minimum time from our observations as a reference ephemeris. The O-C variations are the deviations of the observed mid-eclipse times (O) from their calculated values (C) based on a reference ephemeris. Typically, a trend in these variations is the result of several separate effects working together. Figure 1 shows the O-C diagram of the AP Dor system. According to the visible trend of the O-C diagram, only a linear fit can be considered. We calculated a new ephemeris based on the MCMC method using the emcee package in Python (Foreman-Mackey et al., 2013). We applied 20 walkers and 20,000 iterations for each walker, with a 1000-step burn-in period in the MCMC sampling. Due to the linearity of the fit, the values of the orbital period and minimum time were taken as priors from the reference ephemeris. The following light elements were assigned to a new revised linear ephemeris for the minima obtained from this study, TESS, and the literature:

\[BJD_{TDB}(Min.I)=2458055.623786(3)+0.427188944(2)\times E \tag{1}\]

where \(E\) is the integer number of orbital cycles after the reference epoch. The upper and lower limits of the uncertainties for the elements in the MCMC were equal.

## 4 Light curve analysis

Light curve analysis of the AP Dor binary system was carried out using PHOEBE version 2.4.9 and the MCMC approach (Prša & Zwitter, 2005, Conroy et al., 2020, Poro et al., 2022). We selected the contact mode for the light curve solutions based on how the light curve appeared and the system's short orbital period. The gravity-darkening coefficients and the bolometric albedos were assumed to be \(g_{1}=g_{2}=0.32\) (Lucy, 1967) and \(A_{1}=A_{2}=0.5\) (Rucinski, 1969), respectively. The limb-darkening coefficients were used as free parameters in PHOEBE, and the Castelli & Kurucz (2004) method was used to model the stellar atmosphere. Regarding the primary star's temperature input, we tried three methods and compared the results. Based on our observational data, we set the value obtained from \(B-V\) as the temperature of the primary star. So, after the required calibrations (Høg et al., 2000), we calculated \((B-V)_{APDor}=0.428\pm 0.013\), and the effective temperature of the primary component, \(T_{1}\), was adopted as \(6517\pm 121\) K (Eker et al., 2020). We also calculated \(T_{1}\) from the relationship between the primary star temperature and the orbital period of the system in the study of Poro et al. (2022) to be \(6396\pm 92\) K. Also, the temperature of the system is determined by \(Gaia\) DR2 to be \(6530^{+129}_{-159}\) K. One of the most important input parameters to the PHOEBE code is the mass ratio. We ran a \(q\)-search with PHOEBE and then used the code's optimization tool to improve the results. The preliminary analyses were then improved using the MCMC method, and the uncertainty estimates were obtained (Table 2). In the MCMC approach based on the emcee package, we applied 96 walkers and 800 iterations to each walker. It should be noted that the light curve solution for this system did not require adding a star spot. The observed and theoretical light curves are shown in Figure 2. The corner plot that the MCMC produced is displayed in Figure 3. The geometrical structure and a 3D view of the AP Dor binary system are provided in Figure 4.
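To make the ephemeris fit concrete, the following is a minimal sketch of a linear-ephemeris MCMC with emcee, using the first four ground-based minima from Table 1; the prior bounds and starting point are illustrative choices of ours.

```python
import numpy as np
import emcee

# Observed minimum times (BJD_TDB), epochs, and errors from Table 1.
t_obs = np.array([2458050.708890, 2458052.636356, 2458053.701563, 2458054.770044])
epoch = np.array([-11.5, -7.0, -4.5, -2.0])
sigma = np.array([0.000873, 0.001302, 0.000783, 0.001515])

def log_prob(theta):
    """Gaussian log-likelihood for the linear ephemeris T(E) = T0 + P * E."""
    T0, P = theta
    if not (0.42 < P < 0.44):        # loose box prior around the catalog period
        return -np.inf
    model = T0 + P * epoch
    return -0.5 * np.sum(((t_obs - model) / sigma) ** 2)

ndim, nwalkers = 2, 20               # 20 walkers, as in the text
p0 = [2458055.6238, 0.427187] + 1e-5 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 20000, progress=False)          # 20,000 iterations
T0, P = np.median(sampler.get_chain(discard=1000, flat=True), axis=0)
```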
\begin{table} \begin{tabular}{l c c c c} \hline \hline Min.(\(BJD_{TDB}\)) & Error & Filter & Epoch & O-C \\ \hline 2458050.708890 & 0.000873 & \(V\) & -11.5 & -0.00086 \\ 2458052.636356 & 0.001302 & \(V\) & -7 & 0.00426 \\ 2458053.701563 & 0.000783 & \(I\) & -4.5 & 0.00150 \\ 2458054.770044 & 0.001515 & \(I\) & -2 & 0.00201 \\ 2458055.622405 & 0.001227 & \(R\) & 0 & 0.00000 \\ 2458055.837207 & 0.001296 & \(R\) & 0.5 & 0.00121 \\ 2458056.691174 & 0.001276 & \(B\) & 2.5 & 0.00080 \\ 2458057.759589 & 0.001168 & \(B\) & 5 & 0.00125 \\ \hline \hline \end{tabular} \end{table} Table 1: Times of minima based on the ground-based \(BVRI\) observations. ## 5 Absolute parameters When just photometric data are available, one of the possible ways for estimating absolute parameters is to use the parallax \(Gaia\) DR3 method. The method of calculating the parameters is described in the study of Poro et al. (2022), and the parameters \(d\)(pc), \(A_{v}\), \(V_{max}\)(mag), \(l_{1,2}/I_{tot}\), \(BC_{1,2}\), \(T_{1,2}\), \(r_{mean1,2}\), and \(P\)(day) is needed for this estimation. Accordingly, \(M_{v(system)}\), \(M_{v1,2}\), \(M_{bol1,2}\), \(L_{1,2}\), \(R_{1,2}\), \(a_{1,2}\), \(M_{1,2}\) calculated, respectively. The separation \(a\) is the average value of a1 and a2 calculated for each component; \(a_{1}\) and \(a_{2}\) must be close to each other, otherwise, it is not possible to use this method to calculate absolute parameters. We utilized \(V_{max}=9.34(4)\) from our observations, the extinction coefficient \(A_{v}=0.035(1)\) from the Schlafly & Finkbeiner (2011) study, the system's distance from \(Gaia\) DR3 \(d_{(pc)}=186.402(398)\) to accomplish the estimation of the absolute parameters. Also, each star's bolometric magnitude was calculated using \(BC_{1}=0.074\) and \(BC_{2}=0.057\) from Eker et al. (2020) study. The results of the \(Gaia\) DR3 method for estimating the absolute parameters of the AP Dor system are given in Table 3. In addition, \(M_{bol1,2}\), \(logg_{1,2}\), and \(a(R_{\odot})\) parameters have been calculated using the following well-known equations respectively: \[M_{bol}-M_{bol_{\odot}}=-2.5log(\frac{L}{L_{\odot}}) \tag{2}\] \[g=G_{\odot}(M/R^{2}) \tag{3}\] Figure 1: The O-C diagram of the AP Dor binary system is on the left, and the corner plot obtained from MCMC is on the right. Figure 2: The observed light curves of AP Dor (black dots), and synthetic light curves obtained from light curve solutions in the \(BVRI\) filters and TESS (top to bottom respectively); with respect to orbital phase, shifted arbitrarily in the relative flux. Figure 4: 3D view of the AP Dor system stars. Figure 3: The corner plots of the AP Dor system was determined by MCMC modeling. \[\frac{P^{2}}{4\pi^{2}}=\frac{a^{3}}{G(M_{1}+M_{2})} \tag{4}\] ## 6 Conclusion The AP Dor short-period binary system was observed during a period of eight nights at a southern hemisphere observatory using \(BVRI\) standard filters. We extracted times of minima from our observations and TESS data and presented a new ephemeris for the system using the MCMC method. The O-C diagram displayed a linear and increasing trend. Utilizing PHOEBE Python code and the MCMC approach, the light curves of this system were analyzed. There is a 283 K temperature difference between the two components. These temperatures indicate that the primary and secondary components' spectral types are F5 and F7, respectively (Cox, 2000). We used the \(Gaia\) DR3 parallax method to determine the absolute parameters of the AP Dor system. 
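As a compact check of the Section 5 procedure, the chain below follows Eqs. (2)-(4) with the observed inputs quoted above. The solar calibration constants (\(M_{bol,\odot}=4.74\), \(T_{\odot}=5772\) K, 1 au \(=215.03\,R_{\odot}\)) are standard values assumed by us, not taken from the paper.

```python
import numpy as np

MBOL_SUN, T_SUN, AU_RSUN = 4.74, 5772.0, 215.03   # assumed solar calibrations

def absolute_parameters(Vmax=9.34, Av=0.035, d_pc=186.402,
                        l1=0.644, BC=(0.074, 0.057),
                        T=(6585.0, 6302.0), r_mean=(0.467, 0.375),
                        P_day=0.427188944, q=0.584):
    Mv_sys = Vmax - 5.0 * np.log10(d_pc) + 5.0 - Av     # system absolute magnitude
    frac = np.array([l1, 1.0 - l1])                     # luminosity fractions l_{1,2}/l_tot
    Mv = Mv_sys - 2.5 * np.log10(frac)                  # per-component M_V
    Mbol = Mv + np.array(BC)                            # bolometric magnitudes
    L = 10.0 ** (0.4 * (MBOL_SUN - Mbol))               # Eq. (2), in L_sun
    R = np.sqrt(L) * (T_SUN / np.array(T)) ** 2         # Stefan-Boltzmann law, in R_sun
    a12 = R / np.array(r_mean)                          # separation estimates a_1, a_2
    a = a12.mean()                                      # must be close to each other
    M_tot = (a / AU_RSUN) ** 3 / (P_day / 365.25) ** 2  # Eq. (4), in M_sun
    M1 = M_tot / (1.0 + q)
    return Mbol, L, R, a, (M1, q * M1)

print(absolute_parameters())
# Roughly reproduces Table 3 (Mbol ~ 3.50 and 4.13 mag, L ~ 3.1 and 1.75 L_sun,
# R ~ 1.36 and 1.11 R_sun); small differences reflect rounding of the quoted inputs.
```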
HR and Mass-Radius (\(M-R\)) diagrams show the components' evolutionary state (Figure 5a,b). Both the primary and secondary stars of AP Dor lie between the Zero-Age Main Sequence (ZAMS) and the Terminal-Age Main Sequence (TAMS). The position of AP Dor on the \(R_{ratio}-q\) relationship provided by Poro et al. (2022a) is also depicted in Figure 5c. In addition, the \(logJ_{0}-logM\) diagram shows the position of the system (Figure 5d), and this diagram shows that AP Dor is in the contact binary systems region. According to our calculations, the orbital angular momentum of AP Dor has a value of \(\log J_{0}=51.847\pm 0.046\). This result is based on the equation presented by Eker et al. (2006) as follows:

\[J_{0}=\frac{q}{(1+q)^{2}}\sqrt[3]{\frac{G^{2}}{2\pi}M^{5}P} \tag{5}\]

where \(q\) is the mass ratio, \(M\) is the total mass, \(P\) is the orbital period, and \(G\) is the gravitational constant.

Selam (2004) analysed this system with the aid of Rucinski's simplified light curve synthesis method (Rucinski, 1993). As noted in the conclusion section of that study, the method used for the analysis is intended for large databases of variables observed with moderate accuracy, as in the case of the \(HIPPARCOS\) mission photometry (Rucinski, 1997). So, no attempt was made to use more sophisticated light curve solution methods. The Selam (2004) study therefore estimated a mass ratio \(q=0.1\), a fillout factor \(f=1\), and an inclination \(i=62.5^{\circ}\). The fillout factor estimated in the Selam (2004) study seems unrealistic for AP Dor due to the difference in temperature between the components, which shows that they have not reached equilibrium. There is a significant disparity between the findings of the Selam (2004) study and those of this investigation. The method used in the Selam (2004) study, the large number of investigated systems, and the estimation of only three main parameters indicate that their results are questionable. Our results show that the short orbital period and light curve analysis of AP Dor demonstrate that it is an overcontact eclipsing binary with a fillout factor of 48.8% and a mass ratio of 0.584.

## Acknowledgements

This manuscript was prepared by the Binary Systems of South and North (BSN) project ([https://bsnp.info/](https://bsnp.info/)). We have made use of data from the European Space Agency (ESA) mission Gaia ([http://www.cosmos.esa.int/gaia](http://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC). This work includes data from the TESS mission observations. Funding for the TESS mission is provided by the NASA Explorer Program. We would like to thank Filiz Kahraman Aliçavuş and Paul D. Maley for their scientific assistance.
## ORCID IDs

Atila Poro: 0000-0002-0196-9732
Eduardo Fernández Lajús: 0000-0002-9262-4456
Mohammad Madani: 0000-0003-4705-923X
Golshan Sabbaghian: 0000-0002-0615-4292
Farshid Nasrollahzadeh: 0000-0003-4444-8942
Faezeh Jahediparizi: 0000-0002-6813-8124

\begin{table} \begin{tabular}{c c} \hline \hline Parameter & Result \\ \hline \(T_{1}\) (K) & \(6585^{+(51)}_{-(39)}\) \\ \(T_{2}\) (K) & \(6302^{+(39)}_{-(29)}\) \\ \(q=M_{2}/M_{1}\) & \(0.584^{+(83)}_{-(13)}\) \\ \(\Omega_{1}=\Omega_{2}\) & \(2.866(171)\) \\ \(i^{\circ}\) & \(53.00^{+(12)}_{-(13)}\) \\ \(f\) & \(0.4875^{+(127)}_{-(96)}\) \\ \(l_{1}/l_{tot}\) & \(0.644^{+(3)}_{-(2)}\) \\ \(l_{2}/l_{tot}\) & \(0.356^{+(3)}_{-(2)}\) \\ \(r_{1(mean)}\) & \(0.467(50)\) \\ \(r_{2(mean)}\) & \(0.375(41)\) \\ Phase shift & \(0.006(2)\) \\ \hline \hline \end{tabular} \end{table} Table 2: Light curve solutions of AP Dor.

\begin{table} \begin{tabular}{c c c} \hline \hline Parameter & Primary & Secondary \\ \hline \(M(M_{\odot})\) & \(1.278^{+(123)}_{-(75)}\) & \(0.746^{+(83)}_{-(60)}\) \\ \(R(R_{\odot})\) & \(1.360^{+(31)}_{-(31)}\) & \(1.113^{+(40)}_{-(10)}\) \\ \(L(L_{\odot})\) & \(3.119(85)\) & \(1.752(43)\) \\ \(M_{bol}(mag)\) & \(3.505(30)\) & \(4.131(27)\) \\ \(log(g)(cgs)\) & \(4.277^{+(30)}_{-(24)}\) & \(4.218^{+(45)}_{-(34)}\) \\ \(a(R_{\odot})\) & \(3.019^{+(30)}_{-(37)}\) & \\ \hline \hline \end{tabular} \end{table} Table 3: Estimation of the AP Dor’s absolute parameters.
2302.11531
Effect of Aberrations on 3D optical topologies
Optical knots and links, consisting of trajectories of phase or polarisation singularities, are intriguing nontrivial three-dimensional topologies. They are theoretically predicted and experimentally observed in paraxial and non-paraxial regimes, as well as in random and speckle fields. Framed and nested knots can be employed in security protocols for secret key sharing, quantum money, and topological quantum computation. The topological nature of optical knots suggests that environmental disturbances should not alter their topology; therefore, they may be utilised as a resilient vector of information. Hitherto, the robustness of these nontrivial topologies under typical disturbances encountered in optical experiments has not been investigated. Here, we provide the experimental analysis of the effect of optical phase aberrations on optical knots and links. We demonstrate that Hopf links, trefoil and cinquefoil knots exhibit remarkable robustness under misalignment and phase aberrations. The observed knots are obliterated for high aberration strengths and defining apertures close to the characteristic optical beam size. Our observations recommend employing these photonics topological structures in both classical and quantum information processing in noisy channels where optical modes are strongly affected and not applicable.
Nazanin Dehghan, Alessio D'Errico, Tareq Jaouni, Ebrahim Karimi
2023-02-22T18:16:14Z
http://arxiv.org/abs/2302.11531v2
# Effect of Aberrations on 3D optical topologies ###### Abstract Optical knots and links, consisting of trajectories of phase or polarisation singularities, are intriguing nontrivial three-dimensional topologies. They are theoretically predicted and experimentally observed in paraxial [1; 2; 3; 4; 5; 6] and non-paraxial regimes [7; 8; 9; 10], as well as in random and speckle fields [11]. Framed and nested knots can be employed in security protocols for secret key sharing [12; 13; 14], quantum money [15; 16], and topological quantum computation [17]. The topological nature of optical knots suggests that environmental disturbances should not alter their topology; therefore, they may be utilised as a resilient vector of information. Hitherto, the robustness of these nontrivial topologies under typical disturbances encountered in optical experiments has not been investigated. Here, we provide the experimental analysis of the effect of optical phase aberrations on optical knots and links. We demonstrate that Hopf links, trefoil and cinquefoil knots exhibit remarkable robustness under misalignment and phase aberrations. The observed knots are obliterated for high aberration strengths and defining apertures close to the characteristic optical beam size. Our observations recommend employing these photonics topological structures in both classical and quantum information processing in noisy channels where optical modes are strongly affected and not applicable. Linked or knotted structures can arise from optical fields as trajectories in the three-dimensional space of phase or polarisation singularities [1; 2; 4; 18; 19]. For instance, a scalar paraxial optical field described by the wavefunction \(\psi(x,y,z)\), at a given plane \(z\), can exhibit points of zero intensity where the phase is undefined [20]. These singular points are characterised by a topological charge \(q\) given by the winding of the field's phase around the singularity [18; 19; 20; 21; 22]: \(q=1/(2\pi)\oint_{L}\boldsymbol{\nabla}\psi\cdot d\boldsymbol{\ell}\), where \(L\) is a closed loop around the singularity and \(d\boldsymbol{\ell}\) the infinitesimal arc length. Stable singularities carry the minimal topological charge \(q=\pm 1\) and can evolve in space with the constraint of total topological charge conservation [21; 23; 24; 25], i.e. singularity pairs, with individual components having opposite \(q\), can annihilate each other or emerge from planes where no charge is present. This dynamics of creation and annihilation of pairs at different planes can result in singularity paths confined in the three-dimensional space and forming a closed trajectory [26]. These curves can be trivial loops, for instance, generated by two charges appearing in one plane and then re-joining each other upon free-space propagation [27]. However, it is now well-known that the wave equation allows for solutions where singularities can track linked or knotted trajectories [1; 26]. These solutions were originally found by perturbing high-strength vortex line singularities threaded by unstable loop singularities [1]. Other approaches based on numerical optimisation procedures and Laguerre-Gauss mode expansions [3; 4; 5; 6] have been subsequently developed. More recently, it has been shown that specific optical knots can be generated by imposing weighted polynomials as boundary conditions on the field amplitude [12; 13; 28]. 
This result, which still lacks a general proof [28], is particularly intriguing since it may lead to a systematic approach for generating optical knots and links. This may prove extremely useful in implementing secure communications based on optical knots [12; 13; 14], quantum money [16], and topological quantum computation [29]. Different knots or links are topologically robust objects since they cannot be smoothly deformed into each other. For a knot to change type, a mathematical transformation should reach a point in parameter space where the knot is singular, i.e. self-intersecting. An optical knot is thus expected to keep the same nature under environmental disturbances that smoothly change its phase and amplitude, thus suggesting a potential advantage for environment-resilient transfer of information. However, in practice, typical disturbances may be strong enough to induce a transition in knot type, thus ruling out the potential of this approach. Here we show, through numerical and experimental investigation, how some types of optical knots recently realised experimentally, namely the Hopf link, trefoil, and cinquefoil, are remarkably robust under the action of phase distortions applied on the waist plane. The structures mentioned above can all be generated by an optical field which in the plane \(z=0\) reads: \(\psi(\rho,\phi)=\exp(-\rho^{2}/(2s^{2}))\text{Poly}(\rho,\exp(i\phi))\), where \((\rho,\phi,z)\) are cylindrical coordinates, \(s\) is a width parameter specifying the size of a Gaussian envelope, and \(\text{Poly}(\rho,\exp(i\phi))\) is a Milnor polynomial which specifies the knot type [12; 13; 28] (see Methods for the explicit expressions). Phase aberrations, which are the most common consequence of environmental disturbances and imperfections in optical setups, can be modelled as an additional phase factor \(\delta(\rho,\phi)\) applied on the undistorted field: \[\widetilde{\psi}(\rho,\phi)=\exp(i\delta(\rho,\phi))\psi(\rho,\phi). \tag{1}\] Monochromatic optical aberrations can be modelled by expanding the phase distortion in Zernike functions: \[\delta(\rho,\phi)=\pi\sum_{n,m}\gamma_{n,m}Z_{n}^{m}(\rho/A,\phi), \tag{2}\] where \(\gamma_{n,m}\) are real numbers which we call the _strength_ of the \((n,m)\)-th aberration, and \(Z_{n}^{m}(\rho/A,\phi)=R_{n}^{m}(\rho/A)\cos(m\phi)\) for positive \(m\) and \(Z_{n}^{m}(\rho/A,\phi)=R_{n}^{m}(\rho/A)\sin(m\phi)\) for negative \(m\), with \(R_{n}^{m}(\rho/A)\) the radial Zernike polynomial, defined in \(0\leq\rho\leq A\). We recall that \(|m|\leq n\) and \(|Z_{n}^{m}(\rho/A,\phi)|\leq 1\). Here, we mainly investigate the effect of the individual aberrations, up to 4-th order, on optical knots, thus applying phase distortions of the form \[\widetilde{\psi}(\rho,\phi)=e^{i\pi\gamma Z_{n}^{m}(\rho/A,\phi)}\psi(\rho,\phi). \tag{3}\] In particular, we seek the critical values of the strength \(\gamma\) and inverse aperture \(1/A\) above which the knot topology is altered. It will be shown how knots can survive these aberrations for apertures slightly larger than the characteristic beam size and strengths \(\gamma\approx 1\), which are much higher than the values encountered in many practical scenarios. We then push towards smaller apertures to determine which aberrations affect the topological structure the most. It is found that aberrations like coma and secondary astigmatism are the most relevant.
This is because these aberrations exhibit local maxima and minima in the interior of the aperture, thus introducing wavefront distortion in between the locations of the phase singularities and affecting their trajectories. The robustness of three different 3-dimensional optical topologies (Hopf link, trefoil and cinquefoil) under various optical phase aberrations was experimentally investigated. These structures were generated and detected with an approach similar to Ref. [6]. The principle of the experiment is shown in Fig. 1-a, while details of the setup are reported in the Methods. Optical knots can be obtained from computer-generated holograms displayed on a spatial light modulator (SLM). Exploiting the encoding introduced in Ref. [30], a phase mask is applied on an input Gaussian beam, which is transformed into the desired knotted field after selecting the first diffraction order (this is done with a pinhole placed in the far field of the SLM). In this experiment, the knotted beam interferes collinearly with a reference Gaussian beam. A CMOS camera on a translation stage is automatically translated to 75 different planes, recording the interference patterns that are formed by changing the phase \(\alpha\) of the reference beam. The reference phase is controlled using half of the SLM window (see Refs. [6; 12] and Methods). The phase of the structured beam in each plane can be reconstructed by means of phase-shifting digital holography. By tracking the phase singularities upon propagation, the singular skeleton was retrieved. Different aberrations, modelled as Zernike functions [31], were applied individually as additional phases with a specific aperture on the knot hologram. In agreement with simulations, it was observed that, for each topology and for apertures \(A\) higher than a critical value \(A_{c}\), the structure survives all the different aberrations with \(\gamma=1\). For each example, robustness up to at least \(\gamma=1\) was observed for values of \(A\) slightly larger than the characteristic beam intensity radius. Specifically, the topological structures were unaltered for aperture values \(A=8w_{0}\), \(8w_{0}\), and \(4w_{0}\) for the Hopf link, trefoil, and cinquefoil, respectively, where \(w_{0}\) is the characteristic waist parameter (which, however, does not necessarily correspond to the beam size, the latter being also dependent on the parameter \(s\)). We point out that simulations predict that the trefoil should survive high-strength aberrations also for \(A=7w_{0}\), while experimentally we observe a critical behaviour for \(Z_{4}^{2}\) (while other aberrations still give rise to trefoils). We attribute this mismatch to an additional phase perturbation present in the setup. Figure 1-b shows examples of the three structures affected by coma (data for the other aberrations are given in the Supplementary Materials, Fig. S2). Figure 1: **Experimental scheme and reconstructed aberrated knots.** **a**- Schematics of the experimental setup. An SLM encodes a phase mask which generates the aberrated knot (Eq. (1)) after a 4-f system and selection of the first diffraction order. The resulting field’s phase structure is reconstructed by phase-shifting digital holography, interfering the knot field with a Gaussian reference beam. Recording the interference patterns at different propagation planes allows one to reconstruct the singular skeleton. **b**- Examples of three-dimensional topologies affected by coma with \(\gamma=1\), experimentally reconstructed.
From left to right: Hopf link, trefoil, and cinquefoil. The main plots show the singular skeletons and the phase patterns in different propagation planes. Insets show the top view of the singular skeleton (right) and the theoretical intensity patterns in the waist plane, with red circles indicating the size of the aperture over which the Zernike polynomials were defined.

The main effect of the aberrations is to stretch or compress the individual "lobes" of the curves, leaving the overall topology unchanged. Therefore, if the aperture of the phase aberration is larger than a critical value compared to the beam waist \(w_{0}\), no aberration (up to 4-th order), even with \(\gamma=1\), can break the topological structure. When decreasing \(A\), knots and links can be broken into open trajectories. This effect can be, for instance, caused by singularity pairs created in the far field which, instead of annihilating with each other in back-propagation, join with the lines of the singular skeleton. For fixed \(\gamma=1\), we define the critical aperture \(A_{c}\) as the value of \(A\) at which the structure breaks up or cannot be fully reconstructed. The value of \(A_{c}\) is different for different aberration typologies. For instance, in the case of the cinquefoil, \(A_{c}\) for \(Z_{3}^{1}\), \(Z_{4}^{0}\), and \(Z_{4}^{2}\) is \(\approx 3w_{0}\), and for \(Z_{2}^{0}\), \(Z_{2}^{2}\), \(Z_{3}^{3}\), and \(Z_{4}^{4}\) it is \(\approx 2.3w_{0}\). When \(A\approx A_{c}\), the effect of \(\gamma\) can be further investigated. In Fig. 2, experimental examples of links and knots under different aberrations with \(A\approx A_{c}\) and \(\gamma<1\) are shown, highlighting how, even if the topology is not preserved under the highest \(\gamma\), it can still be recovered for smaller strengths.

Figure 2: **Knot survival for small apertures.** The figure shows different singular skeletons obtained for aperture values slightly below the critical aperture \(A_{c}\) and for strengths \(\gamma<1\) such that the topology is conserved. The examples shown are for 3-rd and 4-th order aberrations, namely (a) trefoil, (b) quadrafoil, (c) coma, and (d) secondary astigmatism. The first column shows a three-dimensional plot of the aberrations with a texture given by the cinquefoil phase pattern to highlight how the different Zernike polynomials affect the wavefront near the phase singularities.

We note that the experimental knots tend to break up for slightly higher apertures than theoretically expected. This is due to either experimental imperfections, which amount to additional aberrations perturbing the beam, or to residual interference with the zeroth-order beam diffracted from the SLM. The latter case was analyzed in detail (see Supplementary Materials) but may be less relevant in practical scenarios where aberrations are applied after the spatial filtering. The topology of the singular skeleton can be more or less sensitive to different Zernike functions. The aberrations with stronger variations close to the singularities affect their evolution more and can therefore destroy the topology for larger \(A\) compared with aberrations responsible for distortions of the most external parts of the wavefront. This is the reason why coma (\(Z_{3}^{1}\)) and secondary astigmatism (\(Z_{4}^{2}\)) are associated with a bigger \(A_{c}\) compared to the trefoil (\(Z_{3}^{3}\)) and quadrafoil (\(Z_{4}^{4}\)) aberrations. As shown in the first column of Fig.
2, coma and secondary astigmatism have local minima and maxima in the interior of the defining circle, and so these variations can modify the wave vector distribution around the singularities, thereby altering their trajectory. On the other hand, distortions like \(Z_{3}^{3}\) and \(Z_{4}^{4}\), are mostly flat in their central region while being responsible for wavefront distortions close to the boundary of the defining circle. Hence, for large enough apertures, the phase singularities will lie in the central flat region, and the formation of the singular skeleton will be less affected. Up to this point, we considered the case in which the defining aperture of the Zernike functions is perfectly centred with the unperturbed beam. However, lateral misalignments are a common source of imperfections. Hence, we looked into the effect of a relative displacement between the centre of the beam and the centre of the aberration's defining circle. Figure 3 shows how displaced coma affects the cinquefoil knot. Relatively small displacements leave the singular skeleton topology unaltered, even for \(\gamma=1\). However, for the second displacement (\(\Delta x=1.5w_{0}\)), it can be seen that \(Z_{3}^{1}\) breaks the knot. We observed qualitatively similar effects for other aberrations (data are reported in the Supplementary Fig. **S3**). So far, we have considered high values of the strength by looking at the effect of individual aberrations. In practical scenarios, one deals with wavefront distortions described by a superposition of Zernike polynomials (as in Eq. (2)). As an example, we consider the effect of the wavefront distortion observed in a previous experiment on underwater high-dimensional quantum key distribution (Ref. [32]). The same distortion was applied to the cinquefoil beam, and, as shown in Fig. 4, the topology survives, which is not surprising since the \(\gamma_{n,m}\) are much smaller than 1. We point out that while this wavefront distortion is rather small, it was shown to have a significant effect on the security of OAM-based high-dimensional quantum key distribution. In particular, giving an error rate above the security threshold for dimensions higher than three [32]. This difference in robustness between OAM and knot-based encoding is strictly due to the topological nature of the singular skeletons of structured beams: on the one hand, optical aberrations can abruptly change the decomposition in a given spatial mode basis, thus affecting immediately the fidelity of the transmitted beam and the information encoded within its structure; on the other hand, the change in spatial mode decomposition induced by aberrations does not alter the topological structure associated with a knotted or linked beam, assuming that the strength and/or inverse defining aperture is below a given threshold. Thus, the survival of knot fields under relevant wavefront distortion is a promising example of how these structures can provide a more robust way to encode information. However, we stress that deeper studies on the effects of turbulence must be carried out to certify this advantage. We expect that moderate levels of turbulence, which can be compensated by an adaptive optics system, will not present a serious obstacle to the transmission of either classical or quantum information by means of three-dimensional optical topological structures. 
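The aberration model of Eqs. (1)-(3) is straightforward to reproduce numerically. Below is a minimal sketch of ours, not the setup's reconstruction code: a unit-charge Gaussian vortex stands in for the Milnor-polynomial knot fields (which are not reproduced here), a coma phase is applied, and singularities are located by the winding of the phase around each grid plaquette.

```python
import numpy as np

def zernike_coma(rho, phi):
    """Coma Z_3^1: R_3^1(rho) * cos(phi), with R_3^1(rho) = 3 rho^3 - 2 rho."""
    return (3.0 * rho**3 - 2.0 * rho) * np.cos(phi)

def aberrated_field(X, Y, gamma=1.0, A=4.0, s=1.0):
    """Toy singular beam times the aberration phase factor of Eq. (3)."""
    rho, phi = np.hypot(X, Y), np.arctan2(Y, X)
    psi = (X + 1j * Y) * np.exp(-rho**2 / (2 * s**2))   # unit-charge vortex
    delta = np.where(rho <= A, np.pi * gamma * zernike_coma(rho / A, phi), 0.0)
    return psi * np.exp(1j * delta)

def winding_charges(psi):
    """Topological charge per grid plaquette from the winding of arg(psi)."""
    ph = np.angle(psi)
    d = lambda a, b: np.angle(np.exp(1j * (a - b)))     # wrapped phase difference
    w = (d(ph[1:, :-1], ph[:-1, :-1]) + d(ph[1:, 1:], ph[1:, :-1])
         + d(ph[:-1, 1:], ph[1:, 1:]) + d(ph[:-1, :-1], ph[:-1, 1:]))
    return np.rint(w / (2 * np.pi))                     # +/-1 at singularities

x = np.linspace(-3, 3, 256)
X, Y = np.meshgrid(x, x)
print(int(winding_charges(aberrated_field(X, Y)).sum()))
# The total charge is conserved (+1 here): the smooth phase factor can create
# or displace singularity pairs, but only in net-zero combinations.
```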
In conclusion, we have demonstrated experimentally how simple optical knots and links are robust under phase aberrations and setup misalignments, thus hinting at their potential advantage in communication protocols. Moreover, we showed Figure 4: **Cinquefoil knot subject to a superposition of aberrations.** a- Plot of the wavefront distortion applied to a cinquefoil knot. b- Side view and top view (inset) of the reconstructed singular skeleton. Figure 3: **Effect of aperture displacement.** The singular skeleton of a cinquefoil knot perturbed by a coma aberration with displaced origin (aperture \(A=4w_{0}\) and \(\gamma=1\)). For displacement \(\Delta x=w_{0}\) the knot topology is unaltered (Panel a). For \(\Delta x=1.5w_{0}\) (b) the singular skeleton opens up due to joining with singularities created in the far field. Insets show the phase pattern in the waist plane (left) and the top view of the singular skeleton (right). how the aberrations exhibiting local minima or maxima in the interior of the circle defining the Zernike polynomials are those which can more significantly affect the topology of the singular skeleton. These considerations can be useful not only for communication purposes but also in devising setups to generate knotted fields in more delicate scenarios, e.g. in nanophotonics experiments [10] or in setups for other kinds of structured quantum waves [33; 34], e.g. electrons and neutrons [35].
2308.04601
Generalized Mahler measures of Laurent polynomials
Following the work of Lal\'in and Mittal on the Mahler measure over arbitrary tori, we investigate the definition of the generalized Mahler measure for all Laurent polynomials in two variables when they do not vanish on the integration torus. We establish certain relations between the standard Mahler measure and the generalized Mahler measure of such polynomials. Later we focus our investigation on a tempered family of polynomials originally studied by Boyd, namely $Q_{r}(x, y) = x + \frac{1}{x} + y + \frac{1}{y} + r$ with $r \in \mathbb{C},$ and apply our results to this family. For the $r = 4$ case, we explicitly calculate the generalized Mahler measure of $Q_4$ over any arbitrary torus in terms of special values of the Bloch-Wigner dilogarithm. Finally, we extend our results to the several variable setting.
Subham Roy
2023-08-08T21:57:40Z
http://arxiv.org/abs/2308.04601v2
# Generalized Mahler measure of Laurent polynomials

###### Abstract

Following the work of Lalin and Mittal on the Mahler measure over arbitrary tori, we investigate the definition of the generalized Mahler measure for all Laurent polynomials in two variables when they do not vanish on the integration torus. We establish certain relations between the standard Mahler measure and the generalized Mahler measure of such polynomials. Later we focus our investigation on a tempered family of polynomials originally studied by Boyd, namely \(Q_{r}(x,y)=x+\frac{1}{x}+y+\frac{1}{y}+r\) with \(r\in\mathbb{C}\), and apply our results to this family. For the \(r=4\) case, we explicitly calculate the generalized Mahler measure of \(Q_{4}\) over any arbitrary torus in terms of special values of the Bloch-Wigner dilogarithm. Finally, we extend our results to the several variable setting.

Key words and phrases: Mahler measure; elliptic curve; special values of \(L\)-functions; dilogarithm

2020 Mathematics Subject Classification: Primary 11R06; Secondary 11G05, 14H52, 31A05

## 1. Introduction

The (logarithmic) Mahler measure of a non-zero rational function \(P\in\mathbb{C}\left(x_{1},\ldots,x_{n}\right)^{*}\) is defined by

\[\mathrm{m}\left(P\right)=\mathrm{m}(P(x_{1},\ldots,x_{n})):=\frac{1}{\left(2\pi i\right)^{n}}\int_{\mathbb{T}^{n}}\log|P\left(x_{1},\ldots,x_{n}\right)|\frac{dx_{1}}{x_{1}}\cdots\frac{dx_{n}}{x_{n}}, \tag{1}\]

where \(\mathbb{T}^{n}=\{(x_{1},\ldots,x_{n})\in\mathbb{C}^{*}\times\mathbb{C}^{*}\times\cdots\times\mathbb{C}^{*}:|x_{1}|=\cdots=|x_{n}|=1\}\). The first appearance of this quantity (for one variable polynomials) can be traced back to Lehmer's work [1] on Mersenne numbers, and its several variable form first appeared in the work of Mahler [2] regarding a simpler proof of the Gel'fond-Mahler inequality; it was later named after him. In the early 80's, Smyth [3] discovered the following remarkable identities:

\[\mathrm{m}(x+y+1)= \frac{3\sqrt{3}}{4\pi}L(\chi_{-3},2),\]
\[\mathrm{m}(1+x+y+z)= \frac{7}{2\pi^{2}}\zeta(3),\]

where \(L(\chi_{-3},2)\) is the Dirichlet \(L\)-function of the quadratic character \(\chi_{-3}\) of conductor \(3\), and \(\zeta(s)\) is the Riemann zeta function (for more details see [4]). These are two of the initial formulas for several variable cases. Later the work of Boyd [5], Deninger [6], Rodriguez-Villegas [7] and others provided us with interesting connections among Mahler measure, higher regulators, and Beilinson's conjectures. The conjectural formulas supporting their work, such as

\[\mathrm{m}(P_{k}(x,y))\stackrel{{?}}{{=}}r_{k}L^{\prime}(E_{N(k)},0),\qquad r_{k}\in\mathbb{Q},\]

were eventually proved for certain polynomials, due to Rodriguez-Villegas [7], Rogers and Zudilin [8, 9] et al. Here \(E_{N(k)}\) is an elliptic curve of conductor \(N(k)\) associated to \(P_{k}\), and the question mark stands for a numerical formula that is true for at least 20 decimal places. (See the book of Brunault and Zudilin [10] for more details.) In a different direction, Cassaigne and Maillot [11] generalized the formula found by Smyth to \(\mathrm{m}(a+bx+cy)\) for arbitrary complex constants \(a,b,\text{ and }c:\) \[\pi\mathrm{m}(ax+by+c)=\left\{\begin{array}{ll}\alpha\log|a|+\beta\log|b|+\gamma\log|c|+D\left(\frac{|a|}{|b|}e^{i\gamma}\right)&\text{if }\Delta\text{ holds},\\ \log\max\{|a|,|b|,|c|\}&\text{if }\Delta\text{ does not hold},\end{array}\right.
\tag{2}\] where \(\Delta\) stands for the statement that \(|a|,|b|,\text{ and }|c|\) are the lengths of the sides of a planar triangle, and in that case, \(\alpha,\beta,\text{ and }\gamma\) are the angles opposite to the sides of the lengths \(|a|,|b|\) and \(|c|\) respectively (see Figure 1). We also remark that the constant coefficient can be replaced by a variable without changing the Mahler measure, in the sense that \(\mathrm{m}(ax+by+c)=\mathrm{m}(ax+by+cz).\) Additionally, it is immediate to see that Cassaigne and Maillot's result can also be interpreted as \[\mathrm{m}(ax+by+cz) =\frac{1}{\left(2\pi i\right)^{3}}\int_{\mathbb{T}^{3}_{|a|,|b|,|c |}}\log|x+y+z|\frac{dx}{x}\frac{dy}{y}\frac{dz}{z}\] \[=\mathrm{m}\left(|a|x+|b|y+|c|z\right)=\mathrm{m}_{|a|,|b|,|c|}(x +y+z),\] i.e. the standard Mahler measure of \(ax+by+cz\) is same as the Mahler measure of \(x+y+z\) over the integration torus \(\mathbb{T}^{3}_{|a|,|b|,|c|},\) where \[\mathbb{T}^{3}_{|a|,|b|,|c|}=\{(x,y,z)\in\mathbb{C}^{*}\times\mathbb{C}^{*} \times\mathbb{C}^{*}:|x|=|a|,|y|=|b|,|z|=|c|\}.\] This different representation of \(\mathrm{m}(ax+by+cz)\) makes (2) a generalization of Smyth's result. This leads to the following definition. **Definition 1.1**.: _The **generalized Mahler measure** of a non-zero rational function \(P\in\mathbb{C}(x_{1},\ldots,x_{n})^{*}\) is defined as_ \[\mathrm{m}_{\mathfrak{a}}(P)=\mathrm{m}_{a_{1},\ldots,a_{n}}(P(x_{1},\ldots,x _{n})):=\frac{1}{\left(2\pi i\right)^{n}}\int_{\mathbb{T}^{n}_{\mathfrak{a}}} \log|P\left(x_{1},\ldots,x_{n}\right)|\frac{dx_{1}}{x_{1}}\cdots\frac{dx_{n}} {x_{n}},\] _where \(\mathfrak{a}=(a_{1},\ldots,a_{n})\in(\mathbb{R}_{>0})^{n}\) and_ \[\mathbb{T}^{n}_{\mathfrak{a}}:=\{(x_{1},\ldots,x_{n})\in\mathbb{C}^{*}\times \mathbb{C}^{*}\times\cdots\times\mathbb{C}^{*}:|x_{1}|=a_{1},\ldots,|x_{n}|=a _{n}\}.\] Figure 1. Condition \(\Delta\) in Cassaigne and Maillot’s formula Lalin and Mittal [12] explored this definition over \(\mathbb{T}^{2}_{q^{2},q}\) and \(\mathbb{T}^{2}_{q,q}\) to obtain relations between some polynomials mentioned in Boyd's paper [5], namely \[\begin{array}{ll}R_{-2}(x,y)&:=(1+x)(1+y)(x+y)+2xy,\\ S_{2,-1}(x,y)&:=y^{2}+2xy-x^{3}+x,\end{array}\] for some values of \(q\in\mathbb{R}_{>0}.\) They have simultaneously evaluated \(\mathrm{m}_{q^{2},q}(R_{-2})\) and \(\mathrm{m}_{q,q}(S_{2,-1})\) in terms of \(\log q\) and special values of \(L\)-functions when each of them does not vanish on the respective integration torus. In particular, they established a relation between the standard Mahler measure and the generalized Mahler measure. In this article, we provide a way to obtain such relations for a variety of Laurent polynomials of the form \[Q_{r}(x,y)=r-Q(x,y)\in\mathbb{C}[x^{\pm},y^{\pm}],\] where \(r\in\mathbb{C},\) and \(Q\) has no constant term. This project started with a specific family of Boyd's polynomials, namely \[\left\{x+\frac{1}{x}+y+\frac{1}{y}+t:t\in\mathbb{C}\right\}. \tag{3}\] An extension of the methods in [7] and [13] led us to an interesting fact: for an arbitrarily fixed \((a,b)\in\mathbb{R}^{2}_{>0},\) there exists a large set of \(t\in\mathbb{C}\) such that Mahler measure of these polynomials remains the same irrespective of deforming the integration torus from \(\mathbb{T}^{2}\)\((=\mathbb{T}^{2}_{1,1})\) to \(\mathbb{T}^{2}_{a,b}.\) In fact, we found that this method can be extended to a larger family of Laurent polynomials when they do not vanish on the integration torus. 
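Definition 1.1 is directly computable, which is convenient for experimenting with the phenomena studied in this paper. The following is a minimal numerical sketch of ours (not from the paper) that approximates \(\mathrm{m}_{a_{1},a_{2}}(P)\) by a Riemann sum over \(\mathbb{T}^{2}_{a_{1},a_{2}}\), applied to the family (3); convergence is slow near zeros of \(P\) on the torus, so it is only a sanity check.

```python
import numpy as np

def generalized_mahler(P, a1, a2, n=2048):
    """Riemann-sum approximation of m_{a1,a2}(P) from Definition 1.1."""
    t = 2.0 * np.pi * (np.arange(n) + 0.5) / n   # midpoint rule on [0, 2*pi)
    x = a1 * np.exp(1j * t)[:, None]
    y = a2 * np.exp(1j * t)[None, :]
    return np.mean(np.log(np.abs(P(x, y))))

# The family (3): Q_t(x, y) = x + 1/x + y + 1/y + t, here with t = 4.
Q4 = lambda x, y: x + 1.0 / x + y + 1.0 / y + 4.0
print(generalized_mahler(Q4, 1.0, 1.0))   # standard Mahler measure m(Q_4)
print(generalized_mahler(Q4, 0.8, 1.2))   # generalized measure over T^2_{0.8,1.2}
```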
Let \(Q(x,y)\) be a Laurent polynomial in \(\mathbb{C}[x^{\pm},y^{\pm}]\) with no constant term, and define the family of polynomials \(\{Q_{r}(x,y):r\in\mathbb{C}\}\) associated to \(Q\) as

\[Q_{r}(x,y)=r-Q(x,y)\in\mathbb{C}[x^{\pm},y^{\pm}].\]

For \(a,b>0,\) let \(\mathcal{R}_{a,b}\) be the image of the map

\[q:\mathbb{T}^{2}_{a,b}\longrightarrow\mathbb{C},\quad\text{defined by}\quad(x,y) \mapsto Q(x,y). \tag{4}\]

For \(u\in\mathbb{T}^{1}_{b},\) let \(Z^{1}_{a,u,r}\) denote the number of zeros (counting multiplicities) of \(Q_{r}(x,u)\) inside the circle \(\mathbb{T}^{1}_{a},\) and let \(P^{1}_{a,u,r}\) denote the order of the pole of \(Q_{r}(x,u)\) at \(x=0.\) Similarly, for \(w\in\mathbb{T}^{1}_{a},\) the number of zeros (counting multiplicities) of \(Q_{r}(w,y)\) inside the circle \(\mathbb{T}^{1}_{b}\) is denoted by \(Z^{2}_{w,b,r},\) and the order of the pole of \(Q_{r}(w,y)\) at \(y=0\) is denoted by \(P^{2}_{w,b,r}.\) Define \(\nu^{1}_{a,u,r}\) and \(\nu^{2}_{w,b,r}\) as

\[\nu^{1}_{a,u,r}:=Z^{1}_{a,u,r}-P^{1}_{a,u,r},\quad\text{and}\quad\nu^{2}_{w,b,r}:=Z^{2}_{w,b,r}-P^{2}_{w,b,r}. \tag{5}\]

Then, for \(u=b\) and \(w=a,\) we have the following theorem.

**Theorem 1.2**.: _Let \(a\) and \(b\) be positive real numbers, and denote by \(U_{a,b}\) the unbounded open connected component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}\) containing some neighbourhood of \(r=\infty.\) Then, for \(r\in U_{a,b}\cap U_{1,1},\)_

\[\mathrm{m}_{a,b}(Q_{r})=\mathrm{m}(Q_{r})+\nu^{1}_{a,b,r}\log a+\nu^{2}_{a,b, r}\log b,\]

_where \(\nu^{1}_{a,b,r}\) and \(\nu^{2}_{a,b,r}\) are defined as above, and \(\mathrm{m}_{1,1}(Q_{r})=\mathrm{m}(Q_{r}).\) Moreover, when \(r\in U_{a,b}\cap U_{1,1},\) the quantities \(\nu^{1}_{a,b,r}\) and \(\nu^{2}_{a,b,r}\) only depend on \((a,b).\)_

For \(|r|\) large enough, the above relation between the standard Mahler measure and the generalized Mahler measure of \(Q_{r}\) can be obtained by first expanding \(\log\left(1-\frac{Q}{r}\right)\) in a convergent series, and then integrating each term individually. We should mention here that, in order to obtain a convergent series expansion of the logarithm, the above procedure is restricted to a smaller subregion contained in the unbounded region of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) Theorem 1.2 establishes this equality for a larger set, and harmonic properties of the Mahler measure then imply that the equality holds for all \(r\) in the unbounded open connected component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\)

A follow-up question can be posed regarding the values of \(\mathrm{m}_{a,b}(Q_{r})\) when \(r\) belongs to one of the bounded open connected components of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) The next theorem answers this question when \(\nu_{a,b,r}^{1}\) (or \(\nu_{a,b,r}^{2}\)) satisfies a particular condition. We introduce some necessary notation to state the next theorem. \(Q_{r}(x,y),\) when considered as a polynomial in \(y\) (resp. \(x\)) of degree \(d_{y}\) (resp. \(d_{x}\)) with coefficients in \(\overline{\mathbb{C}(x)}\) (resp.
\(\overline{\mathbb{C}(y)}\)), can be expressed as

\[Q_{r}(x,y)= (y)^{-v_{2}}\left(Q_{F,r}^{y}(x)(y)^{d_{y}}+Q_{f,r}^{y}(x)+\sum_{ j=1}^{d_{y}-1}a_{j,r}^{y}(x)(y)^{j}\right)\]
\[= (x)^{-v_{1}}\left(Q_{F,r}^{x}(y)(x)^{d_{x}}+Q_{f,r}^{x}(y)+\sum_{ j=1}^{d_{x}-1}a_{j,r}^{x}(y)(x)^{j}\right),\]

where \(v_{1}\) and \(v_{2}\) denote the largest powers of \(x^{-1}\) and \(y^{-1}\) in \(Q_{r}(x,y),\) respectively, and \(Q_{F,r}^{u}\) and \(Q_{f,r}^{u}\) are the respective leading and "constant" coefficient with respect to the variable \(u,\) for \(u=x\) or \(y.\) Then we have the following theorem.

**Theorem 1.3**.: _Let \(a\) and \(b\) be positive real numbers. Let \(r_{0}\in\mathbb{C}\setminus\mathcal{R}_{a,b}\) such that \(r_{0}\) belongs to one of the bounded open connected components of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) We denote by \(V_{a,b,r_{0}}\) the bounded open connected component containing \(r_{0}.\)_

**(i)**: _If all the roots of_ \(Q_{r_{0}}(a,y)\) _either lie entirely inside the circle_ \(\mathbb{T}_{b}^{1}\) _or lie entirely outside the circle_ \(\mathbb{T}_{b}^{1},\) _then, for all_ \(r\in V_{a,b,r_{0}},\)__

\[\mathrm{m}_{a,b}(Q_{r})-\nu_{a,b,r}^{2}\log b=\left\{\begin{array}{ll} \mathrm{m}_{a}(Q_{F,r}^{y}(x))&\mbox{when all roots of $Q_{r_{0}}(a,y)$ lie inside $\mathbb{T}_{b}^{1}$},\\ \mathrm{m}_{a}(Q_{f,r}^{y}(x))&\mbox{when all roots of $Q_{r_{0}}(a,y)$ lie outside $\mathbb{T}_{b}^{1}$}.\end{array}\right.\]

**(ii)**: _If all the roots of_ \(Q_{r_{0}}(x,b)\) _either lie entirely inside the circle_ \(\mathbb{T}_{a}^{1}\) _or lie entirely outside the circle_ \(\mathbb{T}_{a}^{1},\) _then, for all_ \(r\in V_{a,b,r_{0}},\)__

\[\mathrm{m}_{a,b}(Q_{r})-\nu_{a,b,r}^{1}\log a=\left\{\begin{array}{ll} \mathrm{m}_{b}(Q_{F,r}^{x}(y))&\mbox{when all roots of $Q_{r_{0}}(x,b)$ lie inside $\mathbb{T}_{a}^{1}$},\\ \mathrm{m}_{b}(Q_{f,r}^{x}(y))&\mbox{when all roots of $Q_{r_{0}}(x,b)$ lie outside $\mathbb{T}_{a}^{1}$}.\end{array}\right.\]

Using Theorems 1.2 and 1.3, Cassaigne and Maillot's result in (2) follows immediately when the "triangle condition" does not hold. In this case, \(Q_{c}(x,y)=c-x-y\) for \(c\in\mathbb{C}.\) For \(a,b\in\mathbb{C}^{*},\)\(\mathcal{R}_{|a|,|b|}\) for this family of polynomials is the closed annulus \(\left\{z\in\mathbb{C}:|z|\in\left[\,||a|-|b||,\,|a|+|b|\,\right]\right\}.\) Note that, when \(c\) belongs to the unbounded component of \(\mathbb{C}\setminus\mathcal{R}_{|a|,|b|},\) we have \(\nu_{|a|,|b|,c}^{j}=0.\) Then, Theorem 1.2 and harmonic properties of the Mahler measure imply that, when \(|c|>|a|+|b|,\)

\[\mathrm{m}_{|a|,|b|}(Q_{c})=\mathrm{m}(|a|x+|b|y+c)=\log|c|.\]

On the other hand, Theorem 1.3 implies that, for \(|c|<\left|\left|a\right|-\left|b\right|\right|,\)

\[\mathrm{m}_{|a|,|b|}(Q_{c})=\log\max\{|a|,|b|\},\]

since \(\nu_{|a|,|b|,c}^{1}=1\) (resp. \(\nu_{|a|,|b|,c}^{2}=1\)) when \(|a|>|b|\) (resp. \(|b|>|a|\)). The combination of both equalities leads to a restatement of (2) when \(\Delta\) does not hold. We should remark that the condition \(\Delta\) in their result is equivalent to the condition \(c\in\mathcal{R}_{|a|,|b|},\) i.e. \(Q_{c}\) vanishes on the integration torus.
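The two regimes just derived are easy to observe numerically. In the sketch below (Python with NumPy; the helper `m_ab` is ours), the generalized Mahler measure is approximated by a Riemann sum over \(\mathbb{T}^{2}_{a,b}\); the sum converges quickly here because \(Q_{c}\) does not vanish on the torus in either regime.

```python
import numpy as np

def m_ab(P, a, b, N=1024):
    """Riemann-sum approximation of m_{a,b}(P), accurate when P has no
    zeros on the torus T^2_{a,b}."""
    t = (np.arange(N) + 0.5) * 2 * np.pi / N
    x = a * np.exp(1j * t)[:, None]
    y = b * np.exp(1j * t)[None, :]
    return float(np.log(np.abs(P(x, y))).mean())

a, b = 2.0, 3.0
print(m_ab(lambda x, y: 6.0 - x - y, a, b), np.log(6.0))  # |c| > |a|+|b|: log|c|
print(m_ab(lambda x, y: 0.5 - x - y, a, b), np.log(3.0))  # |c| < ||a|-|b||: log max
```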
A more involved approach, using Theorems 1.2 and 1.3 on the family of polynomials

\[\begin{array}{ll}R_{\alpha}^{*}(x,y)&:=\alpha-x^{-1}-y^{-1}-xy^{-1}-yx^{-1}- x-y,\qquad\alpha\in\mathbb{C},\\ S_{\beta,-1}^{*}(x,y)&:=\beta-x^{-1}y+x^{2}y^{-1}-y^{-1},\qquad\beta\in \mathbb{C},\end{array}\]

re-establishes the identities obtained in [12] for \(\alpha=-4\) and \(\beta=2.\) Note that the aforementioned result(s) involving the generalized Mahler measure of \(R_{-4}^{*}\) (resp. \(S_{2,-1}^{*}\)) on the torus \(\mathbb{T}_{a,b}^{2}\) only depend on \(b,\) since the integration torus is \(\mathbb{T}_{b^{2},b}^{2}\) (resp. \(\mathbb{T}_{b,b}^{2}\)), i.e. \(a\) is a function of \(b\) here. Our theorems, along with the method of _Lagrange multipliers_, provide a larger set of \(2\)-tuples \((a,b)\in\mathbb{R}_{>0}^{2}\) such that identities of the type obtained in [12] hold even when \(a\) is not a function of \(b.\) An analogous result is exhibited in Section 5 for a different family of Boyd's polynomials, given in (3).

Due to the technical difficulties involving the study of the integration path in the definition of the Mahler measure, it is challenging to evaluate \(\mathrm{m}_{a,b}(Q_{r})\) explicitly for all \(a,b>0.\) In this regard, Theorems 1.2 and 1.3 have a common feature: _the polynomial under consideration does not vanish on the integration torus._ The next result considers a particular polynomial from our initial family of polynomials, namely

\[Q_{4}(x,y)=x+\frac{1}{x}+y+\frac{1}{y}+4.\]

It removes the constraint of being non-zero on the integration torus, and evaluates the generalized Mahler measure of \(Q_{4}(x,y)\) for all \(a,b>0.\)

**Theorem 1.4**.: _Let \(a,b\in\mathbb{R}_{>0},\) and define_

\[c=\sqrt{ab},\quad d=\sqrt{\frac{b}{a}},\quad\text{and }\mathcal{A}_{c,d}= \frac{1-d^{2}}{1+d^{2}}\cdot\frac{1+c^{2}}{2c},\]

_so that \(c\) and \(d\) are both positive real numbers. Then,_

\[\mathrm{m}_{a,b}(Q_{4}(x,y))=\left\{\begin{array}{ll}\max\{\log c,-\log c\} +\max\{\log d,-\log d\}&\text{if }\left|\mathcal{A}_{c,d}\right|\geq 1,\\ \frac{2}{\pi}\left[D(ice^{-i\mu})+D(ice^{i\mu})-\mu\log d+(\log c)\tan^{-1} \left(\frac{c-c^{-1}}{2\cos\mu}\right)\right]&\text{if }\left|\mathcal{A}_{c,d}\right|<1,\end{array}\right.\]

_where \(\mu=\sin^{-1}\left(\mathcal{A}_{c,d}\right)\in\left(-\frac{\pi}{2},\frac{\pi }{2}\right),\) and \(D\) is the Bloch-Wigner dilogarithm defined, for \(z\in\mathbb{C},\) by_

\[D\left(z\right)=\mathrm{Im}\left(\mathrm{Li}_{2}\left(z\right)+i\arg\left(1- z\right)\log\left|z\right|\right),\qquad\mathrm{Li}_{2}\left(z\right)=-\int_{0}^{z} \frac{\log\left(1-v\right)}{v}dv.\]

Under a certain change of variables, the polynomial above can be factored into two linear polynomials [5]. This simplification, along with a direct approach involving a particular differential form and the Bloch-Wigner dilogarithm, leads us to the explicit formula in the statement of Theorem 1.4.

This article is organised as follows. In Section 2, we recall some relations between the Mahler measure, the Bloch-Wigner dilogarithm and a particular differential form appearing after a simplification of the definition in (1). We end the section with a discussion on the Mahler measure over an arbitrary torus. In Section 3, we discuss the proof of Theorem 1.2 and some auxiliary results required to complete the proof. A brief discussion regarding the relations between Mahler measure and periods of the associated varieties (such as algebraic curves) is also included at the end of this section for completeness.
Section 4 is completely dedicated to the proof of Theorem 1.3. In Section 5, we discuss some applications of Theorems 1.2 and 1.3 to the family of polynomials in (3). We also derive an explicit expression for the region \(\mathcal{R}_{a,b}\) with some conditions on \(a,b.\) We then prove Theorem 1.4 in Section 6, where we use properties of the differential form and the Bloch-Wigner dilogarithm mentioned in Section 2. Later, in Section 7, we sketch a brief proof of an extension of Theorem 1.2 to the several variable setting. After concluding remarks on possible directions to pursue going forward, we end this article with an Appendix containing explicit calculations involving the region \(\mathcal{R}_{a,b}\) mentioned in Section 5.

## Acknowledgements

I express my deepest gratitude to my Ph.D. supervisor, Matilde Lalin, for her invaluable assistance and support, and for sharing several ideas that have enriched this work. I am grateful to Andrew Granville for his useful comments and enlightening discussions. I am also thankful to Ali Zahabi for helpful discussions on Ronkin functions and generalized Mahler measure. Finally, I would like to express my gratitude to the Faculté des études supérieures et postdoctorales (bourses d'excellence) of the Université de Montréal, the Institut des sciences mathématiques and the Centre de recherches mathématiques for their financial support.

## 2. Mahler measure and a differential form

In this section, we briefly review some necessary background prior to proving the theorems in the following sections.

### Jensen's Formula

We recall a special case of Jensen's formula. Let \(z_{0}\in\mathbb{C}.\) Then

\[\frac{1}{2\pi i}\int_{\mathbb{T}^{1}}\log|z-z_{0}|\frac{dz}{z}=\left\{\begin{array} []{cl}\log|z_{0}|&|z_{0}|\geq 1,\\ 0&|z_{0}|\leq 1.\end{array}\right.\]

### Bloch-Wigner Dilogarithm

For \(z\in\mathbb{C},\) the Bloch-Wigner dilogarithm \(D(z)\) is defined as

\[D\left(z\right)=\operatorname{Im}\left(\operatorname{Li}_{2}\left(z\right)+i \arg\left(1-z\right)\log|z|\right), \tag{6}\]

where \(\operatorname{Li}_{2}\left(z\right)=-\int_{0}^{z}\frac{\log\left(1-v\right)}{ v}dv.\) In [14], Zagier shows that it can be extended continuously to \(\mathbb{C}\cup\{\infty\},\) with \(D(\infty)=D(0)=D(1)=0.\) In fact, it is real-analytic in \(\mathbb{C}\setminus\{0,1\}.\) For \(z\in\mathbb{C},\)

\[D\left(\bar{z}\right)=-D\left(z\right). \tag{7}\]

This also implies that \(D(r)=0\) for all \(r\in\mathbb{R}\) (for more details see [14]). This property of the Bloch-Wigner dilogarithm frequently appears in the proof of Theorem 1.4.

### A differential form and its applications

Let \(C\) be a curve over \(\mathbb{C}\) which defines a compact Riemann surface, and let \(\mathbb{C}(C)\) be its function field. For \(f,g\in\mathbb{C}(C)^{*},\) we define

\[\eta\left(f,g\right):=\log|f|d\arg g-\log|g|d\arg f, \tag{8}\]

where \(d\arg x\) is defined by \(\operatorname{Im}(\frac{dx}{x})\). Note that \(\eta\) is a real \(C^{\infty}\) differential 1-form on \(C\setminus S,\) where \(S\) contains all the zeroes and poles of \(f\) and \(g\). The following lemma consists of some useful properties of \(\eta\) which are frequently used in later sections (see [15, 16], [17] for more details).

**Lemma 2.1**.: _Let \(f,g,h,v\in\mathbb{C}(C)^{*}\) and \(a,b\in\mathbb{C}^{*}.\) Then we have_

1. \(\eta(f,g)=-\eta(g,f),\) _i.e._ \(\eta\) _is anti-symmetric,_
2. \(\eta(fg,hv)=\eta(f,h)+\eta(g,h)+\eta(f,v)+\eta(g,v),\)__
3. \(\eta(a,b)=0,\)__
4. \(\eta\) _is a closed differential form,_
5.
_for_ \(x,1-x\in\mathbb{C}(C)^{*},\)__

\[\eta\left(x,1-x\right)=dD\left(x\right). \tag{9}\]

Let \(P(x,y)\) be a Laurent polynomial in two variables. Multiplying \(P(x,y)\) by a suitable power of \(y,\) we can always assume that \(P(x,y)\in\mathbb{C}[x^{\pm},y]\) is a polynomial of degree \(d\) in \(y,\) where \(d>0\). Then \(P(x,y)\) has the following factorization over \(\overline{\mathbb{C}(x)}:\)

\[P\left(x,y\right)=P^{*}\left(x\right)\left(y-y_{1}\left(x\right)\right)\left( y-y_{2}\left(x\right)\right)\cdots\left(y-y_{d}\left(x\right)\right),\]

where \(P^{*}\left(x\right)\in\mathbb{C}[x]\) and \(y_{j}:=y_{j}\left(x\right)\) are algebraic functions of \(x\) for \(j=1,2,\ldots,d.\) Applying Jensen's formula with respect to the variable \(y\) in the standard Mahler measure formula for \(P(x,y),\) we obtain

\[\operatorname{m}\left(P(x,y)\right)-\operatorname{m}\left(P^{*}( x)\right)= \frac{1}{\left(2\pi i\right)^{2}}\int_{\mathbb{T}^{2}}\log|P \left(x,y\right)|\frac{dx}{x}\frac{dy}{y}-\operatorname{m}\left(P^{*}(x)\right)\]
\[= \frac{1}{2\pi i}\left(\sum_{j=1}^{d}\int_{|x|=1,|y_{j}(x)|\geq 1} \log|y_{j}\left(x\right)|\frac{dx}{x}\right) \tag{10}\]
\[= -\frac{1}{2\pi}\sum_{j=1}^{d}\int_{|x|=1,|y_{j}(x)|\geq 1}\eta \left(x,y_{j}\right),\]

where \(\eta\) is defined by (8), and \(\eta(x,y_{j})=i\log|y_{j}(x)|\frac{dx}{x},\) which immediately follows from the facts that \(\log|x|=\log 1=0\) and \(\frac{dx}{x}=d(\log|x|+i\arg x).\) Here we consider \(\arg(x)\in[-\pi,\pi).\) Therefore, if \(\eta\) can be decomposed as

\[\eta\left(x,y_{j}\right)=\sum_{k}a_{j_{k}}\eta\left(z_{j_{k}},1-z_{j_{k}} \right)=\sum_{k}a_{j_{k}}dD(z_{j_{k}}), \tag{11}\]

where \(z_{j_{k}},(1-z_{j_{k}})\in\mathbb{C}(C)^{*}\) are algebraic functions of \(x,\) and the sum is finite, then (10) can be restated in terms of the Bloch-Wigner dilogarithm:

\[\operatorname{m}\left(P\left(x,y\right)\right)-\operatorname{m}\left(P^{*} \left(x\right)\right)=-\frac{1}{2\pi}\sum_{j=1}^{d}\sum_{k}a_{j_{k}}D\left(z_{j _{k}}\right)|_{\partial\{|x|=1,|y_{j}(x)|\geq 1\}},\]

where \(\partial\{|x|=1,|y_{j}|\geq 1\}\) is the set of boundary points of \(\{|x|=1,|y_{j}|\geq 1\}.\)

**Remark 2.2**.: _As mentioned in [18], we may have some extra terms of the form \(\eta\left(c,z\right)\) in (11), where \(c\) is a constant complex number and \(z\) is some algebraic function. In that case, we can still reach a closed formula by integrating \(\eta\left(c,z\right)\) directly (i.e. by integrating \(\log|c|d\arg z\)). Also, if \(\nu\) is a constant such that \(|\nu|=1,\) then \(\eta\left(\nu,z\right)=\log|\nu|d\arg z=0\)._

### Arbitrary Tori, Mahler measure, and \(\eta\)

For a Laurent polynomial \(P(x,y),\) we analyse the Mahler measure of \(P\) over an arbitrary torus \(\mathbb{T}_{a,b}^{2}.\) The following brief description is essentially reproducing the analysis in Section 3 of [12].
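Before reproducing that analysis, we record a small numerical sanity check of the two ingredients recalled above. The sketch below (Python; it assumes mpmath's `polylog` for \(\operatorname{Li}_{2}\), and the function `D` is our own implementation of (6)) verifies Jensen's formula for a sample \(z_{0}\), the antisymmetry (7), and the classical value \(D(i)=G\), Catalan's constant.

```python
import numpy as np
from mpmath import mp, polylog, arg, catalan

mp.dps = 30

# Jensen's formula: (1/2*pi*i) * contour integral of log|z - z0| dz/z
# over the unit circle equals log|z0| when |z0| >= 1.
z0 = 0.3 + 1.7j
N = 1_000_000
t = (np.arange(N) + 0.5) * 2 * np.pi / N
print(np.log(np.abs(np.exp(1j * t) - z0)).mean(), np.log(abs(z0)))

def D(z):
    """Bloch-Wigner dilogarithm, following (6)."""
    z = mp.mpc(z)
    if z == 0 or z == 1:
        return mp.mpf(0)
    return mp.im(polylog(2, z)) + arg(1 - z) * mp.log(abs(z))

print(D(0.4 + 0.9j) + D(0.4 - 0.9j))  # (7): D(conj(z)) = -D(z), so ~0
print(D(1j) - catalan)                # D(i) equals Catalan's constant, so ~0
```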
For simplicity, we take \(d=2,\) where \(d\) is the degree of \(y\) in \(P(x,y)\) once \(P\) is multiplied by a suitable power of \(y\) to remove any negative power of \(y.\) Let \(x=ax^{\prime}\) and \(y=by^{\prime}.\) Then we have, for \(P^{*}(x)\in\mathbb{C}[x],\)

\[\mathrm{m}_{a,b}\left(P(x,y)\right)-\mathrm{m}_{a,b}\left(P^{*}( x)\right)= \frac{1}{\left(2\pi i\right)^{2}}\iint_{|x^{\prime}|=|y^{\prime}| =1}\log|P\left(ax^{\prime},by^{\prime}\right)|\frac{dx^{\prime}}{x^{\prime}} \frac{dy^{\prime}}{y^{\prime}}-\mathrm{m}_{a,b}\left(P^{*}(x)\right)\]
\[= 2\log b+\frac{1}{2\pi i}\left(\sum_{j=1}^{2}\int_{|x^{\prime}|=1,|y^{\prime}_{j}|\geq 1}\log|y^{\prime}_{j}|\frac{dx^{\prime}}{x^{\prime}} \right), \tag{12}\]
\[= 2\log b-\frac{1}{2\pi}\sum_{j=1}^{2}\int_{|x|=a,|y_{j}|\geq b} \eta\left(x/a,y_{j}/b\right),\]

where \(y_{j}=y_{j}(x)=by^{\prime}_{j}\) are algebraic functions of \(x\) for \(j=1,2,\) and

\[\eta\left(x/a,y_{j}/b\right)=\eta(x^{\prime},y^{\prime}_{j})=i\log|y^{\prime} _{j}|\frac{dx^{\prime}}{x^{\prime}},\]

for \(j=1,2,\) and the penultimate equality follows from Jensen's formula. Further simplification of the terms involving the \(y_{j}\)'s using (2) of Lemma 2.1 implies

\[\mathrm{m}_{a,b}\left(P(x,y)\right)-\mathrm{m}_{a,b}\left(P^{*}(x)\right)=2 \log b-\frac{1}{2\pi}\sum_{j=1}^{2}\int_{|x|=a,|y_{j}|\geq b}\left[\eta\left( x,y_{j}\right)-\eta\left(a,y_{j}\right)-\eta\left(x,b\right)\right].\]

If \(\{|x|=a,|y_{j}|\geq b\}\) is a closed path, then the integral

\[\frac{1}{2\pi}\sum_{j=1}^{2}\int_{|x|=a,|y_{j}|\geq b}\eta\left(x/a,y_{j}/b\right)\]

can be evaluated using Stokes' theorem (see Deninger [6]). In addition, if \(\{|x|=a,|y_{j}|\geq b\}\) is a closed path, the term

\[\frac{1}{2\pi}\int_{|x|=a,|y_{j}|\geq b}\eta\left(a,y_{j}\right)=\frac{\log a }{2\pi}\int_{|x|=a,|y_{j}|\geq b}d\arg y_{j}\]

becomes a multiple of \(\log a.\) If we have a genus \(0\) curve (such as \(C_{4}:Q_{4}(x,y)=0\)) then, instead of proceeding as in the direction above, we may be able to use (9) to relate the Bloch-Wigner dilogarithm and \(\eta\) for evaluating the Mahler measure. The evaluation is much simpler in this case, as we will see in the proof of Theorem 1.4.

## 3. Proof of Theorem 1.2

We are now ready to prove Theorem 1.2 for \(Q_{r}(x,y)=r-Q(x,y)\in\mathbb{C}[x^{\pm},y^{\pm}],\) where \(Q(x,y)\) has no constant term. In this section, we use the notation

\[\mathrm{m}_{a,b}(Q_{r}(x,y))=\mathrm{m}_{a,b}(Q_{r})=\mathrm{m}_{a,b}(r)\qquad \quad\text{for }r\in\mathbb{C},\]

for simplicity. Our approach is inspired by the methods of Rodriguez-Villegas [7] and Bertin [13]. We first show that the required equality between Mahler measures holds for a smaller unbounded region of \(\mathbb{C}\setminus\mathcal{R}_{a,b},\) and then we argue using properties of harmonic functions that it can be extended to the desired region stated in Theorem 1.2. The following lemma formulates the invariance of \(\mathrm{m}_{a,b}(r)\) under certain changes of variables.

**Lemma 3.1**.: _Let \(a,b\) be positive real numbers.
Define \(f_{r}(a,b):=\mathrm{m}_{a,b}(r),\) and suppose that \(Q_{r}\) is invariant under each of the changes of variables \((x,y)\mapsto(y,x),\) \((x,y)\mapsto(x^{-1},y)\) and \((x,y)\mapsto(x^{-1},y^{-1})\) (as is the case, for instance, for the symmetric family (3)). Then \(f_{r}\) satisfies the following identities:_

\[f_{r}(a,b)=f_{r}(b,a)=f_{r}\left(\frac{1}{a},b\right)=f_{r}\left(\frac{1}{a}, \frac{1}{b}\right).\]

By Lemma 3.1, we can then restrict ourselves to the case \(a>b>1.\)

Proof of Lemma 3.1.: Let \(a,b>0.\) For \(\tilde{Q}_{r}(x,y)=Q_{r}(ax,by),\) the generalized Mahler measure of \(Q_{r}\) satisfies the identity

\[\mathrm{m}_{a,b}(Q_{r}(x,y))=\mathrm{m}(\tilde{Q}_{r}(x,y))=\mathrm{m}(\tilde {Q}_{r}).\]

Each of the changes of variables

\[(x,y)\to(y,x),\quad(x,y)\to(x^{-1},y),\quad(x,y)\to(x^{-1},y^{-1}),\]

preserves the Mahler measure over the unit torus; combined with the assumed invariance of \(Q_{r}\) under the same substitutions, they identify \(\mathrm{m}(\tilde{Q}_{r})\) with \(f_{r}(b,a),\) \(f_{r}\left(\frac{1}{a},b\right)\) and \(f_{r}\left(\frac{1}{a},\frac{1}{b}\right),\) respectively. Since \(\mathrm{m}_{a,b}(r)=\mathrm{m}(\tilde{Q}_{r}),\) we have the required identities involving \(f_{r}(a,b)=\mathrm{m}_{a,b}(r).\)

Our main aim is to study \(\mathrm{m}_{a,b}(r)\) in terms of the complex parameter \(r.\) Recall that \(\mathcal{R}_{a,b}\) is the set of all \(r\in\mathbb{C}\) such that \(Q_{r}(x,y)\) vanishes on \(\mathbb{T}_{a,b}^{2}.\) Before proceeding to prove the theorem, we state a proposition explaining the following:

* the behaviour of the roots of \(Q_{r}(x,y)\) for each \(x\in\mathbb{T}_{a}^{1};\) in particular, the number of roots inside the circle \(\mathbb{T}_{b}^{1},\)
* the behaviour of the roots of \(Q_{r}(x,y)\) for each \(y\in\mathbb{T}_{b}^{1};\) in particular, the number of roots inside the circle \(\mathbb{T}_{a}^{1}.\)

This proposition, in particular, provides us with the quantities \(\nu_{a,b,r}^{2}\) and \(\nu_{a,b,r}^{1}\) in the statement of Theorem 1.2. Since the above two cases are analogous, we consider just the first case. For \(w\in\mathbb{T}_{a}^{1},\) let \(\varrho_{a,b,r}^{2}(w)\) denote the number of roots of \(Q_{r}(w,y)\) lying inside the circle \(\mathbb{T}_{b}^{1}.\) In particular, following the discussion preceding the definition in (5), we have, for \(w\in\mathbb{T}_{a}^{1},\)

\[\varrho_{a,b,r}^{2}(w)=Z_{w,b,r}^{2}\quad\text{and}\quad\varrho_{a,b,r}^{2}( a)=Z_{a,b,r}^{2}=\nu_{a,b,r}^{2}+P_{a,b,r}^{2}=\nu_{a,b,r}^{2}+v_{2}, \tag{13}\]

where \(Z_{w,b,r}^{2}\) is the number of zeros (counting multiplicities) of \(Q_{r}(w,y)\) inside the circle \(\mathbb{T}_{b}^{1},\)\(P_{w,b,r}^{2}\) is the order of the pole of \(Q_{r}(w,y)\) at \(y=0,\) and \(v_{2}\) is the largest power of \(y^{-1}\) in \(Q_{r}(x,y).\) Then we have the following proposition.

**Proposition 3.2**.: _Let \(r\in\mathbb{C}\setminus\mathcal{R}_{a,b}.\) Then \(\varrho_{a,b,r}^{2}(x)\) is constant for all \(x\in\mathbb{T}_{a}^{1}.\)_

The above discussion along with Proposition 3.2 implies that, for all \(x\in\mathbb{T}_{a}^{1},\)\(\varrho_{a,b,r}^{2}(x)=\nu_{a,b,r}^{2}+v_{2}.\) Next we derive Theorem 1.2 assuming Proposition 3.2.

Proof of Theorem 1.2.: For \(a\) and \(b\) positive real numbers, the torus \(\mathbb{T}_{a,b}^{2}\) is defined as the set \(\{(x,y)\in\left(\mathbb{C}^{*}\right)^{2}:|x|=a,|y|=b\}.\) By construction, \(\mathbb{T}_{a,b}^{2}\) is compact. Since the map in (4), namely

\[q:\mathbb{T}_{a,b}^{2}\longrightarrow\mathbb{C},\quad\text{defined by}\quad(x,y) \mapsto Q(x,y),\]

is continuous, the image of \(q\) is compact. That is, \(q(\mathbb{T}_{a,b}^{2})=\mathcal{R}_{a,b}\) is compact, and therefore closed and bounded in \(\mathbb{C}.\) In other words, \(\max_{r\in\mathcal{R}_{a,b}}|r|\) exists.
We denote

\[R_{a,b}:=\max_{r\in\mathcal{R}_{a,b}}|r|,\quad\text{and}\quad R_{a,b,1,1}:= \max\{R_{a,b},R_{1,1}\}.\]

Following a construction in [7], we define

\[\tilde{\text{m}}_{a,b}(r)=\log r-\sum_{n\geq 1}\frac{a_{n,a,b}}{n}r^{-n},\quad| r|>R_{a,b,1,1},r\notin(-\infty,0],\]

where \(\log\) denotes the principal branch of the logarithm, and \(a_{n,a,b}\) is defined as follows:

\[a_{n,a,b}=\left[\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{a,b}^{2}}\frac{dxdy} {xy(1-r^{-1}Q(x,y))}\right]_{n}=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{a,b}^{ 2}}Q(x,y)^{n}\frac{dx}{x}\frac{dy}{y}.\]

Here \([T(s)]_{n}\) denotes the coefficient of \(s^{-n}\) in the series \(T(s).\) It is immediate to see that \(\tilde{\text{m}}_{a,b}\) is holomorphic in the region defined by \(|r|>R_{a,b,1,1}\) and \(r\notin(-\infty,0]\). Also,

\[\text{Re}(\tilde{\text{m}}_{a,b}(r))=\text{m}_{a,b}(r),\quad|r|>R_{a,b,1,1}.\]

**Lemma 3.3**.: _For \(|r|>R_{a,b,1,1},\)_

\[\frac{d\tilde{\text{m}}_{a,b}}{dr}=\frac{d\tilde{\text{m}}_{1,1}}{dr}.\]

Proof of Lemma 3.3.: In order to prove the statement, it is enough to show that \(a_{n,a,b}=a_{n,1,1}\) for all \(n.\) The above construction of the coefficients and the integral expression of these terms in [7] yield that

\[a_{n,a,b} =\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{a,b}^{2}}Q(x,y)^{n}\frac {dx}{x}\frac{dy}{y}=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}^{2}}Q(ax^{\prime}, by^{\prime})^{n}\frac{dx^{\prime}}{x^{\prime}}\frac{dy^{\prime}}{y^{\prime}}\]
\[=[Q(ax^{\prime},by^{\prime})^{n}]_{0}=[Q(x^{\prime},y^{\prime})^{ n}]_{0}\]
\[=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}^{2}}Q(x^{\prime},y^{ \prime})^{n}\frac{dx^{\prime}}{x^{\prime}}\frac{dy^{\prime}}{y^{\prime}}\]
\[=a_{n,1,1}.\]

The equality \(\left[Q(ax^{\prime},by^{\prime})^{n}\right]_{0}=\left[Q(x^{\prime},y^{\prime} )^{n}\right]_{0}\) follows from the fact that the constant term gathers the terms with degree \(0,\) which are invariant under the multiplications of \(x\) and \(y\) by \(a\) and \(b,\) respectively. This concludes the proof.

Due to the above identity, we can denote the coefficients as \(a_{n}:=a_{n,a,b}=a_{n,1,1}\) for the rest of the argument. From the definition of \(\tilde{\text{m}}_{a,b},\) it follows that

\[\frac{d\tilde{\text{m}}_{a,b}}{dr}=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{a,b }^{2}}\frac{1}{r-Q(x,y)}\frac{dx}{x}\frac{dy}{y},\qquad|r|>R_{a,b,1,1}, \tag{14}\]

where we include the region \(r\in(-\infty,0]\cap\{|r|>R_{a,b,1,1}\}\) by continuity. We need to show that \(\frac{d\tilde{\mathrm{m}}_{a,b}}{dr}\) is in fact holomorphic in \(|r|>R_{a,b,1,1}.\) For \(r\in\mathbb{C},\) define

\[\mathcal{F}_{a,b}(r):=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{a,b}^{2}}\frac{1 }{r-Q(x,y)}\frac{dx}{x}\frac{dy}{y}. \tag{15}\]

Note that the integrand

\[\frac{1}{r-Q(x,y)}\bigg{|}_{(x,y)\in\mathbb{T}_{a,b}^{2}}\]

is holomorphic in \(r\) when \(|r|>R_{a,b,1,1}.\) In fact, we will now show that \(\mathcal{F}_{a,b}(r)\) is holomorphic as well on \(|r|>R_{a,b,1,1}\). The integrand, as well as the integral in (14), are bounded on \(\mathbb{T}_{a,b}^{2}.\) This implies that \(\frac{d^{j}\mathcal{F}_{a,b}}{dr^{j}}\) exists and is holomorphic for \(j=1\) (and therefore for all \(j\geq 1\)). Hence, \(\mathcal{F}_{a,b}(r)\) is holomorphic in \(|r|>R_{a,b,1,1}.\) From Lemma 3.3 we have, for \(|r|>R_{a,b,1,1},\)

\[\frac{d\tilde{\mathrm{m}}_{a,b}(r)}{dr}=\frac{d\tilde{\mathrm{m}}_{1,1}(r)}{ dr},\]

and all the quantities are holomorphic in the mentioned region.
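Before integrating this identity, we note that the coefficient identity \(a_{n,a,b}=a_{n,1,1}\) of Lemma 3.3 is easy to observe numerically. The sketch below (Python with NumPy; the helper is ours) computes \(a_{n,a,b}\) for the polynomial \(Q=x+x^{-1}+y+y^{-1}\) of the family (3) by averaging \(Q^{n}\) over a uniform grid on \(\mathbb{T}^{2}_{a,b}\); the average recovers the constant term exactly once the grid is finer than the degree of \(Q^{n}\) in each variable.

```python
import numpy as np

def a_n(n, a, b, N=64):
    """a_{n,a,b} = constant term of Q(x,y)^n, computed as the average of
    Q(x,y)^n over an N x N grid on the torus T^2_{a,b}."""
    t = np.arange(N) * 2 * np.pi / N
    x = a * np.exp(1j * t)[:, None]
    y = b * np.exp(1j * t)[None, :]
    Q = x + 1/x + y + 1/y
    return (Q ** n).mean().real

for (a, b) in [(1.0, 1.0), (2.0, 0.7), (0.5, 3.0)]:
    print([round(a_n(n, a, b)) for n in range(7)])
# every row prints [1, 0, 4, 0, 36, 0, 400]: the values binom(n, n/2)^2
# for even n, independent of (a, b), as Lemma 3.3 asserts
```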
Integrating both sides with respect to \(r,\) we get \[\tilde{\mathrm{m}}_{a,b}(r)=\tilde{\mathrm{m}}_{1,1}(r)+\tilde{f}(a,b),\qquad \text{for }|r|>R_{a,b,1,1},\] where \(\tilde{f}(a,b)\) is the integration constant which only depends on \(a\) and \(b.\) Taking the real part of both sides yields \[\mathrm{m}_{a,b}(r)=\mathrm{m}_{1,1}(r)+f(a,b),\qquad\text{for }|r|>R_{a,b,1,1}, \tag{16}\] where \(\mathrm{Re}(\tilde{f}(a,b))=f(a,b).\) Notice that \(\mathrm{m}_{a,b}(r)\) is harmonic on \(U_{a,b},\) the unbounded component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}\) which contains \(\{|r|>R_{a,b}\},\) and \(\mathrm{m}_{1,1}(r)+f(a,b)\) is also harmonic on \(U_{1,1},\) since \(f(a,b)\) is constant for \(a,b\) fixed. The equality (16) implies that \(\mathrm{m}_{a,b}(r)\) and \(\mathrm{m}(r)+f(a,b)\) coincide in the open neighbourhood \(|r|>R_{a,b,1,1}.\) Therefore, they must be equal in \(U_{a,b}\cap U_{1,1},\) that is \[\mathrm{Re}(\tilde{\mathrm{m}}_{a,b}(r))=\mathrm{m}_{a,b}(r)=\mathrm{m}(r)+f(a,b),\qquad\quad\text{for }r\in O_{a,b}:=U_{a,b}\cap U_{1,1} \tag{17}\] We now proceed to evaluate \(f(a,b)\) in terms of \(a,b.\) Since \(\mathcal{R}_{a,b}\) is compact for \(a,b>0,\) it is bounded for such \(a,b.\) Let \(0<\delta<1\) such that \(a,b>\delta.\) Let \(\mathcal{M}_{a,b}\) be the subset of \(\mathbb{R}_{>0}^{2}\) defined by \[\mathcal{M}_{a,b}=[a-\delta,a+\delta]\times[b-\delta,b+\delta].\] Note that \((a,b)\in\mathcal{M}_{a,b}.\) Since \(\mathcal{M}_{a,b}\) is compact, and the map \((\alpha,\beta)\mapsto R_{\alpha,\beta}\) is continuous for all \((\alpha,\beta)\) in \(\mathcal{M}_{a,b},\) we conclude that the subset \(\{R_{\alpha,\beta}:(\alpha,\beta)\in\mathcal{M}_{a,b}\}\) is compact in \(\mathbb{R}_{>0}.\) Then \(\tilde{R}_{a,b}:=\max_{(\alpha,\beta)\in\mathcal{M}_{a,b}}R_{\alpha,\beta}\) exists, and is finite. Now choose an \(R\in\mathbb{R}_{>0}\) such that \[R>\tilde{R}_{a,b}+R_{1,1}.\] The choice of \(R\) implies that, for \((\alpha,\beta)\in\mathcal{M}_{a,b},\)\(\tilde{\mathrm{m}}_{\alpha,\beta}(R)\) is holomorphic, and (16) yields \[\mathrm{m}_{\alpha,\beta}(R)=\mathrm{m}_{1,1}(R)+f(\alpha,\beta). \tag{18}\] Let \(A_{a,b,\delta}\subset\mathbb{C}^{2}\) be the poly-annulus \(A_{a,b,\delta}=A_{a,\delta}\times A_{b,\delta},\) where \(A_{a,\delta}=\{z\in\mathbb{C}:a-\delta<|z|<a+\delta\}\) and \(A_{b,\delta}=\{z\in\mathbb{C}:b-\delta<|z|<b+\delta\}.\) Note that \(\mathbb{T}_{a,b}^{2}\subset A_{a,b,\delta}.\) Since \[Q_{R}(x,y)\in\mathbb{C}\setminus(-\infty,0]\qquad\text{for }(x,y)\in A_{a,b,\delta},\] \(\log(Q_{R}(x,y))\) is holomorphic in \(A_{a,b,\delta},\) where \(\log\) is the principal branch of logarithm. Let \(\tilde{W}_{a,b}\) denote the set of all \((\alpha,\beta)\in\mathcal{M}_{a,b}\) such that \[\tilde{\text{m}}_{\alpha,\beta}(R)=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{ \alpha,\beta}^{2}}\log(Q_{R}(x,y))\frac{dx}{x}\frac{dy}{y}.\] Note that \(\tilde{W}_{a,b}\) is an open subset of \(\mathcal{M}_{a,b},\) and it also contains \((a,b).\) Next we compute the functions \(\alpha\frac{\partial\tilde{\text{m}}_{\alpha,\beta}(R)}{\partial\alpha}\) and \(\beta\frac{\partial\tilde{\text{m}}_{\alpha,\beta}(R)}{\partial\beta}.\) We only show here the computation of \(\alpha\frac{\partial\tilde{\text{m}}_{\alpha,\beta}(R)}{\partial\alpha}\) when \(\alpha\) belongs to an open subinterval of \((a-\delta,a+\delta)\) containing \(a,\) since the other case is analogous. 
Note that \(\tilde{\text{m}}_{\alpha,\beta}(R)\) and \(\log(Q_{R})\) are well-defined and finite-valued on \(\tilde{W}_{a,b}\) and \(A_{a,b,\delta},\) respectively. Therefore, we can consider their partial derivatives with respect to \(\alpha,\) and obtain

\[\alpha\frac{\partial\tilde{\text{m}}_{\alpha,\beta}(R)}{\partial\alpha} =\alpha\frac{\partial}{\partial\alpha}\left(\frac{1}{(2\pi i)^{2} }\int_{\mathbb{T}_{\alpha,\beta}^{2}}\log(Q_{R}(x,y))\frac{dx}{x}\frac{dy}{y}\right)\]
\[=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{\alpha,\beta}^{2}}\alpha \frac{\partial\log(Q_{R}(x,y))}{\partial x}\frac{\partial x}{\partial\alpha} \frac{dx}{x}\frac{dy}{y}\]
\[=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{\alpha,\beta}^{2}}x \frac{\partial_{x}Q_{R}(x,y)}{Q_{R}(x,y)}\frac{dx}{x}\frac{dy}{y} \tag{19}\]
\[=\frac{1}{(2\pi i)^{2}}\int_{|y|=\beta}\left(\int_{|x|=\alpha} \frac{\partial_{x}Q_{R}(x,y)}{Q_{R}(x,y)}dx\right)\frac{dy}{y},\]

where \(\partial_{x}=\frac{\partial}{\partial x},\) and the penultimate equality follows from the facts that \(x=\alpha e^{i\theta}\) and \(\theta\) does not depend on \(\alpha.\) For a fixed \(y_{0}\) such that \(|y_{0}|=\beta,\) the normalized inner integral

\[\frac{1}{2\pi i}\int_{|x|=\alpha}\frac{\partial_{x}Q_{R}(x,y_{0})}{Q_{R}(x,y_{0})}dx=Z_{\alpha, y_{0},R}^{1}-P_{\alpha,y_{0},R}^{1}\]

is an integer, where \(Z_{\alpha,y_{0},R}^{1}\) denotes the number of zeros (counting multiplicity) of the Laurent polynomial \(Q_{R}(x,y_{0})\) inside the circle \(\mathbb{T}_{\alpha}^{1},\) and \(P_{\alpha,y_{0},R}^{1}\) denotes the order of the pole of \(Q_{R}(x,y_{0})\) at \(x=0.\) Let \(\nu_{\alpha,R}^{1}(y_{0}):=Z_{\alpha,y_{0},R}^{1}-P_{\alpha,y_{0},R}^{1}.\) From Proposition 3.2 (when applied to the torus \(\mathbb{T}_{\alpha,\beta}^{2}\) with the roles of the two variables interchanged), it follows that \(\nu_{\alpha,R}^{1}(y)\) is constant for all \(y\) in \(\mathbb{T}_{\beta}^{1}.\) We define \(\nu_{\alpha,\beta,R}^{1}:=\nu_{\alpha,R}^{1}(y)\in\mathbb{Z},\) for all \(y\in\mathbb{T}_{\beta}^{1}.\) Therefore, (19) can be simplified to

\[\frac{\partial\tilde{\text{m}}_{\alpha,\beta}(R)}{\partial\alpha}=\frac{\nu_{ \alpha,\beta,R}^{1}}{\alpha}. \tag{20}\]

Similarly,

\[\frac{\partial\tilde{\text{m}}_{\alpha,\beta}(R)}{\partial\beta}=\frac{\nu_{ \alpha,\beta,R}^{2}}{\beta}, \tag{21}\]

where \(\nu_{\alpha,\beta,R}^{2}=Z_{\alpha,\beta,R}^{2}-P_{\alpha,\beta,R}^{2}.\) Here \(Z_{\alpha,\beta,R}^{2}\) and \(P_{\alpha,\beta,R}^{2}\) are similarly defined.
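The integer appearing in (19)-(21) is a winding number, and it can be computed numerically without locating any zeros or poles. A minimal sketch (Python with NumPy; the helper `nu1` and the sample values are ours), using the family (3) in the convention \(Q_{r}=r-Q\) of this section:

```python
import numpy as np

def nu1(Qr, alpha, u, N=100_000):
    """Z - P for x -> Qr(x, u) inside |x| = alpha, via the argument
    principle: the winding number about 0 of the curve Qr(alpha*e^{it}, u)."""
    t = np.arange(N) * 2 * np.pi / N
    vals = Qr(alpha * np.exp(1j * t), u)
    increments = np.angle(np.roll(vals, -1) / vals)  # small phase steps
    return int(round(increments.sum() / (2 * np.pi)))

QR = lambda x, y: 10.0 - x - 1/x - y - 1/y   # R = 10 lies outside R_{1.5, 0.8}
print(nu1(QR, 1.5, 0.8))  # 0: one zero inside T^1_{1.5} minus the simple pole at 0
```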
Since the integer-valued functions \(\nu^{1}_{\alpha,\beta,R}\) and \(\nu^{2}_{\alpha,\beta,R}\) depend on \(\alpha\) and \(\beta\) continuously, they are constant on \(\tilde{W}_{a,b}\subset\ \operatorname{int}(\mathcal{M}_{a,b}).\) In other words, \[\nu^{1}_{a,b,R}=\nu^{1}_{\alpha,\beta,R},\quad\text{and}\quad\nu^{2}_{a,b,R}= \nu^{2}_{\alpha,\beta,R},\qquad\text{for all }(\alpha,\beta)\in\tilde{W}_{a,b}.\] Integrating (20) with respect to \(\alpha\) and then taking the real part yields \[\operatorname{m}_{\alpha,\beta}(R)=\operatorname{m}_{1,1}(R)+\nu^{1}_{a,b,R} \log\alpha+F(\beta),\] where \(F\) is a function of \(\beta\) which does not depend on \(\alpha\) and \(R.\) A similar process when applied to (21) implies that \[\operatorname{m}_{\alpha,\beta}(R)=\operatorname{m}_{1,1}(R)+\nu^{2}_{a,b,R} \log\beta+G(\alpha),\] where \(G\) is independent of \(\beta\) and \(R.\) From the above equalities and (18), we conclude that \[\operatorname{m}_{\alpha,\beta}(R)=\operatorname{m}_{1,1}(R)+\nu^{1}_{a,b,R} \log\alpha+\nu^{2}_{a,b,R}\log\beta+c, \tag{22}\] for all \((\alpha,\beta)\in\mathcal{M}_{a,b},\) and some constant \(c\) independent of \(\alpha,\beta,R.\) As \(|R|>R_{1,1},\) evaluating (22) at \(\alpha=1,\beta=1\) we obtain \(c=0.\) Then, combining (17) and (22) together, we derive that \[f(a,b)=\nu^{1}_{a,b,R}\log a+\nu^{2}_{a,b,R}\log b,\qquad\quad\text{for }r\in O_{a,b}. \tag{23}\] Since \(f(a,b)\) in (17) is independent of \(r,\) comparing (23) with (17) we obtain that, for \(j=1,2,\)\(\nu^{j}_{a,b,R}\) is constant in \(O_{a,b},\) i.e. \[\nu^{j}_{a,b,R}=\nu^{j}_{a,b,r},\qquad\text{when }r\in O_{a,b},j\in\{1,2\}.\] This concludes the proof of Theorem 1.2, namely \[\operatorname{m}_{a,b}(r)=\operatorname{m}_{1,1}(r)+\nu^{1}_{a,b,r}\log a+\nu ^{2}_{a,b,r}\log b,\qquad\quad\text{for }r\in O_{a,b}=U_{a,b}\cap U_{1,1}.\] ### Invariance of \(\nu^{2}_{w,b,r}\) It now remains to prove Proposition 3.2, which tells us that, for all \(w\in\mathbb{T}^{1}_{a},\)\(\varrho^{2}_{a,b,r}(w)\) is constant. Moreover, (13) implies that the constant is \(\nu^{2}_{a,b,r}+v_{2},\) where \(v_{2}\) is the largest power of \(y^{-1}\) in \(Q_{r}(x,y).\) In particular, we will show that \(\nu^{2}_{w,b,r}=\nu^{2}_{a,b,r}\) for all \(w\in\mathbb{T}^{1}_{a},\) where \(\nu^{2}_{w,b,r}\) is given in (5). Before proceeding with the proof, we first consider the _resultant_ of the polynomial \(Q_{r}\) with respect to \(y.\) Recall that \[Q_{r}(x,y)=y^{-v_{2}}Q^{y}_{F,r}(x)\prod_{j=1}^{d_{y}}(y-y_{j,r}(x)),\] where \(y_{j,r}(x)\) are algebraic functions in \(x,\) and \(v_{2}\) is as defined above. Here and in what follows for the rest of this section, we denote \(Q_{F,r}(x):=Q^{y}_{F,r}(x),d:=d_{y}.\) Let \(D_{r}(x)\) denote the _resultant_ of \(Q_{r}(x,y)\) and \(\frac{\partial}{\partial y}Q_{r}(x,y)\) with respect to \(y.\) Then the algebraic solutions \(y_{j,r}\) are holomorphic in some neighbourhood of \(x\) for any \(x\in\mathbb{C}\setminus S_{r},\) where \[S_{r}=\{z\in\mathbb{C}:Q_{F,r}(z)D_{r}(z)=0\} \tag{24}\] is a finite subset of \(\mathbb{C}.\) Let \(\mathbf{y}_{r}(x)\) be the \(d\)-valued global analytic function, with \(d\)-branches \(y_{1,r},\ldots y_{d,r},\) such that \(Q_{r}(x,\mathbf{y}_{r}(x))=0.\) Then \(S_{r}\) is called the set of _critical points_ of \(\mathbf{y}_{r}(x).\) If \(x^{\prime}\) is a critical point of \(\mathbf{y}_{r}(x),\) then \(x^{\prime}\) is either an algebraic branch point or a pole (for more details see [19]). 1. If \(x^{\prime}\in S_{r}\) is an algebraic branch point, i.e. 
when \(D_{r}(x^{\prime})=0,\) then, in a sufficiently small neighbourhood \(U_{x^{\prime}}\) of \(x^{\prime}\) (which does not contain any other critical points), the multi-set \(\{y_{1,r},\ldots,y_{d,r}\}\) can be decomposed into a number of non-intersecting cycles

\[\{f_{1}(x),\ldots,f_{k_{1}}(x)\},\ldots,\{f_{k_{1}+\cdots+k_{t-1}+1}(x),\ldots,f_{k_{1}+\cdots+k_{t}}(x)\},\]

such that \(\sum_{n=1}^{t}k_{n}=d,\) and \(f_{j}(x)=y_{l,r}(x)\) for some \(j,l\in\{1,\ldots,d\}.\) The elements of the first cycle can be represented as convergent Puiseux series of the local parameter \(\tau=(x-x^{\prime})^{1/k_{1}}\) in a small enough neighbourhood of \(\tau=0.\) The elements of the rest of the cycles follow analogous convergent series representations. Therefore, a single turn around \(x^{\prime}\) in a circle \(C^{\prime}\subset U_{x^{\prime}}\) converts the Puiseux series of elements in one cycle into each other in a cyclic order, i.e. \(f_{1}\to f_{2}\rightarrow\cdots\to f_{k_{1}}\to f_{1}\) etc.

2. If \(x^{\prime}\in S_{r}\) is a pole, that is when \(Q_{F,r}(x^{\prime})=0,\) then, substituting \(y\) with \(yQ_{F,r}(x),\) we return to the first case, where the local parameter of the convergent series is \(\tau=1/x.\)

Recall that, for \(w\in\mathbb{T}_{a}^{1},\)\(\varrho_{a,b,r}^{2}(w)\) denotes the number of roots of \(Q_{r}(w,y)\) lying inside the circle \(\mathbb{T}_{b}^{1}.\) We are now ready to prove Proposition 3.2.

Proof of Proposition 3.2.: First fix an arbitrary \(r\in\mathbb{C}\setminus\mathcal{R}_{a,b}.\) Note that \(\varrho_{a,b,r}^{2}\) defines a function from \(\mathbb{T}_{a}^{1}\) to \(\mathbb{Z}\) via the map \(x\mapsto\varrho_{a,b,r}^{2}(x),\) where \(\mathbb{Z}\) is equipped with the discrete topology. If \(x_{0}\in\mathbb{T}_{a}^{1}\) is not a critical point of \(\mathbf{y}_{r},\) i.e. \(x_{0}\notin S_{r},\) where \(S_{r}\) is given in (24), then, for all \(j=1,\ldots,d,\)\(y_{j,r}\) is holomorphic in a sufficiently small neighbourhood \(U_{x_{0}}\) of \(x_{0}\) which does not contain any critical point. Therefore, \(|y_{j,r}(x)|\) is continuous in \(U_{x_{0}}.\) Since \(Q_{r}\) does not vanish on \(\mathbb{T}_{a,b}^{2},\) we have \(|y_{j,r}(x)|\neq b\) for all \(x\in U_{x_{0}}\cap\mathbb{T}_{a}^{1}\) and all \(j.\) Therefore, if, for any \(l=1,\ldots,d,\)\(|y_{l,r}(x_{0})|<b\) (resp. \(|y_{l,r}(x_{0})|>b\)), then, for all \(x\in U_{x_{0}}\cap\mathbb{T}_{a}^{1},\)\(|y_{l,r}(x)|<b\) (resp. \(|y_{l,r}(x)|>b\)). In other words, \(\varrho_{a,b,r}^{2}(x)\) is constant for all \(x\in U_{x_{0}}\cap\mathbb{T}_{a}^{1}.\) In particular, \(\varrho_{a,b,r}^{2}\) is continuous at \(x_{0}.\)

If \(x_{1}\in\mathbb{T}_{a}^{1}\cap S_{r},\) then there exists a sufficiently small neighbourhood \(U_{x_{1}}\) of \(x_{1}\) which does not contain any critical point except \(x_{1}.\) Then the convergent Puiseux series expansions of \(y_{1,r},\ldots,y_{d,r}\) in \(U_{x_{1}}\) imply that, for all \(j,\)\(|y_{j,r}|\) is continuous in \(U_{x_{1}},\) and this brings us to the previous case. From properties \(\mathbf{(1)},\)\(\mathbf{(2)}\) and the above discussion, we conclude that, in the neighbourhood \(U_{x_{1}}\) of \(x_{1},\)\(\varrho_{a,b,r}^{2}\) is constant.
This implies that \(\varrho_{a,b,r}^{2}\) is continuous at \(x_{1}.\) We now have a continuous function \(\varrho_{a,b,r}^{2}\) from a connected set \(\mathbb{T}_{a}^{1}\) to a discrete set \(\mathbb{Z}.\) Since the only connected subsets of \(\mathbb{Z}\) are singletons, we derive that \(\varrho_{a,b,r}^{2}\) is constant in \(\mathbb{T}_{a}^{1},\) thus completing the proof of the statement.

### A brief discussion on generalized Mahler measure in terms of periods

In this section, our aim is to describe the phenomena discussed above in terms of periods when the curve defined by the polynomial has non-zero genus. Rodriguez-Villegas [7], Deninger [6] et al. showed that, for a family of non-zero polynomials \(\{Q_{r}:r\in\mathbb{C}\},\) the quantity \(\frac{d\tilde{\mathrm{m}}(Q_{r})}{dr}\) is in fact a period of the non-singular curve \(C_{r}\) associated to \(Q_{r},\) when the genus of \(C_{r}\) is non-zero for generic \(r.\) Note that, for generic \(r,\)\(C_{r}\) is a non-singular hypersurface in \(\mathbb{P}^{2}.\) Here, using Griffiths' method [20], we extend their idea to the arbitrary torus case. In other words, we show that even if we change the integration torus to \(\mathbb{T}_{a,b}^{2},\) for \(r\in U_{a,b},\)

\[\frac{d\tilde{\mathrm{m}}_{a,b}(Q_{r})}{dr}\quad\text{is a period of $C_{r}$}.\]

We use this to derive (17). A typical holomorphic differential (or holomorphic \(1\)-form) of \(C_{r}\) can be expressed as the _Poincare residue_ of

\[\psi_{1,r}=\frac{f(x,y)dxdy}{Q_{r}(x,y)},\qquad\qquad\deg f\leq\deg Q_{r}-3,\]

along \(C_{r}.\) In other words,

\[\omega_{1,r}=\frac{f(x,y)dx}{\frac{\partial Q_{r}}{\partial y}}=\frac{f(x,y)dy }{\frac{\partial Q_{r}}{\partial x}},\qquad\deg f\leq\deg Q_{r}-3\]

defines a holomorphic \(1\)-form on \(C_{r}.\) Therefore, the Poincare residue of

\[\omega_{r}=\frac{1}{Q_{r}}\frac{dxdy}{xy}=\frac{1}{r-Q(x,y)}\frac{dx}{x}\frac{ dy}{y}\]

along \(C_{r}\) is a holomorphic \(1\)-form, and we denote it by \(\mathrm{Res}(\omega_{r}).\) Now \(\Gamma_{a,b}:=\mathbb{T}_{a,b}^{2}\) defines a \(2\)-cycle in \(\mathbb{P}^{2}.\) Additionally, for \(r\notin\mathcal{R}_{a,b},\)\(\Gamma_{a,b}\) defines a \(2\)-cycle in \(\mathbb{P}^{2}\setminus C_{r}.\) Since _any \(2\)-cycle in \(\mathbb{P}^{2}\setminus C_{r}\) is homologous to a tube over a \(1\)-cycle on \(C_{r}\)_[20], we have

\[\int_{\gamma_{a,b}}\mathrm{Res}(\omega_{r})=\frac{1}{2\pi i}\int_{\tau( \gamma_{a,b})}\omega_{r},\]

where \(\gamma_{a,b}\in H_{1}(C_{r},\mathbb{Z})\) is a \(1\)-cycle lying on \(C_{r},\) and \(\tau(\gamma_{a,b}),\) a tube over \(\gamma_{a,b},\) is a \(2\)-cycle homologous to \(\Gamma_{a,b}\) in \(H_{2}(\mathbb{P}^{2}\setminus C_{r},\mathbb{Z}).\) We know that \(\int_{\gamma}\mathrm{Res}(\omega_{r})\) is a period of \(C_{r}\) for \(\gamma\in H_{1}(C_{r},\mathbb{Z}).\) Therefore, we have shown that, indeed, for \(r\in\mathbb{C}\setminus\mathcal{R}_{a,b},\)

\[\mathcal{F}_{a,b}(r)=\frac{1}{(2\pi i)^{2}}\int_{\Gamma_{a,b}}\omega_{r}= \frac{1}{(2\pi i)^{2}}\int_{\tau(\gamma_{a,b})}\omega_{r}=\frac{1}{2\pi i}\int _{\gamma_{a,b}}\mathrm{Res}(\omega_{r})\quad\text{is a period of $C_{r}$},\]

where \(\mathcal{F}_{a,b}(r)\) is given in (15).
As a result, it satisfies one of the Picard-Fuchs differential equations associated to \(C_{r},\) in particular the Picard-Fuchs differential equation associated to the holomorphic \(1\)-form \(\mathrm{Res}(\omega_{r}).\) Since \(\mathcal{F}_{a,b}(r)\) is holomorphic for \(|r|>R,\) it agrees with a holomorphic solution of the corresponding differential equation on the unbounded component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}\) containing an open neighbourhood of \(r=\infty,\) namely \(\{r:|r|>R\}.\) Recall that \(U_{a,b}\) denotes the unbounded component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) Following the arguments in [7] and [13], we conclude that, in fact, for \(r\in U_{1,1},\)\(\mathcal{F}_{1,1}(r)\) is holomorphic. This produces the equality

\[\mathcal{F}_{a,b}(r)=\mathcal{F}_{1,1}(r)\qquad\quad\text{for $r\in U_{a,b} \cap U_{1,1}$}. \tag{25}\]

Integrating (25) with respect to \(r,\) and then taking the real part and using the harmonicity of the Mahler measure, we conclude that, for \(r\in U_{a,b}\cap U_{1,1},\)

\[\mathrm{m}_{a,b}(r)=\mathrm{m}_{1,1}(r)+f(a,b),\]

where \(f\) is a function of \(a\) and \(b.\)

## 4. Proof of Theorem 1.3

In this section, our goal is to provide the proof of Theorem 1.3, and eventually evaluate \(\mathrm{m}_{a,b}(Q_{r})\) when \(r\in\mathbb{C}\setminus(\mathcal{R}_{a,b}\cup U_{a,b}).\) Our proof uses Proposition 3.2 to conclude that, for all \(r\) from a small enough neighbourhood in one of the bounded regions under consideration, certain properties of the roots of \(Q_{r}(a,y)\) or \(Q_{r}(x,b)\) remain invariant. This, combined with the properties of harmonic functions along with Rouché's theorem, gives us the desired results.

Recall that \(Q_{r}(x,y),\) considered as a polynomial in \(y\) of degree \(d_{y}\) with coefficients in \(\overline{\mathbb{C}(x)},\) can be factored in \(\overline{\mathbb{C}(x)}[y]\) as

\[Q_{r}(x,y)= (y)^{-v_{2}}\left(Q_{F,r}^{y}(x)(y)^{d_{y}}+Q_{f,r}^{y}(x)+\sum_{ j=1}^{d_{y}-1}a_{j,r}^{y}(x)(y)^{j}\right) \tag{27}\]
\[= (y)^{-v_{2}}Q_{F,r}^{y}(x)\prod_{j=1}^{d_{y}}(y-y_{j,r}(x)), \tag{26}\]

where the \(y_{j,r}(x)\) are algebraic functions in \(x,\)\(v_{2}\) is the order of the pole of \(Q_{r}(a,y)\) at \(y=0,\) and \(Q_{F,r}^{y}(x)\) and \(Q_{f,r}^{y}(x)\) are the respective leading and "constant" coefficient with respect to the variable \(y.\) Similarly, we can factor \(Q_{r},\) considered as a polynomial in \(x\) of degree \(d_{x}\) with coefficients in \(\overline{\mathbb{C}(y)},\) as

\[Q_{r}(x,y)= (x)^{-v_{1}}\left(Q_{F,r}^{x}(y)(x)^{d_{x}}+Q_{f,r}^{x}(y)+\sum_{ j=1}^{d_{x}-1}a_{j,r}^{x}(y)(x)^{j}\right)\]
\[= (x)^{-v_{1}}Q_{F,r}^{x}(y)\prod_{j=1}^{d_{x}}(x-x_{j,r}(y)),\]

where the \(x_{j,r}(y)\) are algebraic functions in \(y,\)\(v_{1}\) is the order of the pole of \(Q_{r}(x,b)\) at \(x=0,\) and \(Q_{F,r}^{x}(y)\) and \(Q_{f,r}^{x}(y)\) are the respective leading and "constant" coefficient with respect to the variable \(x.\) Let \(Z_{F,r}^{u}=\{z\in\mathbb{C}:Q_{F,r}^{u}(z)=0\},\)\(Z_{f,r}^{u}=\{z\in\mathbb{C}:Q_{f,r}^{u}(z)=0\},\) where \(u=x\) or \(y,\) and \(V_{a,b,r_{0}}\) denotes the bounded open connected component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}\) containing \(r_{0}.\) Since the proofs of the statements in (**i**) and (**ii**) of Theorem 1.3 are similar, here we restrict ourselves to proving statement (**i**).

Proof of Theorem 1.3.: In (26), we see that the polynomial \(Q_{r}(x,y)\) can be expressed in terms of \(y_{j,r}(x)\) (algebraic functions in \(x\)), \(v_{2},Q_{F,r}^{y}(x)\) and
\(Q_{f,r}^{y}(x).\) For simplicity we denote

\[Q_{F,r}(x):=Q_{F,r}^{y}(x),Q_{f,r}(x):=Q_{f,r}^{y}(x),\text{ and }d:=d_{y}.\]

Proposition 3.2 and the assumption in (**i**) in the statement of Theorem 1.3 yield that \(\varrho_{a,b,r_{0}}^{2}(x)=d\) or \(0\) for all \(x\in\mathbb{T}_{a}^{1}.\) In particular, \(\varrho_{a,b,r_{0}}^{2}(a)=d\) or \(0,\) depending on whether all the roots of \(Q_{r}(a,y)\) lie entirely inside or entirely outside the circle \(\mathbb{T}_{b}^{1}.\) The following three cases can occur when \(\varrho_{a,b,r_{0}}^{2}(a)=d\) or \(0.\)

**Case 1**: For all \(x\in\mathbb{T}_{a}^{1},\)

\[Q_{F,r_{0}}(x)\cdot Q_{f,r_{0}}(x)\neq 0.\]

**Case 2**: \(Q_{F,r_{0}}^{y}\) vanishes on \(\mathbb{T}_{a}^{1},\) but \(Q_{f,r_{0}}^{y}\) does not, i.e.

\[Z_{F,r_{0}}^{y}\cap\mathbb{T}_{a}^{1}\neq\varnothing,\quad\text{and}\quad Z_{f,r _{0}}^{y}\cap\mathbb{T}_{a}^{1}=\varnothing,\]

**Case 3**: \(Q_{f,r_{0}}^{y}\) vanishes on \(\mathbb{T}_{a}^{1},\) but \(Q_{F,r_{0}}^{y}\) does not, i.e.

\[Z_{f,r_{0}}^{y}\cap\mathbb{T}_{a}^{1}\neq\varnothing,\quad\text{and}\quad Z_{F, r_{0}}^{y}\cap\mathbb{T}_{a}^{1}=\varnothing.\]

**Case 1**: Since

\[Q_{F,r_{0}}(x)\cdot Q_{f,r_{0}}(x)\neq 0\qquad\text{for all }x\in\mathbb{T}_{a}^{1},\]

the discussion preceding the proof of Proposition 3.2 implies that the algebraic functions \(y_{j,r_{0}}(x)\) may have only an algebraic branch point at \(x=a\in\mathbb{T}_{a}^{1}.\) From Proposition 3.2 we know that \(\varrho_{a,b,r_{0}}^{2}\) is constant on \(\mathbb{T}_{a}^{1}.\) Therefore, we can in fact assume that \(x=a\) is not a branch point of \(y_{j,r_{0}}(x)\) for all \(j.\) Indeed, if \(x=a\) is a branch point, then there exists an \(x_{0}\in\mathbb{T}_{a}^{1}\) close enough to \(a\) such that \(x_{0}\notin S_{r_{0}},\) where \(S_{r_{0}}\) is given in (24). We replace \(a\) with \(x_{0}\) in the statement, and proceed. Here we provide a proof of **Case 1** when \(\varrho_{a,b,r_{0}}^{2}(a)=d,\) since the case when \(\varrho_{a,b,r_{0}}^{2}(a)=0\) is similar. Recall that the condition \(\varrho_{a,b,r_{0}}^{2}(a)=d\) (resp. \(\varrho_{a,b,r_{0}}^{2}(a)=0\)) is equivalent to the condition that all the roots of \(Q_{r_{0}}(a,y)\) lie inside (resp. outside) the circle \(\mathbb{T}_{b}^{1}.\)

The polynomial \(Q_{r}(x,y)\) has additional structure: \(Q_{r}(x,y)=r-Q(x,y)\) where \(Q\) does not contain any constant term, and \(r\) is the constant coefficient in \(Q_{r}.\) Therefore, after multiplying \(Q_{r}\) by \(y^{v_{2}},\) we find from (26) that one, and only one, among the set of the coefficients

\[\operatorname{Coeff}_{Q_{r},x}:=\{Q_{F,r}(x),Q_{f,r}(x),a_{1,r}(x),\ldots,a_{ d-1,r}(x)\}\subset\overline{\mathbb{C}(x)}\]

contains \(r\) as its constant term, namely the coefficient of \(y^{v_{2}}\) in \(y^{v_{2}}Q_{r}(x,y).\) Let \(b_{v_{2},r}(x)\) denote this coefficient. Then \(b_{v_{2},r}(x)\in\operatorname{Coeff}_{Q_{r},x},\) and \(b_{v_{2},r}(x)-b_{v_{2},r_{0}}(x)=r-r_{0}.\) Since all the coefficients, except \(b_{v_{2},r},\) do not depend on \(r\) by construction, the above discussion further implies that

\[\{|Q_{F,r}(x)-Q_{F,r_{0}}(x)|\,,|Q_{f,r}(x)-Q_{f,r_{0}}(x)|\}\cup \{|a_{j,r}(x)-a_{j,r_{0}}(x)|:1\leq j\leq d-1\}\]
\[= \,\{0,|b_{v_{2},r}(x)-b_{v_{2},r_{0}}(x)|\}=\{0,|r-r_{0}|\}\,.
\tag{28}\]

In other words, if, for example, \(Q_{F,r}(x)=b_{v_{2},r}(x),\) then

\[Q_{F,r}(x)-Q_{F,r_{0}}(x)=r-r_{0},\quad Q_{f,r}(x)=Q_{f,r_{0}}(x),\quad\text{ and, for all }j,\ a_{j,r}(x)=a_{j,r_{0}}(x).\]

Next we investigate the relation between \(|Q_{r}(a,y)-Q_{r_{0}}(a,y)|\) and \(|Q_{r_{0}}(a,y)|\) when \(y\) takes values in certain sufficiently small circles. Let

\[\epsilon_{ij}=\frac{1}{b}\left|y_{i,r_{0}}(a)-y_{j,r_{0}}(a)\right|\qquad\text {and}\qquad\epsilon_{k}=\frac{1}{b}\min_{t\in\mathbb{T}_{b}^{1}}\left|y_{k,r_ {0}}(a)-t\right|.\]

The roots of \(Q_{r_{0}}(a,y)\) lie inside the circle \(\mathbb{T}_{b}^{1},\) and we may assume that they are distinct (otherwise we run the argument below with the distinct values among them, and Rouché's theorem counts zeros with multiplicities). Then the quantities \(\epsilon_{ij},\epsilon_{k}\) are non-zero for any \(i,j,k\in\{1,\ldots,d\}\) such that \(i\neq j.\) We denote

\[\Upsilon=\min_{\begin{subarray}{c}1\leq i<j\leq d\\ 1\leq k\leq d\end{subarray}}\left\{\epsilon_{ij},\epsilon_{k}\right\}.\]

Note that \(\Upsilon>0.\) Let \(\epsilon\in(0,\Upsilon)\cap(0,1)\,.\) We define the closed discs

\[D_{j}=\{z:|z-y_{j,r_{0}}(a)|\leq\epsilon\},\quad\text{for }j=1,\ldots,d.\]

Let \(C_{j}=\partial D_{j}\) be the boundary of \(D_{j}\). The choice of \(\epsilon\) then confirms that the discs \(D_{j}\) are disjoint and \(Q_{r_{0}}(a,y)\) does not vanish on \(C_{j}\). This implies \(\psi_{j,\epsilon,r_{0}}:=\min_{y\in C_{j}}\left|Q_{r_{0}}(a,y)\right|\) is positive for each \(j\). Let \(\delta_{j,r_{0},\epsilon}:=\frac{\psi_{j,\epsilon,r_{0}}}{(b+1)^{d}}\). Then, for \(y\in C_{j}\), and \(r\in V_{a,b,r_{0}}\) such that

\[\left|r-r_{0}\right|<\delta_{j,r_{0},\epsilon},\]

we have

\[\left|Q_{r}(a,y)-Q_{r_{0}}(a,y)\right|\]
\[= \left|\left(Q_{F,r}(a)-Q_{F,r_{0}}(a)\right)(y)^{d}+\left(Q_{f,r} (a)-Q_{f,r_{0}}(a)\right)+\sum_{j=1}^{d-1}\left(a_{j,r}(a)-a_{j,r_{0}}(a) \right)(y)^{j}\right|\]
\[\leq |r-r_{0}|\max\{1,|y|^{d}\}\leq(b+1)^{d}|r-r_{0}|<\psi_{j,\epsilon,r_{0}}\leq\left|Q_{r_{0}}(a,y)\right|,\]

where the first inequality follows from (28) (exactly one of the coefficient differences is non-zero, and it equals \(r-r_{0}\)), and the second from the fact that \(|y|\leq b+\epsilon<b+1\) on \(C_{j}\). This implies that, for \(j=1,\ldots,d\),

\[\left|Q_{r}(a,y)-Q_{r_{0}}(a,y)\right|<\left|Q_{r_{0}}(a,y)\right|\]

on \(C_{j}\). Therefore, it follows from Rouché's theorem that \(Q_{r}(a,y)\) and \(Q_{r_{0}}(a,y)\) have the same number of roots (counting multiplicities) in the interior of \(D_{j}\) when \(|r-r_{0}|<\delta_{j,r_{0},\epsilon}\). Moreover, for

\[\delta(\epsilon,r_{0})=\min_{1\leq j\leq d}\delta_{j,r_{0},\epsilon}>0,\]

the choice of \(\epsilon\) implies that, when \(|r-r_{0}|<\delta(\epsilon,r_{0})\), all the roots of \(Q_{r}(a,y)\) lie entirely inside the circle \(\mathbb{T}^{1}_{b}\). When \(|r-r_{0}|<\delta(\epsilon,r_{0})\), another application of Proposition 3.2 yields that all the roots of \(Q_{r}(x,y)\) lie inside \(\mathbb{T}^{1}_{b}\) for every \(x\in\mathbb{T}^{1}_{a}\). Following the discussion in Section 2.4 regarding the Mahler measure over arbitrary tori, we conclude that, for \(r\in\{z:|z-r_{0}|<\delta(\epsilon,r_{0})\}\subset V_{a,b,r_{0}}\),

\[\mathrm{m}_{a,b}(Q_{r}(x,y)) =\mathrm{m}_{a,b}\left((y)^{-v_{2}}Q_{F,r}(x)\prod_{j=1}^{d}(y-y_{ j,r}(x))\right)\]
\[=\mathrm{m}_{a}(Q_{F,r}(x))-v_{2}\log b+d\log b.
\tag{29}\]

Similarly, when all roots of \(Q_{r_{0}}(a,y)\) lie outside the circle \(\mathbb{T}^{1}_{b}\), we have, for \(r\in\{z:|z-r_{0}|<\delta(\epsilon,r_{0})\}\subset V_{a,b,r_{0}}\),

\[\mathrm{m}_{a,b}(Q_{r}(x,y))= \mathrm{m}_{a}(Q_{F,r}(x))-v_{2}\log b+\mathrm{m}_{a}(Q_{f,r}(x) )-\mathrm{m}_{a}(Q_{F,r}(x)) \tag{30}\]
\[= \mathrm{m}_{a}(Q_{f,r}(x))-v_{2}\log b.\]

Recall that \(\nu^{2}_{a,b,r}\) denotes the difference between the number of zeros (counting multiplicity) of \(Q_{r}(a,y)\) inside \(\mathbb{T}^{1}_{b}\) and the order of the pole of \(Q_{r}(a,y)\) at \(y=0\). Then the above discussion implies that, for \(r\in\{z:|z-r_{0}|<\delta(\epsilon,r_{0})\}\subset V_{a,b,r_{0}}\),

\[\nu^{2}_{a,b,r}=\nu^{2}_{a,b,r_{0}}=\varrho^{2}_{a,b,r_{0}}(x)-v_{2}=d-v_{2} \text{ or }-v_{2}.\]

Since \(\mathrm{m}_{a}(Q_{F,r}(x))\) and \(\mathrm{m}_{a}(Q_{f,r}(x))\) are harmonic, and \(\mathrm{m}_{a,b}(Q_{r}(x,y))\) is harmonic for all \(r\in V_{a,b,r_{0}}\setminus\mathcal{S}_{a,b,r_{0}}\) (where \(\mathcal{S}_{a,b,r_{0}}\) is a finite set containing all the \(r\in V_{a,b,r_{0}}\) such that \(Q_{r}(x,y)\) is singular), the equalities in (29) and (30) can be extended to the larger set \(V_{a,b,r_{0}}\setminus\mathcal{S}_{a,b,r_{0}}\) using the harmonicity of the Mahler measure. In other words, for \(r,r_{0}\in V_{a,b,r_{0}}\setminus\mathcal{S}_{a,b,r_{0}}\),

\[\mathrm{m}_{a,b}(Q_{r})-\nu_{a,b,r_{0}}^{2}\log b=\left\{\begin{array}{ll} \mathrm{m}_{a}(Q_{F,r}(x))&\mbox{all roots of $Q_{r_{0}}(a,y)$ lie inside $\mathbb{T}_{b}^{1}$,}\\ \mathrm{m}_{a}(Q_{f,r}(x))&\mbox{all roots of $Q_{r_{0}}(a,y)$ lie outside $\mathbb{T}_{b}^{1}$.}\end{array}\right. \tag{31}\]

By continuity, (31) holds for all \(r\in V_{a,b,r_{0}}\), and this concludes the proof of **Case 1**. Recall that \(Z_{F,r}^{y}=\{z\in\mathbb{C}:Q_{F,r}^{y}(z)=0\}\), and \(Z_{f,r}^{y}=\{z\in\mathbb{C}:Q_{f,r}^{y}(z)=0\}\).

**Case 2**: If

\[Z_{F,r_{0}}^{y}\cap\mathbb{T}_{a}^{1}\neq\varnothing,\quad\mbox{and}\quad Z_{ f,r_{0}}^{y}\cap\mathbb{T}_{a}^{1}=\varnothing,\]

then there exist \(x^{\prime}\in Z_{F,r_{0}}^{y}\cap\mathbb{T}_{a}^{1}\) and an \(l\in\{1,\ldots,d_{y}\}\) such that \(y_{l,r_{0}}(x)\) has a pole at \(x^{\prime}\). Then Proposition 3.2 and the conditions in the statement of Theorem 1.3 imply that all the roots of \(Q_{r_{0}}(a,y)\) lie outside the circle \(\mathbb{T}_{b}^{1}\), and we can choose an \(x_{0}\in\mathbb{T}_{a}^{1}\) in a sufficiently small neighbourhood of \(x^{\prime}\), such that \(x_{0}\) is not a pole of \(y_{j,r_{0}}\) for all \(j\). Such a choice is possible since the set of critical points \(S_{r_{0}}\) of the global analytic function \(\mathbf{y}_{r_{0}}\) is a finite set. Then a similar argument as in **Case 1** implies that, for all \(r\in V_{a,b,r_{0}}\),

\[\mathrm{m}_{a,b}(Q_{r})-\nu_{a,b,r}^{2}\log b=\mathrm{m}_{a}(Q_{f,r}^{y}(x)).\]

**Case 3**: If

\[Z_{f,r_{0}}^{y}\cap\mathbb{T}_{a}^{1}\neq\varnothing,\quad\mbox{and}\quad Z_{ F,r_{0}}^{y}\cap\mathbb{T}_{a}^{1}=\varnothing,\]

then there exist \(x^{\prime\prime}\in Z_{f,r_{0}}^{y}\cap\mathbb{T}_{a}^{1}\) and a \(p\in\{1,\ldots,d_{y}\}\) such that \(y_{p,r_{0}}(x)\) has a zero at \(x^{\prime\prime}\). Again, Proposition 3.2 and the conditions in the statement of Theorem 1.3 imply that all the roots of \(Q_{r_{0}}(a,y)\) lie inside the circle \(\mathbb{T}_{b}^{1}\), and we can choose an \(x_{1}\in\mathbb{T}_{a}^{1}\) such that \(x_{1}\notin S_{r_{0}}\cup Z_{f,r_{0}}^{y}\) and \(Q_{r_{0}}(x_{1},y)\) has all its roots inside \(\mathbb{T}_{b}^{1}\).
With these conditions, we have, for all \(r\in V_{a,b,r_{0}}\),

\[\mathrm{m}_{a,b}(Q_{r})-\nu_{a,b,r}^{2}\log b=\mathrm{m}_{a}(Q_{F,r}^{y}(x)).\]

This concludes the proof of statement (**i**). Statement (**ii**) follows from an analogous argument.

## 5. Generalized Mahler measure of a family of polynomials

In this section, we consider the family of polynomials

\[\left\{Q_{r}(x,y)=x+\frac{1}{x}+y+\frac{1}{y}+r:r\in\mathbb{C}\right\}.\]

Boyd [5], Deninger [6], Rodriguez-Villegas [7], Lalin [9], Rogers and Zudilin [8] et al. have successfully evaluated the (standard) Mahler measure of \(Q_{r}\) for different values of \(r\) in terms of special values of Dirichlet \(L\)-functions, \(L\)-functions of elliptic curves, special values of the Bloch-Wigner dilogarithm, etc. Our aim here is to apply Theorems 1.2 and 1.3 to evaluate the generalized Mahler measure of \(Q_{r}\). Before proceeding with this evaluation, we recall some notation associated to the considered family of polynomials for the reader's convenience.

1. The map in (4) is defined in this case as

\[q:\mathbb{T}^{2}_{a,b}\longrightarrow\mathbb{C},\qquad(x,y)\mapsto x+\frac{1}{x}+y+\frac{1} {y}.\]

2. The image of \(q\) is denoted by \(\mathcal{R}_{a,b}.\) The elements of \(\mathcal{R}_{a,b}\) are of the form

\[r=\left(a+a^{-1}\right)\cos\alpha+\left(b+b^{-1}\right)\cos\beta+i\left[\left( a-a^{-1}\right)\sin\alpha+\left(b-b^{-1}\right)\sin\beta\right],\]

where \(\alpha,\beta\in[-\pi,\pi).\)

3. Since \(\mathcal{R}_{a,b}\) is compact, \(R_{a,b}=\max_{r\in\mathcal{R}_{a,b}}|r|\) exists.

4. \(U_{a,b}\) denotes the unbounded open connected component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) It contains the region \(\{|r|>R_{a,b}\};\) since \(R_{1,1}=4,\) we have \(U_{a,b}\subseteq U_{1,1}.\)

Now we are ready to apply our theorems to evaluate the generalized Mahler measure of \(Q_{r}.\)

### Generalized Mahler measure on the unbounded component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}\)

In [7] Rodriguez-Villegas expressed the (standard) Mahler measure of \(Q_{r}\) in terms of Eisenstein-Kronecker series for any \(r\in\mathbb{C}.\) Combining his proof and Theorem 1.2, we will show that, for fixed \(a,b>0,\) there exists a large open subset of \(\mathbb{C},\) namely \(O_{a,b}=U_{a,b}\cap U_{1,1},\) such that if \(r\in O_{a,b},\) then the Mahler measure remains unchanged irrespective of the dependence of the integration torus on \((a,b).\) We will in fact go further and show that \(O_{a,b}\) is the unbounded component of \(\mathbb{C}\setminus\mathcal{R}_{a,b},\) namely \(U_{a,b}.\) Later in this section, we will give an explicit expression of the region \(O_{a,b},\) as well as of the region \(\mathcal{R}_{a,b}.\)

Recall that, for fixed \(a,b>0,\)\(Q_{r}\) does not vanish on \(\mathbb{T}^{2}_{a,b}\) if and only if \(r\notin\mathcal{R}_{a,b}.\) In order to show that, for fixed \(a,b>0,\)

\[\mathrm{m}_{a,b}(Q_{r})=\mathrm{m}(Q_{r}),\qquad\quad\text{for all $r\in O_{a,b} $},\]

it suffices to evaluate \(\nu^{j}_{a,b,r}\) for \(j=1,2.\) Since these quantities are constant in the region \(O_{a,b},\) we can choose a suitable \(r\) and apply Theorem 1.2 to evaluate them. Let

\[R=R_{a,b}+R_{1,1}=a+\frac{1}{a}+b+\frac{1}{b}+4.\]

Note that \(R\in O_{a,b}\) and \(R\notin(-\infty,0].\) Recall that \(\nu^{1}_{a,b,r}\) denotes the difference between the number of zeros (counting multiplicity), namely \(Z^{1}_{a,b,r},\) and the number of poles (counting multiplicity), namely \(P^{1}_{a,b,r},\) of \(Q_{r}(x,b)\) inside the circle \(\mathbb{T}^{1}_{a},\) i.e.
\[\nu^{1}_{a,b,r}=Z^{1}_{a,b,r}-P^{1}_{a,b,r},\] and that \(\nu^{2}_{a,b,r}\) is also defined in a similar way. Since \(Q_{R}(x,b)\) is holomorphic everywhere except for a simple pole at \(x=0,\) we have \(P^{1}_{a,b,R}=1.\) Therefore, \(xQ_{R}(x,b)\) has no pole in \(\mathbb{C}.\) Now \(xQ_{R}(x,b)\) can be factored in \(\mathbb{C}[x]\) as \[xQ_{R}(x,b)=\left(x-x_{+}\right)\left(x-x_{-}\right),\] where \[x_{\pm}=\frac{-\left(R+b+\frac{1}{b}\right)\pm\sqrt{\left(R+b+\frac{1}{b} \right)^{2}-4}}{2}.\] Notice that \(x_{+}\cdot x_{-}=1,\) and since \(R+b+\frac{1}{b}>a+\frac{1}{a},\) we also have \[|x_{-}| =\left|\frac{R+b+\frac{1}{b}+\sqrt{\left(R+b+\frac{1}{b}\right)^{2} -4}}{2}\right|=\frac{R+b+\frac{1}{b}+\sqrt{\left(R+b+\frac{1}{b}\right)^{2}-4} }{2}\] \[=\frac{R+b+\frac{1}{b}+\sqrt{\left(a+\frac{1}{a}+b+\frac{1}{b}+b+ \frac{1}{b}+6\right)\left(a+\frac{1}{a}+b+\frac{1}{b}+b+\frac{1}{b}+2\right)} }{2}\] \[\geq a+\frac{1}{a}.\] Since \(a+\frac{1}{a}>\max\left\{a,\frac{1}{a}\right\},\) we have \(|x_{+}|\leq\frac{1}{a+\frac{1}{a}}<a,\) and therefore, \(Z^{1}_{a,b,R}=1.\) By the definition of \(\nu^{1}_{a,b,R},\) it follows that \(\nu^{1}_{a,b,R}=0.\) A similar argument shows that \(\nu^{2}_{a,b,R}=0.\) Combining Theorem 1.2 and the values obtained above, we derive that, for \(r\in U_{a,b}\subset U_{1,1},\) \[\mathrm{m}_{a,b}(Q_{r})=\mathrm{m}(Q_{r}),\] and the required \(O_{a,b}\) is in fact the region \(U_{a,b}.\) Until now we have been fixing \(a,b>0\) in our discussion. Next, we want to show that our theorem can even be applied to a fixed suitable \(r\) in order to obtain certain values of \((a,b)\) such that the equality \(\mathrm{m}_{a,b}(Q_{r})=\mathrm{m}(Q_{r})\) still holds. For some particular values of \(r\in\mathbb{R}\cup i\mathbb{R},\) the standard Mahler measure of \(Q_{r}\) has been proven to be the same as (up to a rational multiple) a special value of \(L\)-function of the elliptic curve corresponding to \(Q_{r}\) due to Boyd [5], Rodriguez-Villegas [7], Deninger [6], Rogers and Zudilin [8], Lalin and Rogers [9] et al. Therefore, an interesting direction would be to search for values of \((a,b)\) such that changing the integration torus from \(\mathbb{T}^{2}\) (\(=\mathbb{T}^{2}_{1,1}\)) to \(\mathbb{T}^{2}_{a,b}\) keeps the Mahler measure fixed. In order to do so, first notice that, for all \(r>R_{a,b},\) Theorem 1.2 implies that \[\mathrm{m}_{a,b}(Q_{r})=\mathrm{m}(Q_{r}).\] Since \(a\) and \(b\) are fixed arbitrarily, we can fix \(r=r_{0}>4,\) and conclude that, for all \(2\)-tuples \((a,b)\) satisfying \[a+\frac{1}{a}+b+\frac{1}{b}<r_{0},\] we have \(\mathrm{m}_{a,b}(Q_{r_{0}})=\mathrm{m}(Q_{r_{0}}).\) Since the change of variables \(r\mapsto-r\) covers the case when \(r<-4,\) it is sufficient to consider the \(r>4\) case here. 
For \(r\in i\mathbb{R},\) it suffices to investigate the imaginary part of \(r\in\mathcal{R}_{a,b}.\) Indeed, once we calculate the \(\max_{r\in\mathcal{R}_{a,b}}\mathrm{Im}(r),\) we can conclude that all \(r^{\prime}\in\mathbb{C},\) such that \(\mathrm{Im}(r^{\prime})>\max_{r\in\mathcal{R}_{a,b}}\mathrm{Im}(r),\) belong to the unbounded component of \(\mathbb{C}\setminus\mathcal{R}_{a,b},\) namely \(U_{a,b}.\) The following discussion results in gathering the required \(2\)-tuples \((a,b)\) such that \[\mathrm{m}_{a,b}(Q_{r_{0}})=\mathrm{m}(Q_{r_{0}})\] for a fixed \(r^{\prime}=r_{0}.\) Recall that, any element in \(\mathcal{R}_{a,b}\) can be written as \[r=\left(a+a^{-1}\right)\cos\alpha+\left(b+b^{-1}\right)\cos\beta+i\left[\left( a-a^{-1}\right)\sin\alpha+\left(b-b^{-1}\right)\sin\beta\right],\] where \(\alpha,\beta\in[-\pi,\pi).\) Notice that, \[\left|\mathrm{Im}(r)\right|=\left|\left(a-a^{-1}\right)\sin\alpha+\left(b-b^{ -1}\right)\sin\beta\right|\leq\left|a-a^{-1}\right|+\left|b-b^{-1}\right|,\] and, for \(\alpha=\beta\in\{-\frac{\pi}{2},\frac{\pi}{2}\}\), we have \[r_{\max,i\mathbb{R}}=i\left[\left|a-a^{-1}\right|+\left|b-b^{-1}\right|\right].\] Therefore, when \(a\) and \(b\) are fixed, we have \(\mathrm{m}_{a,b}(Q_{r})=\mathrm{m}(Q_{r})\) for all \(r\in\left\{z\in i\mathbb{R}:\left|z\right|>\left|a-a^{-1}\right|+\left|b-b^{- 1}\right|\right\}.\) Then a similar argument as in the real case shows that, for a fixed \(r_{0}\in i\mathbb{R}_{>0}\), the Mahler measure of \(Q_{r_{0}}\) over the integration torus \(\mathbb{T}_{a,b}^{2}\) is same as the standard Mahler measure, i.e. \[\mathrm{m}_{a,b}(Q_{r_{0}})=\mathrm{m}(Q_{r_{0}}),\] for all the \(2\)-tuples \((a,b)\) satisfying \[\left|a-a^{-1}\right|+\left|b-b^{-1}\right|<|r_{0}|.\] Here we mention two such examples for \(r=8\) and \(r=2i.\) **Example 5.1** (\(\mathbf{r=8}\)).: _We provide two cases: \((I)\) when \(b=a,\) and \((II)\) when \(b=\sqrt{a}.\) Notice that, case \((I)\) keeps the symmetry of the polynomial_ \[Q_{8}(x,y)=x+\frac{1}{x}+y+\frac{1}{y}+8\] _in the variables \(x\) and \(y.\) In other words, under the change of variables \(x\mapsto y\) and \(y\mapsto x,\) the polynomial \(Q_{8}\) remains unchanged, and so does the integration torus \(\mathbb{T}_{a,a}^{2}\). On the other hand, case \((II)\) breaks the symmetry as then the above changes of variables change the integration torus from \(\mathbb{T}_{a,\sqrt{a}}^{2}\) to \(\mathbb{T}_{\sqrt{a},a}^{2}.\) In spite of the differences between these two cases, there are certain values of \(a\) such that_ \[\mathrm{m}_{a,\sqrt{a}}(Q_{8})=\mathrm{m}_{a,a}(Q_{8})=\mathrm{m}(Q_{8})=4L^{ \prime}(E_{24},0),\] _where \(E_{24}\) is an elliptic curve of conductor \(24\) associated to \(Q_{8}.\) Here the last equality follows from combining the results due to Rogers and Zudilin [8], and Lalin and Rogers [9], where they showed_ \[\mathrm{m}(Q_{8}(x,y))=\mathrm{m}(Q_{2}(x,y))=L^{\prime}(E_{24},0).\] _From the above discussion, we find that, when \((I)\)\(a=b,\) the equality_ \[\mathrm{m}_{a,a}(Q_{8})=\mathrm{m}(Q_{8})\] _holds for all \(a\) satisfying_ \[a+\frac{1}{a}<4\Longleftrightarrow 2-\sqrt{3}<a<2+\sqrt{3}.\] _Similarly, when \((II)\)\(b=\sqrt{a},\) we find that, for_ \[a+\frac{1}{a}+\sqrt{a}+\frac{1}{\sqrt{a}}<8\] \[\Longleftrightarrow \frac{17-\sqrt{41}-\sqrt{2\left(157-17\sqrt{41}\right)}}{4}<a<\frac {17-\sqrt{41}+\sqrt{2\left(157-17\sqrt{41}\right)}}{4},\] _the equality_ \[\mathrm{m}_{a,\sqrt{a}}(Q_{8})=\mathrm{m}(Q_{8})\] _holds. 
Since,_ \[\frac{17-\sqrt{41}+\sqrt{2\left(157-17\sqrt{41}\right)}}{4}>2+\sqrt{3}\] _and_ \[\frac{17-\sqrt{41}-\sqrt{2\left(157-17\sqrt{41}\right)}}{4}=\left[\frac{17- \sqrt{41}+\sqrt{2\left(157-17\sqrt{41}\right)}}{4}\right]^{-1},\] _we obtain_ \[\mathrm{m}_{a,\sqrt{a}}(Q_{8})=\mathrm{m}_{a,a}(Q_{8})=\mathrm{m}(Q_{8})=4L^{ \prime}(E_{24},0)\qquad\text{for all }a\in\left(2-\sqrt{3},2+\sqrt{3}\right).\] **Example 5.2** (**r = 2\(i\)**).: _In 2011, Mellit [21] showed that_ \[\mathrm{m}(Q_{2i})=L^{\prime}(E_{40},0),\] _where \(E_{40}\) is an elliptic curve of conductor \(40,\) associated to \(Q_{2i}.\)_ _When \(b=a,\) Theorem 1.2 implies that \(\mathrm{m}_{a,a}(Q_{2i})=\mathrm{m}(Q_{2i})\) is true for_ \[\left|a-a^{-1}\right|<1\Longleftrightarrow\frac{\sqrt{5}-1}{2}<a<\frac{\sqrt{ 5}+1}{2}.\] _Similarly, when \(b=\sqrt{a},\) the equality_ \[\mathrm{m}_{a,\sqrt{a}}(Q_{2i})=\mathrm{m}(Q_{2i})\] _holds for all \(a\) satisfying_ \[\left|a-\frac{1}{a}\right|+\left|\sqrt{a}-\frac{1}{\sqrt{a}}\right|<2 \Leftrightarrow a_{0}<a<a_{1},\] _where \(a_{0}\approx 0.530365\dots\) and \(a_{1}\approx 1.88549\dots\) satisfy \(X-1/X+\sqrt{X}-1/\sqrt{X}-2=0\) and \(X-1/X+\sqrt{X}-1/\sqrt{X}+2=0,\) respectively. Since \(a_{0}<\frac{\sqrt{5}-1}{2}\) and \(a_{1}>\frac{\sqrt{5}+1}{2},\) we obtain_ \[\mathrm{m}_{a,a}(Q_{2i})=\mathrm{m}_{a,\sqrt{a}}(Q_{2i})=\mathrm{m}(Q_{2i})=L ^{\prime}(E_{40},0)\] _for all \(a\in\left(\frac{\sqrt{5}-1}{2},\frac{\sqrt{5}+1}{2}\right).\)_ ### Generalized Mahler measure on bounded component(s) of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) In this section, our goal is to evaluate \(\mathrm{m}_{a,b}(Q_{r})\) when \(r\) belongs to the bounded connected component(s) of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) In particular, we show there can be at most one such component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) Later, we apply Theorem 1.3 to calculate \(\mathrm{m}_{a,b}(Q_{r})\) for all \(r\) in said component. 2.1. **Existence of at most one bounded open connected component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\)** Our aim here is to show that there exists at most one bounded open connected component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}\) for any \(a,b>0.\) Recall that the elements of \(\mathcal{R}_{a,b}\) are of the form \[r=\left(a+a^{-1}\right)\cos\alpha+\left(b+b^{-1}\right)\cos\beta+i\left[\left( a-a^{-1}\right)\sin\alpha+\left(b-b^{-1}\right)\sin\beta\right], \tag{32}\] where \(\alpha,\beta\in[-\pi,\pi).\) We have \[R_{a,b}=\max_{r\in\mathcal{R}_{a,b}}|r|=a+a^{-1}+b+b^{-1},\text{ and }r_{a,b}:=\min_{r\in\mathcal{R}_{a,b}}|r|=\left(a+a^{-1}\right)-\left(b+b^{-1} \right).\] Then \[a=b\Longleftrightarrow r_{a,b}=0\Longleftrightarrow 0\in\mathcal{R}_{a,b}.\] From this point onwards we assume that \(a\geq b\geq 1.\) The other cases follow analogously using Lemma 3.1. **Proposition 5.3**.: _There exists at most one bounded open connected component of \(\mathbb{C}\setminus\mathcal{R}_{a,b},\) and if it exists, then it contains \(0.\)_ Note that, if \(0\in\mathcal{R}_{a,b},\) then Proposition 5.3 implies that there is no bounded open connected component in \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) Before we proceed to prove the proposition, we note some useful properties of \(\mathcal{R}_{a,b}.\) 1. 
The points \(r\in\mathcal{R}_{a,b}\) can be interpreted as points on the ellipses (33) \[E_{b,z}:\left|r-(z+2)\right|+\left|r-(z-2)\right|=2\left(b+b^{-1}\right),\] where \(z\in\mathbb{C}\) lies on the ellipse (34) \[e_{a}:\left|z-2\right|+\left|z+2\right|=2\left(a+a^{-1}\right).\] In other words, elements of \(\mathcal{R}_{a,b}\) can be identified with points on ellipses \(E_{b,z}\) defined by (33) with centres on the ellipse \(e_{a}\) in (34). Note that, the centre \((c)\) and the foci \((c_{1}\) and \(c_{2})\) of the ellipse \(e_{a}\) are the points \(c=0,c_{1}=-2\) and \(c_{2}=2.\) The centre \((C_{z})\) and the foci \((C_{1,z}\) and \(C_{2,z})\) of the ellipse \(E_{b,z}\) are \[C_{z}=z,C_{1,z}=z-2,\text{ and }C_{2,z}=z+2.\] Any point \(p\in\mathbb{C}\) lying inside (resp. outside) the ellipse \(E_{b,z}\) satisfies \(\left|p-C_{1,z}\right|+\left|p-C_{2,z}\right|<2\left(b+b^{-1}\right)\) (resp. \(\left|p-C_{1,z}\right|+\left|p-C_{2,z}\right|>2\left(b+b^{-1}\right)\)). Since the length of the minor axis of \(e_{a}\) is \(2\left(a-a^{-1}\right),\) we derive that, for \(z\in e_{a},\) \[\left|\operatorname{Im}\left(C_{z}\right)\right|\leq\left(a-a^{-1}\right),\] and, for all \(h\in\left(-a+a^{-1},a-a^{-1}\right),\) there exists a \(\tilde{z}\in e_{a}\) such that (35) \[\operatorname{Im}\left(C_{\tilde{z}}\right)=h.\] 2. The region \(\mathcal{R}_{a,b}\) is symmetric with respect to the imaginary and real axes, i.e. if \(r\in\mathcal{R}_{a,b},\) then \(-\bar{r},\bar{r}\in\mathcal{R}_{a,b}.\) Let \(Q_{\pm,\pm}\) denote the four quadrants of \(\mathbb{C},\) namely \(Q_{+,+}=\{s\in C:\operatorname{Re}(s)\geq 0,\operatorname{Im}(s)\geq 0\},Q_{+,-}=\{s\in C: \operatorname{Re}(s)\geq 0,\operatorname{Im}(s)\leq 0\}\) and so on. Then, the changes of variable \(\{\alpha\mapsto\pi-\alpha,\beta\mapsto\pi-\beta\}\) and \(\{\alpha\mapsto-\alpha,\beta\mapsto-\beta\}\) applied to (32) take points \(r\in\mathcal{R}_{a,b}\cap Q_{+,\pm}\) to \(-\bar{r}\in\mathcal{R}_{a,b}\cap Q_{-,\pm},\) and \(r\in\mathcal{R}_{a,b}\cap Q_{\pm,+}\) to \(\bar{r}\in\mathcal{R}_{a,b}\cap Q_{\pm,-}.\) For the rest of this section, \(\mathcal{R}_{a,b}^{c}\) denotes the region defined by \(\mathbb{C}\setminus\mathcal{R}_{a,b}.\) Proof of Proposition 5.3.: Recall that \(U_{a,b}\) denotes the connected unbounded component of \(\mathcal{R}_{a,b}^{c}.\) First we consider the case \(a>b.\) From the discussion at the beginning of this section, we have \(0\in\mathcal{R}_{a,b}^{c},\) and moreover the open disc \(\{u\in\mathbb{C}:\left|u\right|<r_{a,b}\}\) is contained in one of the bounded open connected components of \(\mathcal{R}_{a,b}^{c}.\) Let \(V_{a,b}\) denote this component. Note that \(0\in V_{a,b}.\) In order to prove the statement, we note that the property \(\left(\mathbf{B}\right)\) above implies that we can restrict ourselves to the quadrant \(Q_{+,+}.\) Since \(a>b,\) the ellipse \(E_{b,z_{0}}\) lies completely in the interior of \(Q_{+,-},\) where \(z_{0}=-i\left(a-a^{-1}\right).\) This implies that, for \(P\in Q_{+,+},\) we have where \(C_{1,z_{0}}\) and \(C_{2,z_{0}}\) are the foci of \(E_{b,z_{0}}.\) In other words, the point \(P\) lies completely outside the ellipse \(E_{b,z_{0}}\) with centre at \(z_{0}=-i\left(a-a^{-1}\right)\in e_{a}.\) Let \(z_{1}\) and \(z_{2}\) denote the points \(a+a^{-1}\) and \(i\left(a-a^{-1}\right),\) respectively. We now consider the following cases: 1. \(\operatorname{Im}(P)>\operatorname{Im}(z_{2})=\left(a-a^{-1}\right),\) 2. 
\(0\leq\operatorname{Im}(P)\leq\operatorname{Im}(z_{2})=\left(a-a^{-1}\right).\) Let \(E^{a,b}\) and \(e^{a,b}\) denote the boundaries of \(U_{a,b}\) and \(V_{a,b}\) respectively, i.e. \[E^{a,b}=\overline{U}_{a,b}\setminus U_{a,b},\quad\text{and}\quad e^{a,b}= \overline{V}_{a,b}\setminus V_{a,b}.\] Therefore, \(E^{a,b}\cup e^{a,b}\subset\mathcal{R}_{a,b}.\) If \(P\notin U_{a,b}\cup V_{a,b},\) then \(P\) lies in the region bounded by \(E^{a,b}\) and \(e^{a,b}.\) Then the line \(L_{\operatorname{Re}(P)}:t=\operatorname{Re}(P)\) intersects \(E^{a,b}\) at some point \(P^{\prime}\in Q_{+,+}.\) By construction, \(\operatorname{Re}(P)=\operatorname{Re}(P^{\prime}),\) and \(\operatorname{Im}(P)\leq\operatorname{Im}(P^{\prime}),\) where the equality holds iff \(P=P^{\prime}.\) Note that \(P=P^{\prime}\) is the trivial case. Therefore, we assume that \(P\neq P^{\prime}.\) **Figure 2.**\(\mathcal{R}_{a,b},U_{a,b},V_{a,b},e_{a},C_{j,z},P,P^{\prime},L_{\operatorname{ Re}(P)}\) when \(a>b\) and (I) holds. **Claim**: _For \(P\in Q_{+,+},\) if (I) holds, and \(P\notin V_{a,b}\cup U_{a,b},\) then there exists an ellipse \(E_{b,z^{\prime}},\) with centre at \(z^{\prime}\in e_{a},\) such that_ \[|P-C_{1,z^{\prime}}|+|P-C_{2,z^{\prime}}|<2\left(b+b^{-1}\right).\] Since \(\operatorname{Im}(C_{j,z})\leq a-a^{-1}\) for all \(z\in e_{a}\) and \(j=1,2,\) case (I) implies that \(|P^{\prime}-C_{j,z}|^{2}-|P-C_{j,z}|^{2}>|P^{\prime}-P|^{2}>0\) (see Figure 2), and then we have \[|P-C_{1,z}|+|P-C_{2,z}|<|P^{\prime}-C_{1,z}|+|P^{\prime}-C_{2,z}|\,.\] On the other hand, \(P^{\prime}\in E_{a,b}\subset\mathcal{R}_{a,b},\) i.e. there exists a \(z^{\prime}\in e_{a}\) such that \(\left|P^{\prime}-C_{1,z^{\prime}}\right|+\left|P^{\prime}-C_{2,z^{\prime}} \right|=2\left(b+b^{-1}\right).\) This concludes the proof of the claim. Now note that \(z\in e_{a}\) can also be written as \(z=\left(a+a^{-1}\right)\cos\alpha_{z}+i\left(a-a^{-1}\right)\sin\alpha_{z},\) for \(\alpha_{z}\in\left[-\pi,\pi\right).\) Recall that \(C_{1,z}=z-2\) and \(C_{2,z}=z+2.\) Therefore, when \(a\) is fixed, \(\left|P-C_{j,z}\right|\) is a continuous function of \(\alpha_{z}\) for \(j=1,2.\) Let \[\Theta(\alpha_{z}):=\left|P-C_{1,z}\right|+\left|P-C_{2,z}\right|-2\left(b+b^{ -1}\right)\] define a function from \(\left[-\pi,\pi\right)\) to \(\mathbb{R}.\) From the claim, we have already concluded that, for the case (I), either \(P\in U_{a,b}\cup V_{a,b},\) or there are \(z_{0},z^{\prime}\in e_{b}\) such that \[\left|P-C_{1,z_{0}}\right|+\left|P-C_{2,z_{0}}\right|>2\left(b+b^{-1}\right), \quad\text{and}\ \left|P-C_{1,z^{\prime}}\right|+\left|P-C_{2,z^{\prime}}\right|<2\left(b+b^{ -1}\right).\] This implies that \(\Theta\) is a continuous function which takes both negative and positive values, and, using Mean Value Theorem (MVT) on \(\Theta,\) we derive that there exists \(z_{1}\in e_{a}\) such that \(\Theta(\alpha_{z_{1}})=0.\) In other words, \(P\in\mathcal{R}_{a,b},\) which completes the proof of the proposition for \(a>b\) when (I) holds. For case (II), note that \[0\leq\operatorname{Im}(P),\operatorname{Im}(C_{z})\leq a-a^{-1},\qquad\text{ for all }z\in e_{a},\] where \(C_{z}\) (\(=z\)) is the centre of \(E_{b,z}\) given in property (**A**). Then, the _continuous property_ of \(\operatorname{Im}\left(C_{z}\right)\) mentioned in (35) implies that there exists a \(z^{\prime\prime}\in e_{a}\) such that \(\operatorname{Im}(P)=\operatorname{Im}(C_{z^{\prime\prime}})\) (see Figure 3). 
Let \(L_{\operatorname{Im}(P)}:t=\operatorname{Im}(P)\) denote the line joining \(P\) and \(C_{z^{\prime\prime}},\) and let \(L_{\operatorname{Im}(P)}\) intersect \(E^{a,b}\) at \(P_{1}\) in \(Q_{+,+}\) such that any \(t\in L_{\operatorname{Im}(P)}\) satisfying \(\operatorname{Re}(t)>\operatorname{Re}(P_{1})\) lies in \(U_{a,b}.\) Then \(P_{1}\) has a representation as in (32), namely \[P_{1}=\left(a+a^{-1}\right)\cos\alpha+\left(b+b^{-1}\right)\cos\beta+i\left[ \left(a-a^{-1}\right)\sin\alpha+\left(b-b^{-1}\right)\sin\beta\right],\] for some \(\alpha,\beta\in\left[0,\pi/2\right)\). Moreover, there exist \(\gamma\in\left[0,\pi/2\right)\) such that \(\operatorname{Im}(P)=\operatorname{Im}(P^{\prime})=\operatorname{Im}(C_{z^{ \prime\prime}})=\left(a-a^{-1}\right)\sin\gamma.\) The case \(\operatorname{Re}(P)=\operatorname{Re}(P_{1})\) is trivial as the above discussion implies that \(P=P_{1}.\) If \(\operatorname{Re}(P)>\operatorname{Re}(P_{1}),\) then from the definition of \(P_{1}\) it follows that \(P\in U_{a,b}.\) Therefore, it only remains to investigate the case when \(0\leq\operatorname{Re}(P)<\operatorname{Re}(P_{1}).\) Define the function \(f:\left[\alpha,\gamma\right]\rightarrow\left[0,\beta\right],\) which sends \(\psi\) to \(f(\psi)\) such that \[\left(a-a^{-1}\right)\sin\gamma=\left(a-a^{-1}\right)\sin\alpha+\left(b-b^{-1} \right)\sin\beta=\left(a-a^{-1}\right)\sin\psi+\left(b-b^{-1}\right)\sin f( \psi).\] Note that this is a continuous onto function from a connected set. Then the graph of this function, namely \(\Gamma_{f}:=\left\{\left(\psi,f(\psi)\right):\psi\in\left[\alpha,\gamma\right]\right\}\), is also a connected set. Consider another function \(g:\Gamma_{f}\rightarrow\mathbb{R},\) defined by \[\left(\psi,f(\psi)\right)\mapsto\left(a+a^{-1}\right)\cos\psi+\left(b+b^{-1} \right)\cos f(\psi).\] Note that this function is a continuous function, and there exist \(\chi,\xi\in\Gamma_{f}\) such that \(g(\chi)=\operatorname{Re}(P_{1})=\left(a+a^{-1}\right)\cos\alpha+\left(b+b^{ -1}\right)\cos\beta,\) and \(g(\xi)=\operatorname{Re}\left(C_{z^{\prime\prime}}\right)=\left(a+a^{-1} \right)\cos\gamma.\) Now if \(\operatorname{Re}(P)\in\left(\operatorname{Re}\left(C_{z^{\prime\prime}}\right), \operatorname{Re}(P_{1})\right),\) we claim that there exists a \(\psi_{0}\in\left[\alpha,\gamma\right]\) such that \[P=\left(a+a^{-1}\right)\cos\psi_{0}+\left(b+b^{-1}\right)\cos f(\psi_{0})+i \left[\left(a-a^{-1}\right)\sin\psi_{0}+\left(b-b^{-1}\right)\sin f(\psi_{0}) \right],\] which will imply that \(P\in\mathcal{R}_{a,b}.\) Indeed, \(g\) is a continuous function on a connected set, and \(g(\xi)<g(\chi).\) Therefore, all the values of the interval \(\left(g(\xi),g(\chi)\right)\) are attained by \(g.\) In particular, such \(\psi_{0}\) exists. This proves the statement of the proposition when \(\operatorname{Im}(P)\in\left[0,a-a^{-1}\right]\) and \(\operatorname{Re}(P)\geq\operatorname{Re}\left(C_{z^{\prime\prime}}\right).\) Therefore, it remains to consider the case when (II) holds along with \(\operatorname{Re}(P)\in\left[0,\operatorname{Re}\left(C_{z^{\prime\prime}}\right) \right).\) If the line \(L_{\operatorname{Im}(P)}\) intersect \(e^{a,b}\) in \(Q_{+,+},\) then we consider the intersection point with the smallest non-negative real part. 
In other words, if \(K_{1},\ldots,K_{l}\in Q_{+,+}\cap e^{a,b}\cap L_{\operatorname{Im}(P)}\) are distinct with \(0\leq\operatorname{Re}(K_{1})<\cdots<\operatorname{Re}(K_{l}),\) then consider \(\operatorname{Re}(K_{1}).\) We want to show that, in fact there can be at most one intersection point of \(L_{\operatorname{Im}(P)}\) and \(e^{a,b}\) in \(Q_{+,+}.\) That is, if \(\operatorname{Re}(P)\in(\operatorname{Re}(K_{1}),\operatorname{Re}(C_{z^{ \prime\prime}})),\) then \(P\in\mathcal{R}_{a,b};\) but this follows from a similar argument as above. Therefore, it only remains to investigate the case when \(L_{\operatorname{Im}(P)}\) does not intersect \(e^{a,b}\) in \(Q_{+,+}.\) Let \(K_{0}\) be the intersection point of \(L_{\operatorname{Im}(P)}\) and the imaginary axis. Then we have \(\operatorname{Re}(K_{0})=0,\) and \(\operatorname{Re}(P)\in(\operatorname{Re}(K_{0}),\operatorname{Re}(C_{z^{ \prime\prime}}))\subset L_{\operatorname{Im}(P)}.\) Again, an analogous argument as above implies that \(P\in\mathcal{R}_{a,b}.\) We collect all the results above, and then using the symmetry of \(\mathcal{R}_{a,b}\) (see property (**B**)) we conclude that if \(P\notin U_{a,b}\cup V_{a,b},\) then \(P\in\mathcal{R}_{a,b},\) which completes the proof of the proposition for \(a>b.\) The case \(a=b\) follows from a similar argument. #### 5.2.2. **Application of Theorem 1.3 to the bounded component** Now we are ready to apply Theorem 1.3 to \(V_{a,b},\) and evaluate \(\operatorname{m}_{a,b}(Q_{r}).\) Firstly, we need to investigate the roots of \(xQ_{r_{0}}(x,b).\) Since, \(0\in V_{a,b},\) we can choose \(r_{0}=0\) in our theorem. In particular, we need to count the number of roots of \(xQ_{0}(x,b)\) lying inside the circle \(|x|=a.\) By Lemma 3.1, we can also assume \(a>b>1.\) Factoring \(xQ_{0}(x,b)\) in \(\mathbb{C}[x],\) we obtain that \[xQ_{0}(x,b)=x^{2}+\left(b+\frac{1}{b}\right)x+1=(x+b)\left(x+\frac{1}{b} \right).\] Since \(a>b>1\), both roots of \(xQ_{0}(x,b))\) lies inside the circle \(|x|=a.\) Also note that \(Q_{F,0}^{x}(y)\) and \(Q_{f,0}^{x}(y)\) in (26) are equal to the constant function \(\mathbf{1}.\) Applying Theorem 1.3, we have, for \(a>b>1\) and \(r\in V_{a,b},\) \[\mathrm{m}_{a,b}(Q_{r})=\nu_{a,b,0}^{1}\log a=\log a,\] where the last equality follows from the fact that \[\nu_{a,b,0}^{1}=Z_{a,b,0}^{1}-P_{a,b,0}^{1}=2-1=1.\] Other cases, such as \(b>a>1\), \(a>1>b\) etc, follow from a combination Lemma 3.1 and a similar arguments as above. ### Explicit description of \(\mathcal{R}_{a,b}\) In this section, our main goal is to describe the region \(\mathcal{R}_{a,b}\) explicitly when \(a\) and \(b\) satisfy some additional conditions. Since \(a,b>0\), let \(x=\log a\) and \(y=\log b.\) For the rest of this section we consider \((a,b)\) as exponentials \((e^{x},e^{y}).\) Any \(r\in\mathcal{R}_{a,b}\) in (32) can be expressed in terms of of hyperbolic functions as \[r=2\left[\cosh x\cos\alpha+\cosh y\cos\beta+i\left(\sinh x\sin\alpha+\sinh y \sin\beta\right)\right]=2\cosh(x+i\alpha)+2\cosh(y+i\beta),\] with \(\alpha,\beta\in[-\pi,\pi).\). 
When considered as a subset of \(\mathbb{R}^{2},\) the points of \(\mathcal{R}_{a,b}\) admit a geometric interpretation: each \(r=(\mathrm{Re}(r),\mathrm{Im}(r))\in\mathcal{R}_{a,b}\) corresponds to a point \((u_{r},v_{r})\) on the ellipse \[\frac{\left(u_{r}-s\right)^{2}}{\cosh^{2}y}+\frac{\left(v_{r}-t\right)^{2}}{ \sinh^{2}y}=4,\] with \[\frac{s^{2}}{\cosh^{2}x}+\frac{t^{2}}{\sinh^{2}y}=4.\] Though the above geometric interpretation is not very informative, the boundaries of \(\mathcal{R}_{a,b}\) can be expressed explicitly when \(x\) and \(y\) satisfy certain conditions. This leads to ways of explicitly describing \(O_{a,b}(=U_{a,b})\) and \(V_{a,b}\) in those cases. **Lemma 5.4**.: _Let \(a,b>0\) such that \(a\neq b,\) and recall that \(x:=\log a\) and \(y:=\log b.\) If_ \[\sinh^{2}\left(\frac{x+y}{2}\right)[1+\cosh x\cosh y]\geq\max\{\sinh^{2}x, \sinh^{2}y\}, \tag{36}\] _then the outer boundary of \(\mathcal{R}_{a,b}\) is given by_ \[\frac{u^{2}}{\left(\cosh x+\cosh y\right)^{2}}+\frac{v^{2}}{\left(\sinh x+ \sinh y\right)^{2}}=4.\] **Figure 4.**\(\mathcal{R}_{a,b},\) when \(a=1.5\) and \(b=1.07\) does not satisfy (36), (38) and (39). This lemma implies that, when the condition (36) holds, any \(r=(\operatorname{Re}(r),\operatorname{Im}(r))\in\mathcal{R}_{a,b}\) satisfies \[\frac{(\operatorname{Re}(r))^{2}}{\left(\cosh x+\cosh y\right)^{2}}+\frac{( \operatorname{Im}(r))^{2}}{\left(\sinh x+\sinh y\right)^{2}}\leq 4. \tag{37}\] The region \(O_{a,b}\) is then defined by \[\left(\frac{\operatorname{Re}(r)}{a+a^{-1}+b+b^{-1}}\right)^{2}+\left(\frac{ \operatorname{Im}(r)}{a-a^{-1}+b-b^{-1}}\right)^{2}>1.\] The following lemma expresses the innermost boundary of \(\mathcal{R}_{a,b}\) explicitly, subject to some conditions on \(x\) and \(y\). **Lemma 5.5**.: _Let \(a,b>0\) such that \(a\neq b,\) and define \(x:=\log a\) and \(y:=\log b.\) If_ \[\min\{\left|\tanh y\cosh x\right|,\left|\tanh x\cosh y\right|\}>1, \tag{38}\] _and_ \[\cosh^{2}\left(\frac{x+y}{2}\right)[\cosh x\cosh y-1]\geq\max\{\sinh^{2}x, \sinh^{2}y\}, \tag{39}\] _then the inner boundary of \(\mathcal{R}_{a,b}\) is_ \[\frac{s^{2}}{\left(\cosh x-\cosh y\right)^{2}}+\frac{t^{2}}{\left(\sinh x- \sinh y\right)^{2}}=4.\] **Figure 5.**\(\mathcal{R}_{a,b},\) when \(a=10\) and \(b=4\) satisfy (36), (38) and (39). From Lemma 5.5, we deduce that, when the above conditions are satisfied, we have \(r=(\operatorname{Re}(r),\operatorname{Im}(r))\in\mathcal{R}_{a,b}\) if and only if \[\frac{\left(\operatorname{Re}(r)\right)^{2}}{\left(\cosh x-\cosh y\right)^{2}}+ \frac{(\operatorname{Im}(r))^{2}}{\left(\sinh x-\sinh y\right)^{2}}\geq 4. \tag{40}\] The proofs of the above lemmas are included in the Appendix for completion. In conclusion, when all the conditions in the statements of Lemmas 5.4 and 5.5 are satisfied, the region \(\mathcal{R}_{a,b}\) has a concrete description: \(r=(\operatorname{Re}(r),\operatorname{Im}(r))\in\mathcal{R}_{a,b}\) if and only if \[\left(\frac{\operatorname{Re}(r)}{a+a^{-1}+b+b^{-1}}\right)^{2}+\left(\frac{ \operatorname{Im}(r)}{a-a^{-1}+b-b^{-1}}\right)^{2}\leq 1\] and \[\left(\frac{\operatorname{Re}(r)}{a+a^{-1}-b-b^{-1}}\right)^{2}+\left(\frac{ \operatorname{Im}(r)}{a-a^{-1}-b+b^{-1}}\right)^{2}\geq 1.\] ## 6. 
Generalized Mahler measure of \(X+\frac{1}{X}+Y+\frac{1}{Y}+4\) In this section, our goal is to provide a proof of Theorem 1.4, and evaluate \[\operatorname{m}_{a,b}(Q_{4}):=\operatorname{m}_{a,b}(Q_{4}(x,y))=\operatorname {m}_{a,b}\left(x+\frac{1}{x}+y+\frac{1}{y}+4\right)\] for all \(a,b>0.\) Our method of proof is mostly inspired from the proof of Theorem 12 in [22]. We start by factoring \(Q_{4}(x,y)\) into linear factors following the changes of variables considered by Boyd (see Section 2A in [5]). The change of variables \[x\mapsto\frac{w}{z}\quad\text{and}\quad y\mapsto wz\] applied to \(Q_{4}(x,y)\) yields that \[P\left(w,z\right)=Q_{4}\left(\frac{w}{z},wz\right)=\frac{1}{wz}\left(1+iw+iz+ wz\right)\left(1-iw-iz+wz\right). \tag{41}\] Since \(\operatorname{m}_{a,b}(S(x,y)T(x,y))=\operatorname{m}_{a,b}(S(x,y))+ \operatorname{m}_{a,b}(T(x,y)),\) it is sufficient to evaluate the Mahler measures of the linear polynomials \((1\pm iw\pm iz+wz)\) over \(\mathbb{T}_{c,d}^{2}=\{(w,z)\in\mathbb{C}^{*}\times\mathbb{C}^{*}:\)\(|w|=c,|z|=d\},\) where \[c=\sqrt{ab},\ d=\sqrt{\frac{b}{a}}.\] Afterwards, using the changes of variables, we can evaluate \(\operatorname{m}_{a,b}(Q_{4}).\) The changes of variables \[w\mapsto-w\quad\text{and}\quad z\mapsto-z\] transform \(\left(1+iw+iz+wz\right)\) to \(\left(1-iw-iz+wz\right).\) As these changes of variables preserve the Mahler measure, we find that \[\operatorname{m}_{a,b}(Q_{4})=\operatorname{m}_{c,d}(P(w,z))= \operatorname{m}_{c,d}\left(\frac{1}{wz}\right)+\operatorname{m}_{ c,d}\left(1+iw+iz+wz\right)+\operatorname{m}_{c,d}\left(1-iw-iz+wz\right)\] \[= -\log cd+2\operatorname{m}_{c,d}\left(1+iw+iz+wz\right) \tag{42}\] \[= -\log b+2\operatorname{m}_{c,d}\left(1+iw+iz+wz\right),\] where the last equality follows from the fact that \(cd=\sqrt{ab}\cdot\sqrt{\frac{b}{a}}=b\). Among the terms in (42), it remains to evaluate \[\frac{1}{2}\left(\mathrm{m}_{c,d}(P)+\log cd\right)=\frac{1}{2}\left(\mathrm{m}_{ a,b}(Q_{4})+\log b\right)=\mathrm{m}_{c,d}(1+iw+iz+wz).\] Note that \(z(w)=-\frac{1+iw}{i+w}\) is the only root of \(R(w,z)=1+iw+iz+wz\), when considered as a polynomial in \(z\). Therefore, \[\mathrm{m}_{c,d}(R(w,z)) =\mathrm{m}_{c,d}(w+i)+\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)\] \[=\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{c,d}^{2}}\log|w+i|\frac{ dw}{w}\frac{dz}{z}+\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right) \tag{43}\] \[=\frac{1}{2\pi i}\int_{|w|=c}\log|w+i|\frac{dw}{w}+\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right).\] To evaluate the first integral, we apply the change of variables \(w=cw^{\prime}\) and Jensen's formula (see (2.1)) to obtain \[\frac{1}{2\pi i}\int_{|w|=c}\log|w+i|\frac{dw}{w}=\log c+\frac{1}{2\pi i}\int_ {|w^{\prime}|=1}\log\left|w^{\prime}+\frac{i}{c}\right|\frac{dw^{\prime}}{w^ {\prime}}=\left\{\begin{array}{ll}\log c&\text{ if }c>1,\\ 0&\text{ if }c\leq 1.\end{array}\right. \tag{44}\] It now suffices to evaluate \[\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right) =\frac{1}{(2\pi i)^{2}}\int_{\mathbb{T}_{c,d}^{2}}\log\left|z+ \frac{1+iw}{i+w}\right|\frac{dw}{w}\frac{dz}{z}\] \[=\frac{1}{2\pi i}\int_{|w|=c}\left(\frac{1}{2\pi i}\int_{|z|=d} \log\left|z+\frac{1+iw}{i+w}\right|\frac{dz}{z}\right)\frac{dw}{w} \tag{45}\] to complete the proof. 
Note that \(\frac{1}{2\pi i}\int_{|z|=d}\log\left|z+\frac{1+iw}{i+w}\right|\frac{dz}{z}\) can be simplified to \[\frac{1}{2\pi i}\int_{|z|=d}\log\left|z+\frac{1+iw}{i+w}\right|\frac{dz}{z}= \left\{\begin{array}{ll}\log\left|\frac{1+iw}{i+w}\right|&\text{ if }\left|\frac{1+iw}{i+w}\right|>d,\\ \\ \log d&\text{ if }\left|\frac{1+iw}{i+w}\right|\leq d\end{array}\right. \tag{46}\] following an application of Jensen's formula. Let \(\gamma_{>d}\) and \(\gamma_{\leq d}\) be the two collections of arcs defined by \[\gamma_{>d}=\{w:|w|=c,|z(w)|>d\},\qquad\gamma_{\leq d}=\{w:|w|=c,|z(w)|\leq d\}.\] Then (45) can be expressed as \[\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right) =\frac{1}{2\pi i}\int_{|w|=c}\left(\frac{1}{2\pi i}\int_{|z|=d} \log\left|z+\frac{1+iw}{i+w}\right|\frac{dz}{z}\right)\frac{dw}{w} \tag{47}\] \[=\frac{1}{2\pi i}\int_{\gamma_{>d}}\log\left|\frac{1+iw}{i+w} \right|\frac{dw}{w}+\frac{1}{2\pi i}\int_{\gamma_{\leq d}}\log d\frac{dw}{w}.\] Since \(\mathrm{Im}\left(\frac{dw}{w}\right)=d\arg w\), the differential form can be represented in terms of \(\eta\) as \[\log\left|\frac{1+iw}{i+w}\right|\frac{dw}{w}=\log|z(w)|\frac{dw}{w}=-i\left( \eta(w,z(w))-\eta(c,z(w))\right).\] The second term above can be further simplified to \[\eta(c,z(w))=\eta(c,iz(w))-\eta(c,i)=\eta(c,iz(w))=(\log c)d\arg\left(\frac{1+iw}{ 1-iw}\right),\] where \(iz(w)=i\frac{1+iw}{i+w}=\frac{1+iw}{1-iw}.\) Therefore, once we have determined \(\gamma_{>d}\) and \(\gamma_{\geq d}\) explicitly, the integrals in (47) can be evaluated individually using the properties of \(\eta\) and the following two lemmas. **Lemma 6.1**.: _For \(w,z(w)\) mentioned above, \(\eta\left(w,z(w)\right)\) decomposes as_ \[\eta(w,z(w))=\eta\left(-iw,1+iw\right)-\eta\left(iw,1-iw\right).\] **Lemma 6.2** (Lemma 16, [22]).: _For \(c\in\mathbb{R}_{>0}\) and \(\theta\in[-\pi,\pi),\) let \(w=ce^{i\theta}\) and \(\psi=\theta+\frac{\pi}{2}.\) Then_ \[d\arg\left(\frac{1+iw}{1-iw}\right)=\frac{2\left(c^{-1}-c\right)\cos\psi}{ \left(c^{-1}-c\right)^{2}+4\sin^{2}\psi}d\psi.\] Using property (9) of \(\eta\) we can rewrite \(\eta(w,z(w))\) in Lemma 6.1 as \[\eta(w,z(w))=dD\left(-iw\right)-dD\left(iw\right), \tag{48}\] where \(D\) is the Bloch-Wigner dilogarithm given in (6). The evaluation of the remaining integral involving \(\eta(c,z(w))\) (\(=\log c\ d\arg\left(\frac{1+iw}{1-iw}\right)\)) over the integration path \(\gamma_{>d}\) follows from the lemma below. **Lemma 6.3**.: _For \(c\in\mathbb{R}_{>0}\) and \(\theta\in[-\pi,\pi),\) let \(w=ce^{i\theta}.\) Let \(\alpha,\beta\in[-\pi,\pi).\) Then_ \[\int_{w(\alpha)}^{w(\beta)}d\arg\left(\frac{1+iw}{1-iw}\right)=\tan^{-1}\left( \frac{2\cos\alpha}{c-c^{-1}}\right)-\tan^{-1}\left(\frac{2\cos\beta}{c-c^{-1} }\right),\] _where \(w(\alpha)=ce^{i\alpha}\) and \(w(\beta)=ce^{i\beta}.\)_ We omit the proof of Lemma 6.2 since it is an intermediate step in the Lemma 16 of [22]. We should also remark that Lemma 6.3 is a generalized version of the Lemma 16 in [22], which states the above result for the case \(\alpha=-\pi\) and \(\beta=0.\) We will see later that the proof of Lemma 6.3 also follows from an argument similar to the proof in [22]. We now sketch the proofs of Lemma 6.1 and 6.3. 
Proof of Lemma 6.1.: Using properties of \(\eta\) in Lemma 2.1, \(\eta(w,z(w))\) decomposes as \[\eta(w,z(w)) =\eta\left(w,\frac{1+iw}{i+w}\right)\] \[=\eta(w,1+iw)-\eta(w,i+w)\] \[=\eta(-iw,1+iw)-\eta(-i,1+iw)-\eta(iw,i+w)+\eta(i,i+w)\] \[=\eta(-iw,1+iw)-\eta(iw,1-iw)-\eta(iw,i)\] \[=\eta(-iw,1+iw)-\eta(iw,1-iw),\] where we applied Remark 2.2, which implies that \(\eta(\zeta,f(w))=0=\eta(f(w),\zeta)\) for any root of unity \(\zeta\) and any function \(f(w)\) of \(w.\) _Proof of Lemma 6.3:_ We first assume that \(c\in\mathbb{R}_{>0}\setminus\{1\}\), and the case \(c=1\) follows from a continuity argument. For \(\alpha,\beta\in[-\pi,\pi)\) and \(w=ce^{i\theta}\), Lemma 6.2 yields that \[\int_{w(\alpha)}^{w(\beta)}d\arg\left(\frac{1+iw}{1-iw}\right)=\int_{\alpha+ \frac{\pi}{2}}^{\beta+\frac{\pi}{2}}\frac{2\left(c^{-1}-c\right)\cos\psi}{ \left(c^{-1}-c\right)^{2}+4\sin^{2}\psi}d\psi=-\int_{\cos\alpha}^{\cos\beta} \frac{2(c-c^{-1})}{(c-c^{-1})^{2}+4t^{2}}dt,\] where \(\psi=\theta+\frac{\pi}{2}\), \(w(\phi)=ce^{i\phi}\), and the last equality follows from the change of variables \(\sin\psi\mapsto t\). Further, the change of variables \(\frac{2t}{c-c^{-1}}\mapsto u\) gives that \[\int_{w(\alpha)}^{w(\beta)}d\arg\left(\frac{1+iw}{1-iw}\right)=-\int_{\frac{2 \cos\alpha}{c-c^{-1}}}^{\frac{2\cos\beta}{c-c^{-1}}}\frac{du}{1+u^{2}}=\tan^{ -1}\left(\frac{2\cos\alpha}{c-c^{-1}}\right)-\tan^{-1}\left(\frac{2\cos\beta}{ c-c^{-1}}\right),\] which proves the lemma. In order to apply Lemma 6.1 and Lemma 6.2 to (47), it is necessary to explicitly express \(\gamma_{\leq d}\) and \(\gamma_{>d}\). Since \(\gamma_{>d}\) and \(\gamma_{\leq d}\) are disjoint, and \[\left\{w:\left|w\right|=c\right\}=\gamma_{>d}\cup\gamma_{\leq d},\] it suffices to understand \(\gamma_{>d}\). Recall that \(z(w)=-\frac{1+iw}{i+w}\). Then, \(\left|z(w)\right|>d\Leftrightarrow\left|\frac{1+iw}{i+w}\right|>d\Leftrightarrow \left|1+iw\right|>d\left|i+w\right|.\) Since both sides of the inequality are non-negative, we can square them and get \[\left|1+iw\right|>d\left|i+w\right| \Leftrightarrow\left|1+iw\right|^{2}>d^{2}\left|i+w\right|^{2}\] \[\Leftrightarrow 2(1+d^{2})\operatorname{Re}(iw)>(d^{2}-1)(1+\left|w \right|^{2})\] \[\Leftrightarrow\operatorname{Re}(ie^{i\theta})>\frac{d^{2}-1}{1+ d^{2}}\cdot\frac{1+c^{2}}{2c},\] where the last inequality follows from the fact that \(w=ce^{i\theta}\) for \(\theta\in[-\pi,\pi)\). In other words, the condition \(\left|z(w)\right|>d\) is equivalent to the condition \[-1\leq-\operatorname{Re}(ie^{i\theta})=\sin\theta<\frac{1-d^{2}}{1+d^{2}}\cdot \frac{1+c^{2}}{2c}, \tag{49}\] with \(\theta\in[-\pi,\pi)\). For simplicity we denote \[\mathcal{A}_{c,d}:=\frac{1-d^{2}}{1+d^{2}}\cdot\frac{1+c^{2}}{2c}.\] As \(\left|\sin\theta\right|\leq 1\), there are three cases to consider. **Case 1**: If \(\mathcal{A}_{c,d}\leq-1\), then \(\gamma_{>d}=\varnothing\quad\text{and}\quad\gamma_{\leq d}=\{w:\left|w\right|=c\}\). **Case 2**: If \(\mathcal{A}_{c,d}\geq 1\), then \(\gamma_{>d}=\{w:\left|w\right|=c\}\quad\text{and}\quad\gamma_{\leq d}=\varnothing\). 
**Case 3**: If \(\left|\mathcal{A}_{c,d}\right|<1\), then, for \(w=ce^{i\theta}\) with \(\theta\in[-\pi,\pi)\), \(\gamma_{>d}=\left\{w:\left|w\right|=c,-1\leq\sin\theta<\mathcal{A}_{c,d}\right\}\) and \(\gamma_{\leq d}=\left\{w:\left|w\right|=c,\mathcal{A}_{c,d}\leq\sin\theta\leq 1 \right\}.\) Now we have everything needed to evaluate (47), namely \[\operatorname{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)=\frac{1}{2\pi i}\int_{ \gamma_{>d}}\log\left|\frac{1+iw}{i+w}\right|\frac{dw}{w}+\frac{1}{2\pi i} \int_{\gamma_{\leq d}}\log d\frac{dw}{w}.\] **Case 1**: Since \(\gamma_{>d}=\varnothing\), the integrals in (47) can be evaluated individually to obtain \[\frac{1}{2\pi i}\int_{\gamma_{>d}}\log\left|\frac{1+iw}{i+w}\right|\frac{dw}{w}= 0,\quad\text{and}\quad\frac{1}{2\pi i}\int_{\gamma_{\leq d}}\log d\frac{dw}{w}= \frac{1}{2\pi i}\int_{|w|=c}\log d\frac{dw}{w}=\log d.\] Therefore, in this case we have \[\text{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)=\log d. \tag{50}\] **Case 2**: Since \(\gamma_{\leq d}=\varnothing\), the second integral in (47) contributes nothing. However the first integral can be decomposed into simpler integrals, i.e. \[\frac{1}{2\pi i}\int_{\gamma_{>d}}\log\left|\frac{1+iw}{i+w} \right|\frac{dw}{w} =\frac{1}{2\pi i}\int_{|w|=c}\log\left|\frac{1+iw}{i+w}\right| \frac{dw}{w}\] \[=\frac{1}{2\pi i}\int_{|w|=c}\log\left|1+iw\right|\frac{dw}{w}- \frac{1}{2\pi i}\int_{|w|=c}\log\left|i+w\right|\frac{dw}{w}\quad=0.\] Therefore, when \(\gamma_{>d}=\{|w|=c\}\) (and \(\gamma_{\leq d}=\varnothing\)), then \[\text{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)=0. \tag{51}\] **Case 3**: Since \[|\mathcal{A}_{c,d}|<1,\] we have two sub-cases to consider. **3a**: When \[-1<\mathcal{A}_{c,d}<0,\] then \(\sin^{-1}\left(\mathcal{A}_{c,d}\right)\in[-\pi,0).\) For simplicity, we denote \(\tau=\sin^{-1}\left(\mathcal{A}_{c,d}\right)\) such that \(\tau\in\left(-\frac{\pi}{2},0\right).\) Note that \(\sin\tau=\sin(-\pi-\tau).\) Then the boundary values of \(\gamma_{>d}\) are \[\partial\gamma_{>d}=\{w(-\pi-\tau),w(\tau)\}=\{ce^{i(-\pi-\tau)},ce^{i\tau}\} =\{-ce^{-i\tau},ce^{i\tau}\},\] where \(w(\theta)=ce^{i\theta}.\) The integration path \(\gamma_{\leq d}\) is then the union of the arcs joining \(w(-\pi)\) and \(w(-\pi-\tau)\), and joining \(w(\tau)\) and \(w(\pi).\) Therefore \[\partial\gamma_{\leq d}=\{w(-\pi),w(-\pi-\tau),w(\tau),w(\pi)\}\] are the boundary values of \(\gamma_{\leq d}.\) All the paths are assumed to be traversed counter-clockwise. Now, we have all the tools to calculate (47) in this case. 
Combining Lemma 6.1 and (48) with the above discussion we obtain \[\text{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)= \frac{1}{2\pi i}\int_{\gamma_{>d}}\log\left|\frac{1+iw}{i+w} \right|\frac{dw}{w}+\frac{1}{2\pi i}\int_{\gamma_{\leq d}}\log d\frac{dw}{w}\] \[= -\frac{1}{2\pi}\int_{\gamma_{>d}}\eta(w,z(w))+\frac{1}{2\pi}\int _{\gamma_{>d}}\eta(c,z(w))+\frac{1}{2\pi i}\int_{\gamma_{\leq d}}\log d\frac{ dw}{w}\] \[= -\frac{1}{2\pi}\int_{\gamma_{>d}}(dD(-iw)-dD(iw)) \tag{52}\] \[+\frac{\log c}{2\pi}\int_{w(-\pi-\tau)}^{w(\tau)}d\arg\left( \frac{1+iw}{1-iw}\right)+\frac{\log d}{2\pi}\left(\int_{-\pi}^{-\pi-\tau}+ \int_{\tau}^{\pi}\right)d\theta,\] where the simplification of the last integral follows from the above discussion regarding \(\partial\gamma_{\leq d}\) and substituting \(w\) with \(w(\theta)=ce^{i\theta}.\) The first integral in (52) can be evaluated using Stokes' theorem as \[\frac{1}{2\pi}\int_{\gamma_{>d}}(dD(-iw)-dD(iw))= \frac{1}{2\pi}\left[D(-iw)-D(iw)\right]_{\partial\gamma_{>d}}\] \[= \frac{1}{2\pi}\left[D(-iw)-D(iw)\right]_{w(-\pi-\tau)}^{w(\tau)} \tag{53}\] \[= -\frac{1}{\pi}\left(D(ice^{-i\tau})+D(ice^{i\tau})\right),\] where the last equality follows from the property (7) of the Bloch-Wigner dilogarithm. Substituting \(\alpha=-\pi-\tau\) and \(\beta=\tau\) in the statement of Lemma 6.3, we evaluate the second integral in (52): \[\frac{\log c}{2\pi}\int_{w(-\pi-\tau)}^{w(\tau)}d\arg\left(\frac{1+iw}{1-iw} \right)=-\frac{\log c}{\pi}\tan^{-1}\left(\frac{2\cos\tau}{c-c^{-1}}\right). \tag{54}\] The remaining integral's contribution is \[\frac{\log d}{2\pi}\left(\int_{-\pi}^{-\pi-\tau}+\int_{\tau}^{\pi}\right)d \theta=\frac{\log d}{2\pi}\left[-\pi-\tau+\pi+\pi-\tau\right]=\frac{\pi-2\tau }{2\pi}\log d. \tag{55}\] Then (53), (54) and (55) together yield that \[\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)=\frac{1}{\pi}\left[D(ice^{-i \tau})+D(ice^{i\tau})-(\log c)\tan^{-1}\left(\frac{2\cos\tau}{c-c^{-1}}\right) +\left(\frac{\pi}{2}-\tau\right)\log d\right].\] **3b**: It remains to evaluate the case when \[0<\mathcal{A}_{c,d}<1.\] This condition is equivalent to \[\sin^{-1}\left(\mathcal{A}_{c,d}\right)\in(0,\pi).\] Again, for simplicity, we denote \(\kappa=\sin^{-1}\left(\mathcal{A}_{c,d}\right)\) such that \(\kappa\in\left(0,\frac{\pi}{2}\right).\) Since \(\sin\kappa=\sin(\pi-\kappa)\) and \(\sin\pi=0,\) the boundary values in this case are \[\partial\gamma_{>d}=\{w(-\pi),w(\kappa),w(\pi-\kappa),w(\pi)\},\] and \[\partial\gamma_{\leq d}=\{w(\kappa),w(\pi-\kappa)\}.\] The arcs are considered to be oriented in a counter-clockwise direction. From a similar argument as before, we deduce that \[\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)=\frac{1}{\pi}\left[D(ice^{-i \kappa})+D(ice^{i\kappa})-(\log c)\tan^{-1}\left(\frac{2\cos\kappa}{c-c^{-1}} \right)+\left(\frac{\pi}{2}-\kappa\right)\log d\right].\] We combine the results obtained in **3a** and **3b** to obtain \[\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)=\frac{1}{\pi}\left[D(ice^{-i \mu})+D(ice^{i\mu})-(\log c)\tan^{-1}\left(\frac{2\cos\mu}{c-c^{-1}}\right)+ \left(\frac{\pi}{2}-\mu\right)\log d\right], \tag{56}\] where \(\mu=\sin^{-1}\left(\mathcal{A}_{c,d}\right)\in\left(-\frac{\pi}{2},\frac{\pi}{ 2}\right).\) This concludes the evaluation of \(\mathrm{m}_{c,d}\left(z+\frac{1+iw}{i+w}\right)\) for the different cases. Recall that \(R(w,z)=1+iw+iz+wz.\) In order to evaluate \(\mathrm{m}_{c,d}(R(w,z)),\) it suffices to collect the equalities in (44), (50), (51) and (56). 
We deduce \[\mathrm{m}_{c,d}(R(w,z))=\max\{\log c,0\}+\left\{\begin{array}{ll}\max\{\log d,0\}&\text{if}\,\left|\mathcal{A}_{c,d}\right|\geq 1,\\ \frac{1}{\pi}\left[D(ice^{-i\mu})+D(ice^{i\mu})\right.&,\\ \left.-(\log c)\tan^{-1}\left(\frac{2\cos\mu}{c-c^{-1}}\right)+\left(\frac{\pi }{2}-\mu\right)\log d\right]&\text{if}\,\left|\mathcal{A}_{c,d}\right|<1,\end{array}\right.\] where \(\mu=\sin^{-1}\left(\mathcal{A}_{c,d}\right)\in\left(-\frac{\pi}{2},\frac{\pi }{2}\right).\) Notice that \(\mathrm{m}_{c,d}(R(w,z))=\frac{1}{2}\left(\mathrm{m}_{a,b}(Q_{4})+\log b\right),\) where \(a=\frac{c}{d}\) and \(b=cd.\) This implies that when \(\left|\mathcal{A}_{c,d}\right|\geq 1,\) we have \[\mathrm{m}_{a,b}(Q_{4})=\max\{\log c,-\log c\}+\max\{\log d,-\log d\}. \tag{57}\] On the other hand, when \(\left|\mathcal{A}_{c,d}\right|<1,\) \[\mathrm{m}_{a,b}(Q_{4})= \max\{\log c,-\log c\}-\log d+\frac{2}{\pi}\left[D(ice^{-i\mu})+D (ice^{i\mu})\right]\] \[-\frac{2\log c}{\pi}\tan^{-1}\left(\frac{2\cos\mu}{c-c^{-1}} \right)+\left(1-\frac{2\mu}{\pi}\right)\log d\] \[= \frac{2}{\pi}\left[D(ice^{-i\mu})+D(ice^{i\mu})-\mu\log d\right]+ \frac{2\log c}{\pi}\left[\max\left\{\frac{\pi}{2},-\frac{\pi}{2}\right\}-\tan^ {-1}\left(\frac{2\cos\mu}{c-c^{-1}}\right)\right]\] \[= \frac{2}{\pi}\left[D(ice^{-i\mu})+D(ice^{i\mu})-\mu\log d\right]+ \frac{2\log c}{\pi}\tan^{-1}\left(\frac{c-c^{-1}}{2\cos\mu}\right),\] where \(\mu=\sin^{-1}\left(\mathcal{A}_{c,d}\right)\in\left(-\frac{\pi}{2},\frac{\pi }{2}\right),\) and the simplification of the last term follows from the fact that \[\text{if}\,\,x>0,\quad\pi/2-\tan^{-1}(x)=\tan^{-1}\left(x^{-1}\right),\] and \[\text{if}\,\,x<0,\quad-\pi/2-\tan^{-1}(x)=\tan^{-1}\left(x^{-1}\right).\] Therefore, (57) along with the above discussion implies that \[\mathrm{m}_{a,b}(Q_{4})=\left\{\begin{array}{ll}\max\{\log c,-\log c\}+\max \{\log d,-\log d\}&\text{if}\,\,\left|\mathcal{A}_{c,d}\right|\geq 1,\\ \frac{2}{\pi}\left[D(ice^{-i\mu})+D(ice^{i\mu})-\mu\log d+(\log c)\tan^{-1} \left(\frac{c-c^{-1}}{2\cos\mu}\right)\right]&\text{if}\,\,\left|\mathcal{A}_ {c,d}\right|<1,\end{array}\right.\] where \(\mu=\sin^{-1}\left(\mathcal{A}_{c,d}\right)\in\left(-\frac{\pi}{2},\frac{\pi }{2}\right),\) which completes the proof of Theorem 1.4. ## 7. Extension of Theorem 1.2 to several variables We end this article by extending Theorem 1.2 to \(n\)-variable Laurent polynomials for \(n\geq 3.\) Let \(P_{k}(x_{1},\ldots,x_{n})\in\mathbb{C}[x_{1}^{\pm},\ldots,x_{n}^{\pm}]\) be a Laurent polynomial in \(n\)-variable such that \[P_{k}:=P_{k}(x_{1},\ldots,x_{n})=k-P(x_{1},\ldots,x_{n}),\] where \(P\) has no constant term. Let \(\mathbb{T}_{\mathfrak{a}}^{n}\) be the integration torus in the definition of \(\mathrm{m}_{\mathfrak{a}}(P_{k}),\) where \(\mathfrak{a}=(a_{1},\ldots,a_{n}).\) We consider the image \(\mathcal{K}_{\mathfrak{a}}\) of the map from \(\mathbb{T}_{\mathfrak{a}}^{n}\) to \(\mathbb{C}\) defined by \((x_{1},\ldots,x_{n})\mapsto P(x_{1},\ldots,x_{n}).\) A similar argument as in the \(2\)-variable case shows that \(\mathcal{K}_{\mathfrak{a}}\) is compact in \(\mathbb{C}.\) Therefore, \(\max_{k\in\mathcal{K}_{\mathfrak{a}}}|k|\) exists and is finite. 
Let \(K_{\mathfrak{a}}\) denote this maximum, and let \(K_{\mathfrak{a},\mathfrak{1}}=\max\{K_{\mathfrak{a}},K_{\mathfrak{1}}\},\) where \(\mathfrak{1}=(1,\ldots,1).\) Then, generalizing the steps in [7] and [13], we define \[\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k})=\log k-\sum_{m\geq 0}\frac{a_{m, \mathfrak{a}}}{m}k^{-m},\quad|k|>K_{\mathfrak{a},\mathfrak{1}},k\notin(-\infty,0], \tag{58}\] where \(\log\) denotes the principal branch of the logarithm, and \(a_{m,\mathfrak{a}}\) is defined as follows: \[a_{m,\mathfrak{a}}=\left[\frac{1}{(2\pi i)^{n}}\int_{\mathbb{T}_{\mathfrak{a}} ^{2}}\frac{dx_{1}\cdots dx_{n}}{x_{1}\cdots x_{n}(1-r^{-1}P(x_{1},\ldots,x_{n} ))}\right]_{m}=\frac{1}{(2\pi i)^{n}}\int_{\mathbb{T}_{\mathfrak{a}}^{2}}P(x_{ 1},\ldots,x_{n})^{n}\frac{dx_{1}}{x_{1}}\cdots\frac{dx_{n}}{x_{n}},\] where \([T(s)]_{m}\) denotes the coefficient of \(s^{-m}\) in the series \(T(s).\) It is immediate to see that \(\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k})\) is holomorphic in the region defined by the intersection of \(\{|k|>K_{\mathfrak{a},\mathfrak{1}}\}\) and \(\mathbb{C}\setminus(-\infty,0]\). Also, \[\operatorname{Re}(\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k}))=\mathfrak{m}_{ \mathfrak{a}}(P_{k}),\quad|k|>K_{\mathfrak{a},\mathfrak{1}}.\] A similar argument as in the 2-variable case shows that \(a_{m,\mathfrak{a}}=a_{m,\mathfrak{1}}\) for all \(m\geq 0,\) and therefore, we have the following equality: \[\frac{d\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k})}{dk}=\frac{d\tilde{ \mathfrak{m}}_{\mathfrak{1}}(P_{k})}{dk},\qquad\text{for }|k|>K_{\mathfrak{a},\mathfrak{1}}. \tag{59}\] In addition, similarly as in the 2-variable case, the integral representation \[\frac{d\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k})}{dk}=\frac{1}{(2\pi i)^{n}} \int_{\mathbb{T}_{\mathfrak{a}}^{n}}\frac{1}{k-P(x_{1},\ldots,x_{n})}\frac{dx _{1}}{x_{n}}\cdots\frac{dx_{n}}{x_{n}}\] is in fact holomorphic in \(\{|k|>K_{\mathfrak{a},\mathfrak{1}}\}.\) Now integrating both sides of (59) with respect to \(k\) yields that, for \(\{|k|>K_{\mathfrak{a},\mathfrak{1}}\},\) \[\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k})=\tilde{\mathfrak{m}}_{\mathfrak{1} }(P_{k})+\tilde{g}(\mathfrak{a}),\] where \(\tilde{g}:\mathbb{R}_{>0}^{n}\rightarrow\mathbb{C}.\) Taking real parts on both sides, we get \[\mathfrak{m}_{\mathfrak{a}}(P_{k})=\mathfrak{m}_{\mathfrak{1}}(P_{k})+g( \mathfrak{a}), \tag{60}\] for \(\{|k|>K_{\mathfrak{a},\mathfrak{1}}\},\) where \(g=\operatorname{Re}\left(\tilde{g}\right).\) Let \(U_{\mathfrak{a}}\) be the unbounded open connected component of \(\mathbb{C}\setminus\mathcal{K}_{\mathfrak{a}}\) which contains the region \(\{|k|>K_{\mathfrak{a},\mathfrak{1}}\}.\) As both sides of (60) are harmonic on \(O_{\mathfrak{a}}:=U_{\mathfrak{a}}\cap U_{\mathfrak{1}},\) the equality can be extended to \(O_{\mathfrak{a}}.\) In other words, for \(k\in O_{\mathfrak{a}},\) we have \(\mathfrak{m}_{\mathfrak{a}}(P_{k})=\mathfrak{m}_{\mathfrak{1}}(P_{k})+g( \mathfrak{a}).\) It only remains to express \(g\) explicitly in terms of \(\mathfrak{a}.\) Consider the functions \[a_{j}\frac{\partial\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k})}{\partial a_{j} }=\frac{1}{(2\pi i)^{n}}\int_{|x_{1}|=a_{1},\ldots,|\tilde{x_{j}}|=a_{j}, \ldots,|x_{n}|=a_{n}}\left(\int_{|x_{j}|=a_{j}}\frac{\partial_{x_{j}}P_{k}}{P_{k }}dx_{j}\right)\frac{dx_{1}}{x_{1}}\cdots\widehat{\frac{dx_{j}}{x_{j}}}\cdots \frac{dx_{n}}{x_{n}}\] for all \(j=1,\ldots,n.\) Here \(\widehat{\ }\) denotes that the term is omitted from the expression. 
Now, again following the steps for 2-variable case we conclude that \(a_{j}\frac{\partial\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k})}{\partial a_{j}}\) is constant depending only on \(a_{j}.\) More precisely, we find that \[a_{j}\frac{\partial\tilde{\mathfrak{m}}_{\mathfrak{a}}(P_{k})}{\partial a_{j}}= \nu_{\mathfrak{a},k}^{j}, \tag{61}\] where \(\nu^{j}_{\mathfrak{a},k}\) is the difference between the number of zeroes (counting multiplicities) of \(P_{k}(a_{1},\ldots,a_{j-1},x_{j},a_{j+1},\ldots,x_{n})\) inside the circle \(\mathbb{T}^{1}_{a_{j}},\) denoted by \(Z^{j}_{\mathfrak{a},k},\) and the order of the pole of \(P_{k}(a_{1},\ldots,a_{j-1},x_{j},a_{j+1},\ldots,x_{n})\) at \(x_{j}=0,\) denoted by \(P^{j}_{\mathfrak{a},k}.\) In other words, \[\nu^{j}_{\mathfrak{a},k}=Z^{j}_{\mathfrak{a},k}-P^{j}_{\mathfrak{a},k}.\] We also note that \(\nu^{j}_{\mathfrak{a},k}\) is independent of \(k\) when \(k\in O_{\mathfrak{a}},\) and only depends on \(\mathfrak{a}\) and the polynomial \(P=k-P_{k}.\) Integrating (61) with respect to \(a_{j}\) for \(j=1,\ldots,n,\) and then substituting those values in (60) lead to the following theorem. **Theorem 7.1**.: _Let \(\mathfrak{a}=(a_{1},\ldots,a_{n})\in(\mathbb{R}_{>0})^{n}.\) Let \(P_{k}(x_{1},\ldots,x_{n})=k-P(x_{1},\ldots,x_{n})\in\mathbb{C}[x_{1}^{\pm}, \ldots,x_{n}^{\pm}],\) such that \(P\) has no constant term. Denote \(U_{\mathfrak{a}}\) the unbounded open connected component of \(\mathbb{C}\setminus\mathcal{K}_{\mathfrak{a}}\) containing some neighbourhood of \(k=\infty.\) Then, for \(k\in U_{\mathfrak{a}}\cap U_{1},\)_ \[\mathrm{m}_{\mathfrak{a}}(P_{k})=\mathrm{m}(P_{k})+\sum_{j=1}^{n}\nu^{j}_{ \mathfrak{a},k}\log a_{j},\] _where \(\nu^{j}_{\mathfrak{a},k}\) is defined as above, and \(\mathrm{m}_{\mathfrak{1}}(P_{k})=\mathrm{m}(P_{k}).\) Moreover, for \(k\in U_{\mathfrak{a}}\cap U_{\mathfrak{1}}\) and \(j=1,\ldots,n,\)\(\nu^{j}_{\mathfrak{a},k}\) only depends on \(\mathfrak{a}.\)_ Next, after multiplying \(P_{k}\) with a suitable power of \(x_{j},\) we can factorise \(P_{k}\) in linear factors with coefficients in \(\overline{\mathbb{C}(x_{1},\ldots,\widehat{x_{j}},\ldots,x_{n})}\) as \[P_{k}(x_{1},\ldots,x_{n})=x_{j}^{-v_{j}}P^{j}_{F,k}(x_{1},\ldots,\widehat{x_{j }},\ldots,x_{n})\prod_{l=1}^{d_{n}}\left(x_{j}-X_{l,k,j}\left(x_{1},\ldots, \widehat{x_{j}},\ldots,x_{n}\right)\right),\] where \(d_{j}\) is the degree of \(P_{k}\) as a polynomial in \(x_{j},\)\(X_{l,k,j}\) are algebraic functions of \((x_{1},\ldots,\widehat{x_{j}},\ldots,x_{n})\) for \(l=1,\ldots,d_{n},\)\(P^{j}_{F,k}\) is the leading coefficient with respect to the variable \(x_{j},\) and \(v_{j}\) is the largest power of \(x_{j}^{-1}\) in \(P_{k}.\) Let \(P^{j}_{f,k}(x_{1},\ldots,\widehat{x_{j}},\ldots,x_{n})\) denote the "constant" coefficient with respect to the variable \(x_{j}.\) Then \[P^{j}_{F,k}(x_{1},\ldots,\widehat{x_{j}},\ldots,x_{n})\prod_{j=1}^{d_{n}}X_{l,k,j}(x_{1},\ldots,\widehat{x_{j}},\ldots,x_{n})=P^{j}_{f,k}(x_{1},\ldots, \widehat{x_{j}},\ldots,x_{n}).\] For \((u_{1},\ldots,\widehat{u_{j}},\ldots,u_{n})\in\mathbb{T}^{n-1}_{a_{1},\ldots, \widehat{a},\ldots,a_{n}},\) let \(\varrho^{j}_{\mathfrak{a},k}\left(u_{1},\ldots,\widehat{u_{j}},\ldots,u_{n}\right)\) be the number of zeroes (counting multiplicities) of \(P_{k}(u_{1},\ldots,u_{j-1},x_{j},u_{j+1},\ldots,u_{n})\) inside the circle \(\mathbb{T}^{1}_{a_{j}}.\) Then, from the above discussion, we have \(P^{j}_{\mathfrak{a},k}=v_{j},\) and \[\varrho^{j}_{\mathfrak{a},k}\left(a_{1},\ldots,\widehat{a_{j}},\ldots,a_{n} 
\right)=Z^{j}_{\mathfrak{a},k}=\nu^{j}_{\mathfrak{a},k}+P^{j}_{\mathfrak{a},k }=\nu^{j}_{\mathfrak{a},k}+v_{j}.\] An analogous argument as in the proof of Proposition 3.2 yields the following proposition. **Proposition 7.2**.: _Let \(k\notin\mathcal{K}_{\mathfrak{a}}.\) Then \(\varrho^{j}_{\mathfrak{a},k}(x_{1},\ldots,\widehat{x_{j}},\ldots,x_{n})\) is constant for all \((x_{1},\ldots,\widehat{x_{j}},\ldots,x_{n})\in\mathbb{T}^{n-1}_{a_{1},\ldots, \widehat{a_{j}},\ldots,a_{n}}.\)_ We omit the proof of the proposition here since it is an immediate extension of Proposition 3.2, which follows from an induction argument on \(n\geq 2.\) Then Proposition 7.2, along with a similar argument as in the proof of Theorem 1.3, establishes the following extension of Theorem 1.3 for several variable case. **Theorem 7.3**.: _Let \(\mathfrak{a}=(a_{1},\ldots,a_{n})\in\mathbb{R}_{>0}^{n}.\) Let \(k_{0}\in\mathbb{C}\setminus\mathcal{K}_{\mathfrak{a}}\) such that \(k_{0}\) belongs to one of the bounded open connected components of \(\mathbb{C}\setminus\mathcal{K}_{\mathfrak{a}}.\) We denote by \(V_{\mathfrak{a},k_{0}}\) the bounded open connected component containing \(k_{0}.\)_ 1. _For_ \(j=1,\ldots,n,\) _if all the roots of_ \(P_{k_{0}}(a_{1},\ldots,a_{j-1},x_{j},a_{j+1},\ldots,x_{n})\) _lie entirely inside the circle_ \(\mathbb{T}_{a_{j}}^{1},\) _then, for all_ \(k\in V_{\mathfrak{a},k_{0}},\)__ \[\mathrm{m}_{\mathfrak{a}}(P_{k})=\nu_{\mathfrak{a},k}^{j}\log a_{j}+\mathrm{m} _{a_{1},\ldots,\widehat{a_{j}},\ldots,a_{n}}\left(P_{F,k}^{j}\right).\] 2. _For_ \(j=1,\ldots,n,\) _if all the roots of_ \(P_{k_{0}}(a_{1},\ldots,a_{j-1},x_{j},a_{j+1},\ldots,x_{n})\) _lie entirely outside the circle_ \(\mathbb{T}_{a_{j}}^{1},\) _then, for all_ \(k\in V_{\mathfrak{a},k_{0}},\)__ \[\mathrm{m}_{\mathfrak{a}}(P_{k})=\nu_{\mathfrak{a},k}^{j}\log a_{j}+\mathrm{m} _{a_{1},\ldots,\widehat{a_{j}},\ldots,a_{n}}\left(P_{f,k}^{j}\right).\] ## 8. Conclusion There are several directions for further exploration. The most immediate question one can ask is how to evaluate \(\mathrm{m}_{a,b}(Q_{r})\) when \(r\in\mathcal{R}_{a,b}.\) A primary observation in this case is that the integration path is not necessarily closed (and in most cases it is not). This turns out to be a challenging problem since the integration path in this case cannot be easily identified as a cycle in the homology group. We face a similar obstacle while evaluating the Mahler measure of \(Q_{r}(x,y)\) on the bounded connected components when the number of roots of \(Q_{r}(a,y)\) (counting multiplicity) (or \(Q_{r}(x,b)\)) inside \(\mathbb{T}_{b}^{1}\) (or \(\mathbb{T}_{a}^{1}\)) is strictly less than the degree of the polynomials. In this situation it is frequently required to integrate the algebraic functions coming from the factorisation of \(Q_{r}(x,y)\) (when considered as a polynomial in either \(x\) or \(y\)), on paths which are not closed. These similar challenges also extend to the \(n\)-variable cases when \(n\geq 3.\) A different direction would be to consider the family of rational polynomials \[P_{k}(x_{1},\ldots,x_{n})=k-\frac{P(x_{1},\ldots,x_{n})}{Q(x_{1},\ldots,x_{n} )}\in\mathbb{C}(x_{1},\ldots,x_{n}),\qquad\text{for $k\in\mathbb{C}$}.\] Our method of proof for Theorems 1.2 and 1.3 extends to this type of rational polynomials when \(Q(x_{1},\ldots,x_{n})\) is a monomial, which essentially recovers Theorems 7.1 and 7.3. 
The expression of \(\nu_{\mathfrak{a},k}^{j}\) in (61) appears in the work of Forsberg, Passare, and Tsikh [23], where it is denoted as the _order_ of an element in the complement of the _Amoeba_ associated to the respected polynomial. Our theorems also re-establish certain properties of the _Ronkin function_ associated to amoebas mentioned in [23]. Therefore, it would be also natural to explore the generalized Mahler measure in terms of the Ronkin function associated to amoebas in more depth. ## Appendix: Explicit derivations of the region \(\mathcal{R}_{a,b}\) In this Appendix, we include the proofs of Lemmas 5.4 and 5.5 for completion. Proof of Lemma 5.4.: We first recall that, for \(r\) given by (32), \[\max_{r\in\mathcal{R}_{a,b}}\left|\,\mathrm{Im}(r)\right|=\left|a-a^{-1} \right|+\left|b-b^{-1}\right|. \tag{62}\] Using property (**B**) from Section 5 regarding the symmetric nature of the compact set \(\mathcal{R}_{a,b},\) we can focus on proving the statement when \(\alpha,\beta\in[0,\pi),\) and the other cases will follow from similar arguments. This particular choice implies \(\sin\alpha,\sin\beta>0.\) Also note that \(\mathcal{R}_{a,b}\) is invariant under the change of variables \(a\mapsto a^{-1}\) and \(b\mapsto b^{-1}.\) Therefore, we can further restrict ourselves to \(a,b>1.\) The above assumptions allow us to restate (62) as \[\max_{r\in\mathcal{R}_{a,b}}|\operatorname{Im}(r)|=a-a^{-1}+b-b^{-1}=2\sinh x+2 \sinh y,\] where \(a=e^{x},\)\(b=e^{y},\)\(a-a^{-1}=2\sinh x\) and \(b-b^{-1}=2\sinh y.\) Let \(\theta\in[0,\pi)\) such that \[\left[\left(a-a^{-1}\right)\sin\alpha+\left(b-b^{-1}\right)\sin \beta\right]=(a-a^{-1}+b-b^{-1})\sin\theta,\] \[\iff \sinh x\sin\alpha+\sinh y\sin\beta=(\sinh x+\sinh y)\sin\theta.\] Squaring both sides and then collecting the coefficients of \(\sinh^{2}x,\sinh^{2}y\) and \(\sinh x\sinh y,\) we find that \[(\sin^{2}\alpha-\sin^{2}\theta)\sinh^{2}x+(\sinh^{2}\beta-\sinh^{2}\theta) \sinh^{2}y-2\sinh x\sinh y(\sin\alpha\sin\beta-\sin^{2}\theta)=0.\] With the help of trigonometric identities and relations among hyperbolic functions, the above equality becomes \[(\cosh x+\cosh y)^{2}\cos^{2}\theta-(\cosh x\cos\alpha+\cosh y\cos \beta)^{2}\] \[= \sin^{2}\alpha+\sin^{2}\beta-2\sin^{2}\theta+2\cosh(x-y)\left( \sin\alpha\sin\beta-\sin^{2}\theta\right)\] \[+4\cosh x\cosh y\sin^{2}\left(\frac{\alpha+\beta}{2}\right). \tag{63}\] Notice that, in order to prove the lemma, we need to find certain conditions involving \(\sinh x\) and \(\sinh y\) such that the left hand side (LHS) of (63) is at least \(0.\) Indeed, if LHS is \(\geq 0,\) then \[r=(\operatorname{Re}(r),\operatorname{Im}(r))=(2\cosh x\cos\alpha+2\cosh y \cos\beta,2\sinh x\sin\alpha+2\sinh y\sin\beta)\] satisfies (37), which is essentially what Lemma 5.4 conveys. 
Since \(\alpha,\beta\in[0,\pi),\) the concavity of the sine function in \([0,\pi)\) implies that \[4\sin^{2}\left(\frac{\alpha+\beta}{2}\right)\geq\left(\sin\alpha+\sin\beta \right)^{2}.\] Then, the definition of \(\sin\theta\) (in terms of \(\sin\alpha,\sin\beta,\sinh x\) and \(\sinh y\)), the above discussion, and a further simplification of the right hand side (RHS) of (63) imply that RHS \[\geq \sin^{2}\alpha\left[1+\cosh x\cosh y-\frac{\sinh^{2}x}{\sinh^{2} \left(\frac{x+y}{2}\right)}\right]+\sin^{2}\beta\left[1+\cosh x\cosh y-\frac{ \sinh^{2}y}{\sinh^{2}\left(\frac{x+y}{2}\right)}\right]\] \[+2\sin\alpha\sin\beta\left[\cosh\left(\frac{x-y}{2}\right)+\cosh x \cosh y-\frac{\sinh x\sinh y}{\sinh^{2}\left(\frac{x+y}{2}\right)}\right].\] Therefore, when the coefficients of \(\sin^{2}\alpha,\sin^{2}\beta\) and \(\sin\alpha\sin\beta\) are all \(\geq 0,\) both RHS and LHS are at least \(0.\) Notice that \[\left[1+\cosh x\cosh y-\frac{\max\{\sinh^{2}x,\sinh^{2}y\}}{\sinh^{2}\left( \frac{x+y}{2}\right)}\right]\geq 0 \tag{64}\] implies that the coefficient of \(\sin\alpha\sin\beta\) is also \(\geq 0.\) Therefore, (64) is a sufficient condition to prove the statement, as it is essentially a restatement of the condition in the statement of the lemma when \(x\neq y,\) and this concludes the proof. Since the bounded component of \(\mathbb{C}\setminus\mathcal{R}_{a,b}\) only exists when \(a\neq b,\) we can assume that \(a>b>0\) to prove Lemma 5.5. The symmetric nature of \(\mathcal{R}_{a,b}\) again narrows our proof down to the case \(a>b>1.\) Since \[a+a^{-1}=2\cosh x,\ a-a^{-1}=2\sinh x,\ b+b^{-1}=2\cosh y,\ \text{and}\ b-b^{-1}=2 \sinh y,\] we only need to consider the case \(x>y>0.\) Proof of Lemma 5.5.: Recall that every \(r\in\mathcal{R}_{a,b}\) is of the form \[r=\left(a+a^{-1}\right)\cos\alpha-\left(b+b^{-1}\right)\cos\beta+i\left[\left( a-a^{-1}\right)\sin\alpha-\left(b-b^{-1}\right)\sin\beta\right],\] where \(\alpha,\beta\in[-\pi,\pi).\) The expression of \(r\in\mathcal{R}_{a,b}\) above implies that \[\min_{r\in\mathcal{R}_{a,b}}\left|r\right|=\left(a+a^{-1}\right)-\left(b+b^{ -1}\right).\] To prove the statement, we need to find conditions on \(a\) and \(b\) such that the inner boundary of \(\mathcal{R}_{a,b}\) is an ellipse with centre at the origin, whose major and minor axes have lengths \(2\left(a-a^{-1}\right)-2\left(b-b^{-1}\right)\) and \(2\left(a+a^{-1}\right)-2\left(b+b^{-1}\right),\) respectively. We also note that the major axis is the line joining \(i\left[\left(a-a^{-1}\right)-\left(b-b^{-1}\right)\right]\) and \(-i\left[\left(a-a^{-1}\right)-\left(b-b^{-1}\right)\right],\) and the minor axis is the line joining \(\left(a+a^{-1}\right)-\left(b+b^{-1}\right)\) and \(\left(b+b^{-1}\right)-\left(a+a^{-1}\right).\) Therefore, we need to investigate \(\operatorname{Im}(r)\) for \(r\in\mathcal{R}_{a,b},\) and find conditions on \(a\) and \(b\) such that \[\min_{r\in\mathcal{R}_{a,b}\cap i\mathbb{R}}\left|\operatorname{Im}(r)\right| =\left(a-a^{-1}\right)-\left(b-b^{-1}\right), \tag{65}\] and, for \(\phi\in[0,\pi),\) \[\left(a-a^{-1}\right)\sin\alpha-\left(b-b^{-1}\right)\sin\beta= \left[\left(a-a^{-1}\right)-\left(b-b^{-1}\right)\right]\sin\phi\] \[\Rightarrow \left|\left(a+a^{-1}\right)\cos\alpha-\left(b+b^{-1}\right)\cos \beta\right|\geq\left|\left[\left(a+a^{-1}\right)-\left(b+b^{-1}\right) \right]\cos\phi\right|. 
\tag{66}\] Note that, because of the symmetry of the region \(\mathcal{R}_{a,b},\) it suffices to consider the case when \(\alpha\in[0,\pi)\) and \(\beta\in[0,\pi).\) Since \(r\in\mathcal{R}_{a,b}\) implies that \(\bar{r},-\bar{r},-r\in\mathcal{R}_{a,b},\) we can restrict ourselves to the upper half plane \(\mathbb{H}=\{z\in\mathbb{C}:\operatorname{Im}(z)>0\}.\) Recall that \(r\in\mathcal{R}_{a,b}\) can also be expressed as the points on the ellipses \[E_{x,z_{\tau}}:\left|r-z_{\tau}-2\right|+\left|r-z_{\tau}+2\right|=2\left(a+a ^{-1}\right)=4\cosh x,\] where \(z_{\tau}=\left(b+b^{-1}\right)\cos\tau+i\left(b-b^{-1}\right)\sin\tau=2\cosh y \cos\tau+2i\sinh y\sin\tau,\) for each \(\tau\in[-\pi,\pi).\) Let \[f(t):=2\left[\tanh x\sqrt{\cosh^{2}x-\cosh^{2}y+t^{2}\cosh^{2}y}-t\sinh y\right]\] be a function of \(t\) from \([0,1]\) to \(\mathbb{R}_{>0}.\) Then, for \(t_{\tau}=\sin\tau\) with \(\tau\in[0,\pi),\)\(u_{t_{\tau}}=if(t_{\tau})\) denotes the intersection point of \(E_{x,z_{\tau}}\) and the imaginary axis in \(\mathbb{H}.\) Since \(x\neq y,\) we know that \(0\notin\mathcal{R}_{a,b},\) which implies that \(\min_{t\in[0,1]}\left|f(t)\right|>0.\) Now it remains to find conditions on \(x\) and \(y\) such that (65) holds, and, in order to do that, we need to investigate \(\min_{t\in[0,1]}\left|f(t)\right|\). Since \(f(t)\) is a smooth function of \(t,\) we solve the equation \(\frac{df(t)}{dt}=0\) to find that \(t_{0}=\tanh y\cosh x\) is a solution. Further, when \(0<t_{0}<1,\) we find that \(\left.\frac{d^{2}f(t)}{dt^{2}}\right|_{t=t_{0}}>0,\) which implies that \(f\) attains a local minimum at \(t=t_{0}.\) To conclude that \(f\) attains its global minimum in the interval \([0,1]\) at \(t_{0},\) we compare \(f(t_{0})\) with \(f(0)\) and \(f(1).\) After a simplification, we find that the inequalities \(f(0)>f(t_{0})\) and \(f(1)>f(t_{0})\) are equivalent to the inequalities \[\sinh^{2}x\cosh^{2}y>\frac{1}{2}\left(\cosh 2x-\cosh 2y\right)=\cosh^{2}x-\cosh^{ 2}y,\] and \[\sinh x+\sinh y<\cosh x\cosh y,\] respectively. The above inequalities in fact hold for all \(x,y\in\mathbb{R},\) and this shows that, in the interval \([0,1],\)\(f\) attains its global minimum at \(t=t_{0},\) when \(t_{0}<1.\) Further, the inequalities \(\frac{df(t)}{dt}>0\) and \(\frac{df(t)}{dt}<0\) reduce to the inequalities \(t>t_{0}\) and \(t<t_{0},\) respectively. Therefore, when the condition (38) \(t_{0}=\tanh y\cosh x>1\) is satisfied, i.e. \(0\leq t\leq 1<t_{0},\) the above discussion implies that \(f(t)\) is decreasing in \([0,1],\) i.e. \[\min_{t\in[0,1]}f(t)=f(1)=2\sinh x-2\sinh y=\left(a-a^{-1}\right)-\left(b-b^{-1 }\right),\] when \(\tanh y\cosh x>1.\) Note that \(f(1)\) is essentially the required minimum mentioned in (65). In other words, when (38) \(\tanh y\cosh x>1\) holds, the equality in (65) holds. It now suffices to find further conditions on \(x\) and \(y\) such that the inner boundary of \(\mathcal{R}_{a,b}\) takes the shape of the ellipse mentioned in the statement, i.e. (66) holds. In order to do that, we follow an argument similar to the one in the proof of Lemma 5.4.
Here we set \(\theta^{\prime}\in[0,\pi)\) such that \[\left[\left(a-a^{-1}\right)\sin\alpha-\left(b-b^{-1}\right)\sin \beta\right]=\left[\left(a-a^{-1}\right)-\left(b-b^{-1}\right)\right]\sin \theta^{\prime},\] \[\Leftrightarrow \sinh x\sin\alpha-\sinh y\sin\beta=\left(\sinh x-\sinh y\right) \sin\theta^{\prime},\] where \(\alpha,\beta\in[0,\pi).\) By a calculation similar to the one in the proof of Lemma 5.4, we find that the condition \[(39)\quad\cosh^{2}\left(\frac{x+y}{2}\right)\left[\cosh x\cosh y-1 \right]\geq\max\{\sinh^{2}x,\sinh^{2}y\},\] implies that \(\left|\cosh x\cos\alpha-\cosh y\cos\beta\right|\geq\left|(\cosh x-\cosh y) \cos\theta^{\prime}\right|.\) From the above discussions we then conclude that, when the conditions (38) and (39) hold, \(r\in\mathcal{R}_{a,b}\) satisfies (40), and this completes the proof.
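The minimisation argument above is straightforward to verify numerically. The following short sketch (Python/NumPy; the values \(x=1.5\) and \(y=1.2\) are illustrative choices for which condition (38) holds) confirms that \(f\) is decreasing on \([0,1]\), with minimum \(f(1)=2\sinh x-2\sinh y\):

```python
import numpy as np

# Illustrative values with a = e^x > b = e^y > 1, chosen so that condition (38) holds.
x, y = 1.5, 1.2

t = np.linspace(0.0, 1.0, 10_001)
f = 2 * (np.tanh(x) * np.sqrt(np.cosh(x)**2 - np.cosh(y)**2 + t**2 * np.cosh(y)**2)
         - t * np.sinh(y))

t0 = np.tanh(y) * np.cosh(x)                 # the critical point of f
print(t0 > 1)                                # condition (38) holds: True
print(np.all(np.diff(f) < 0))                # f is decreasing on [0, 1]: True
print(np.isclose(f[-1], 2*np.sinh(x) - 2*np.sinh(y)))  # the minimum is f(1): True
```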
2301.08530
Self-Organization Towards $1/f$ Noise in Deep Neural Networks
The presence of $1/f$ noise, also known as pink noise, is a well-established phenomenon in biological neural networks, and is thought to play an important role in information processing in the brain. In this study, we find that such $1/f$ noise is also present in deep neural networks trained on natural language, resembling that of their biological counterparts. Specifically, we trained Long Short-Term Memory (LSTM) networks on the `IMDb' AI benchmark dataset, then measured the neuron activations. The detrended fluctuation analysis (DFA) on the time series of the different neurons demonstrates clear $1/f$ patterns, which are absent in the time series of the inputs to the LSTM. Interestingly, when the neural network is at overcapacity, having more than enough neurons to achieve the learning task, the activation patterns deviate from $1/f$ noise and shift towards white noise. This is because many of the neurons are not effectively used, showing little fluctuation when fed with input data. We further examine the exponent values of the $1/f$ noise in ``internal" and ``external" activations in the LSTM cell, finding some resemblance to the variations of the exponents in fMRI signals of the human brain. Our findings further support the hypothesis that $1/f$ noise is a signature of optimal learning. With deep learning models approaching or surpassing humans in certain tasks, and being more ``experimentable'' than their biological counterparts, our study suggests that they are good candidates for understanding the fundamental origins of $1/f$ noise.
Nicholas Chong Jia Le, Ling Feng
2023-01-20T12:18:35Z
http://arxiv.org/abs/2301.08530v2
# Self-Organization Towards \(1/f\) Noise in Deep Neural Networks ###### Abstract Despite \(1/f\) noise being ubiquitous in both natural and artificial systems, no general explanation for the phenomenon has received widespread acceptance. One well-known system in which \(1/f\) noise has been observed is the human brain, with this 'noise' proposed by some to be important to the healthy function of the brain. As deep neural networks (DNNs) are loosely modelled after the human brain, and as they start to achieve human-level performance in specific tasks, it might be worth investigating if the same \(1/f\) noise is present in these artificial networks as well. Indeed, we find the existence of \(1/f\) noise in DNNs - specifically Long Short-Term Memory (LSTM) networks trained on a real-world dataset - by measuring the Power Spectral Density (PSD) of different activations within the network in response to a sequential input of natural language. This was done in analogy to the measurement of \(1/f\) noise in human brains with techniques such as electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI). We further examine the exponent values of the \(1/f\) noise in "inner" and "outer" activations in the LSTM cell, finding some resemblance to the variations of the exponents in the fMRI signal. In addition, comparing the exponent values of the LSTM network at "rest" with those while performing "tasks", we find a trend similar to that of the human brain, where the exponent while performing tasks is less negative. ## I Introduction Noise is an often unwanted phenomenon in many different systems, such as audio systems, electrical systems, communications, and measurements. A common type of noise is \(1/f\) noise, or pink noise, characterised by a power spectral density (PSD) \(S(f)\) that is inversely proportional to frequency: \(S(f)=kf^{-1}+C\). While there are relatively simple generative and stochastic models that explain other common types of noise such as white noise [1; 2] or Brownian noise [2], there are no such models for \(1/f\) noise in general. ### Sources of \(1/f\) Noise \(1/f\) noise has since been found and characterised in a variety of electrical systems, including ionic solutions [3], diodes and PN junctions [4], field effect transistors [5], and superconducting Josephson junctions [6]. In these systems, the definition of \(1/f\) noise has been expanded to include noise with a spectral density proportional to \(f^{\beta}\), with \(-2<\beta<0\). In this paper, the term \(1/f\) noise will be used to refer to these \(1/f\)-like signals that have an exponent \(\beta\) smaller than 0 (white noise) and greater than -2 (Brownian noise). Other than flicker noise in electronics, \(1/f\) noise is ubiquitous in many physical systems, both natural and man-made. This pattern has been found in undersea currents [7], global climate data [8], Nile river flood and minimum levels [9; 10], sunspot frequency [10], and many other natural processes [10]. Interestingly, \(1/f\) noise is also present in man-made systems such as traffic systems [11; 12], concrete structures [13], and, surprisingly, even in canonical examples of man-made "data" such as music and speech [14]. Another interesting source of \(1/f\) noise is biological systems, such as human heart rate fluctuations, where the spectrum for a healthy human is \(1/f\), while that for someone with heart disease is closer to Brownian [15; 16]. 
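This exponent convention can be made concrete with a short numerical sketch (Python/NumPy; the function name and parameter values are illustrative, not from any particular reference): a signal with any prescribed \(\beta\) in this range can be synthesised by shaping white noise in the frequency domain, and the exponent can then be read back off the slope of its PSD on a log-log scale.

```python
import numpy as np

def colored_noise(n, beta, rng=None):
    """Synthesise a length-n signal whose PSD scales like f**beta
    (beta = 0: white, -1: pink, -2: Brownian) by shaping white noise in frequency space."""
    if rng is None:
        rng = np.random.default_rng(0)
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (beta / 2.0)       # PSD ~ f**beta, so amplitude ~ f**(beta/2)
    phases = np.exp(2j * np.pi * rng.random(freqs.size))
    return np.fft.irfft(amp * phases, n)

x = colored_noise(4096, beta=-1.0)            # pink noise
psd = np.abs(np.fft.rfft(x)) ** 2
f = np.fft.rfftfreq(4096)
slope = np.polyfit(np.log10(f[1:]), np.log10(psd[1:]), 1)[0]
print(round(slope, 2))                        # close to -1.0
```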
\(1/f\) noise is also found in other biological systems such as giant squid axons [17], human optical cells [18], and activity scans of the human brain [19]. ### Motivation Despite the extraordinary ubiquity of \(1/f\) noise, and despite its having been studied in many different fields for almost a century, there is still no universal description for its occurrence, only specific models made to explain specific processes such as in diodes [4; 20] or other electronic components [21]. The search for such a universal description is spurred by similar phenomena in other parts of statistical mechanics, such as the universal critical exponents in different universality classes [22; 23; 24]. As such, many believe that there is a similar universality in \(1/f\) noise, and there is thus great interest in finding a deep, all-encompassing explanation for the phenomenon. One way of working towards that goal is to probe the areas where this \(1/f\) noise is present in order to add to the pool of knowledge we have about the phenomenon. If a \(1/f\) signal is persistent across both a system and a simpler analogue of it, one might be able to gain insight about the origin of the noise by studying the simpler system instead. One such pair of analogues is the previously mentioned \(1/f\) noise in the human brain, and the relatively less complex systems of deep neural networks (DNNs). While the neurons and connections in a DNN are many orders of magnitude less complex than those in the human brain, the general principle of operation of a DNN approximates that of a basic brain with simple connections [25]. Similar to the human heart, \(1/f\) noise presents itself strongly in the human brain [26]. As in the heart, the \(1/f\) noise in the human brain presents itself differently in a healthy human brain compared to a brain with neurological conditions like schizophrenia [27]. While the study of this form of brain activity is in its early stages, due to the \(1/f\) signal being regarded as extraneous noise in the past, it has been proposed that the \(1/f\) noise in the brain is important in regulating function and serves other cognitive purposes [28]. With the fast progress of artificial neural networks, or deep neural networks (DNNs) in particular, these artificial neural systems are fast approaching the human cognition level. As such, it would be appropriate for an analysis of the presence of \(1/f\) noise in a DNN to include networks that approach a human level of competency. If this phenomenon also exists in DNNs, their controllability and manipulability would make them a better experimental subject than the brain for further examining the origin of \(1/f\) noise. ### \(1/f\) noise in the human brain Brain activity can be measured in multiple different ways, such as with non-invasive scalp electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). These neuroimaging techniques detect different properties of the brain as a proxy for brain activity, which results in the detection of different types of activity in different parts of the brain. Scalp EEGs map brain activity by measuring electrical signals with electrodes placed on the scalp. These electrodes detect voltage fluctuations relative to a reference potential [29]. Due to the positioning of the detectors outside the skull, the signals obtained from scalp EEGs are dominated by brain activity on the surface of the brain rather than in the bulk of the brain [29]. \(1/f\) noise has shown up in the recordings of EEGs for decades [26]. 
Once considered an unwelcome signal to be filtered out as background or instrumental noise, \(1/f\) noise in the brain is now the subject of significant interest and research into its role in a healthy functioning brain [27; 28]. Numerous studies have measured the scaling exponent \(\beta\) of EEGs with similar results [30; 31]. The aggregate results obtained in [30] give a scaling exponent \(\beta=-1.33\pm 0.19\), demonstrating \(1/f\) noise. Another form of neuroimaging is fMRI, which tracks blood flow through the brain [32]. It has been shown that brain activity is linked to blood flow through the activated regions of the brain [33], and this fact is the basis of how the images formed by fMRI are linked to brain activity. When mapping the fMRI signal to neural activity by comparing it to other methods of measuring electrical activity in the brain, it has been found that the fMRI signal mainly reflects the activity within the individual neurons rather than the outputs between the neurons [32]. As in EEGs, \(1/f\) noise also shows up in fMRI recordings [26; 34]. The scaling exponents measured in these studies are less negative than those in EEGs, with an average scaling exponent of \(\beta=-0.84\). This exponent becomes even less negative when the brain performs tasks, averaging \(\beta=-0.72\) across the brain. ### Recurrent Neural Networks Recurrent Neural Networks (RNNs) are a type of DNN that preserve the state of an input across a temporal sequence by feeding the outputs of some nodes back into those same nodes. This is as opposed to feedforward DNNs, where data flows only from layer to layer. By retaining knowledge of previous inputs through this recurrence, RNNs are significantly more adept than simple feedforward networks at processing sequential data. Figure 1 shows the most basic form of an RNN, demonstrating the idea that these recurrent networks are deep through _time_, in contrast to the depth through _layers_ of a simple feedforward network. However, this also means that the simple RNN suffers from the same problem as DNNs with large depths - the vanishing gradient problem. RNNs struggle to converge for particularly long input sequences of more than tens of timesteps. In a way, RNNs are similar to the human brain at an abstract level, as the human brain continuously receives information and processes it using our biological neural networks. In this study we use a particular type of RNN called Long Short-Term Memory (LSTM) networks. In this architecture, an LSTM cell was created to replace the recurrent cell in the vanilla RNN shown in Figure 1 in order to solve the vanishing gradient problem [35]. The LSTM network attempts to resolve the problem by maintaining an internal cell state \(\mathbf{c}\). In an LSTM network, the RNN cell shown in Figure 1 is replaced by the LSTM cell (Figure 2), which consists of many different activations compared to the single activation in a vanilla RNN cell. This LSTM cell, like the vanilla RNN cell, takes in \(\mathbf{x_{t}}\) and \(\mathbf{h_{t-1}}\) as inputs, along with the additional input of the previous cell state \(\mathbf{c_{t-1}}\). Like the vanilla RNN, the LSTM cell also outputs the current hidden state \(\mathbf{h_{t}}\), and additionally passes the current cell state \(\mathbf{c_{t}}\) to itself in the next timestep. The addition of the internal cell state \(\mathbf{c}\) helps in preserving temporal correlations [35]. Figure 2: The LSTM cell (dotted circle) with its internal structure shown. The red lines represent the “internal” activations while the green lines represent the “external” activations. \(\sigma_{\mathbf{t}}\) and \(\sigma_{\mathbf{s}}\) represent the tanh activation and sigmoid activation respectively. 
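For concreteness, the computations inside a single LSTM cell can be written in a few lines of NumPy. This is a minimal sketch of the standard LSTM recurrences (the dictionary-of-weights layout is an assumption for readability, not Keras's internal storage format), exposing the activations \(\mathbf{f}\), \(\mathbf{i}\), \(\mathbf{cc}\), \(\mathbf{o}\), \(\mathbf{c_{out}}\) and \(\mathbf{h}\) analysed below:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One timestep of a standard LSTM cell. W, U, b are dicts of weight
    matrices/biases keyed by 'f', 'i', 'cc', 'o' (an assumed layout)."""
    f  = sigmoid(W['f']  @ x_t + U['f']  @ h_prev + b['f'])    # forget gate
    i  = sigmoid(W['i']  @ x_t + U['i']  @ h_prev + b['i'])    # input gate
    cc = np.tanh(W['cc'] @ x_t + U['cc'] @ h_prev + b['cc'])   # candidate cell value
    o  = sigmoid(W['o']  @ x_t + U['o']  @ h_prev + b['o'])    # output gate
    c  = f * c_prev + i * cc     # new cell state (the "long-term" memory)
    c_out = np.tanh(c)           # squashed cell state entering the output gate
    h  = o * c_out               # new hidden state
    return h, c, (f, i, cc, o, c_out)
```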
## II Methods The task selected for this experiment is a popular AI benchmark task: predicting the sentiment of the natural-language movie reviews in the Large Movie Review Dataset [36], which contains 50000 labelled movie reviews from the Internet Movie Database (IMDb). Traditional machine learning techniques like the Naive Bayes classifier, Maximum Entropy (MaxEnt) classification, and Support Vector Machines (SVMs) are effective at topic-based text classification, which classifies text based on keywords. However, they tend to have trouble classifying text based on positive or negative sentiment, which can require a more subtle "understanding" of context beyond single words or short phrases [37]. LSTM networks have demonstrated long-range temporal memory beyond simple n-grams (units of \(n\) words used in traditional natural language processing (NLP) techniques). As such, they are prime candidates for this task and frequently demonstrate close to human-level performance in basic sentiment analysis. In this work we use the LSTM rather than other RNN structures due to its superior performance over other variants in real tasks. To analyse specifically the time series behaviour of the LSTM cell activations, the LSTM network used for this task will contain the minimum number of layers needed to properly classify the data. Additional layers that are traditionally used to augment the performance of the network will not be included, as they carry the same key features and do not generate significantly new theoretical insights. ### Dataset The dataset chosen for the sentiment analysis task is the Large Movie Review Dataset [36], which consists of 50000 highly polar movie reviews obtained from the Internet Movie Database (IMDb) [38]. This dataset consists of 25000 positive reviews (score \(\geq\) 7 out of 10) and 25000 negative reviews (score \(\leq\) 4 out of 10). Preprocessing steps such as the removal of punctuation and the conversion of words to lowercase were performed. The words were also converted to tokens, with the top 4000 words (88.3% of the full vocabulary) converted into unique tokens, and the rest of the words converted into a single [UNK] token. ### LSTM network architecture The LSTM network consists of three layers: an embedding layer that converts the words into lower-dimensional internal representation vectors, the LSTM layer, and an output layer consisting of a single neuron with a sigmoid activation that outputs a value indicating if the review is positive (\(y\geq 0.5\)) or negative (\(y<0.5\)). The IMDb dataset was obtained using the Keras[39] datasets application programming interface (API), with the preprocessing done with custom code [40]. The networks were trained using Keras with the TensorFlow 2.6.0 [41] backend on a GeForce GTX 1080 GPU, with preprocessing steps performed on a Ryzen 9 3900X CPU. Figure 1: A (many-to-one) recurrent neural network visualised in its temporally unrolled representation. A time series (in this case a movie review with \(n\) words) is input into the network sequentially. For each timestep \(t\), the \(t\)th word passes into the embedding layer, which converts the word into a vector using a learned representation of a continuous vector space. The vector \(\mathbf{x_{t}}\) then passes into the recurrent layer, which accepts both \(\mathbf{x_{t}}\) and the output of itself from the previous timestep, \(\mathbf{h}_{t-1}\). The recurrent layer then passes its output, \(\mathbf{h}_{t}\), into itself for the next timestep. 
At the final timestep \(n\), the recurrent layer passes its output \(\mathbf{h}_{n}\) to the output layer, which converts it to the output \(\mathbf{y}\). \begin{table} \begin{tabular}{c c c c c c} Size of embedding layer & Size of LSTM layer & Training batch size & Dropout factor & L2 regularisation factor & Learning rate \\ \hline 32 & 60 & 128 & 0.1 & 0.001 & 0.005 \\ \end{tabular} \end{table} Table 1: Hyperparameters selected for the LSTM networks The hyperparameters used for the LSTM networks are shown in Table 1. These hyperparameters were selected with the KerasTuner[42] library using the Hyperband[43] search algorithm, over 10 Hyperband iterations. Overall, we follow the best practices of the state of the art for LSTM models in this work. ### Measuring \(1/f\) noise In order to measure the spectral noise in the LSTM cell, temporal sequences of the specific activations have to be obtained. To obtain the internal activations of the Keras LSTM cells, the cell was recreated in vanilla Python with NumPy[44]. The code for this is available at [45]. The steps to obtain the power spectral density of any specific activation (\(\mathbf{f}\) in this case) are then as follows: 1. Propagate the review through the LSTM layer, recording the vector \(\mathbf{f}_{t}\) corresponding to the forget gate of each LSTM cell at each timestep \(t\), forming 60 time series of activations corresponding to the 60 LSTM cells. 2. Perform a fast Fourier transform (FFT) on each time series. 3. Take the squared magnitude of the FFT to obtain the PSD of each time series. 4. Sum the activation power spectral density of each cell in the layer to get the total PSD of the LSTM layer [46]. ## III Results and discussion When picking the optimal epoch of the networks, the epoch with the lowest network loss was selected. The accuracy achieved across the 5 networks was high [47], ranging from 88.41% to 89.19% prediction accuracy on the test data. Table 2 provides a summary of the network loss and accuracy of the 5 LSTM networks used at the optimal epoch. \begin{table} \begin{tabular}{c c c} \hline \hline Network & Network loss on test data & Accuracy on test data \\ \hline 1 & 0.2778 & 89.19 \\ 2 & 0.2867 & 88.77 \\ 3 & 0.2896 & 88.41 \\ 4 & 0.2788 & 89.07 \\ 5 & 0.2927 & 88.57 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the 5 different LSTM network performances on the test dataset of 15000 reviews. ### Exponent \(\beta\) for the test set The steps described in section II.3 were performed for the reviews in the test dataset of length \(\geq 500\), to remove the impact of the padding, as the repeated identical padding tokens have the effect of lowering the exponent in the PSD. Note that training of the LSTM is carried out on reviews regardless of their word lengths, to keep in line with accepted practices in AI. Figure 3(a) shows the PSD of one of the reviews for one of the networks, with the exponent for \(\mathbf{h}\) obtained by taking the gradient of the PSD on a log-log scale. Figure 3(b) displays a histogram of the exponents of \(\mathbf{h}\) obtained for all the test reviews with length \(\geq 500\) for the same network. We see clear \(1/f\) noise here, with a mean \(\mu=-0.993\pm 0.073\). 
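Concretely, steps 2-4 and the exponent fit amount to only a few lines of NumPy. This sketch (with the array shape matching the 60-cell LSTM layer; the function name is illustrative) estimates \(\beta\) for one review:

```python
import numpy as np

def layer_psd_exponent(activations):
    """activations: (T, 60) array of one gate (e.g. f_t) over T timesteps,
    one column per LSTM cell. Implements steps 2-4 above and fits beta."""
    T = activations.shape[0]
    fft = np.fft.rfft(activations, axis=0)    # step 2: FFT of each cell's time series
    psd = np.abs(fft) ** 2                    # step 3: squared magnitude -> PSD
    total = psd.sum(axis=1)                   # step 4: sum over the cells in the layer
    freqs = np.fft.rfftfreq(T)
    # beta is the slope of the PSD on a log-log scale (zero-frequency bin skipped)
    return np.polyfit(np.log10(freqs[1:]), np.log10(total[1:]), deg=1)[0]

# sanity check: uncorrelated activations should give beta near 0 (white noise)
rng = np.random.default_rng(0)
print(layer_psd_exponent(rng.standard_normal((500, 60))))
```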
### Ruling out \(1/f\) noise in the input One possibility for the presence of \(1/f\) noise is that the input \(\mathbf{x}\) has a PSD that is \(1/f\). As such, it is important to rule out this effect if we are to demonstrate the emergence of \(1/f\) noise from the LSTM. To determine the exponent of the input data, the same process from II.3 was performed, using the embedding vector instead of the activation vector. Figure 3(c) shows the PSD of one of the inputs, with the exponent for \(\mathbf{x}\) obtained by taking the gradient of the PSD on a log-log scale. Figure 3(d) displays a histogram of the exponents of \(\mathbf{x}\) obtained for all the test reviews with length \(\geq 500\). Unlike the clear \(1/f\) noise demonstrated with the activations from the LSTM layer, the histogram here shows that the noise in the reviews themselves is effectively uncorrelated white noise, with a mean \(\mu=-0.020\pm 0.033\). Figure 3: (a) PSD (solid, blue) of the activation \(\mathbf{h}_{t}\) for the entire LSTM layer for a single 580-word review (truncated to 500 words) with a line of best fit plotted (dashed, orange). The slope obtained in the log-log plot is the exponent \(\beta\), with a value of \(\beta=-0.99\). (b) Histogram showing the spread of values of \(\beta\) for the activation \(\mathbf{h}\) for a single LSTM network across all the test reviews with length \(\geq 500\). The mean value (solid, red) \(\mu=-0.993\) and standard deviation (dashed, red) \(\sigma=0.073\) are indicated on the histogram. (c) PSD (solid, blue) of the input \(\mathbf{x}_{t}\) for the same 580-word review shown in (a) with a line of best fit plotted (dashed, orange), giving \(\beta=0.00\). (d) Histogram showing the spread of values of \(\beta\) for the inputs \(\mathbf{x}\) across all the test reviews with length \(\geq 500\). The mean value (solid, red) \(\mu=-0.020\) and standard deviation (dashed, red) \(\sigma=0.033\) are indicated on the histogram. Figure 4 is a scatter plot relating the histograms shown in Figure 3, demonstrating the lack of correlation between the activation exponent and the input exponent, with an \(R^{2}\) value of 0.083. This further supports our hypothesis that the \(1/f\) noise observed in the LSTM networks is inherent to the networks, rather than a consequence of \(1/f\) noise in the inputs to the networks. Figure 4: Scatter plot of the exponents of \(\mathbf{h}\) vs the exponents of the input \(\mathbf{x}\) for test reviews with length \(\geq 500\). Model used is the same as in Figure 3. The input exponents here are not impacting the activation exponent values, showing that the ‘\(1/f\)’ phenomenon in the activation values is not from a similar pattern in the inputs. ### Overall results for \(\beta\) The data for all the activations across all the networks were collected, and the aggregate values of \(\beta\) for each activation are shown in Table 3. The aggregate values of \(\beta\) are provided for the networks before and after training, with the weights of untrained networks randomly initialised with the default GlorotUniform initialiser. The summary of the aggregate results on exponents for different neurons is also shown in Figure 5, where the effect of training is very clearly demonstrated. 
\begin{table} \begin{tabular}{c c c} \hline \hline Activation & \(\beta\) (Untrained) & \(\beta\) (Trained) \\ \hline **f** & -0.86 \(\pm\) 0.31 & -0.58 \(\pm\) 0.17 \\ **i** & -0.87 \(\pm\) 0.27 & -0.62 \(\pm\) 0.13 \\ **cc** & -0.03 \(\pm\) 0.29 & -0.312 \(\pm\) 0.086 \\ **o** & -0.78 \(\pm\) 0.18 & -0.56 \(\pm\) 0.15 \\ **c\({}_{\mathbf{out}}\)** & -1.14 \(\pm\) 0.31 & -1.05 \(\pm\) 0.16 \\ **h** & -1.13 \(\pm\) 0.31 & -0.80 \(\pm\) 0.15 \\ \hline \hline \end{tabular} \end{table} Table 3: Summary of exponent \(\beta\) values across the 5 LSTM networks before and after training. Figure 5: Aggregate values of \(\beta\) for all the activations plotted. The error bar represents the 1 standard deviation \(\sigma\) value over the 5 LSTM networks. The dotted pink line marks \(\beta=-1.0\) (pink noise), and the dotted black line marks \(\beta=0\) (white noise). The "internal" activations \(\mathbf{f}\), \(\mathbf{i}\), \(\mathbf{cc}\), and \(\mathbf{o}\) have relatively less negative trained exponents of between -0.3 and -0.6, while the "external" activations \(\mathbf{c_{out}}\) and \(\mathbf{h}\) are closer to pink noise, with relatively more negative trained exponents. Another point of interest is that the effect of training is to make the exponents less negative, with the exception of the exponent of \(\mathbf{cc}\). This behaviour is similar to that of fMRIs compared to EEGs, where the exponents measured by fMRIs [34], corresponding to signals from the volume of the brain, are less negative than those measured by EEGs [30], corresponding to signals from the surface of the brain. ### Effect of performing a task on \(\beta\) It has been reported that the exponent \(\beta\) of the human brain exhibits different values when at rest and when performing tasks. Specifically, its value is more negative at rest than when performing tasks [34]. Here we also mimic the 'rest' state and 'task' state of the LSTM: we assume that using inputs consisting of only 0 values mimics the 'rest' state, and using inputs of actual movie reviews mimics the 'task' state. As shown by the results in Figure 6, intriguingly, the LSTM exhibits the same trend in the value of \(\beta\): it is more negative at 'rest'. One possible explanation is that a more negative \(\beta\) value is associated with a longer memory process. Since the 'rest' state has constant values as input, this input data naturally has longer memory than the 'task' inputs. This longer memory process then gets carried over to the outputs of the neural network. Figure 6: The effect of performing a task on the exponent \(\beta\) of the various activations in the LSTM networks. This is drawn in direct analogy with the “rest” vs. “task” measurements for fMRI signals in human subjects [34]. The exponents obtained for “task” correspond to the LSTM cells processing input vectors that correspond to movie reviews. The exponents obtained for “rest” correspond to the LSTM cells processing zero vectors of equal dimension to the movie reviews (40 dimensions after the embedding layer) for 500 timesteps. ## IV Conclusion In summary, we have found that \(1/f\) noise is also present in artificial neural networks such as the Long Short-Term Memory networks trained on a real-world dataset. Further analysis showed that such a pattern is not a trivial consequence of a similar pattern in the input data, as the input data show a clear white noise pattern that is distinct from pink noise, a.k.a. \(1/f\) noise. Since the input data are also real-world natural language sentences that our brain processes, our results demonstrate that artificial neural networks that perform close to human-level cognition exhibit very similar \(1/f\) patterns to their biological counterparts [19; 26; 34]. The analogy was also further extended with the similarity of the trends in the noise exponents for "inner" and "outer" neurons within the LSTM compared to fMRI and EEG exponents respectively [30; 34]. 
Similarly, the noise exponents for the LSTM networks in the "rest" state compared to when performing tasks exhibit the same trend found in fMRI data [34]. It is intriguing that despite the vast differences in the microscopic details between biological neural networks and artificial neural networks, such macroscopic \(1/f\) patterns are strikingly similar. Such similarity points at some deeper principles that govern their healthy functioning, something that is independent of the detailed neural interactions. With artificial neural networks being more 'transparent' to our experimental manipulation and examination than their biological counterparts, they are an ideal proxy for understanding the origin of \(1/f\) noise going forward, as well as a possible tool to understand more about the healthy functioning of the brain through the \(1/f\) noise perspective.
2305.04825
NewsQuote: A Dataset Built on Quote Extraction and Attribution for Expert Recommendation in Fact-Checking
To enhance the ability to find credible evidence in news articles, we propose a novel task of expert recommendation, which aims to identify trustworthy experts on a specific news topic. To achieve this aim, we describe the construction of a novel NewsQuote dataset consisting of 24,031 quote-speaker pairs that appeared in a COVID-19 news corpus. We demonstrate an automatic pipeline for speaker and quote extraction via a BERT-based Question Answering model. We then formulate expert recommendation as a document retrieval task, by first retrieving relevant quotes as an intermediate step for expert identification, and as an expert retrieval task, by directly retrieving sources based on the probability of a query conditional on a candidate expert. Experimental results on NewsQuote show that document retrieval is more effective in identifying relevant experts for a given news topic than expert retrieval.
Wenjia Zhang, Lin Gui, Rob Procter, Yulan He
2023-05-05T11:10:48Z
http://arxiv.org/abs/2305.04825v1
NewsQuote: A Dataset Built on Quote Extraction and Attribution for Expert Recommendation in Fact-Checking ###### Abstract To enhance the ability to find credible evidence in news articles, we propose a novel task of expert recommendation, which aims to identify trustworthy experts on a specific news topic. To achieve this aim, we describe the construction of a novel NewsQuote dataset consisting of 24,031 quote-speaker pairs that appeared in a COVID-19 news corpus. We demonstrate an automatic pipeline for speaker and quote extraction via a BERT-based Question Answering model. We then formulate expert recommendation as a document retrieval task, by first retrieving relevant quotes as an intermediate step for expert identification, and as an expert retrieval task, by directly retrieving sources based on the probability of a query conditional on a candidate expert. Experimental results on NewsQuote show that document retrieval is more effective in identifying relevant experts for a given news topic than expert retrieval.1 Footnote 1: Our source code can be accessed at: [https://github.com/WenjiaZh/NewsQuote](https://github.com/WenjiaZh/NewsQuote) ## 1 Introduction The rapid growth of misinformation in recent years has been the subject of much attention from academia, journalists, political analysts and fact-checking organisations, and has prompted research into NLP-based techniques and tools to support fact-checking work and evidence verification Lazarski, Al-Khasaweneh, and Howard (2021); Zeng, Abumansour, and Zubiaga (2021); Guo, Schlichtkrull, and Vlachos (2022). Much of this research effort has been based on a _document-centric_ model of fact-checking work, where the end goal is to provide the journalist or fact-checker with an (automated) ranked list of documents relevant to the claim that they can then use as evidence for determining its likely veracity (e.g., Zhao et al. (2023)). Our recent research reveals that some fact-checkers use an _expert-centric_ model, whereby they search for credible and trustworthy experts who are willing to be quoted Procter et al. (2023). Finding such experts is a big challenge, and journalists and fact-checkers often aim to interview several experts, as relying solely on one source may not be considered sufficiently credible. In the case of contentious claims, they may also need to ensure their reports are balanced Procter et al. (2023). There is thus an urgent need to develop a tool for journalists and fact-checkers to search for experts based on their past record of being quoted by news media, fact-checking organisations, and other trustworthy agencies. To achieve this goal, we need first to automatically extract quotes and their sources from news articles, and second to return a ranked list of experts relevant to a query that can then be assessed by the journalist or fact-checker. This can be formulated as two tasks: (1) quote extraction and attribution, and (2) expert recommendation. For the first task of quote extraction and attribution, most datasets were built on literary narratives and are limited in size due to the reliance on manual annotation Zhang, Black, and Sproat (2003); Elson and McKeown (2010); Fernandes, Motta, and Milidiu (2011); Lee and Yeung (2016). However, newswire has far fewer monologues and dialogues than fiction O'Keefe et al. (2012). 
Early work relied on rule-driven frameworks and manually-defined linguistic patterns; hence it mainly focused on direct quotes Lee and Yeung (2016); Zhang and Liu (2021); Vaucher et al. (2021). Unlike play scripts or fiction, people quoted in the news media are not limited to a list of fixed characters. In addition, the constantly evolving stream of events reported in news articles and the diverse writing styles used by news media outlets make it difficult to identify experts and extract quotes by relying on regular expressions. For the second task of expert recommendation, much work has been conducted for expert finding in academic research Sun et al. (2015); Silva (2014); Wang et al. (2017), online communities Yuan et al. (2020), and the enterprise field Paul (2016); Askari, Verberne, and Pasi (2022). However, we are not aware of any work searching for experts based on their track record of being quoted in news articles. In this paper, we propose a semi-automatic approach to construct a news quotation dataset, called NewsQuote, from the AYLIEN coronavirus dataset2, which contains over 1.5 million English news articles generated from around 440 global sources. We utilise the semantic role labelling results of sentences in news articles to extract the quote trigger verbs, subjects (i.e., sources) and objects (i.e., quotes), and identify sources by their corresponding DBpedia3 ontology class labels. The resulting dataset contains both direct and indirect quotes, and also mixed quotes where only part of the quotation is placed inside quotation marks. We introduce the task of finding sources of evidence from news reports and present a set of approaches for (1) identifying quotations and their sources from text; and (2) recommending potential experts for given news topics. Our experimental results illustrate the feasibility of using our constructed NewsQuote dataset for developing an automated tool for searching and ranking subject-matter experts for journalists and fact-checkers. Footnote 3: [https://www.dbpedia.org/](https://www.dbpedia.org/) ## 2 Related Work **Quotation Extraction and Attribution** Quotation extraction and attribution originated as a study of literary works [15], and now typically covers three sub-tasks: identifying sources, extracting quotations, and attributing a quotation to its source. In Table 1, we summarise several large-scale English quotation datasets that are built on news articles. The **StylisticsCorpus**[1] was designed for discourse presentation in written British narratives. They opted for hard news (e.g., accidents, conflicts, and crimes) [1] as a part of the data source because of its circulation, narrative, authenticity, and cultural prominence. Of the total data, 5407 occurrences came from the press. They classified these samples into speech, writing, and thought. Then they divided each class into many presentation categories, such as indirect, free indirect, direct, and free direct. The **PARC3**[1] project aims to fill the gap of the attribution relation (AR). Their annotation scheme tagged three constitutive components of an AR: source, cue, and content. They labeled the quote status as direct, indirect, or mixed by the usage of quote marks, and looked into the depth of attribution by the level of nesting. The inspiration for generating **QuoteBank**[1] came from the tangled nature of contemporary news flow. Vaucher et al. (2021) exploited duplicate reports in different media to learn the patterns of quote-source pairs. 
Focusing on the attribution of direct quotations, they proposed an end-to-end minimally supervised framework, named Quobert, to extract and attribute quotations. Using Quobert, they generated QuoteBank from the Spinn3r dataset [1], and linked source entities to the Wikidata knowledge base. **DirectQuote**[15] contains direct quotations manually annotated from online news media. Like QuoteBank, each source can be linked to a Wikidata named entity to benefit various downstream tasks. Among the existing news quotation datasets, StylisticCorpus and PARC3 contain both direct and indirect quotes, but do not originate from multi-platform news stories, nor do they provide source-entity linking to Wikidata. The other two datasets, QuoteBank and DirectQuote, have each of their sources linked to a Wikidata named entity, but they only focus on direct quotes. In comparison, our NewsQuote contains various types of quotes, including direct, indirect and mixed quotes where only part of the quotation is inside the quotation marks. In addition, all sources have their DBpedia entity links. **Expert Finding** The core task in expert finding is to identify candidates with the required expertise for a given query [10]. Therefore, solutions focus on matching the demand of searchers and the experience of relevant experts. In practice, this problem has expanded to different situations where various factors are considered. Academia accounts for up to 65% of expert finding research [12]. When looking for academic experts, attention is given to topic relevance, expert quality, research connectivity [13; 14], as well as capacity limitation [11]. Meanwhile, many expert finding systems are used on online platforms, such as community question answering, social networks and forums [10; 12]. In the enterprise field, experts' accessibility and productivity are considered to have significant economic benefits [15; 16]. In the medical domain, when looking for the most suitable doctor for a particular patient, the patient's underlying conditions are of critical importance [13]. In lawyer finding, users may prefer candidates in the same state or city; hence physical location is emphasized [1]. ## 3 NewsQuote: Dataset Construction In this section, we describe how we constructed the dataset, including details of the data source, pre-processing steps performed, and test set annotation. \begin{table} \begin{tabular}{c c c c l} \hline \hline **Corpus** & **\#Quotes** & **Indirect\%** & **Entity** & **Data Source** \\ \hline StylisticsCorpus & 16,533 & 16 & ✗ & Fiction, Newspaper, Biographies \\ \hline PARC3 & 19,712 & 72 & ✗ & Wall Street Journal \\ \hline QuoteBank & 178 million & - & ✓ & News Articles \\ \hline DirectQuote & 10,279 & 0 & ✓ & News Articles \\ \hline NewsQuote & 24,031 & 81 & ✓ & News Articles \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of large-scale (larger than 10,000) news-originated English quotation corpora. Example data entries and dataset statistics will be presented at the end. ### Data Collection We built our NewsQuote dataset from the AYLIEN coronavirus dataset, published between November 2019 and August 2020. We used the AYLIEN News API4 to retrieve news articles. Apart from text, each article is also accompanied by metadata such as authors, keywords, summary, source, publishing time, topical categories coded by both the Interactive Advertising Bureau (IAB) taxonomy5 and the IPTC NewsCodes6, as well as recognized entities and entity links from DBpedia. 
Footnote 4: [https://aylien.com/product/news-api](https://aylien.com/product/news-api) Footnote 5: [https://www.iab.com](https://www.iab.com) Footnote 6: [https://iptc.org/standards/newcodes/](https://iptc.org/standards/newcodes/) Footnote 7: [https://huggingface.co/vslaykovsky/roberta-news-duplicates](https://huggingface.co/vslaykovsky/roberta-news-duplicates) ### Pre-processing **Data De-duplication** As the same news story may be posted by multiple sources, and there were exact duplicates in the original dataset, we removed news articles that are similar to ones already published. News articles were first sorted in chronological order. News duplicates were then detected using a RoBERTa classifier7 trained with title-body pairs using semi-supervised learning [14]. For processing efficiency, the dataset was split into 16 equal-sized subsets. For each subset, the titles and the first sentence of the news summaries of temporally-ordered news articles were sequentially fed as input to the RoBERTa classifier. Any duplicates were removed. After data de-duplication, 158,325 news articles remained. The total number of source platforms is 258, and as shown in Figure A1b, the top 5 source platforms are: Daily Mail, Yahoo, Seeking Alpha, Business Insider, Reuters. **Quote Trigger Word Filtering** For each of the selected articles, we segment the main body into sentences, and then use a pre-trained BERT-based semantic role labeling model [15] to extract verbs (or predicates), subjects, and objects. We obtained a candidate verb list sorted by occurrence frequency. After manually checking the candidate verbs with over 100 occurrences, we identified 352 quote trigger words that are more likely indicative of direct or indirect quotes. The list of verbs is presented in our source code repository 8. Some of the verbs are clearly indicative of quotes, such as _'said'_, while others may not be associated with quotes in a traditional sense, for example, _'tweet'_. After identifying the quote trigger words, we only kept the sentences with at least one trigger word, one subject and one object. The subject is regarded as a potential source and the object is considered as a potential quotation. To ensure that the quotations are informative, we also require that the length of the object should be more than three words. Footnote 8: [https://github.com/WenjiaZh/NewsQuote/blob/main/SelectedTriggerVerbs.csv](https://github.com/WenjiaZh/NewsQuote/blob/main/SelectedTriggerVerbs.csv) **Source and Quote Filtering** We required that the subject of a candidate sentence should be a person or an organisation, and therefore identified potential source entities via the accompanying DBpedia ontology labels9 in the dataset. Our selected ontology classes are shown in our source code repository 10. Since each entity could have more than one ontology class, we further removed samples with sources labeled as _Location_, _Place_ and _Country_. As the same subject could have multiple mentions, we use DBpedia entity links for entity resolution and normalisation. In addition, we required a named entity to appear at least twice in the dataset. Finally, to avoid sentence splitting errors, we required quotation marks to be paired in sentences that contain direct quotes and mixed quotes. 
Footnote 9: [http://mappings.dbpedia.org/server/ontology/classes/](http://mappings.dbpedia.org/server/ontology/classes/) Footnote 10: [https://github.com/WenjiaZh/NewsQuote/blob/main/SelectedOntologyClasses.txt](https://github.com/WenjiaZh/NewsQuote/blob/main/SelectedOntologyClasses.txt) ### Test Set Annotation Since, in practice, given a topic, we can only identify experts based on their previous quotes published in earlier news articles, we divide the dataset into training, validation and testing sets by news article publishing timestamps, ensuring that quote-source pairs in the validation and testing sets occurred later than those in the training set. Figure 2 demonstrates the distribution of quote-source pairs based on the publishing dates of their associated news articles.11 Footnote 11: There is no data between 2020-05-31 and 2020-06-21 in the original dataset. Figure 2: The distribution of quote-source pairs. The training set contains samples released from 2020-01-19 to 2020-05-31, and the validation/testing set contains samples released from 2020-06-21 to 2020-08-02. To ensure data quality, samples in the test set were manually screened by one annotator. We list five types of noise and corresponding examples appearing in the raw test set in Table A1. Data falling into one of these noise categories were removed from the test set. ### Dataset Statistics Our data covers three categories of quotes, illustrated in Figure 1. In short, direct quotations are placed inside quotation marks, indirect quotations are not, and mixed quotations have only part of the quotation placed inside quotation marks. Figure 1: Three types of quotes in our dataset. Sources are highlighted in blue, trigger verbs are highlighted in red, and quotes are highlighted in yellow. We roughly estimated the weight of each quotation type in the dataset by the number and position of quotation marks: 81% for indirect quotes, 10% for direct quotes and 9% for mixed quotes. In the test set, there are 1,867 (84%) indirect quotes, 215 (10%) mixed quotes and 143 direct quotes (6%). Table 2 shows the statistics of our final NewsQuote dataset. In summary, we have a total of 24,031 English source-quote pairs, with 3,246 speaker sources drawn from 258 news outlets. More related statistics and plots are presented in Appendix A. \begin{table} \begin{tabular}{l c c c} \hline \hline & **Test** & **Valid** & **Train** \\ \hline **No. of samples** & 2,236 & 2,082 & 19,713 \\ **No. of articles** & 1,937 & 1,766 & 14,526 \\ **No. of source entities** & 1,016 & 765 & 2,963 \\ **Avg. quote length** & 28.38 & 29.16 & 28.99 \\ **No. of news sources** & 180 & 178 & 252 \\ **No. of news categories** & 470 & 440 & 629 \\ **Avg. keywords per article** & 43.23 & 44.28 & 42.17 \\ \hline \hline \end{tabular} \end{table} Table 2: The NewsQuote Dataset statistics. ## 4 Task Definition In our dataset, each sample \(S_{i}\) consists of a context \(c_{i}\), a quote-source pair \((q,e)_{i}\), a list of keywords \(k_{i}\) and metadata \(m_{i}\). The context contains 3 sentences: the main sentence where the source and quote appear, its preceding sentence, and its following sentence. Both keywords and metadata are defined at the document level and are retrieved from the AYLIEN coronavirus dataset. We propose the following two tasks on the NewsQuote dataset: **Source and quote extraction** is defined as automatically extracting the source-quote pair \((q,e)_{i}\) from a given context \(c_{i}\). **Expert recommendation** involves suggesting a ranked list of experts given a query, based on what they said in the past. ## 5 Approaches We present approaches for source and quote extraction, and for expert recommendation. An overview of the approaches is illustrated in Figure 3. Figure 3: Illustration of the 5 approaches described in Section 5. Plot (a) describes the QA pipeline, the sequence labelling and the rule-based Quote Annotator used for quote-source extraction. Plot (b) introduces the document retrieval approach for expert recommendation, and plot (c) presents the expert retrieval approach for expert recommendation. ### Source and Quote Extraction We tackle the problem of extracting quote-source pairs using three approaches: a rule-based method, sequence labelling, and question answering. **Approach 1: Rule-based Quote Annotator** Regular-expression-like rules can be used to extract direct quotes. We run the Quote Annotator 12 from Stanford CoreNLP Manning et al. (2014) on our test sample sentences. It can only extract direct quotes that are delimited by quotation marks. 
Footnote 12: [https://stanfordnlp.github.io/CoreNLP/quote.html](https://stanfordnlp.github.io/CoreNLP/quote.html) **Approach 2: Sequence Labelling** We label each sample in our dataset with a 5-class BIO tagging scheme. The source is annotated by 'B-S' and 'I-S', denoting whether the corresponding token indicates the beginning of a source mention or is inside a source mention. Similarly, the quotation is annotated by 'B-Q' and 'I-Q', and all other tokens are marked by 'O'. We then fine-tune a BERT-based token classifier Devlin et al. (2018) to identify sources and quotes from the context. **Approach 3: Question Answering (QA) Pipeline** We use a QA pipeline for source and quote extraction by asking two questions in turn: Q1: Who is the source? Q2: What did [source] say? During training, the [source] in Q2 is the gold standard answer for question Q1. During inference, it is the extracted answer for Q1. The input context is composed of a question, a left sentence \(l\), a main sentence \(s\), and a right sentence \(r\). To extract the answer from the context, we fine-tuned the pre-trained BERT-based extractive QA model Devlin et al. (2018), where the input takes the form: [CLS] Question [SEP] l [SEP] s [SEP] r [SEP] 
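For illustration, the two-question inference loop can be run with the HuggingFace transformers question-answering pipeline. Note the assumptions in this sketch: a public SQuAD-style checkpoint stands in for our fine-tuned BERT model, and the three context sentences are invented examples.

```python
# pip install transformers torch
from transformers import pipeline

# Generic extractive-QA checkpoint as a stand-in for the fine-tuned model.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Invented example sentences playing the roles of l, s and r.
l = "The briefing covered the latest vaccine trial results."
s = 'Dr. Jane Smith said the early data were "very encouraging" for older adults.'
r = "Regulators are expected to review the findings next month."
context = " ".join([l, s, r])

# Q1: extract the source, then Q2: extract what that source said.
source = qa(question="Who is the source?", context=context)["answer"]
quote = qa(question=f"What did {source} say?", context=context)["answer"]
print(source, "->", quote)
```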
### Expert Recommendation We can formulate expert recommendation as a retrieval problem: given a query, we would like to retrieve sources who can comment on the topic discussed in the query, ranked by their relevance to the query. There are two possible approaches: one is to use sources' past quotes as documents, perform _document retrieval_, and then return the sources of the retrieved quotes as results; the other is to perform _expert retrieval_ directly. **Approach 1: Document Retrieval** _Document retrieval_ aims to first retrieve relevant documents (i.e., the contexts where quotes appear) given a query, and then extract the sources from the documents as results. For document indexing, we experiment with a sparse bag-of-words Lucene index and four kinds of dense transformer-encoded Faiss indices via Pyserini13. A BM25 ranking approach on the sparse index and a nearest-neighbour search on the dense indexes were then applied to return the top 10 most relevant documents for a given query. Sources in the top 10 retrieved documents are then identified as the recommended experts. Footnote 13: [https://github.com/castorini/pyserini](https://github.com/castorini/pyserini) **Approach 2: Expert Retrieval** _Expert retrieval_ directly retrieves sources based on the probability of a query conditional on a given candidate source, \(P(q|e)\). Following the framework introduced by Balog, Azzopardi, and de Rijke (2009), we implemented both candidate-based and document-based expert finding approaches. **Candidate-Based Expert Retrieval** Assuming that each term in the query is sampled identically and independently, and that the document and the expert source candidate are conditionally independent, the candidate-based approach estimates \(P(q|e)\) by: \[P(q|e)=\prod_{t\in q}\left\{(1-\lambda)\sum_{d\in D}p(t|d)p(d|e)+\lambda p(t)\right\}^{n(t,q)},\] \[\lambda=\frac{\beta}{\beta+n(e)},\quad\beta=\frac{\sum_{e\in E}|\{d: n(e,d)>0\}|\cdot|d|}{|E|},\] where \(\lambda\) is the smoothing parameter, and \(p(t|d)\), \(p(d|e)\) and \(p(t)\) are the conditional probability of a term \(t\) in document \(d\), the conditional probability of a document \(d\) given source \(e\), and the probability of term \(t\), respectively. Both \(p(t|d)\) and \(p(t)\) are estimated by maximum likelihood. The probability \(p(d|e)\) is set by a Boolean model, which will be discussed later. \(|d|\) is the average document length, \(n(t,q)\) is the number of times that a term \(t\) appears in the query \(q\), \(n(e,d)\) is the occurrence frequency of an expert \(e\) in the document \(d\), and \(n(e)\) is the total number of occurrences in documents associated with the source \(e\). **Document-Based Expert Retrieval** The document-based expert retrieval approach searches for sources via the relevant document collection. This approach assumes conditional independence between the query and the candidate, and estimates the probability of a term \(t\) in each document: \[P(q|e)=\sum_{d\in D}\left\{\prod_{t\in q}\left((1-\lambda)p(t|d)+\lambda p(t)\right)^{n(t,q)} \right\}p(d|e),\] \[\lambda=\frac{\beta}{\beta+n(d)},\quad\beta=|d|,\] where \(n(d)\) is the length of document \(d\). In both the candidate-based and document-based expert finding approaches, the document-candidate association \(p(d|e)\) is estimated by a simple Boolean model, where it is set to 1 if \(n(e,d)>0\), and 0 otherwise.
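As a minimal sketch of the document-based model above (plain Python; the toy dictionaries stand in for the real corpus, and the smoothing mass is set to the average document length, matching the definition of \(\beta\) above):

```python
import math
from collections import Counter

def document_based_scores(query, docs, doc_experts, mu):
    """Rank experts by P(q|e) = sum_d [prod_t ((1-lam) p(t|d) + lam p(t))^{n(t,q)}] p(d|e).

    docs: {doc_id: token list}; doc_experts: {doc_id: set of source ids};
    mu: smoothing mass (here the average document length)."""
    q = Counter(query.lower().split())
    coll = Counter(t for toks in docs.values() for t in toks)
    coll_len = sum(coll.values())

    scores = {}
    for doc_id, toks in docs.items():
        tf, lam = Counter(toks), mu / (mu + len(toks))
        log_p = 0.0
        for t, n_tq in q.items():
            p_mix = (1 - lam) * tf[t] / len(toks) + lam * coll[t] / coll_len
            if p_mix == 0.0:                 # query term absent from the whole collection
                log_p = -math.inf
                break
            log_p += n_tq * math.log(p_mix)
        for e in doc_experts.get(doc_id, ()):  # Boolean p(d|e): 1 iff e occurs in d
            scores[e] = scores.get(e, 0.0) + math.exp(log_p)
    return sorted(scores.items(), key=lambda kv: -kv[1])

docs = {"d1": "masks reduce transmission in schools".split(),
        "d2": "vaccine efficacy remains high against variants".split()}
experts = {"d1": {"A. Expert"}, "d2": {"B. Expert"}}
mu = sum(len(v) for v in docs.values()) / len(docs)
print(document_based_scores("vaccine efficacy", docs, experts, mu))
```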
## 6 Experiments ### Experimental Setup For the rule-based approach, we directly feed the raw sentences into the Quote Annotator. To build the token classifier, we segment the input text into sequences of 512 tokens, and fine-tune the model for 100 epochs with an initial learning rate of 2e-7. For the extractive QA model, the maximum length of the extracted answer is set to 30 when questioning sources and 512 when questioning quotes. For the question about the source, we train the model for 50 epochs with an initial learning rate of 2e-6. For the question about the quote, we train the model for 100 epochs with an initial learning rate of 2e-5.

Figure 3: Illustrations of the 5 approaches described in Section 5. Plot (a) describes the QA pipeline, the sequence labelling and the rule-based Quote Annotator used for quote-source extraction. Plot (b) introduces the document retrieval approach for expert recommendation, and plot (c) presents the expert retrieval approach for expert recommendation.

For expert recommendation, we consider two types of documents: the main sentence where a source/quote occurred, or the main sentence together with its surrounding context (i.e., the preceding and following sentences). For the query to be used for expert retrieval, we use either the title of a news article, its keywords, or the first sentence of the summary. To further remove interference, we eliminate the source name from the input query if there is any. For the expert retrieval method, we take only the first \(w\) words in the news article title (the keyword list or the first sentence of the news summary) as the input query to reduce the running time. After validating the value of \(w\) between 1 and 10, we finally set \(w=5\). ### Evaluation Metrics To measure model performance for quote extraction and attribution, we use two metrics defined in SQuAD Rajpurkar et al. (2016): the exact match and the macro-averaged F1 score. **Exact Match** is equal to one if the predicted outcome is completely identical to the ground truth, while **(Macro-averaged) F1** measures the average overlap between predicted and ground truth answers at the token level. For expert recommendation, we use two metrics commonly used in information retrieval: the mean average precision (MAP) and the normalized discounted cumulative gain (NDCG). **Mean Averaged Precision** is the average value of the precision at the points where relevant documents are retrieved. **Normalized Discounted Cumulative Gain at K** first discounts the gain scale at the \(i\)-th rank position by \(\frac{1}{\log_{2}(i)}\), then adds up the converted gain scales up to rank \(k\), and finally normalizes the result by the ideal ranking order. In addition, we propose **relaxed metrics** where the retrieved expert is considered relevant if it is in the same cluster as the true source. In the construction of the relaxed metrics, we opt for the top 100 most frequent source DBpedia categories and use binary vectors to embed sources14. We then perform \(k\)-means clustering on the source embeddings. We empirically set \(k=40\) according to the cluster coherence and separation scores. Footnote 14: In our dataset, a source is assigned to 4 to 5 DBpedia categories on average.
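For concreteness, the two extraction metrics can be re-implemented following the SQuAD evaluation logic; the snippet below is a minimal sketch of that normalisation and scoring, not the official evaluation script.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    # SQuAD-style normalisation: lowercase, strip punctuation, articles, spaces.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    p_toks, g_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(p_toks) & Counter(g_toks)
    n_same = sum(common.values())
    if n_same == 0:
        return 0.0
    precision = n_same / len(p_toks)
    recall = n_same / len(g_toks)
    return 2 * precision * recall / (precision + recall)
```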
### Experimental Results We first present the results of the three quote extraction and attribution methods described in Section 5, and subsequently present the evaluation results for the two expert recommendation approaches introduced in Section 5. #### 6.3.1 Quote Extraction and Attribution Table 3 presents the performance of the rule-based annotator, sequence labeling and the QA pipeline on the test set. It is not surprising that the rule-based quote annotator performs the worst, as it can only extract direct quotes using regular-expression-like rules. In our test set, only 337 out of 2225 samples were identified as containing quotes. On this subset, the rule-based annotator gives a higher exact match score of 49.65 for sources compared to quotes, but it performs much better for direct quote extraction in Macro F1 compared to source extraction. On the other two categories, indirect and mixed quotes, the rule-based annotator essentially failed to produce any sensible results. Sequence labeling gives much better results compared to the rule-based annotator. We notice that in terms of exact match, quote extraction appears to be nearly 10% lower than source extraction, showing that the model struggled with longer answer extraction. For the three categories of quotes, the model gives the best results for quote extraction on the direct quotes, followed by the indirect quotes, and it performs the worst on the mixed quotes. This is expected, since mixed quotes are more complex to deal with compared to the other two categories. The QA pipeline achieves the best performance in both identifying sources and extracting the quotations. In testing the QA pipeline's quote extraction capabilities, we experimented with three scenarios by using either the true source name in the question for the quote, the predicted source from the results of QA\({}_{source}\), or masking the source with the pronoun '_they_' to completely remove the source information from the question. Since the accuracy of our QA model for source identification is already high, using the true or predicted source in the question for quote extraction does not make much difference. However, if the source information is lost, the quote extraction performance drops by nearly 2% in Macro F1 and over 4% in exact match. #### 6.3.2 Expert Recommendation We show in Table 4 the expert recommendation results from using the keywords of a news article as the query, and the context of quotes (the main sentence where the source and quote occurred, together with the preceding and the following sentences) as the document. It can be observed that the document retrieval (**DR**) approaches generally outperform the expert retrieval (**ER**) approaches. Among the various document indexing strategies, using the Lucene sparse bag-of-words index (**DR\({}_{sparse}\)**) gives superior results compared to the other, dense transformer-encoded Faiss indices. As expected, using the relaxed metrics, where a retrieved source is considered relevant to the true source if they reside in the same cluster, we obtain better results compared to the strict metrics.15 Footnote 15: Results using other document retrieval or expert retrieval approaches based on different combinations of the formulation of documents and queries are in Appendix C. ## 7 Challenges and Future Directions We have presented our NewsQuote dataset, and introduced a new task, namely expert recommendation in the field of journalism and fact-checking. Our experiments confirmed the possibility of extracting quote-source pairs using a question-answering pipeline as well as finding expert sources using document retrieval and expert retrieval. Here, we outline some potential future directions. First, in the construction of our dataset, the quote trigger verbs are manually selected from the most frequent group of verbs. On one hand, the identified verb list does not cover all the possible verbs that are indicative of quotations, such as those that occur less frequently or are not closely related to the Covid topic. On the other hand, some verbs are ambiguous and need to be contextualized to determine whether they are indeed trigger words. Although we removed ambiguous cases when examining the test set, it is not practical to perform manual filtering on such large-scale data. Future work could explore the possibility of leveraging other large-scale quote corpora for training a model for the detection of quote trigger words. Also, our dataset has been constructed from news articles about the coronavirus. In the future, this could be extended to cover a wide range of topics such as business, technology, education, and politics. Second, co-reference resolution will be vital for increasing the quote-source attribution data, as it is common to use pronouns to refer to previously mentioned sources in news articles. Our preliminary experiments on co-reference resolution led to noisy quote-source attribution results. In future work, the content similarity and/or coherence between the quote associated with a pronoun and a quote of a candidate source could be leveraged to improve the co-reference resolution results.
Third, since sources in our dataset are identified by their DBpedia links, external knowledge could be imported as evidence to enhance the performance of expert recommendation. Fourth, our framework makes it possible to build a quote-source library for the newsroom that can help with veracity assessment, where summaries of the comments made by each source, including who has quoted them, when, and in relation to which veracity check, can be made available to journalists and fact-checkers, thereby reducing duplication of effort and supporting collaboration. Finally, it is important that journalists and fact-checkers do not become over-reliant on tools such as the one we present here (i.e., fall victim to so-called 'automation bias'). The results therefore need to be interpreted with care, and the final decision on which experts to approach should always be made by the journalist or fact-checker. It is therefore important that such models provide evidence for their recommendations that can be assessed for credibility and relevance by the user (Procter et al., 2023).

\begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**Strict Metrics**} & \multicolumn{3}{c}{**Relaxed Metrics**} \\ & **MAP** & **NDCG\({}_{5}\)** & **NDCG\({}_{10}\)** & **MAP** & **NDCG\({}_{5}\)** & **NDCG\({}_{10}\)** \\ \hline \(\textbf{DR}_{sparse}\) & **0.2903** & **0.2807** & **0.3590** & **0.4162** & **0.3925** & **0.5183** \\ \(\textbf{DR}_{flat}\) & 0.1481 & 0.1440 & 0.1939 & 0.2886 & 0.2714 & 0.3887 \\ \(\textbf{DR}_{hnswpq}\) & 0.1509 & 0.1473 & 0.1926 & 0.2966 & 0.2805 & 0.3956 \\ \(\textbf{DR}_{hnsw}\) & 0.1446 & 0.1406 & 0.1889 & 0.2865 & 0.2686 & 0.3850 \\ \(\textbf{DR}_{pq}\) & 0.1395 & 0.1363 & 0.1838 & 0.2739 & 0.2583 & 0.3734 \\ \hline \(\textbf{ER}_{can}\) & 0.1021 & 0.1106 & 0.1252 & 0.2306 & 0.2294 & 0.3135 \\ \(\textbf{ER}_{doc}\) & 0.1205 & 0.1281 & 0.1418 & 0.2465 & 0.2412 & 0.3285 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of expert recommendation using quote context as document, and news article keywords as query. In the first five rows, **DR** denotes the document retrieval approach, and the subscripts represent the 5 types of retrieval indices mentioned in Section 5, Approach 1: the Lucene sparse bag-of-words index, the Faiss flat index, the Faiss HNSWPQ index, the Faiss HNSW index, and the Faiss PQ index. \(\textbf{ER}_{can}\) is the candidate-based expert finding approach, and \(\textbf{ER}_{doc}\) is the document-based expert finding approach. In the document-based expert finding approaches, the input query length is set to 5 keywords.
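As Table 4 shows, the sparse Lucene route (DR\({}_{sparse}\)) is the strongest variant. A minimal sketch of that pipeline with Pyserini is given below; the index path and the docid-to-source mapping are placeholders for illustration, not artifacts released with the paper.

```python
from pyserini.search.lucene import LuceneSearcher

# Hypothetical Lucene index built over the quote contexts.
searcher = LuceneSearcher("indexes/newsquote-contexts")

def recommend_experts(query: str, doc_to_source: dict, k: int = 10) -> list:
    # BM25 over the sparse bag-of-words index, then map hits to their sources.
    hits = searcher.search(query, k=k)
    experts = []
    for hit in hits:
        source = doc_to_source[hit.docid]  # source quoted in that context
        if source not in experts:
            experts.append(source)
    return experts
```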
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Overall**} & \multicolumn{2}{c}{**Direct Quotes**} & \multicolumn{2}{c}{**Indirect Quotes**} & \multicolumn{2}{c}{**Mixed Quotes**} \\ \cline{2-9} & **Macro F1** & **Exact Match** & **Macro F1** & **Exact Match** & **Macro F1** & **Exact Match** & **Macro F1** & **Exact Match** \\ \hline \(\textbf{Rule}_{source}\) & 5.76 & 5.62 & 50.58 & 49.65 & 0.214 & 0.214 & 24.11 & 23.26 \\ \(\textbf{Rule}_{quote}\) & 7.72 & 1.93 & 82.33 & 30.07 & 0.145 & 0.00 & 23.84 & 0.00 \\ \hline \(\textbf{SL}_{source}\) & 98.06 & 95.37 & 98.63 & 95.80 & 97.99 & 95.34 & 98.23 & 95.35 \\ \(\textbf{SL}_{quote}\) & 95.65 & 85.17 & **97.17** & 89.51 & 95.61 & 85.11 & 95.05 & 82.79 \\ \hline \(\textbf{QA}_{source}\) & **98.86** & **98.61** & **99.30** & **99.30** & **98.77** & **98.50** & **99.38** & **99.07** \\ \(\textbf{QA}_{quote}\) & & & & & & & & \\ \(w/\;true\;source\) & **95.96** & **90.74** & 95.83 & **93.01** & **95.96** & **90.31** & **96.06** & **93.02** \\ \(w/\;pred.\;source\) & 95.61 & 89.93 & 95.78 & 93.01 & 95.55 & 89.34 & 96.06 & 93.02 \\ \(w/\;source\;mask\) & 93.92 & 85.84 & 96.53 & 90.21 & 93.56 & 85.11 & 95.28 & 89.30 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of source and quotation extraction on the test set. **Rule** – the rule-based annotator, **SL** – sequence labeling, **QA** – the question answering pipeline. The subscripts indicate the aim of the models, either \(source\) extraction or \(quote\) extraction. Under QA\({}_{quote}\), '\(w/\;true\;source\)' is where we use the true source name when asking "_What did [source] say?_", while '\(w/\;pred.\;source\)' uses the predicted source from the QA\({}_{source}\) results, and '\(w/\;source\;mask\)' uses the generic pronoun _they_. ## 8 Conclusions We have described the construction of a novel, large-scale dataset of quote-source pairs retrieved from news articles. Our NewsQuote dataset comprises direct quotations, indirect quotations and their combinations. The diversity of quote types will encourage the development of more advanced approaches for the challenging tasks of indirect and mixed quote extraction. Based on the NewsQuote dataset, we have demonstrated that the QA pipeline is able to achieve over 98% exact match for source extraction and close to 90% for quote extraction. In addition, we have introduced the expert recommendation task and shown that the document retrieval approach with sparse indexing gives the best results compared to other dense retrieval approaches. ## Ethics Statement All data we used are from open public sources. We have obtained written consent from Aylien to download their data. As per the data owner's requirement, we will not directly share the downloaded data; instead, we will share the download script and all pre-processing scripts so that others can obtain the same dataset we used in the paper from Aylien's website. ## Acknowledgements This work was supported in part by the EPSRC (grant no. EP/V048597/1). YH is supported by a Turing AI Fellowship funded by the UKRI (grant no. EP/V020579/2).
2307.15189
Med-Flamingo: a Multimodal Medical Few-shot Learner
Medicine, by its nature, is a multifaceted domain that requires the synthesis of information across various modalities. Medical generative vision-language models (VLMs) make a first step in this direction and promise many exciting clinical applications. However, existing models typically have to be fine-tuned on sizeable down-stream datasets, which poses a significant limitation as in many medical applications data is scarce, necessitating models that are capable of learning from few examples in real-time. Here we propose Med-Flamingo, a multimodal few-shot learner adapted to the medical domain. Based on OpenFlamingo-9B, we continue pre-training on paired and interleaved medical image-text data from publications and textbooks. Med-Flamingo unlocks few-shot generative medical visual question answering (VQA) abilities, which we evaluate on several datasets including a novel challenging open-ended VQA dataset of visual USMLE-style problems. Furthermore, we conduct the first human evaluation for generative medical VQA where physicians review the problems and blinded generations in an interactive app. Med-Flamingo improves performance in generative medical VQA by up to 20\% in clinician's rating and firstly enables multimodal medical few-shot adaptations, such as rationale generation. We release our model, code, and evaluation app under https://github.com/snap-stanford/med-flamingo.
Michael Moor, Qian Huang, Shirley Wu, Michihiro Yasunaga, Cyril Zakka, Yash Dalmia, Eduardo Pontes Reis, Pranav Rajpurkar, Jure Leskovec
2023-07-27T20:36:02Z
http://arxiv.org/abs/2307.15189v1
# Med-Flamingo: a Multimodal Medical Few-shot Learner ###### Abstract Medicine, by its nature, is a multifaceted domain that requires the synthesis of information across various modalities. Medical generative vision-language models (VLMs) make a first step in this direction and promise many exciting clinical applications. However, existing models typically have to be fine-tuned on sizeable down-stream datasets, which poses a significant limitation as in many medical applications data is scarce, necessitating models that are capable of learning from few examples in real-time. Here we propose Med-Flamingo, a multimodal few-shot learner adapted to the medical domain. Based on OpenFlamingo-9B, we continue pre-training on paired and interleaved medical image-text data from publications and textbooks. Med-Flamingo unlocks few-shot generative medical visual question answering (VQA) abilities, which we evaluate on several datasets including a novel challenging open-ended VQA dataset of visual USMLE-style problems. Furthermore, we conduct the first human evaluation for generative medical VQA where physicians review the problems and blinded generations in an interactive app. Med-Flamingo improves performance in generative medical VQA by up to 20% in clinician's rating and firstly enables multimodal medical few-shot adaptations, such as rationale generation. We release our model, code, and evaluation app under [https://github.com/snap-stanford/med-flamingo](https://github.com/snap-stanford/med-flamingo). ## 1 Introduction Large, pre-trained models (or foundation models) have demonstrated remarkable capabilities in solving an abundance of tasks by being provided only a few labeled examples as context Bommasani et al. (2021). This is known as in-context learning Brown et al. (2020), through which a model learns a task from a few provided examples specifically during prompting and without tuning the model parameters. In the medical domain, this bears great potential to vastly expand the capabilities of existing medical AI models Moor et al. (2023). Most notably, it will enable medical AI models to handle the various rare cases faced by clinicians every day in a unified way, to provide relevant rationales to justify their statements, and to easily customize model generations to specific use cases. Implementing the in-context learning capability in a medical setting is challenging due to the inherent complexity and multimodality of medical data and the diversity of tasks to be solved. Previous efforts to create multimodal medical foundation models, such as ChexZero Tiu et al. (2022) and BiomedCLIP Zhang et al. (2023), have made significant strides in their respective domains. ChexZero specializes in chest X-ray interpretation, while BiomedCLIP has been trained on more diverse images paired with captions from the biomedical literature. Other models have also been developed for electronic health record (EHR) data Steinberg et al. (2021) and surgical videos Kiyasseh et al. (2023). However, none of these models have embraced in-context learning for the multimodal medical domain. Existing medical VLMs, such as MedVINT Zhang et al. (2023), are typically trained on paired image-text data with a single image in the context, as opposed to more general streams of text that are interleaved with multiple images. 
Therefore, these models were not designed and tested to perform multimodal in-context learning with few-shot examples1. Footnote 1: For example, a challenge with multimodal in-context learning for existing medical vision language models is the potential for image information to leak across examples, potentially misleading the model. Here, we propose Med-Flamingo, the first medical foundation model that can perform multimodal in-context learning specialized for the medical domain. Med-Flamingo is a vision-language model based on Flamingo (Alayrac et al., 2022) that can naturally ingest data with interleaved modalities (images and text), to generate text conditioned on this multimodal input. Building on the success of Flamingo, which was among the first vision-language models to exhibit in-context learning and few-shot learning abilities, Med-Flamingo extends these capabilities to the medical domain by pre-training on multimodal knowledge sources across medical disciplines. In preparation for the training of Med-Flamingo, our initial step involved constructing a unique, interleaved image-text dataset, which was derived from an extensive collection of over \(4K\) medical textbooks (Section 3). Given the critical nature of accuracy and precision within the medical field, it is important to note that the quality, reliability, and source of the training data can considerably shape the results. Therefore, to ensure accuracy in medical facts, we meticulously curated our dataset from respected and authoritative sources of medical knowledge, as opposed to relying on potentially unreliable web-sourced data. In our experiments, we evaluate Med-Flamingo on generative medical visual question-answering (VQA) tasks by directly generating open-ended answers, as opposed to scoring artificial answer options _ex post_, as CLIP-based medical vision-language models do. We design a new realistic evaluation protocol to measure the model generations' clinical usefulness. For this, we conduct an in-depth human evaluation study with clinical experts, which results in a human evaluation score that serves as our main metric. In addition, because existing medical VQA datasets are narrowly focused on image interpretation within the specialties of radiology and pathology, we create Visual USMLE, a challenging generative VQA dataset of complex USMLE-style problems across specialties, which are augmented with images, case vignettes, and potentially with lab results. Averaged across three generative medical VQA datasets, few-shot prompted Med-Flamingo achieves the best average rank in clinical evaluation score (rank of \(1.67\), best prior model has \(2.33\)), indicating that the model generates answers that are most preferred by clinicians, with up to 20% improvement over prior models. Furthermore, Med-Flamingo is capable of performing medical reasoning, such as answering complex medical questions (such as visually grounded USMLE-style questions) and providing explanations (i.e., rationales), a capability not previously demonstrated by other multimodal medical foundation models. However, it is important to note that Med-Flamingo's performance may be limited by the availability and diversity of training data, as well as the complexity of certain medical tasks. All investigated models and baselines would occasionally hallucinate or generate low-quality responses.
Despite these limitations, our work represents a significant step forward in the development of multimodal medical foundation models and their ability to perform multimodal in-context learning in the medical domain. We release the Med-Flamingo-9B checkpoint for further research, and make our code available under [https://github.com/snap-stanford/med-flamingo](https://github.com/snap-stanford/med-flamingo).

Figure 1: Example of how Med-Flamingo answers complex multimodal medical questions by generating open-ended responses conditioned on textual and visual information.

Figure 2: Overview of the Med-Flamingo model and the three steps of our study. First, we pre-train our Med-Flamingo model using paired and interleaved image-text data from the general medical domain (sourced from publications and textbooks). We initialize our model at the OpenFlamingo checkpoint and continue pre-training on medical image-text data. Second, we perform few-shot generative visual question answering (VQA). For this, we leverage two existing medical VQA datasets, and a new one, Visual USMLE. Third, we conduct a human rater study with clinicians to rate generations in the context of a given image, question and correct answer. The human evaluation was conducted with a dedicated app and results in a clinical evaluation score that serves as our main metric for evaluation.

In summary, our paper makes the following contributions: 1. We present the first multimodal few-shot learner adapted to the medical domain, which promises novel clinical applications such as rationale generation and conditioning on retrieved multimodal context. 2. We create a novel dataset that enables the pre-training of a multimodal few-shot learner for the general medical domain. 3. We create a novel USMLE-style evaluation dataset that combines medical VQA with complex, across-specialty medical reasoning. 4. We highlight shortcomings of existing evaluation strategies, and conduct an in-depth clinical evaluation study of open-ended VQA generations with medical raters using a dedicated evaluation app. ## 2 Related works The success of large language models (LLMs) Brown et al.; Liang et al. (2022); Qin et al. (2023) has led to significant advancements in training specialized models for the medical domain. This has resulted in the emergence of various models, including BioBERT Lee et al. (2020), ClinicalBERT Huang et al. (2019), PubMedBERT Gu et al. (2021), BioLinkBERT Yasunaga et al. (b), DRAGON Yasunaga et al. (a), BioMedLM Bolton et al., BioGPT Luo et al. (2022), and Med-PaLM Singhal et al.. Although these medical language models are typically smaller than general-purpose LLMs like GPT-3 Brown et al., they can match or even surpass their performance on medical tasks, such as medical question answering. Recently, there has been a growing interest in extending language models to handle vision-language multimodal data and tasks Su et al. (2019); Ramesh et al.; Alayrac et al. (2022); Aghajanyan et al.; Yasunaga et al. (2023). Furthermore, many medical applications involve multimodal information, such as radiology tasks that require the analysis of both X-ray images and radiology reports Tiu et al. (2022). Motivated by these factors, we present a medical vision-language model (VLM). Existing medical VLMs include BiomedCLIP Zhang et al. (2023), MedVINT Zhang et al. (2023).
While BiomedCLIP is an encoder-only model, our focus lies in developing a generative VLM, demonstrating superior performance compared to MedVINT. Finally, Llava-Med is another recent medical generative VLM Li et al. (2023); however, the model was not yet available for benchmarking. ## 3 Med-Flamingo To train a Flamingo model adapted to the medical domain, we leverage the pre-trained OpenFlamingo-9B model checkpoint Awadalla et al. (2023), which is a general-domain VLM that was built on top of the frozen language model LLaMA-7B Touvron et al. (2023) and the frozen vision encoder CLIP ViT/L-14 Radford et al.. We perform continued pre-training in the medical domain, which results in the model we refer to as Med-Flamingo.

Figure 3: Overview of the distribution of medical textbook categories of the MTB dataset. We classify each book title into one of the 49 manually created categories or “other” using the Claude-1 model.

### Data We pre-train Med-Flamingo by jointly training on interleaved image-text data and paired image-text data. For the interleaved data, we created a new dataset from a set of medical textbooks, which we subsequently refer to as MTB. For the paired data, we used PMC-OA Lin et al. (2023). MTB. We construct a new multimodal dataset from a set of \(4\,721\) textbooks from different medical specialties (see Figure 3). During preprocessing, each book is first converted from PDF to HTML with all tags removed, except image tags, which are converted to \(<\)image\(>\) tokens. We then carry out data cleaning via deduplication and content filtering. Finally, each book with cleaned text and images is chopped into segments for pre-training, so that each segment contains at least one image, up to 10 images, and does not exceed a maximum length. In total, MTB consists of approximately 0.8M images and 584M tokens. We use 95% of the data for training and 5% of the data for evaluation during the pre-training. PMC-OA. We adopt the PMC-OA dataset Lin et al. (2023), which is a biomedical dataset with 1.6M image-caption pairs collected from PubMedCentral's OpenAccess subset. We use 1.3M image-caption pairs for training and 0.16M pairs for evaluation, following the public split2. Footnote 2: [https://huggingface.co/datasets/axiong/pmc_oa_beta](https://huggingface.co/datasets/axiong/pmc_oa_beta) ### Objectives We follow the original Flamingo model approach Alayrac et al., which considers the following language modelling problem: \[p\left(y\mid x\right)=\prod_{\ell=1}^{L}p\left(y_{\ell}\mid y_{<\ell},x_{<\ell}\right),\] where \(y_{\ell}\) refers to the \(\ell\)-th language token, \(y_{<\ell}\) to the set of preceding language tokens, and \(x_{<\ell}\) to the set of preceding visual tokens. As we focus on modelling the medical literature, here we consider only image-text data (i.e., no videos). Following Alayrac et al., we minimize a joint objective \(\mathcal{L}\) over paired and interleaved data: \[\mathcal{L}=\mathbb{E}_{(x,y)\sim D_{p}}\left[-\sum_{\ell=1}^{L}\log p\left(y_{\ell}\mid y_{<\ell},x_{<\ell}\right)\right]+\lambda\cdot\mathbb{E}_{(x,y)\sim D_{i}}\left[-\sum_{\ell=1}^{L}\log p\left(y_{\ell}\mid y_{<\ell},x_{<\ell}\right)\right],\] where \(D_{p}\) and \(D_{i}\) stand for the paired and interleaved dataset, respectively. In our case, we use \(\lambda=1\).
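A compact way to read this objective is as two standard next-token cross-entropy terms, one per data stream. The sketch below illustrates this; the model call signature (vision_x/lang_x, echoing OpenFlamingo's interface) and the batch dictionaries are assumptions for illustration, not the project's actual training code.

```python
import torch.nn.functional as F

def lm_loss(logits, labels):
    # Next-token prediction: shift logits and labels by one position.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,  # mask out, e.g., <image> token positions
    )

def joint_loss(model, paired, interleaved, lam=1.0):
    # L = E_{D_p}[-log p(y|x)] + lambda * E_{D_i}[-log p(y|x)], with lambda = 1.
    logits_p = model(vision_x=paired["images"], lang_x=paired["input_ids"]).logits
    logits_i = model(vision_x=interleaved["images"], lang_x=interleaved["input_ids"]).logits
    return lm_loss(logits_p, paired["labels"]) + lam * lm_loss(logits_i, interleaved["labels"])
```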
### Training We performed multi-GPU training on a single node with 8x 80GB NVIDIA A100 GPUs. We trained the model using DeepSpeed ZeRO Stage 2: optimizer states and gradients are sharded across devices. To further reduce memory load, we employed the 8-bit AdamW optimizer as well as the memory-efficient attention implementation of PyTorch 2.0. Med-Flamingo was initialized at the checkpoint of the OpenFlamingo model and then pre-trained for 2700 steps (or 6.75 days in wall time, including the validation steps), using 50 gradient accumulation steps and a per-device batch size of 1, resulting in a total batch size of 400. The model has \(1.3B\) trainable parameters (gated cross-attention layers and perceiver layers) and roughly \(7B\) frozen parameters (decoder layers and vision encoder), which results in a total of \(8.3B\) parameters. Note that this is the same number of parameters as in the OpenFlamingo-9B model (version 1). ## 4 Evaluation ### Automatic Evaluation Baselines. To compare generative VQA abilities against the literature, we consider different variants of the following baselines: 1. MedVINT Zhang et al. (2023b), a visual instruction-tuned VLM based on Llama. As this model was not designed to do few-shot learning (e.g., the image information is prepended to the overall input), we report two modes for MedVINT: zero-shot and fine-tuned, where the model was fine-tuned on the training split of the VQA dataset. Since the rather small Visual USMLE dataset has no separate training split, we omit the fine-tuned baseline for that dataset. We used the MedVInT-TD model with PMC-LLaMA and PMC-CLIP backbones. 2. OpenFlamingo Awadalla et al. (2023), a powerful VLM which was trained on general-domain data, and which served as the base model to train Med-Flamingo. We report both zero-shot and few-shot performance. We expect Flamingo-type models to shine in the few-shot setting, which they are designed for (already the pre-training task includes multiple interleaved image-text examples). Evaluation datasets. To evaluate our model and compare it against the baselines, we leverage two existing VQA datasets from the medical domain (VQA-RAD and PathVQA). Upon closer inspection of the VQA-RAD dataset, we identified severe data leakage in the official train/test splits, which is problematic given that many recent VLMs fine-tune on the train split. To address this, we created a custom train/test split by separately splitting images and questions (each 90%/10%) to ensure that no image or question of the train split leaks into the test split. On these datasets, \(6\) shots were used for few-shot prompting. Furthermore, we create Visual USMLE, a challenging multimodal problem set of \(618\) USMLE-style questions which are not only augmented with images but also with a case vignette and potentially tables of laboratory measurements. The Visual USMLE dataset was created by adapting problems from the Amboss platform (using licensed user access). To make the Visual USMLE problems more actionable and useful, we rephrased the problems to be open-ended instead of multiple-choice. This makes the benchmark harder and more realistic, as the models have to come up with differential diagnoses and potential procedures completely on their own, as opposed to selecting the most reasonable answer choice from a few options. Figure 8 gives an overview of the broad range of specialties that are covered in the dataset, greatly extending existing medical VQA datasets, which are narrowly focused on radiology and pathology. For this comparatively small dataset, instead of creating a training split for fine-tuning, we created a small train split of \(10\) problems which can be used for few-shot prompting.
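To illustrate how such interleaved few-shot prompts can be assembled, the snippet below strings \(<\)image\(>\) placeholders together with question-answer text, using the \(<\)image\(>\) and \(<\)|endofchunk|\(>\) markers from the OpenFlamingo codebase; the exact prompt wording here is an assumption for illustration.

```python
def build_fewshot_prompt(shots, query_question):
    """shots: list of (question, answer) pairs; each shot corresponds to one
    image supplied separately to the model in the same order."""
    parts = []
    for question, answer in shots:
        parts.append(f"<image>Question: {question} Answer: {answer}<|endofchunk|>")
    # The query example leaves the answer open for the model to generate.
    parts.append(f"<image>Question: {query_question} Answer:")
    return "".join(parts)

# Hypothetical one-shot example.
prompt = build_fewshot_prompt(
    [("What abnormality is seen?", "Pleural effusion.")],
    "What abnormality is seen?",
)
```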
For this dataset (with considerably longer problems and answers), we used only \(4\) shots to fit in the context window. Evaluation metrics. Previous works in medical vision-language modelling typically focused on scoring all available answers of a VQA dataset to arrive at a classification accuracy. However, since we are interested in _generative_ VQA (as opposed to post-hoc scoring of different potential answers), for the sake of clinical utility we employ the following evaluation metrics that directly assess the quality of the generated answer: 1. Clinical evaluation score, as rated by three medical doctors (including one board-certified radiologist) using a human evaluation app that we developed for this study. More details are provided in Section 4.2. 2. BERT similarity score (BERT-sim), the F1 BERT score between the generated answer and the correct answer Zhang et al. (2020). 3. Exact-match, the fraction of generated answers that exactly match (modulo punctuation) the correct answer. This metric is rather noisy and conservative, as useful answers may not lexically match the correct answer. ### Human evaluation We implemented a human evaluation app using Streamlit to visually display the generative VQA problems for clinical experts to rate the quality of the generated answers with scores from \(0\) to \(10\). Figure 4 shows an exemplary view of the app. For each VQA problem, the raters are provided with the image, the question, the correct answer, and a set of blinded generations (e.g., appearing as "prediction_1" in Figure 4) that appear in randomized order. ### Deduplication and leakage During the evaluation of the Med-Flamingo model, we were concerned that there may be leakage between the pre-training datasets (PMC-OA and MTB) and the down-stream VQA datasets used for evaluation; this could inflate judgements of model quality, as the model could memorize image-question-answer triples. To alleviate this concern, we performed data deduplication based upon pairwise similarity between images from our pre-training datasets and the images from our evaluation benchmarks. To detect similar images, in spite of perturbations due to cropping, color shifts, size, etc., we embedded the images using Google's Vision Transformer, preserving the last hidden state as the resultant embedding Dosovitskiy et al. (2021). We then found the k-nearest neighbors to each evaluation image from amongst the pre-training images (using the FAISS library) Johnson et al. (2019). We then sorted and visualized image-image pairs by least Euclidean distance; we found that images might be duplicates up to a pairwise distance of around 80; beyond this point, there were no duplicates. This process revealed that the pre-training datasets leaked into the PVQA evaluation benchmark. Out of 6700 total images in the PVQA test set, we judged 194 to be highly similar to images in the pre-training datasets, and thus we removed them from our down-stream evaluation.
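A minimal sketch of this nearest-neighbour screening is given below; the embedding extraction is omitted, and the distance convention (FAISS's IndexFlatL2 returns squared L2 distances) should be checked against the empirically chosen cut-off of ~80, which is treated here as an assumption.

```python
import faiss
import numpy as np

def flag_near_duplicates(pretrain_emb, eval_emb, k=5, threshold=80.0):
    # pretrain_emb, eval_emb: (n, d) float32 arrays of ViT embeddings.
    index = faiss.IndexFlatL2(pretrain_emb.shape[1])  # exact L2 search
    index.add(pretrain_emb.astype(np.float32))
    dist, nn_idx = index.search(eval_emb.astype(np.float32), k)
    # IndexFlatL2 returns squared distances; take the root so the
    # comparison is on the same scale as the ~80 threshold.
    dup_mask = np.sqrt(dist[:, 0]) < threshold
    return np.nonzero(dup_mask)[0], nn_idx
```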
## 5 Results In our experiments, we focus on generative medical visual question answering (VQA). While recent medical VLMs predominantly performed VQA in a non-generative but rather discriminative manner (i.e., by scoring different answer choices), we believe this ex-post classification to carry less clinical usefulness than directly generating responses. On the other hand, generative VQA is more challenging to evaluate, as automated metrics suffer from significant limitations: they do not fully capture the domain-specific context. Thus, we perform a human evaluation study where clinical experts review model generations (blinded) and score them (between 0 and 10) in terms of clinical usefulness.

Figure 4: Illustration of our human evaluation app that we created for clinical experts to evaluate generated answers.

Conventional VQA datasets. Table 1 shows the results for VQA-RAD, the radiological VQA dataset for which we created custom splits to address leakage (see Section 4). Med-Flamingo few-shot shows strong results, improving the clinical evaluation score by \(\sim 20\%\) over the best baseline. In this dataset, the auxiliary metrics are rather aligned with clinical preference. Fine-tuning the MedVINT baseline did not lead to improved performance on this dataset, which may be due to its small size. MedVINT zero-shot outperforms the other zero-shot ablations, which may be partially attributed to its instruction tuning step on PMC-VQA.

\begin{table} \begin{tabular}{l r r r} \hline \hline VQA-RAD & Clinical eval. score & BERT-sim & Exact-match \\ \hline MedVINT zero-shot & 4.63 & 0.628 & 0.167 \\ MedVINT fine-tuned (\(\sim 2K\) samples) & 2.87 & 0.611 & 0.133 \\ OpenFlamingo zero-shot & 4.39 & 0.490 & 0.000 \\ OpenFlamingo few-shot & 4.69 & 0.645 & **0.200** \\ Med-Flamingo zero-shot & 3.82 & 0.480 & 0.000 \\ Med-Flamingo few-shot & **5.61** & **0.650** & **0.200** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance metrics on the VQA-Rad dataset. Best scores are shown in bold. We put emphasis on the clinical evaluation score. BERT-sim may not fully capture the fine-grained medical details. Exact-match is quite noisy and brittle, but conservative. The fine-tuned baseline did not improve over zero-shot, which could be explained by the small dataset size in combination with our custom splits, which were created to prevent leakage.

Figure 5: Multimodal medical few-shot prompting illustrated with an example. Few-shot prompting here allows users to customize the response format, _e.g._, to provide rationales for the provided answers. In addition, multimodal few-shot prompts potentially offer the ability to include relevant context retrieved from the medical literature.

Table 2 shows the results for Path-VQA, the pathology VQA dataset. Compared to the other datasets, all models overall perform worse on the Path-VQA dataset in terms of clinical evaluation score. We hypothesize that this has to do with the fact that the models are not pre-trained on actual large-scale and fine-grained pathology image datasets, but only on a rather small amount of pathology literature (which may not be enough to achieve strong performance). For instance, Figure 3 shows that only a small fraction of our training data covers pathology. In the automated metrics (BERT-sim and exact-match), Med-Flamingo improves upon the OpenFlamingo baseline; however, the overall quality does not improve (as seen in the clinical evaluation score). MedVINT was fine-tuned on a sizeable training split, which results in strong automated metrics, but did not result in a clinical evaluation score that matches any Flamingo variant.
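Since BERT-sim drives several of these comparisons, it is worth noting how it can be computed; below is a minimal sketch using the bert-score package, where the default English model is an assumption and not necessarily the configuration behind the tables.

```python
from bert_score import score

preds = ["there is calcification of the aortic wall"]  # model generations
refs = ["calcification of the aortic wall"]            # reference answers
P, R, F1 = score(preds, refs, lang="en")
bert_sim = float(F1.mean())  # BERT-sim: the F1 BERTScore over the batch
```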
Visual USMLE. Table 3 shows the results for the Visual USMLE dataset. Med-Flamingo (few-shot) results in the clinically most preferable generations, whereas OpenFlamingo (zero-shot) is a close runner-up. As the ground truth answers were rather lengthy paragraphs, exact match was not an informative metric (constant 0 for all methods). The few-shot prompted models lead to lower automated scores than their zero-shot counterparts, which we hypothesize has to do with the fact that the USMLE problems are long (long vignettes as well as long answers), which forced us to summarize the questions and answers when designing few-shot prompts (for which we used GPT-4). Hence, it is possible that those prompts lead to short answers that, in terms of BERT-sim score, differ more from the correct answer than a wordier zero-shot generation. Across datasets. Overall, we find that Med-Flamingo's multimodal in-domain few-shot learning abilities lead to favorable generative VQA performance, leading to the lowest average rank of \(1.67\) in terms of clinical evaluation score, as averaged across all evaluation datasets. As runner-up, OpenFlamingo zero-shot achieves a rank of \(2.33\). Qualitative analysis. Finally, we showcase a few examples of Med-Flamingo generations in more detail in Figures 1, 5, and 6. Figure 5 exemplifies that a medical few-shot learner like Med-Flamingo can be prompted to generate a rationale for its VQA answer. The shown example is impressive in that the rationale visually guides the reader towards the object of interest (calcification of the aortic wall). We note, however, that at this stage, few-shot multimodal prompted rationales may not be robust, especially when a model arrives at a wrong answer. Figures 1 and 6 showcase two example problems from the Visual USMLE dataset. The problem descriptions were slightly rephrased and summarized using GPT-4 for display. In Figure 6, Med-Flamingo generates the correct answer while not mentioning the underlying diagnosis (urothelial cancer), as it was not asked for. By contrast, we observed baselines directly diagnosing the patient (instead of answering the actual question in a targeted way). The problem in Figure 1 illustrates that Med-Flamingo has the ability to integrate complex medical history information together with visual information to synthesize a comprehensive diagnosis that draws from the information of both modalities.

\begin{table} \begin{tabular}{l c c c} \hline \hline Path-VQA & Clinical eval. score & BERT-sim & Exact-match \\ \hline MedVINT zero-shot & 0.13 & 0.608 & 0.272 \\ MedVINT fine-tuned (\(\sim 20K\) samples) & 1.23 & **0.723** & **0.385** \\ OpenFlamingo zero-shot & **2.16** & 0.474 & 0.009 \\ OpenFlamingo few-shot & 2.08 & 0.669 & 0.288 \\ Med-Flamingo zero-shot & 1.72 & 0.521 & 0.120 \\ Med-Flamingo few-shot & 1.81 & 0.678 & 0.303 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance metrics on the PathVQA dataset. Best scores are shown in bold. Across models, this dataset showed the lowest clinical performance among all evaluation datasets. This highlights a performance deficit in pathology across models, and demonstrates that previous classification-based metrics severely overestimated the performance of general medical VLMs in this specialty.

\begin{table} \begin{tabular}{l c c} \hline \hline Visual USMLE & Clinical eval. score & BERT-sim \\ \hline MedVINT zero-shot & 0.41 & 0.421 \\ OpenFlamingo zero-shot & 4.31 & **0.512** \\ OpenFlamingo few-shot & 3.39 & 0.470 \\ Med-Flamingo zero-shot & 4.18 & 0.473 \\ Med-Flamingo few-shot & **4.33** & 0.431 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance metrics on the Visual USMLE dataset. Best scores are shown in bold. Due to rather lengthy correct answers, the Exact-match metric was not informative, as it was constantly \(0\) on this dataset.
## 6 Discussion In this paper, we presented Med-Flamingo, the first medically adapted multimodal few-shot learner. While this is an early proof-of-concept for a medical multimodal few-shot learner, we expect to see significant improvements with increased model and data scale, more thoroughly cleaned data, as well as with alignment to human preference via instruction tuning or explicit optimization for preferences. We expect that the rise of multimodal medical few-shot learners will lead to exciting opportunities with regard to model explainability (via rationale generation) as well as grounding the model in verified sources (via multimodal retrieval to augment the few-shot prompt). Thereby, our work serves as a first step towards more generalist medical AI models Moor et al. (2023). Limitations. This work demonstrates a proof-of-concept. As such, Med-Flamingo is _not_ intended nor safe for clinical use. In all VLMs we analyzed, hallucinations were observed. Furthermore, as Med-Flamingo is a pre-trained model without further instruction or preference tuning, it is possible that the model occasionally outputs low-quality generations. Future work. It will be an exciting route for future work to further train Med-Flamingo on clinical data, high-resolution medical image datasets, as well as 3D volumes and medical videos. While current general-purpose medical VLMs are pre-trained on the broad medical literature (_i.e.,_ they are only "book-smart"), learning directly from diverse patient data will also become crucial for down-stream applications. ## Acknowledgments We thank Rok Sosic for his technical support in the data preprocessing. Figure 6: Example of a Visual USMLE problem.
2305.03192
LSTM Framework for Classification of Radar and Communications Signals
Although radar and communications signal classification are usually treated separately, they share similar characteristics, and methods applied in one domain can be potentially applied in the other. We propose a simple and unified scheme for the classification of radar and communications signals using Long Short-Term Memory (LSTM) neural networks. This proposal provides an improvement of the state of the art on radar signals where LSTM models are starting to be applied within schemes of higher complexity. To date, there is no standard public dataset for radar signals. Therefore, we propose DeepRadar2022, a radar dataset used in our systematic evaluations that is available publicly and will facilitate a standard comparison between methods.
Victoria Clerico, Jorge Gonzalez-Lopez, Gady Agam, Jesus Grajal
2023-05-04T22:31:03Z
http://arxiv.org/abs/2305.03192v1
# LSTM Framework for Classification of Radar and Communications Signals ###### Abstract Although radar and communications signal classification are usually treated separately, they share similar characteristics, and methods applied in one domain can be potentially applied in the other. We propose a simple and unified scheme for the classification of radar and communications signals using Long Short-Term Memory (LSTM) neural networks. This proposal provides an improvement of the state of the art on radar signals, where LSTM models are starting to be applied within schemes of higher complexity. To date, there is no standard public dataset for radar signals. Therefore, we propose DeepRadar20221, a radar dataset used in our systematic evaluations that is available publicly and will facilitate a standard comparison between methods. Index Terms: Communications signals, radar signals, automatic modulation classifier, neural networks, long short-term memory networks. Footnote 1: Available for download in [https://www.kaggle.com/datasets/khilian/deepradar](https://www.kaggle.com/datasets/khilian/deepradar) ## I Introduction Automatic modulation classification (AMC) consists in the automatic determination of the modulation of a series of collected samples. It is the step that follows the detection of the signal and that is needed for data demodulation; therefore, it plays an important role in many civilian and military receivers [1]. Taking into account the classical approaches, AMC algorithms can be classified into two categories: those based on the likelihood function (LB, 'Likelihood-Based') and those based on feature extraction (FB, 'Feature Based') [2]. The first offers the optimal solution by minimizing the probability of false classification through the assumption that the probability density function (PDF) contains all the information needed for a specific waveform. Therefore, classification is performed by comparing the PDF likelihood ratio with a decision threshold [2]. The problem lies in its high computational complexity, which means that this method may not be suitable for real working environments. In contrast, FB algorithms extract representative features of each type of signal for their subsequent classification. These algorithms are suboptimal but are often preferred because they are easy to implement and suitable for real-time applications [2]. However, FB classifiers rely heavily on expert knowledge, so even though they are a good approximation in specific environments, they are highly complex and require a lot of time for development. While these algorithms have been successfully implemented to develop AMCs, Machine Learning (ML) and Deep Learning (DL) are considered good alternatives for developing high-performance and accurate AMCs without the need for time-consuming classical approaches. Algorithms such as K-Nearest Neighbors [3], Support Vector Machine [3, 4], Multilayer Perceptron (MLP) [5], Recurrent Neural Networks (RNN) [6, 7] and Convolutional Neural Networks (CNN) [8, 9, 10, 11] have recently been used for this purpose. Previous literature shows the success of LSTM networks in processing and classifying sequences. Thus, our objective is to provide a robust and simplified AMC based on LSTM networks for both communications and radar signals. To do so, the public RadioML 2018.01A communications dataset [9] was used. Since there is no public radar signal dataset, we have created and published DeepRadar20221, a radar dataset with continuous and pulsed signals of 23 classes.
Furthermore, for comparison purposes, a dataset of eight types of radar signals was reproduced, which was proposed in the current state-of-the-art literature on radar signal classification [7]. Footnote 1: Available for download in [https://www.kaggle.com/datasets/khilian/deepradar](https://www.kaggle.com/datasets/khilian/deepradar) The remainder of the paper is distributed as follows. The most relevant proposals and studies on AMCs with DL and ML are outlined in Section II. The signal model and datasets are introduced in Section III. Afterwards, our neural network architecture, metrics, and experimentation are described in Section IV, and the results are presented in Section V. Finally, the main conclusions of this work are presented in Section VI. ## II Related Work Despite the usefulness of the LB and FB classification algorithms, the appearance of artificial intelligence has revolutionized many areas of interest. ML and DL tools have been used in the past to build modulation classifiers for communications and radar signals, but the two domains have been treated separately. When classifying communications signals, the most typical signal representation is time series, and the most widely used method for their classification has been the one-dimensional CNN [8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. Similarly, recent proposals such as GGCNN [22] and SE-MSFN [23] used RadioML 2018.01A, providing outperforming results. In contrast, radar signals have been classified by feature extraction and some MLP architecture [5], or by using recorded signals as time-frequency images in conjunction with two-dimensional CNNs [24, 3, 4, 5], or mixed CNNs with a tree structure-based machine learning process optimization classifier [25], CNNs with Reinforcement Learning [26], CNNs with denoising autoencoders [28, 11, 27] and CNNs with LSTM [29, 30]. Some examples of pre-trained models based on CNNs that have been used lately for radar classification are AlexNet [10] and Inception [11]. In addition to that, certain attempts have been made to use RNNs in recent years by processing signals as time series, some of them using LSTM with attention mechanisms [7] or gated recurrent unit networks [31, 32, 33]. ## III Signal model and datasets To perform our experiments, one or more data sources should be chosen with a large number of signals per class. ### _Signal Model_ The datasets are composed of radar or communications signals. These signals are defined by the following model: \[s(t)=As_{n}(t)+r(t) \tag{1}\] where \(s_{n}(t)\) represents a normalized signal of unit power, \(A\) corresponds to a scale factor of the signal power, and \(r(t)\) is Complex Additive White Gaussian Noise (CAWGN) whose real and imaginary parts have variance \(\sigma^{2}\). Consequently, the resulting SNR is established as follows: \[snr=\frac{A^{2}}{2\sigma^{2}} \tag{2}\] All signals have a 1024x2 size corresponding to the in-phase and quadrature components of the time samples. Datasets of equal-sized signals are required to process the sequences through our LSTM framework, which has a fixed input size.
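As a sketch of how a sample can be produced from this model, the snippet below scales a unit-power complex baseband signal and adds complex white Gaussian noise at a prescribed SNR, following Eqs. (1) and (2); the specific waveform generator is left out, and the example waveform is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_signal_model(s_n, snr_db, A=1.0):
    """s = A*s_n + r, with r complex AWGN; s_n is unit-power (Eq. 1).

    The per-component noise variance sigma^2 follows from
    snr = A^2 / (2 * sigma^2) (Eq. 2).
    """
    snr = 10.0 ** (snr_db / 10.0)
    sigma = np.sqrt(A**2 / (2.0 * snr))
    r = sigma * (rng.standard_normal(s_n.shape) + 1j * rng.standard_normal(s_n.shape))
    s = A * s_n + r
    return np.stack([s.real, s.imag], axis=-1)  # the 1024x2 I/Q format

# Example: a unit-power complex exponential at 0 dB SNR.
n = np.arange(1024)
s_n = np.exp(2j * np.pi * 0.1 * n)
x = apply_signal_model(s_n, snr_db=0.0)  # shape (1024, 2)
```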
Hence, the three datasets used for our experiments are presented below. ### _Datasets_ * Communications signals: We use the public RadioML 2018.01A dataset, which contains 24 modulation classes: OOK, 4ASK, 8ASK, BPSK, QPSK, 8PSK, 16PSK, 32PSK, 16APSK, 32APSK, 64APSK, 128APSK, 16QAM, 32QAM, 64QAM, 128QAM, 256QAM, AM-SSB-WC, AM-SSB-SC, AM-DSB-WC, AM-DSB-SC, FM, GMSK and OQPSK. In addition to signals with synthetic channel effects generated with GNU Radio, it also includes over-the-air (OTA) recordings [9]. Its main limitation is that neither the parameters regarding the generation of the signals nor the origin of each signal, a synthetic recording or an OTA capture, were specified. Footnote 2: Available for download in www.deepsig.ai/datasets * Radar signals: We created **DeepRadar2022**, a dataset that contains time sequences of radar signals considering the 23 modulations given in Table I, with all their parameters listed: sampling frequency (\(f_{s}\)), carrier frequency (\(f_{c}\)), pulse width (\(PW\)), bandwidth (\(BW\)), symbol rate (\(v_{s}\)), length of the Barker code (\(L_{c}\)), number of carrier cycles in a single-phase symbol (\(M\)), relative prime of \(M\) (\(r\)), amplitude of secondary lobes (\(S\)), number of segments (\(N_{g}\)), phase states (\(PS\)) and frequency variation (\(\Delta f\)). Note that \(f_{s}\) and the input sequence are fixed to 100 MHz and 1024 samples, respectively. However, modulations with few symbols and high symbol rates lead to signals that are short in time and, therefore, contain a small number of samples. Thus, we perform an interpolation to obtain a set of equal-sized signals. Finally, we have created 469200 signals for the training set and 156400 signals each for the validation and testing sets. As a result, the entire dataset contains 782000 signals. Furthermore, we recreated the 8-class signal dataset proposed by the authors of SABLNet [7]. These signals have SNRs between -20 and 20 dB with a 2 dB step. Following their specifications, the resulting signals are generated with CW, LFM, BFSK, SIN, EXP, SFW, BPSK and BASK modulations. In addition, the authors proposed the classification of the signals in two input formats: raw time sequences and time sequences after a preprocessing step with the autocorrelation computation. ## IV Long Short-Term Memory Framework In recent years, the use of RNNs has become widespread. RNNs arose from an attempt to correct the lack of memory of convolutional networks: instead of processing the input data as a whole, they iterate through the sequence elements, keeping information about all the previous elements. However, the major problem with recurrent neural networks is the vanishing gradient, which is particularly troublesome since the input sequences can be very long. LSTM networks solve the vanishing gradient issue. They consist of four different gate units (Figure 1) and two different memory states: long-term (memory cells, \(c^{<t>}\)) and short-term (activations, \(a^{<t>}\)). This way, they deal with the vanishing gradient by discarding non-valuable information while keeping the important information [34]. This aspect is what makes LSTM networks a very suitable approach to our classification problem when processing signals as sampled time sequences. Figure 1 represents the internal functioning of each of the cells that make up the LSTM layer. These cells are formed by a more complex structure which includes forget (\(f^{<t>}\)), update (\(i^{<t>}\)) and output (\(o^{<t>}\)) gates, and an additional output besides the activations, the memory cell (\(c^{<t>}\)).
Each gate performs the following equations: \[\tilde{c}^{<t>}=\tanh(W_{c}\cdot[a^{<t-1>},x^{<t>}]+b_{c}) \tag{3}\] \[f^{<t>}=\text{sigmoid}(W_{f}\cdot[a^{<t-1>},x^{<t>}]+b_{f}) \tag{4}\] \[i^{<t>}=\text{sigmoid}(W_{u}\cdot[a^{<t-1>},x^{<t>}]+b_{u}) \tag{5}\] \[o^{<t>}=\text{sigmoid}(W_{o}\cdot[a^{<t-1>},x^{<t>}]+b_{o}) \tag{6}\] \[c^{<t>}=f^{<t>}*c^{<t-1>}+i^{<t>}*\tilde{c}^{<t>} \tag{7}\] \[a^{<t>}=o^{<t>}*\tanh(c^{<t>}) \tag{8}\] where \(W_{c}\), \(W_{u}\), \(W_{f}\), and \(W_{o}\) stand for the parameter matrices; \(b_{c}\), \(b_{u}\), \(b_{f}\), and \(b_{o}\) for the biases; and \(a^{<t>}\), \(x^{<t>}\), and \(c^{<t>}\) for the activation, input, and memory cell values, respectively, at time stamp \(t\). Note that in Eq. (7) the forget gate weights the previous cell state and the update gate weights the candidate \(\tilde{c}^{<t>}\), following the standard LSTM formulation. Following the idea that LSTM networks are ideal for processing sequences, our proposal consists of an LSTM recurrent neural network applied to the signal's time series. The simplified network is made up of only three stacked LSTM layers of 128 cells each before the output layer. The first two LSTM layers return all sequence values (1024x128), and the final layer only returns a single value for the whole sequence and for each memory cell (1x128). Finally, the classification layer is a dense layer with a softmax activation function that has _n_classes_ output neurons, a parameter that changes depending on the dataset; see Figure 2. Additionally, its implementation in TensorFlow (Version 2.4.1) consists of 330240 parameters in addition to the classification layer parameters, which can be 1032, 2967 or 3096 for 8, 23 and 24 classes, respectively. Regarding hyperparameters, we considered a batch size of 256 samples and the Adam optimizer with a cyclical learning rate, with values oscillating between \(1\cdot 10^{-7}\) and \(1\cdot 10^{-3}\), to prevent convergence to local optima. Finally, the network was trained from scratch for 300 epochs. This framework was designed to be trained, validated, and tested with the proposed datasets. To evaluate the performance of this neural network, three main metrics are used: * Average accuracy of all classes with respect to the SNR. * Minimum SNR (sensitivity) at which 90% classification accuracy is achieved. * Confusion matrices with respect to the SNR.

Fig. 1: Internal functioning of an LSTM cell. The temporal parameter \(t\) is defined to represent the input element of the temporal series.

Fig. 2: Proposed framework based on three stacked LSTM layers and a fully connected layer for signal classification.
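A minimal Keras sketch of this topology is shown below. The LSTM stack reproduces the quoted count of 330240 trainable parameters (plus \(129\cdot n\_classes\) for the dense layer); the fixed learning rate stands in for the paper's cyclical schedule, which is an intentional simplification.

```python
import tensorflow as tf

def build_model(n_classes: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        # Input: 1024 time steps of I/Q pairs.
        tf.keras.layers.LSTM(128, return_sequences=True, input_shape=(1024, 2)),
        tf.keras.layers.LSTM(128, return_sequences=True),   # 1024 x 128
        tf.keras.layers.LSTM(128),                           # 1 x 128 (last step)
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-3),  # the paper uses a cyclical LR
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_model(n_classes=23)  # e.g., DeepRadar2022
model.summary()  # LSTM stack: 330240 parameters, plus 129 * n_classes
```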
## V Results

This section presents the performance of the proposed framework as a function of different parameters, for two radar datasets and one communications dataset. First, the performance of the network is evaluated depending on the simplicity of the structure by modifying the number of LSTM layers. To do so, training was performed on the radar dataset of eight modulations, considering that it is the simplest case with the fewest classes. Figure 3 shows that there is a significant gap between the performance with 1 layer and with 2 or 3 LSTM layers. Since the best results are obtained with the 3-layer LSTM network, this structure will be further used to evaluate the performance of the radar and communications datasets.

### _Radar Signals: 8-class dataset_

The performance of our framework is compared with the results obtained by the authors of the 8-class dataset and the SABLNet framework [7]. Their proposed structure is of higher complexity with CNN, Bi-LSTM and attention layers. This section shows the results of two studies. First, we considered the impact of the signal domain, i.e., whether the network achieves better results with signals in the time domain or in the autocorrelation domain. Our results do not show a significant improvement for sequences in the autocorrelation domain compared to raw time sequences, suggesting that our network learns from the original signal information, contrary to what the authors stated in [7]; see Figures 4 and 5. Although the average classification accuracy does not improve for lower SNRs compared to the SABLNet results, the minimum SNR required to obtain a 90% classification accuracy remains at -10 dB for both our network and SABLNet. Second, we reviewed the impact of the SNR range of the input data by varying the SNR of the training samples from -12:2:20 dB to -20:2:20 dB and testing with the entire range of -20 to 20 dB, see Figure 4. Under these circumstances, our network seems to be robust and insensitive to the training SNR range while maintaining the classification accuracy for high-SNR scenarios.

### _Radar Signals: DeepRadar2022_

In this section, we used our synthetic dataset DeepRadar2022, a more complex dataset with 23 signal types (see Table I). The main classification errors occur between closely related modulations, such as the continuous wave PSKs (4PSK and 8PSK). This idea is also outlined in the confusion matrix in Figure 8, where 4PSK and 8PSK are misclassified as each other. This limitation could be mitigated by first classifying all of them as PSK and discerning the order afterwards. Similarly, some signals with phase modulations using the Frank code and the codes P1 to P4 are incorrectly classified as one another.

Fig. 7: Sensitivity at 90% classification accuracy for each signal modulation using the DeepRadar2022 dataset.

Fig. 8: Confusion matrix for the DeepRadar2022 dataset at SNR = -2 dB that shows the misclassification of high-order phase modulations.

### _Communications Signals_

Our network was trained and tested with the RadioML 2018.01A dataset of 24 modulation types. In their work, a network with residual stacks was proposed [9]. Similarly, we compared the average classification accuracy with the results obtained by the authors of GGCNN [22] and SE-MSFN [23], see Figure 9. The results showed that an SNR of 6 dB is needed to obtain 90% classification accuracy. This value is high because of the difficulty in classifying high-order PSK and QAM and in discerning between AM-DSB and AM-SSB with carrier (WC) and suppressed carrier (SC). Compared to the other proposals, the performance of our framework is superior but similar to that of SE-MSFN. However, in terms of complexity, our framework is simpler, avoiding a feature-extraction step and using a smaller number of layers. Furthermore, we compared the sensitivity of each signal with the authors of RadioML 2018.01A, as they provided results related to the classification accuracy of each modulation. However, since we do not have the dataset broken down by type of signal (synthetic or OTA capture) as in the original paper [9], a fair comparison of the results is not possible. Despite that, analyzing the results of the authors when classifying only the synthetic signals, some of the signals (64QAM, AM-DSB-SC, AM-SSB-SC and 256QAM) do not reach 90% accuracy at the highest evaluated SNR (20 dB). In contrast, our experiments show that all signals except AM-SSB-SC reach a 90% classification accuracy for an SNR lower than 10 dB, see Figure 10. Therefore, the sensitivity of our framework seems to be more robust for most of the signals.

Fig. 9: Average classification accuracy of our framework compared to GGCNN [22] and SE-MSFN [23] using the RadioML 2018.01A dataset.

Fig. 10: Comparison of the sensitivity at 90% classification accuracy of our framework and the authors of the RadioML 2018.01A dataset [9].

## VI Conclusion

We have developed an LSTM-based architecture for the classification of communications and radar signals.
Our experiments show that this framework achieves state-of-the-art performance for both types of signals. These LSTM-based classifiers offer higher sensitivity and robustness compared to current deep learning-based AMCs, with a simpler structure and, therefore, a reduced number of trainable parameters.
2310.07142
Validating Synthetic Usage Data in Living Lab Environments
Evaluating retrieval performance without editorial relevance judgments is challenging, but instead, user interactions can be used as relevance signals. Living labs offer a way for small-scale platforms to validate information retrieval systems with real users. If enough user interaction data are available, click models can be parameterized from historical sessions to evaluate systems before exposing users to experimental rankings. However, interaction data are sparse in living labs, and little is studied about how click models can be validated for reliable user simulations when click data are available in moderate amounts. This work introduces an evaluation approach for validating synthetic usage data generated by click models in data-sparse human-in-the-loop environments like living labs. We ground our methodology on the click model's estimates about a system ranking compared to a reference ranking for which the relative performance is known. Our experiments compare different click models and their reliability and robustness as more session log data becomes available. In our setup, simple click models can reliably determine the relative system performance with already 20 logged sessions for 50 queries. In contrast, more complex click models require more session data for reliable estimates, but they are a better choice in simulated interleaving experiments when enough session data are available. While it is easier for click models to distinguish between more diverse systems, it is harder to reproduce the system ranking based on the same retrieval algorithm with different interpolation weights. Our setup is entirely open, and we share the code to reproduce the experiments.
Timo Breuer, Norbert Fuhr, Philipp Schaer
2023-10-11T02:36:38Z
http://arxiv.org/abs/2310.07142v1
# Validating Synthetic Usage Data in Living Lab Environments ###### Abstract. Evaluating retrieval performance without editorial relevance judgments is challenging, but instead, user interactions can be used as relevance signals. Living labs offer a way for small-scale platforms to validate information retrieval systems with real users. If enough user interaction data are available, click models can be parameterized from historical sessions to evaluate systems before exposing users to experimental rankings. However, interaction data are sparse in living labs, and little is studied about how click models can be validated for reliable user simulations when click data are available in moderate amounts. This work introduces an evaluation approach for validating synthetic usage data generated by click models in data-sparse human-in-the-loop environments like living labs. We ground our methodology on the click model's estimates about a system ranking compared to a reference ranking for which the relative performance is known. Our experiments compare different click models and their reliability and robustness as more session log data becomes available. In our setup, simple click models can reliably determine the relative system performance with already 20 logged sessions for 50 queries. In contrast, more complex click models require more session data for reliable estimates, but they are a better choice in simulated interleaving experiments when enough session data are available. While it is easier for click models to distinguish between more diverse systems, it is harder to reproduce the system ranking based on the same retrieval algorithm with different interpolation weights. Our setup is entirely open, and we share the code to reproduce the experiments. 
Synthetic usage data, Click signals, System evaluation, Living labs

## 1. Introduction

Opposed to A/B experiments, which only deliver meaningful results with large amounts of user data, previous living labs [33; 81; 83] implemented the experimental design based on interleavings. The general idea is to combine ranking lists of two or more retrieval systems, show the interleaved ranking to users, and let them decide on the better-performing system by their click decision based on their relative preference. Earlier works concluded that user interaction data in living labs is sparse [42; 81; 83]. While there is a need to validate laboratory or system-oriented experiments in the real world, the corresponding experiments come with the risk of harming the user experience. The risk is even higher for small and domain-specific search services, and it is a desideratum to keep the online time of experimental systems short while having insights about their usefulness. As a way out, synthetic usage data can be considered a possibility to account for a user-oriented evaluation without the risk of exposing real users to bad search results. User interactions like clicks are alternative _relevance signals_ or proxies that could be used for estimating the system performance from a different perspective [22]. If enough user interaction data are available, it is possible to parameterize click models that can be used for generating synthetic user interactions. These click models bear the potential to replace real users in living labs when evaluating highly experimental systems. As user interaction data are sparse, it is of high interest to make an estimate of how much data are required for a robust parameterization of the click model to use it in a reliable way when generating synthetic interactions.

This work is about validating synthetic usage data of click models in data-sparse environments like living labs. Figure 1 illustrates the evaluation task, where the _actor_ is interested in validating the retrieval effectiveness and usefulness of an _experimental system_ in the real world. _Living labs_ offer a gateway to user experiments. Once submitted to the living lab infrastructure, the experimental system can be deployed on the backend of _search platforms_, which, in turn, can provide users with results in interleaving experiments. User interaction feedback data like clicks are logged and sent to a central database of the living lab infrastructure.

Figure 1: Click model evaluations based on system rankings and logged user interaction data from living labs.

The actor usually has the choice of different systems or configurations, and not all of them are worth being validated in a user experiment. As a solution, the actor could select suitable systems for online experiments in a pre-assessment step based on user simulations. In this case, a click simulator is used to separate good-performing systems from the rest. As the user interactions are continuously logged, it is possible to update the click model's parameters with new log data.
But how does the actor know when the click model is parameterized well enough such that it can be used for good user simulations? We propose an evaluation approach in which the click model has to decide about the relative system performance, i.e., the system ranking. According to this method, the actor provides the click models with a _reference system ranking_, for which the relative system performance is known in advance with high confidence. Based on its click decisions and the generated click data, the model itself also produces a _click model system ranking_, which can be compared to the reference system ranking. If the click model returns the correct system ranking, it can be considered a suitable user simulator that generates meaningful synthetic usage data. Recently, several datasets, for which the ground truth relevance of query document pairs was inferred from click signals, were released [77, 97, 101]. More specifically, multi-graded relevance labels were derived from the click-through rate and threshold values [77]. However, we note that clicks are not a direct substitute for editorial relevance judgments like they are made for IR test collections, as found in a previous study [46]. In addition, finding reasonable threshold values to make multi-graded labels from the click-through rate is not well studied and can be critical as the threshold criteria might differ across queries/topics [82]. Instead, we see user clicks as alternative _relevance signals_ that require a different evaluation approach. For this reason, the work at hand investigates the problem of evaluating the relative system performance based on click models in the absence of editorial relevance judgments. More specifically, we evaluate to which extent click models can determine the relative system performance by the Log-Likelihood of the click probability and, afterward, in simulated interleaving experiments. We analyze the reliability and robustness of click models for estimating the relative system performance by evaluating the click models' system ranking over an increasing amount of queries and click logs. More specifically, we compare the Document-based Click-Through Rate Model (DCTR) to the Dependent Click Model (DCM) and the Simplified Dynamic Bayesian Network Model (SDBN), which embed the continuation probability and the notion of satisfying clicks. Furthermore, we include two types of system rankings. The first is based on different lexical retrieval methods (LRM), whereas the second is made from interpolated retrieval methods (IRM). For both the Log-Likelihood and the interleaving experiments, we determine the correlation with a reference ranking by Kendall's \(\tau\). More precisely, we give answers to the following research questions: **RQ1**: _Can click models reproduce system rankings?_ **RQ2**: _Do continuation and satisfaction probabilities in click models improve the simulation quality?_ **RQ3**: _How does the type of system ranking impact the outcomes of simulated interleaving experiments?_ **RQ1** addresses the general plausibility of the introduced evaluation approach. The focus of **RQ2** is the comparison of DCTR to more complex models. In particular, DCM and SDBN have a less abstract user model than DCTR. Besides the attractiveness of search results, they account for the click sequence and whether there are satisfied clicks. **RQ3** addresses two different types of system rankings that could be compared. 
While the LRM ranking is composed of more distinct systems, the IRM ranking is based on the same system with different interpolation weights. Besides the answers to our research questions, the contributions of this work are as follows: * We **introduce an evaluation approach** for validating synthetic usage data generated by click models in data-sparse _human-in-the-loop_ environments like living labs, * **compare two different system rankings**, including lexical-based systems and the same system with different interpolation weights to evaluate the proposed methodology, * **compare three different click models**, including DCTR, DCM, and SDBN, * **validate the proposed methodology** by simulated interleaving experiments with state-of-the-art Transformer-based rankings, * **provide an open and fully reproducible experimental setup** including open-source code and open data.1 Footnote 1: [https://www.github.com/irgroup/validating-synthetic-usage-data](https://www.github.com/irgroup/validating-synthetic-usage-data) The remainder is structured as follows. Section 2 reviews the related work about living labs, user simulations, and click models. Section 3 outlines the methodology and the experimental setup, whereas Section 4 presents the experimental evaluations. Section 5 gives answers to our research questions. Finally, Section 6 discusses the results and concludes. ## 2. Related Work and Background This section reviews the related work about living lab experiments, briefly summarizes relevant work about user simulations, and finally, provides the fundamentals of click models. ### Living Labs The principle of the living lab paradigm within the scope of shared tasks can be described as follows. Participants contribute their experimental systems or sometimes only the pre-computed outputs to the living lab platform, which connects participants and their experimental systems on the one side with the connected search services on the other side. Users can then be provided with the experimental results upon request, and their interactions will be logged in order to evaluate or improve the experimental systems. One of the earlier works that mentioned the idea of a "living laboratory" was made by Kelly et al. (2017) and dates back to 2009. The idea was picked up by Azzopardi and Balog (2013), who made the first proposal for a living lab architecture in 2011. In 2013, a workshop dedicated to living labs discussed several requirements and extensions of the living lab paradigm (Bauer et al., 2014) followed by the first implementation of the living lab architecture for ad-hoc IR experiments in 2014 (Bauer et al., 2014). Finally, the first living lab for ad-hoc retrieval was held at CLEF in 2015 and was continued in a second iteration in 2016 (Kelly et al., 2016). The same organizers were also involved in the Open Search track at TREC in 2016 and 2017 (Kelly et al., 2017). NEWSREEL was the first living lab for real-time news recommendations and ran from 2014 until 2017 (Kelly et al., 2017; Kelly et al., 2017). More recent living lab implementations are not specifically tailored for shared tasks but have a domain-specific focus. Some recent examples include APONE (Kelly et al., 2017; Benevic et al., 2018) and arXivDigest (Kelly et al., 2018). APONE is a living lab platform designed for A/B tests focusing on evaluating user interfaces. As it builds upon the PlanOut language (Bauer et al., 2014), it allows designing the experiments by scripting them. 
arXivDigest is a recommendation service for research articles based on personalized email updates on recent publications from arXiv's computer science repositories. After registration, an interest profile helps to find adequate recommendations, and feedback is provided with the help of clicked URLs in the personalized mail. Besides arXivDigest, Beel et al. (2018) also provide a living lab platform for scholarly recommendations. More recently, Schaer et al. (2018); Schaer et al. (2018) presented a novel infrastructure design for living labs. The infrastructure was tailored explicitly for shared task collaborations and was the backbone of the LiLAS lab at CLEF in 2021. One of the substantial improvements over earlier living lab attempts is the possibility of submitting the entire experimental system instead of submitting pre-computed results only, which addresses the shortcoming of pre-computed results in earlier living labs (Kumar et al., 2018). By using only pre-computed results for selected queries, the experiments are artificially shrunk to a subset of queries. Even more, they may be outdated quickly, which can become critical in e-commerce settings. Instead, Schaer et al. (2017); Schaer et al. (2017) envision a dockerized retrieval system that can dynamically be updated and deliver results for arbitrary queries. Participants of the shared task provide systems with retrieval and recommendation algorithms in the form of micro-services that can be deployed on purpose in a reproducible way. The infrastructure builds upon Docker and its containerization technology to make this possible. An additional central component is Git and the integration of the web service GitHub, facilitating the experimental components' software versioning, transparency, and reproducibility. Once the systems are implemented, the experimenters prepare them with Docker containers. More specifically, they prepare a dockerizable source code repository, and after registration, each dockerized system can be integrated into a single multi-container application. Multiple systems from possibly different experimenters are combined, which means that the administrators at the search platform do not have to set up individual systems but rather can rely on complete replicas of all submitted systems once the multi-container application is running. Each search platform deploys an instance of the multi-container application on its backend servers. Queries from users will then be redirected from the search interfaces to the individual experimental systems. Upon request, experimental search results are returned, and the search platform is supposed to log user interaction data, which eventually is sent to a central server of the infrastructure, where it is stored and can be used for further analysis, training, and optimization of the experimental systems. While Schaer et al. conclude that their infrastructure design overcomes the bottleneck of pre-computed queries, there were still moderate amounts of logged user interaction data that only partially allowed for statistical significance tests (Schaer et al., 2017). Small- and mid-scale search platforms generally have moderate user traffic, and relevance feedback is generally sparse (Kumar et al., 2018). 
However, it is common practice to reuse historical session logs to evaluate new ranking methods before exposing them to real users (Schaer et al., 2017), either to avoid harming the user experience or to reduce online time in order to increase the rate at which new experiments can be conducted (Schaer et al., 2017; Kumar et al., 2018). As an alternative to A/B experiments, which only deliver meaningful results with a large amount of user data, experimental systems can be deployed in interleaving experiments like it is often done in living lab environments (Schaer et al., 2017; Kumar et al., 2018; Schaer et al., 2017; Schaer et al., 2017). The general idea is to combine ranking lists of two or more retrieval systems and let users decide on the better-performing system by their click decisions based on the relative preference. There exist different interleaving strategies like probabilistic interleaving (Kumar et al., 2018), multileaving (Kumar et al., 2018), preference-based balanced (Kumar et al., 2018), or temporal interleaving (Kumar et al., 2018), but the team draft interleaving (Kumar et al., 2018) is more commonly used and also studied in this work. While interleaving reduces the risk of returning poor search results by combining experimental rankings with a reasonable baseline ranking, there is still the risk of harming the user experience. Preferably, promising systems should be identified before deploying them in online experiments. A viable solution for pre-assessments is user simulation, which will be described next. ### User Simulations The most prominent user model in system-oriented evaluations implies that the user formulates a single query for a given information need, scans the entire result list up to a fixed rank, and judges the relevance of each item independent of any context knowledge, e.g., from previously seen results (Kumar et al., 2018; Schaer et al., 2017). However, depending on the IR measure, additional assumptions about the underlying user model are made as part of the evaluations. For instance, nDCG (Kumar et al., 2018) discounts later items in the ranking by log-harmonic weights and, thus, simulates the user's persistence. Similarly, the RBP also allows defining the user's persistence (Kumar et al., 2018). Carretrette (Carrette, 2018) introduced a coherent framework for model-based measures. Similarly, Moffat et al. (Moffat et al., 2019) introduced the C/W/L framework to describe a family of parameterizable evaluation measures that account for the user browsing behavior by formalizing the conditional continuation probability of examining items in the ranking list. Both of these frameworks are able to describe conventional measures like nDCG, AP, or RBP but also allow for the analysis of derived variants. While all of these measures allow for a principled system-oriented evaluation over different topics with certain assumptions about the user behavior, they are still a strong abstraction of how the user interacts with the search system, and the user behavior has a somewhat static notion. Based on the idea of extending the underlying user model of system-oriented experiments, simulations make it feasible to evaluate retrieval systems with regards to more _dynamic_ user interactions. For instance, earlier seen retrieval results can be exploited for more diverse query formulations over multiple result pages, situational clicks, relevance decisions, and diverging browsing depths (Kumar et al., 2018). 
Simulated IR experiments date back to the early 1980s (Kumar et al., 2018; Kumar et al., 2018), but more recently, several frameworks and user models were introduced (Kumar et al., 2018; Kumar et al., 2018; Moffat et al., 2019; Moffat et al., 2019; Moffat et al., 2019; Kumar et al., 2018). Inspired by the user models of Baskaya et al. (Kumar et al., 2018) and Thomas et al. (Thomas et al., 2019), Maxwell and Azzopardi (Mazal et al., 2019; Moffat et al., 2019) introduced the _Complex Searcher Model_. Carterette et al. (Carrette et al., 2018) proposed the idea of _Dynamic Test Collections_, and Paakonen et al. (Paakonen et al., 2019) introduced the _Common Interaction Model_. Zhang et al. (Zhang et al., 2019) recently introduced another search simulation framework. As a special type of user simulation, the focus of click models is generating click interactions with the ranking list. In order to compare the fidelity of user simulation, Labhishetty and Zhai (Labhishetty and Zhai, 2019; Labhishetty and Zhai, 2019) introduced the Tester-based approach. The key idea is based on the definition of _Testers_ that are composed of single retrieval systems for which the relative retrieval effectiveness is known. The user simulator is evaluated by how well it can identify the correct relative retrieval effectiveness. ### Click models In contrast to explicit editorial relevance judgments of test collections, click signals, or user interactions in general, are a more implicit form of relevance feedback (Labhishetty and Zhai, 2019), which is often used to improve the quality of search results (Baskaya et al., 2018). Generally, it is controversially discussed how user interactions like clicks can reflect topical relevance. While several studies suggest that improved system performance does not directly translate into better user performance (Kumar et al., 2018; Moffat et al., 2019; Moffat et al., 2019; Moffat et al., 2019; Kumar et al., 2018), some works concluded that user and system metrics correlate under certain constraints (Zhou et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). It is beyond the scope of this work to draw any conclusions about how clicks correlate with topical relevance judgments and we consider clicks as an alternative that can be used as a proxy when it is not feasible to have editorial relevance judgments. While earlier click models mostly differ by the predefined rules that make assumptions about the underlying user behavior (Zhou et al., 2019; Li et al., 2019), several improved models were introduced, accounting for clicks on multiple result pages, and aggregated search (Zhou et al., 2019; Li et al., 2019), embedding time awareness by accounting for dwell times and timestamps between click sequences (Lob et al., 2019), or omitting predefined rules by replacing them with neural vector states learned from user logs (Li et al., 2019), or embedding global and local click models into a framework for better personalization (Moffat et al., 2019). Click models can be distinguished by the parameter estimation, which is either done by maximum likelihood estimation (MLE) or the expectation-maximization (EM) algorithm, which has been improved for more efficiency (Li et al., 2019) and online retraining (Moffat et al., 2019). Suppose both clicks and editorial relevance judgments are available. 
In that case, it is possible to turn click models into information retrieval metrics (Kumar et al., 2018) or to make new relevance labels for previously unjudged documents (Kumar et al., 2018; Li et al., 2019). The quality of click models is often evaluated by the Log-Likelihood and Perplexity (Kendall, 2017), but also other reliability measures exist (Kendall, 2017). In previous work, click models have mainly been evaluated on semi-public web search datasets, e.g., from Yahoo! (Kendall, 2017; Kendall, 2017; Kendall, 2017) or Yandex (Yandex, 2018; Yandex, 2018), in which the SERPs are anonymized and the underlying web corpus is fully or partially private. To the best of our knowledge, we are the first to evaluate simulated interleaving experiments with a completely open and transparent experimental setup. ## 3. Methodology and Evaluation Setup As described in the introduction of this work, our overall methodology aims to validate the simulation quality of click models in interaction data-sparse environments like living labs. The key idea is to validate the click model by its ability to identify the correct system ranking, for which we know the relative performance of each system with high confidence in advance. To better understand how much user interaction data are required for a reasonable parameterization of the click model, we parameterize and subsequently evaluate the click model over an increasing amount of session logs. The performance estimates of the click model are either based on the Log-Likelihood or on the highest click probability, which is used in simulated interleaving experiments. The click model system ranking is compared to the reference ranking with the help of Kendall's \(\tau\), which determines the rank correlation. In the following, we describe the two types of system rankings and the corresponding single systems that will be used as the reference system rankings to which the performance estimates of the click models are compared (cf. 3.1). Afterward, we describe and compare the three click models by their attractiveness and examination probabilities (cf. 3.2). In comparison, the click models mainly differ by how the examination probability is determined, and we discuss the corresponding assumptions about the concepts of satisfaction and continuation by an illustrative toy example. Furthermore, we describe our experimental setup that is based on the TripClick dataset (cf. 3.3) and introduce the evaluation measures (cf. 3.4). Finally, we provide details about the implementation and hardware (cf. 3.5). ### Experimental Systems In our experiments, we include two types of system rankings, and selecting them is motivated by the Tester-based approach by Labhishetty and Zhai (Labhishetty and Zhai, 2018; Zhai, 2018). According to them, a user simulator (in this study, it is the click model) can be validated by its ability to distinguish the retrieval performance of methods for which we know the relative system effectiveness with high confidence or based on heuristics. For instance, by experience, we can safely assume that BM25 is more effective than ranking documents by the term frequency. 
The first system ranking is based on **L**exical **R**etrieval **M**ethods (**LRM**) and is defined by \[\text{DFR}\chi^{2}\succ\text{BM25}\succ\text{Tf}\succ\text{DI}\succ\text{Null}.\] More specifically, it is composed of the following five methods (in decreasing order of hypothesized effectiveness): (1) the DFR \(\chi^{2}\) model (Blei et al., 2017), a parameter-free DFR method based on Pearson's \(\chi^{2}\) divergence; (2) the BM25 method (Kendall, 2017); (3) the term frequency (Tf) of the query terms in the document; (4) the query-agnostic method based on document length (DI); and (5) a method that assigns score values of zero (Null). In contrast, the second system ranking is composed of an **I**nterpolated **R**etrieval **M**ethod (**IRM**) with different interpolations between a reasonable and a less effective retrieval method, which gives us more control over the effectiveness by weighting the influence of the less effective retrieval method. In our experiments, we combine the DFR ranking method with the ranking criterion based on document length (DI) and determine the ranking score for a document-query pair \((d,q)\) as follows: \[\text{score}(d,q)=\rho\cdot\text{score}_{\text{DI}}(d,q)+(1-\rho)\cdot\text{score}_{\text{DFR}}(d,q). \tag{1}\] By increasing \(\rho\), we deteriorate the ranking results in a systematic but also more subtle way, which better simulates incremental and less invasive changes to an existing search platform in an online experiment.2 Footnote 2: We exclude interpolations with \(\rho<0.4\) to cover a similar score range of the Jaccard similarity for the LRM and IRM rankings, as shown in Figure 2. The resulting IRM ranking is defined by \[\text{IRM}_{\rho_{1}}\succ\text{IRM}_{\rho_{2}}\succ\cdots\succ\text{IRM}_{\rho_{n}}\] where \(\text{IRM}_{\rho}\) denotes a single system in the ranking, and \(\rho_{1}<\rho_{2}<\ldots<\rho_{n}\), i.e., a lower interpolation weight \(\rho\) is supposed to result in a more effective retrieval system. We acknowledge that the intuition of the relative IRM system ranking is based on a weak heuristic that is only valid for this particular combination of retrieval methods (as will be evaluated in Subsection 3.4). Generally, it cannot be guaranteed that a higher interpolation parameter, giving more weight to the inferior ranking method, will decrease effectiveness. Especially if the difference in effectiveness is moderate, the linear combination of reasonable retrieval methods can improve the results, as exploited in many data fusion experiments. However, in our setting, we ensure a decrease in effectiveness by implementing the inferior retrieval method with a query-agnostic ranking criterion that will likely deteriorate the ranking. When comparing the LRM and IRM rankings, the LRM ranking has more diverse document rankings, as shown in Figure 2. The heatmaps compare the first 20 results of the document rankings for the 50 most frequent queries of the dataset described in Section 3.3 between the combinations of the different systems by the Jaccard similarity. Given the rankings of two systems, we compare the corresponding document sets by the Jaccard similarity. The higher the Jaccard index, the more similar the two document sets. We note that a perfect Jaccard index of 1.0 could be achieved with the same document sets but different rankings, i.e., the documents in both rankings do not need to be in the same order. This evaluation focuses on the _diversity_ of the document rankings.

Figure 2: Jaccard similarity between the first 20 documents of the 50 head queries for the **LRM** (left) and **IRM** (right) system rankings. The Jaccard index is determined based on the document identifiers of both rankings.
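The interpolated scoring of Eq. (1) and the Jaccard comparison of top-k result lists are straightforward to express in code. The following is a minimal sketch under the assumption that per-system rankings are available as lists of document identifiers; the function names and example documents are illustrative and not taken from the released code.

```python
def interpolated_score(score_dl: float, score_dfr: float, rho: float) -> float:
    """Eq. (1): a higher rho gives more weight to the inferior,
    query-agnostic document-length criterion."""
    return rho * score_dl + (1 - rho) * score_dfr

def jaccard_at_k(ranking_a: list[str], ranking_b: list[str], k: int = 20) -> float:
    """Set overlap of the top-k document identifiers of two rankings.
    Note: insensitive to the ordering within the top-k sets."""
    top_a, top_b = set(ranking_a[:k]), set(ranking_b[:k])
    return len(top_a & top_b) / len(top_a | top_b)

# e.g., compare an IRM variant against the plain DFR ranking for one
# query; the document identifiers are made up for illustration.
dfr_ranking = ["d3", "d7", "d1", "d9", "d2"]
irm_ranking = ["d3", "d1", "d7", "d4", "d8"]
print(jaccard_at_k(dfr_ranking, irm_ranking, k=5))  # 3/7, prints approx. 0.43
```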
For the evaluation of rank correlations, Kendall's \(\tau\) or the Rank-biased Overlap (RBO) [95] should be preferred (cf. Subsection 3.4). Except for the comparison of DFR and BM25, most of the LRM combinations are quite dissimilar. In comparison, the IRM systems with different interpolation weights cover a similar score range but have a more gradual transition of the Jaccard similarity over the different combinations of weight pairs. The LRM ranking includes fewer but more distinct systems. In contrast, the IRM ranking is based on more similar document rankings but also more systems, which means that changing the rank position of a single system would result in less severe changes in Kendall's \(\tau\) as compared to changes in the LRM ranking.

### Click Models

In the following, we review the analyzed click models (Kang et al., 2018), which are based on probabilistic modeling of the underlying user behavior, as opposed to other models based on neural networks (Kang et al., 2018; Wang et al., 2019). All of them can only estimate the click probability of query-document pairs that were available during the parameter optimization. Given a document ranking, a click model estimates the probability \(P\left(C_{d}=1\mid\mathbf{C}_{<r}\right)\) of a click \(C_{d}\) on the document \(d\) considering earlier clicks \(\mathbf{C}_{<r}\) before the rank \(r\) by \[P\left(C_{d}=1\mid\mathbf{C}_{<r}\right)=P\left(C_{d}=1\mid E_{r}=1\right)\cdot P\left(E_{r}=1\right)=\alpha_{dq}\varepsilon_{r} \tag{2}\] where the probability \(P\left(C_{d}=1\mid E_{r}=1\right)\) depends on the probability \(P\left(E_{r}\right)\) that the document is examined. Thus, the click probability of a document \(d\) can be decomposed into the _attractiveness_ \(\alpha_{dq}\) of the query-document pair \((d,q)\) and the _examination_ probability \(\varepsilon_{r}\). The attractiveness of all click models in this study is given by \[\alpha_{dq}=\frac{1}{\left|\mathcal{S}_{dq}\right|}\sum_{s\in\mathcal{S}_{dq}}c_{d}^{(s)} \tag{3}\] and only differs by the set of sessions \(\mathcal{S}_{dq}\). In this work, a session \(s\) covers a single query, a corresponding SERP with ranked items, and multiple clicks. Unlike other works, the analyzed click models do not consider multi-query sessions. We acknowledge this simplified understanding of a session, which contrasts with other user-oriented studies that, for instance, consider query reformulations for the same information need. The DCTR model determines the click probability solely by the ratio of clicks on a document \(d\) and how often it has been shown to users for a query \(q\). The attractiveness is determined over all available sessions where \(q\) and \(d\) occur. The examination probability of DCTR for the document at the next rank \((r+1)\) is defined as \[\varepsilon_{r+1}=1 \tag{4}\] i.e., the click model does not consider the context of other documents and the notion of _satisfaction_. In comparison, both click models DCM and SDBN extend the _cascade model_ (Shen et al., 2017) and determine the attractiveness by considering sessions with documents before the last-clicked document at rank \(l\) in a particular session, assuming that the user continued the search after having clicked unsatisfying results and that documents beyond \(l\) were not observed by the user.
The set of sessions is defined as \[\mathcal{S}_{dq}=\left\{s_{q}:d\in s_{q},r\leq l\right\}. \tag{5}\] In order to account for the satisfaction of clicks, the DCM introduces the continuation probability \(\lambda_{r}\) determined by the ratio between the total number of sessions with clicks at rank \(r\) that were not the last click in a session (denoted as \(\mathbf{I}(r\neq l)\)) and the total number of sessions in which rank \(r\) was logged \(|\mathcal{S}_{r}|\). The continuation probability \(\lambda_{r}\) is defined as \[\lambda_{r}=\frac{1}{|\mathcal{S}_{r}|}\sum_{s\in\mathcal{S}_{r}}\mathcal{I}(r\neq l). \tag{6}\] The examination probability \(\varepsilon_{r+1}\) of DCM is then defined as \[\varepsilon_{r+1}=c_{r}^{(s)}\lambda_{r}+\left(1-c_{r}^{(s)}\right)\frac{\left( 1-\alpha_{dq}\right)\varepsilon_{r}}{1-\alpha_{dq}\varepsilon_{r}} \tag{7}\] where \(c_{r}^{(s)}\) denotes the probability of a click being observed at rank \(r\) in a session \(s\). Similarly, the SDBN model embeds the satisfaction probability by the parameter \(\sigma_{dq}\) but instead, it accounts for the total number of sessions with the last clicks (denoted as \(\mathcal{I}\left(r_{d}^{(s)}=l\right)\)) in reference to the total number of sessions \(\mathcal{S}_{dq}^{\prime}\) in which the document \(d\) is clicked at a rank before or equal to \(l\). The satisfaction probability \(\sigma_{dq}\) is defined as \[\sigma_{dq}=\frac{1}{\left|\mathcal{S}_{dq}^{\prime}\right|}\sum_{s\in \mathcal{S}_{dq}^{\prime}}\mathcal{I}\left(r^{(s)}=l\right) \tag{8}\] where the corresponding set of sessions \(\mathcal{S}_{dq}^{\prime}\) is defined by \[\mathcal{S}_{dq}^{\prime}=\left\{s_{q}:d\in s_{q},r\leq l,c_{d}^{(s)}=1\right\}. \tag{9}\] The examination probability \(\varepsilon_{r+1}\) of SDBN is then defined as \[\varepsilon_{r+1}=c_{r}^{(s)}\left(1-\sigma_{dq}\right)+\left(1-c_{r}^{(s)} \right)\frac{\left(1-\alpha_{dq}\right)\varepsilon_{r}}{1-\alpha_{dq} \varepsilon_{r}}. \tag{10}\] For the sake of better comparability, Table 1 provides an overview of how the click models' attractiveness and examination probabilities are determined. For all three click models, the parameters are derived from observable variables, e.g., via the MLE algorithm. For a better illustration of how the continuation and satisfaction probabilities can be determined, Table 2 provides a toy example with five sessions, for which we assume that the same ranking was logged for a single query \(q\), where filled circles represent the clicks. For instance, we can determine the continuation probability of the second rank \(\lambda_{r_{2}}\) by the sessions \(s_{1}\), \(s_{3}\), and \(s_{4}\) at which the rank \(r_{2}\) was clicked. 
For two out of these three sessions, the click at the second rank was followed by additional clicks at the lower ranks, which indicates that the users continued to browse through the ranking after having seen the document at rank \(r_{2}\). Accordingly, the continuation probability is determined by this ratio, i.e., \(\lambda_{r_{2}}=\frac{2}{3}\). Similarly, we can determine the satisfaction probability at the second rank \(\sigma_{d_{r_{2}}q}\). For one out of the three sessions (\(s_{4}\)), it was also the last click in the session. Accordingly, the satisfaction probability is determined by this ratio, i.e., \(\sigma_{d_{r_{2}}q}=\frac{1}{3}\). Note that the continuation and satisfaction probabilities are complementary when comparing them for a single query, i.e., \(\lambda_{r}=1-\sigma_{dq}\). The two click models DCM and SDBN differ if they are compared over multiple queries, as the continuation probability of DCM depends only on the rank \(r\) and is determined over all queries. In contrast, the satisfaction probability of SDBN is specific to the query-document pair. Suppose no clicks at a rank have been logged. In this case, it is impossible to determine the continuation and satisfaction probabilities (cf. \(r_{4}\)), and as a workaround, default probabilities can be used, or it is likewise possible to estimate values from the probability distribution.

\begin{table} \begin{tabular}{|l|c|c|} \hline Click model & \(\mathcal{S}_{dq}\) of attractiveness \(\alpha_{dq}\) & Examination probability \(\varepsilon_{r+1}\) \\ \hline DCTR [30] & \(\mathcal{S}_{dq}=\left\{s_{q}:d\in s_{q}\right\}\) & \(\varepsilon_{r+1}=1\) \\ \hline DCM [35] & \(\mathcal{S}_{dq}=\left\{s_{q}:d\in s_{q},r\leq l\right\}\), where \(l\) is the rank of the last-clicked document & \(\varepsilon_{r+1}=c_{r}^{(s)}\lambda_{r}+\left(1-c_{r}^{(s)}\right)\frac{\left(1-\alpha_{dq}\right)\varepsilon_{r}}{1-\alpha_{dq}\varepsilon_{r}}\) with \(\lambda_{r}=\frac{1}{|\mathcal{S}_{r}|}\sum_{s\in\mathcal{S}_{r}}\mathcal{I}(r\neq l)\) \\ \hline SDBN [21] & \(\mathcal{S}_{dq}=\left\{s_{q}:d\in s_{q},r\leq l\right\}\), where \(l\) is the rank of the last-clicked document & \(\varepsilon_{r+1}=c_{r}^{(s)}\left(1-\sigma_{dq}\right)+\left(1-c_{r}^{(s)}\right)\frac{\left(1-\alpha_{dq}\right)\varepsilon_{r}}{1-\alpha_{dq}\varepsilon_{r}}\) with \(\sigma_{dq}=\frac{1}{|\mathcal{S}^{\prime}_{dq}|}\sum_{s\in\mathcal{S}^{\prime}_{dq}}\mathcal{I}\left(r^{(s)}=l\right)\) where \(\mathcal{S}^{\prime}_{dq}=\left\{s_{q}:d\in s_{q},r_{d}\leq l,c_{d}^{(s)}=1\right\}\) \\ \hline \end{tabular} \end{table} Table 1. Click models: the examination probability \(\varepsilon_{r+1}\) of DCM and SDBN depends on \(c_{r}^{(s)}\), which denotes the probability of a click being observed at rank \(r\) in a session \(s\).

\begin{table} \begin{tabular}{|c||c|c|c|c|c||c|c|} \hline \(r_{i}\)\textbackslash\(s_{i}\) & \(s_{1}\) & \(s_{2}\) & \(s_{3}\) & \(s_{4}\) & \(s_{5}\) & \(\lambda_{r}\) & \(\sigma_{dq}\) \\ \hline \hline \(r_{1}\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bullet\) & \(\bigcirc\) & \(\frac{1}{1}=1.0\) & \(\frac{0}{1}=0.0\) \\ \(r_{2}\) & \(\bullet\) & \(\bigcirc\) & \(\bullet\) & \(\bullet\) & \(\bigcirc\) & \(\frac{2}{3}\approx 0.67\) & \(\frac{1}{3}\approx 0.33\) \\ \(r_{3}\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bigcirc\) & \(\bigcirc\) & \(\frac{1}{3}\approx 0.33\) & \(\frac{2}{3}\approx 0.67\) \\ \(r_{4}\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & \(\bigcirc\) & - & - \\ \(r_{5}\) & \(\bigcirc\) & \(\bigcirc\) & \(\bullet\) & \(\bigcirc\) & \(\bullet\) & \(\frac{0}{2}=0.0\) & \(\frac{2}{2}=1.0\) \\ \hline \end{tabular} \end{table} Table 2. Toy example of the continuation \(\lambda_{r}\) and satisfaction \(\sigma_{dq}\) probabilities for five logged sessions for a single query \(q\). The filled circles correspond to clicks.
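Estimating these quantities from logged sessions amounts to simple counting. The following is a minimal sketch that recomputes the continuation probabilities of the toy example above; sessions are encoded as per-rank click indicators, the chosen click pattern is one that is consistent with Table 2, and all names are illustrative rather than taken from the released code.

```python
def last_click(clicks: list[int]) -> int:
    """Rank index (0-based) of the last click in a session, or -1 if none."""
    return max((r for r, c in enumerate(clicks) if c), default=-1)

def continuation_probabilities(sessions: list[list[int]]) -> dict[int, float]:
    """DCM (Eq. 6): lambda_r is the fraction of clicks at rank r that
    were not the last click of their session."""
    clicked, continued = {}, {}
    for clicks in sessions:
        l = last_click(clicks)
        for r, c in enumerate(clicks):
            if c:
                clicked[r] = clicked.get(r, 0) + 1
                continued[r] = continued.get(r, 0) + int(r != l)
    return {r: continued[r] / clicked[r] for r in clicked}

# Per-rank click indicators for the five sessions of Table 2 (1 = filled circle).
sessions = [
    [0, 1, 1, 0, 0],  # s1: clicks at r2 and r3 (last)
    [0, 0, 1, 0, 0],  # s2: click at r3 (last)
    [0, 1, 1, 0, 1],  # s3: clicks at r2, r3 and r5 (last)
    [1, 1, 0, 0, 0],  # s4: clicks at r1 and r2 (last)
    [0, 0, 0, 0, 1],  # s5: click at r5 (last)
]
print(continuation_probabilities(sessions))
# {0: 1.0, 1: 0.667, 2: 0.333, 4: 0.0} (0-indexed ranks), matching Table 2
```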
### Dataset

For our experiments, it is a fundamental requirement to have open data. Nowadays, several datasets are available for the general research community, but a large fraction of them is not suitable for our experiments. As pointed out before, previous work about click models was done in cooperation with large web search companies like Yahoo! [19; 54; 72; 73] or Yandex [24; 34] and used entirely private or semi-public datasets. A popular dataset for the training of click models was made publicly available by Yandex as part of the _Personalized Web Search Challenge_.3 A similar dataset is publicly provided by Yahoo! as the _L18 - Anonymized Yahoo! Search Logs with Relevance Judgments_.4 However, in both datasets, the web search results are anonymized, and no document collection of the entire corpus is provided. This is critical for our experiments as we want to build custom index and retrieval pipelines as defined above. Footnote 3: [https://www.kaggle.com/competitions/yandex-personalized-web-search-challenge/overview](https://www.kaggle.com/competitions/yandex-personalized-web-search-challenge/overview) Footnote 4: [https://webscope.sandbox.yahoo.com/](https://webscope.sandbox.yahoo.com/) ORCAS [29] is a companion dataset to MSMARCO that provides click-document pairs, and both the query as well as the document are available in a clear text version. However, the DCM and SDBN click models do not only require triples containing the query, the documents, and the corresponding clicks but also the context of other documents in the SERP that were seen but not clicked, making ORCAS unusable for our experiments. We note that there exist several datasets that were curated in cooperation with the Chinese web search engine provider Sogou, like Sogou-QCL [101] or Sogou-SRR [97], but these are not usable for us as non-Chinese speakers. More importantly, the dataset covers more general topics as it is based on web search results, but living labs usually have a domain-specific focus [42; 81]. Instead, we use the recently introduced TripClick (TripClick, 2020) dataset of the biomedical search engine Trip in our experiments. It contains documents and user interaction logs covering a period of seven years, from 2013 to 2020. It was highlighted that the annotation coverage for the top results is low (Zhu et al., 2020), and recently, topical relevance judgments called TripJudge were introduced (Bruhn et al., 2020). In our experiments, we can only use data logs with information about the entire SERP, which are available from 13th August 2016. Furthermore, we restrict the sessions to the 50 most frequent queries in the dataset to ensure that at least 100 logged sessions are available for each query. We note that the Trip database has professional and non-professional users alike and that the head queries are a very particular query type. Even though they are domain-specific, the selected sample of 50 queries is more generic than other queries in the torso or tail of the query distribution. Most of the query strings are composed of two terms only. As can be seen by the most frequent query in the logs ("_covid-19 and pregnancy_"), the COVID-19 pandemic had an influence on the data logs, which is representative of the dynamic evaluation environment where new queries and information needs emerge from recent trends.
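Selecting these head queries is a simple frequency cut over the parsed logs. The following is a minimal sketch, assuming each logged session has been parsed into a dict with a "query" field; the field name and helper are illustrative, not the actual TripClick schema.

```python
from collections import Counter

def head_query_sessions(sessions: list[dict], n_queries: int = 50, min_sessions: int = 100) -> list[dict]:
    """Keep only the sessions that belong to the n most frequent queries,
    provided each of these queries has at least min_sessions logged sessions."""
    counts = Counter(s["query"] for s in sessions)
    head = {q for q, c in counts.most_common(n_queries) if c >= min_sessions}
    return [s for s in sessions if s["query"] in head]
```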
### Evaluation measures

In the following, we introduce the measures of our experimental evaluations, including the Log-Likelihood, the outcome of interleaving experiments, and the rank correlation measure Kendall's \(\tau\). In addition, we include a preliminary system-oriented evaluation of the system rankings based on the TripClick and TripJudge relevance labels to verify the assumptions about the relative effectiveness of the single systems.

#### 3.4.1. Log-Likelihood

This is a standard evaluation measure of click models, and it was found that better scores correlate with a higher fidelity of simulated clicks (Zhu et al., 2020). We determine it over a run \(R\) with \(|\mathcal{Q}|\) queries and ranking length \(n\) as follows:

\[\mathcal{LL}(R)=\sum_{q\in\mathcal{Q}}\sum_{r=1}^{n}\log P\left(C_{d}=c_{d}\mid\mathbf{C}_{<r}\right) \tag{11}\]

where \(P\left(C_{d}=c_{d}\mid\mathbf{C}_{<r}\right)\) denotes the click probability of a particular click model for a document \(d\) at rank \(r\), given the ranking of a retrieval method for a query \(q\) and the list of previous clicks \(\mathbf{C}_{<r}\) before rank \(r\) of the examined document. In our experiments, we use the TripClick data logs that contain SERPs with 20 entries (\(n=20\)) and \(|\mathcal{Q}|\in[1,50]\). Unlike previous work, we do not use the Log-Likelihood to evaluate the click model itself but to distinguish between the ranking quality of retrieval systems. Assuming that a well-performing retrieval method delivers attractive rankings that result in clicks, the system maximizes the click probabilities, and thus the Log-Likelihood, over every result in a ranking list.

#### 3.4.2. Outcome of interleaving experiments

Our interleavings are based on the Team Draft Interleaving algorithm (TripClick, 2020). The corresponding interleaved ranking lists can be decomposed into two sets containing the documents \(D_{\text{exp}}\) contributed by the experimental system and the documents \(D_{\text{base}}\) of the competing baseline. An experimental system wins if it contributes the document with the highest click probability to the interleaved ranking, i.e., we determine the rank of the document with the highest click probability by

\[r=\operatorname*{arg\,max}_{k\in\{1,\dots,n\}}P\left(C_{k}\mid\mathbf{C}_{<k}\right) \tag{12}\]

and a _win_ is assigned if \(d_{r}\in D_{\text{exp}}\). Otherwise, the experimental system loses, i.e., \(d_{r}\in D_{\text{base}}\), and a _loss_ is assigned. Suppose the click probabilities of the interleaving are indistinguishable from those of a ranking with unknown documents. In that case, the click model cannot decide on a better system, and a _tie_ is assigned. Finally, the _outcome_ is determined over multiple queries \(\mathcal{Q}\) and is defined as

\[\text{Outcome}=\frac{\text{Wins}_{\mathcal{Q}}}{\text{Wins}_{\mathcal{Q}}+\text{Losses}_{\mathcal{Q}}} \tag{13}\]

A clear _winner_ achieves an outcome of 1.0, whereas 0.5 means that the experimental system is on par with the baseline, and any outcome below 0.5 indicates an inferior experimental system.
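A minimal sketch, our own illustration under simplified assumptions, of how Equations (12) and (13) translate into code: per query, the system contributing the document with the highest click probability wins, and the tie condition below is a simplification of the criterion above (probabilities indistinguishable from a ranking of unknown documents).

```python
# Minimal sketch of Equations (12) and (13); the tie test is a simplified
# stand-in for the "indifferent from unknown documents" criterion.

def interleaving_outcome(queries):
    """queries: list of (click_probs, team), where click_probs[k] is
    P(C_k | C_<k) of the interleaved list and team[k] is 'exp' or 'base'
    depending on which system contributed the document at rank k."""
    wins = losses = 0
    for click_probs, team in queries:
        if len(set(click_probs)) == 1:       # indifferent probabilities: tie
            continue
        r = max(range(len(click_probs)), key=click_probs.__getitem__)  # Eq. 12
        if team[r] == 'exp':
            wins += 1
        else:
            losses += 1
    return wins / (wins + losses) if wins + losses else 0.5            # Eq. 13
```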
#### 3.4.3. Relative system performance

As it is common practice when comparing relative system rankings, we use Kendall's \(\tau\) to compare the reference system ranking \(\mathcal{R}\) with the click model system ranking \(\mathcal{R}^{\prime}\) as follows:

\[\tau(\mathcal{R},\mathcal{R}^{\prime})=\frac{P-Q}{\sqrt{\big{(}P+Q+U\big{)}\big{(}P+Q+V\big{)}}} \tag{14}\]

where \(P\) is the total number of concordant pairs (system pairs that are ranked in the same order in both rankings), \(Q\) is the total number of discordant pairs (system pairs that are ranked in the opposite order in the two rankings), and \(U\) and \(V\) are the numbers of ties in \(\mathcal{R}\) and \(\mathcal{R}^{\prime}\), respectively. As a rule of thumb, Voorhees considers correlations with \(\tau>0.9\) as acceptable (Kendall, 2017). We evaluate the system rankings resulting from the click model-based evaluations in reference to the LRM and IRM rankings, for which the relative orderings are motivated by the Tester-based approach (cf. 3.1). We note that there exist other measures to determine the correlation between two rankings. For instance, the Rank-biased Overlap (RBO) (Kendall, 2017) can be used to quantify the overlap between two lists of ranked items. As opposed to Kendall's \(\tau\), RBO does not require identical sets of ranked items to be compared, i.e., it can be used with rankings of infinite length and dissimilar sets of documents that may only overlap to some extent. Additionally, RBO models the user's browsing behavior by the transition probability \(p\) to the next ranked item, which allows giving more weight to overlaps at higher rank positions. The lower the transition probability \(p\), the more emphasis is put on overlaps at higher-ranked positions, modeling an impatient user. While it is generally preferable to compare RBO alongside Kendall's \(\tau\) when evaluating document rankings (where Kendall's \(\tau\) is known as the stricter measure (Kendall, 2017)), we evaluate the relative system performance by Kendall's \(\tau\) only, for two reasons. First, we compare the ranking of systems, not documents, and there is no need to include a user model in the evaluation of relative system performance, as the user would not be exposed to the system but to its corresponding output, the ranking. Second, we deal with a fixed set of systems, and Kendall's \(\tau\) can be used for more rigorous evaluations.
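Equation (14) is the \(\tau_b\) variant of Kendall's \(\tau\), which scipy implements directly; the RBO function below is a simplified sketch of our own that truncates the sum at the common list length instead of extrapolating to infinite depth, so it slightly underestimates the full measure.

```python
# Kendall's tau-b (Eq. 14) via scipy, plus a truncated RBO sketch.
from scipy.stats import kendalltau

def rbo(ranking_a, ranking_b, p=0.95):
    """Rank-biased Overlap of two ranked lists, truncated (no extrapolation)."""
    depth = min(len(ranking_a), len(ranking_b))
    seen_a, seen_b, score = set(), set(), 0.0
    for d in range(1, depth + 1):
        seen_a.add(ranking_a[d - 1])
        seen_b.add(ranking_b[d - 1])
        score += p ** (d - 1) * len(seen_a & seen_b) / d   # agreement at depth d
    return (1 - p) * score

# Reference system ranks vs. ranks induced by a click model:
tau, _ = kendalltau([1, 2, 3, 4], [1, 3, 2, 4])
```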
In order to strengthen the reasoning behind the hypothesized system rankings, we evaluate them with the help of editorial relevance judgments. For this purpose, we use the previously mentioned TripJudge relevance labels (Bradbury et al., 2017). The results in Figure 3 show that the system-oriented experiment gives evidence to the hypothesized relative orderings of the system performance. We can control the retrieval performance for both types of system constellations by choosing an entirely different ranking method or by increasing the interpolation weight towards the inferior ranking criterion. Regarding the IRM ranking, we see that an interpolation parameter of \(\rho\leq 0.4\) does not substantially change the retrieval effectiveness. In our experiments, we set \(\rho=\{0.4,0.45,...,1.0\}\), and we exclude all of the systems with \(\rho\leq 0.4\), as the experimental setup requires differences in effectiveness. Very likely, interpolations with \(\rho\leq 0.4\) do not impact effectiveness, as we determine the document's length by the abstracts. Naturally, abstracts are shorter than the corresponding full-texts, and abstracts do not differ in length as much as publications do. In the interpolations, the ranking method requires a certain weight to impact retrieval effectiveness. We consider the IRM system with \(\rho=0.7\) as an adequate baseline, ranked in the middle of the IRM ordering with six systems performing better (\(\rho<0.7\)) and six systems performing worse (\(\rho>0.7\)). Similarly, IRM\({}_{\rho=0.7}\) is almost on par with the Tf-based method that is in the middle of the LRM ranking.

In addition, we include Table 3 in the appendix, which compares the system-oriented measures P@20, nDCG@20, and AP based on the click-based relevance labels of TripClick to those based on the editorial relevance labels of TripJudge. As these results demonstrate, we can confirm that the coverage of relevant documents at the top ranks is higher when using the editorial TripJudge labels. However, in our case, both types of relevance labels agree about the relative system performance for both the LRM and IRM rankings. These system-oriented experiments are another perspective on the system performance, strengthening our methodology's reasoning as a form of external validation.

Figure 3: LRM (left) & IRM (right) system rankings evaluated by editorial relevance judgments. The dashed lines correspond to the baseline system IRM\({}_{\rho=0.7}\) (cf. 3.4).

### Implementation details

We implement the experiments with the help of the Pyterrier retrieval toolkit (Pytr, 2017), the Python interface to the Java-based retrieval toolkit Terrier (Tran et al., 2018), and the dataset library ir_datasets (Pytr, 2017), which features bindings to the TripClick dataset. We filter and select the session logs with the help of the NoSQL database MongoDB. We rely on the PyClick5 (Pytr, 2017) library when implementing the click models. In addition, we provide the required parsers to ingest the session logs from our database into the PyClick framework. All of the experiments are run on a Dell workstation with an _Intel Xeon Gold 6144_ CPU and 64 GB of RAM on _Ubuntu 18.04 LTS_. The entire code to rerun the experiments is available on GitHub at [https://www.github.com/irgroup/validating-synthetic-usage-data](https://www.github.com/irgroup/validating-synthetic-usage-data).

Footnote 5: [https://github.com/markovi/PyClick](https://github.com/markovi/PyClick)

## 4. Experimental Evaluations

In the following, we present the experimental evaluations based on the analysis of the Log-Likelihood and the simulated interleaving experiments. In order to determine the performance of click models over an increasing amount of click data and queries, we randomly sample an increasing number of logged sessions, which are used to parameterize the click model. For each query \(q\in\mathcal{Q}\), we randomly sample \(s\) sessions ten times, i.e., we let the click model adapt to the given data sample (with \(s\) sessions for \(|\mathcal{Q}|\) queries) and evaluate the system rankings over ten trials. In the first experiment in 4.1, the system rankings are determined by the Log-Likelihood based on the click probabilities, whereas in the second experiment in 4.2, the living labs are simulated, and the system rankings are based on the outcomes (cf. Equation 13) of the corresponding interleaving experiments.
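As a companion to the implementation details above, the following sketches how a click model is parameterized with PyClick; the module paths and constructors follow the PyClick repository as we recall them and may differ between versions.

```python
# Minimal PyClick sketch; import paths and session construction are
# assumptions based on the markovi/PyClick repository, not verified here.
from pyclick.click_models.SDBN import SDBN
from pyclick.search_session.SearchSession import SearchSession
from pyclick.search_session.SearchResult import SearchResult

session = SearchSession("covid-19 and pregnancy")
session.web_results = [SearchResult("doc-1", 1),   # clicked
                       SearchResult("doc-2", 0)]   # seen but not clicked

click_model = SDBN()
click_model.train([session])   # in practice: the sampled logged sessions
```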
Each system ranking that results from either the Log-Likelihood or the outcome measure is compared to the reference system rankings, which were introduced in 3.1, with the help of Kendall's \(\tau\) (cf. Equation 14).

### Log-Likelihood Evaluations

We determine the Log-Likelihood for all combinations resulting from the two system rankings and the three click models and evaluate them over an increasing amount of click log data that is used for parameterizing the click models. Figure 4 shows the Log-Likelihood over the number of sessions with either 5, 10, 20, or 50 queries. Unsurprisingly, the Log-Likelihood increases as more sessions are used to parameterize the click models. As more click logs become available, the click models become familiar with relevant, i.e., previously clicked, documents, and consequently, there is a higher click probability. There are apparent differences between DCTR and the other two click models when comparing them. In the case of the DCTR-based Log-Likelihood, the ranking order of documents is irrelevant, as the click model does not account for the ranking position. Consequently, there is no rank-biased discount of the documents' attractiveness, leading to an overall higher Log-Likelihood of the DCTR model. In contrast, the document order affects the click probabilities of the DCM and SDBN click models, leading to an overall lower Log-Likelihood, which can be explained by the examination probabilities of these click models that act as a rank-biased discount of the documents' attractiveness. As can be seen from the LRM ranking (in the upper half of Figure 4), the _Null_ system has a constant Log-Likelihood and is an estimate for lower-bound performance. For the other systems, the Log-Likelihood increases as more sessions are considered, whereas the DFR and BM25 methods are quite distinct from the simple ranking criteria based on the term frequency (Tf) and document length (Dl). In the lower half of Figure 4, the IRM system rankings based on the Log-Likelihood align with the earlier system-oriented evaluations in 3.4.3, i.e., the overall Log-Likelihood is lower (the retrieval system performs worse) when the interpolation parameter \(\rho\) gives more weight to the inferior ranking criterion. By evaluating the Log-Likelihood with 50 queries, we see a steeper increase in the Log-Likelihood as more (possibly earlier clicked) documents are retrieved. Once enough click data are available, there are consistent click probabilities, as can be seen by the plateau-like shape of the Log-Likelihood plots with 50 queries. Any additional sessions with new click data only provide redundant relevance information and only affect the click probabilities to a negligible extent. In comparison, the Log-Likelihood averaged over fewer queries is noisier, as can also be seen by the larger confidence intervals, but it also increases over the sessions. Using the DCTR model as an example, we see that the Log-Likelihood also increases as more queries are considered. However, comparing the results based on 10 or 20 queries to those based on 50 queries, there is a slightly higher Log-Likelihood when fewer queries are used. As the results are averaged over the queries, this can be explained by the higher click-through rates of the more frequent queries (top-10 or top-20), while less frequent queries also have lower click-through rates. Overall, these preliminary evaluations suggest that either more queries or more sessions are required to distinguish between the single ranking systems.
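For reference, the Log-Likelihood of Equation (11) can be computed from any click model's conditional click probabilities as in the following sketch; `click_prob` is an assumed callback of our own, not a PyClick function.

```python
# Minimal sketch of Equation (11); assumes 0 < click_prob(...) < 1.
import math

def log_likelihood(run, observed_clicks, click_prob, n=20):
    """run: {query: ranking}; observed_clicks: {query: set of clicked ranks};
    click_prob(doc, query, previous_clicks) stands for P(C_d = c_d | C_<r)."""
    ll = 0.0
    for q, ranking in run.items():
        previous = []
        for r in range(1, n + 1):
            c = int(r in observed_clicks[q])
            p = click_prob(ranking[r - 1], q, previous)
            ll += math.log(p if c else 1.0 - p)
            previous.append(c)
    return ll
```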
To this end, we conduct a more extensive analysis with an increasing number of queries and sessions. Figure 5 compares Kendall's \(\tau\) scores over different combinations of queries and log sessions for all three click models and the two system rankings. The heatmaps show the rank correlation in terms of Kendall's \(\tau\) for the different combinations of queries (ranging from 3 to 50) and sessions (ranging from 1 to 20). The greener the corresponding patch, the higher the correlation between the reference and the click model system ranking. The first heatmap, based on DCTR and the LRM ranking, shows a diagonal transition from the upper left corner to the lower right corner: the heatmap becomes greener as more queries and sessions are used to evaluate the click model. In comparison, the IRM heatmap of the DCTR model has an overall darker appearance, which means that, in comparison to the LRM ranking, less log data and fewer queries are required in order to determine the correct system ranking. Evaluating 50 queries with a DCTR model based on 20 session logs for each query is already enough to reproduce the LRM system ranking with a perfect correlation of \(\tau=1.0\). In contrast, the DCM and SDBN click models require more session logs to reliably reproduce the correct system orderings, resulting in lower correlation scores of \(\tau_{\text{DCM}}=0.4267\) and \(\tau_{\text{SDBN}}=0.5867\) on average with the same amount of queries and corresponding sessions. This can also be seen by the overall lighter heatmaps, which indicate low correlations between the system rankings. In general, the IRM system ranking also results in higher Kendall's \(\tau\) scores with fewer queries and sessions for DCM and SDBN, which suggests that it is easier for the click models to distinguish by the Log-Likelihood between systems that rely on the same retrieval method. We assume that the smaller document pool can explain this (cf. Figure 2), i.e., there are fewer document candidates by which the method can be compared, and less click data are required for meaningful parameterization. We conclude that evaluating the relative system performance by the Log-Likelihood is a viable solution under the assumption that well-performing systems maximize the click-through rate only by the attractiveness of the ranking list. In comparison, DCTR is more robust and results in more reliable estimations when less log data are available. For instance, the LRM system rankings result in Kendall's \(\tau\) scores of 1.0 with 50 queries and click data from 20 sessions for each query, while the corresponding Kendall's \(\tau\) scores based on the DCM and SDBN Log-Likelihood stay below 0.6 when evaluated with the same amount of queries and click data. Overall, the Log-Likelihood is lower when evaluated with the DCM and SDBN click models due to the examination probability discounting the attractiveness. While the Log-Likelihood is an adequate indicator of system effectiveness in these evaluations, it is still an open question how it relates to user satisfaction or to editorial relevance. Additionally, the cognitive biases of the user should be considered in more user-oriented evaluations. Liu et al. (2019) found that user satisfaction is affected by the rank positions of relevant items. A large number of relevant items at the end of a session results in higher satisfaction than rankings with relevant items at higher positions earlier in the session.
In this regard, DCTR attributes the same importance to documents irrespective of their position in the ranking. In comparison, the other click models rely on the cascade model that gives more weight to higher rank positions.

### Simulated Interleaving Experiments

In the interleaving experiments, we determine the system ordering by the outcome measure (cf. Eq. 13) for which the highest click probability is used as the winning criterion (cf. Eq. 12). For each interleaving, the experimental ranking is interleaved with the baseline, which is consistent for both types of system rankings for the sake of better comparability and is set to IRM\({}_{\rho=0.7}\). Figure 6 compares the outcomes for 50 queries with 100 session logs over ten trials for each experiment. Most strikingly, all of the click models can reproduce the correct orderings of the LRM system ranking, whereas, for the IRM system rankings, the relative ordering cannot be reproduced, but all of the click models can differentiate between systems that out- or underperform the baseline. In our analysis, often the _winning_ queries, i.e., those queries for which the experimental system wins, directly turn into losing queries as soon as the bad ranking criterion is assigned a higher weight than that of the baseline system.

Figure 4: Log-Likelihood of the LRM and IRM system rankings based on the three click models DCTR, DCM, and SDBN, compared by 5, 10, 20, and 50 queries.

Figure 5: Kendall's \(\tau\) of the LRM and IRM system rankings for different numbers of queries and logged sessions, compared for the three click models.

For better illustration, an in-depth analysis of the _winning_ and _losing_ queries is given in Figure 7. More specifically, the Jaccard similarity is shown for the _winning_ (lower triangle) and for the _losing_ (upper triangle) queries over different interpolation weights, where winning and losing queries are those for which the experimental system is either assigned a _win_ or a _loss_, respectively. It can be seen that there are higher query similarities between those systems whose interpolation weights are either both below or both above that of the baseline system. However, there is a low overall similarity when comparing the winning/losing queries of system combinations with lower and higher interpolation weights (cf. the light green areas in the lower left and upper right of the heatmap). This is independent of the click model, as the three heatmaps show similar results. It means that for the IRM system ranking, the winning queries, i.e., those queries for which the experimental system wins, turn into losing queries as soon as the bad ranking criterion is assigned a higher weight than that of the baseline system. Queries resulting in _ties_ barely change, i.e., no or an equal number of clicks are made for both interleaved systems, as the click models cannot decide on a better system with unseen documents. These experimental results demonstrate that it can be problematic to compare systems with a small document pool with fewer document candidates and low click-through rates. Finally, Figure 8 shows Kendall's \(\tau\) of the system rankings derived from the interleaving experiments, resulting from click models parameterized over an increasing number of sessions. As can be seen by the light stripes in the heatmap, it is not possible to reproduce the correct ordering of IRM systems for any of the click models. Most of the rank correlations of the IRM rankings stay below 0.6, which aligns with our earlier observations.
When comparing the LRM system rankings of the click models, we see that the DCTR model results in comparably higher correlations when less log data are available. For instance, the patches in the heatmap have a darker green when using 10 or fewer session logs per query for the DCTR model. However, the DCTR experiments demonstrate that the correlation scores do not stabilize even if more sessions are used for the parameterization. Once a certain amount of log data are used to parameterize the click models, DCM and SDBN deliver more robust correlation scores.

Figure 6: Outcome measures of interleaving experiments with click models based on 50 queries and 100 session logs. The dashed line corresponds to the baseline (IRM\({}_{\rho=0.7}\)) that is consistent for both system rankings.

For a better understanding and analysis, we determine the relative error between the cumulated and the ideal Kendall's \(\tau\) score as

\[\delta\tau=\frac{\Delta\tau}{\tau_{ideal}}=\frac{\tau_{ideal}-\tau_{sum}}{\tau_{ideal}}=1-\frac{\tau_{sum}}{\tau_{ideal}}=1-\frac{\sum_{s=1}^{|\mathcal{S}|}\tau_{s}}{\sum_{s=1}^{|\mathcal{S}|}1}=1-\frac{\sum_{s=1}^{|\mathcal{S}|}\tau_{s}}{|\mathcal{S}|} \tag{15}\]

where \(\tau_{ideal}\) is the sum of the ideal rank correlation up to the number of considered sessions \(|\mathcal{S}|\), and _ideal_ refers to a perfect rank correlation of 1. Accordingly, \(\Delta\tau\) describes the difference between the actual sum of rank correlations and the ideal sum. A well-performing user simulator or click model gives a low \(\delta\tau\) score or minimizes it as it gets more session data for an adequate parameterization. Figure 9 shows \(\delta\tau\) for the click models in combination with both types of system rankings over an increasing amount of session logs. These results confirm that once enough session data are available, the DCM and SDBN click models can better distinguish between the relative system performance in these particular simulated interleaving experiments. Regarding the LRM system ranking, there are higher errors for DCM and SDBN when only a few sessions are available, and the DCTR is a better choice when considering the lower error rates. However, it can be seen that with an increasing amount of click data, the error for both DCM and SDBN decreases, while the error of the DCTR model evens out and does not decrease as more sessions are used for the parameterization. In comparison, it is generally harder for the click models to distinguish between the systems of the interpolation-based IRM ranking. The experiments with 100 sessions result in considerably higher errors (higher \(\delta\tau\) scores), but still, the DCM and SDBN give slightly better estimates than the DCTR. In this case, the \(\delta\tau\) scores even out, while the scores of the DCTR still increase as more session logs become available. Similar to the earlier results, it is better to use DCTR when less log data are available. However, once enough logged clicks are available for the parameterization, the DCM and SDBN are less error-prone and more reliable.
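Since Equation (15) is a simple aggregate, the following small sketch of our own computes it with a worked value.

```python
# Minimal sketch of Equation (15): relative error between the cumulated
# Kendall's tau over |S| session samples and the ideal sum |S| * 1.

def delta_tau(taus):
    """taus: Kendall's tau after each number of considered sessions."""
    return 1.0 - sum(taus) / len(taus)

print(delta_tau([0.2, 0.6, 0.9, 1.0]))   # 1 - 2.7/4 = 0.325
```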
Figure 7: Jaccard similarity between the _winning_ (lower triangle) and _losing_ (upper triangle) queries of the simulated interleaving experiments with the DCTR, DCM, and SDBN click models.

Figure 8: Kendall's \(\tau\) of the LRM and IRM rankings based on simulated interleavings, compared for the click models DCTR, DCM, and SDBN parameterized with an increasing number of sessions.

Figure 9: \(\delta\tau\) over an increasing number of sessions for the LRM and IRM rankings based on interleavings, compared for the click models DCTR, DCM, and SDBN.

Figure 10: Left: Outcomes of simulated interleaving experiments with SPECTER-based rerankings of a first-stage BM25 ranking with different cutoff levels competing against the BM25 baseline. Right: The red point plot corresponds to the Bpref scores of the SPECTER rankings. The Rank-biased Overlap (RBO) with \(p=0.95\) is determined between the first 20 documents of BM25 and SPECTER-based rerankings at different cutoffs.

### Interleaving Experiments with Transformer-based Rankings

In addition to the former experiments that confirmed the general plausibility of the introduced evaluation method, we demonstrate its application when evaluating state-of-the-art Transformer-based rankings. To have enough click logs available, our click models were parameterized with TripClick logs that are part of the data collection's training dataset. For this reason, it is not possible to fine-tune any Transformer-based method, as this would inevitably lead to leakage when using the same click logs during training and evaluation. As an alternative, we ground our experiments on the SPECTER language model [28; 85], which we use as a zero-shot ranker without task-specific fine-tuning. More specifically, SPECTER generates dense vector representations of scientific documents. The language model is pretrained with the help of the documents' citation signals, building upon SciBERT [11], which, in turn, is a variant of the renowned BERT model [31]. Cohan et al. [28] demonstrated that the model outperformed many baselines on different NLP tasks without fine-tuning. Likewise, the model performed well for zero-shot ad-hoc retrieval [85]. We implement a typical two-stage ranking pipeline that includes a first-stage ranking based on BM25, reranked by SPECTER. Earlier experiments found that the reranking depth, i.e., the rank cutoff of the first-stage ranking, impacts the effectiveness of the final ranking [55]. As the length of the first-stage ranking increases, the Transformer-based method can potentially find more relevant documents and push them to higher positions in the ranking. Conversely, more candidate documents result in higher computational costs, which can become critical in industrial applications, where system efficiency impacts user satisfaction. To this end, keeping the reranking depth low without sacrificing effectiveness is a desideratum. In the simulated interleaving experiments, we let the final SPECTER rerankings with different cutoff levels compete against the BM25 baseline. In practice, this approach could be used to make estimates of an adequate reranking depth, considering effectiveness and efficiency tradeoffs. Figure 10 (left) shows the results of the simulated interleaving experiments. In addition, Figure 10 (right) shows Bpref [16], which is a measure that solely considers judged documents, and the RBO [95] between the first 20 documents of BM25 and SPECTER, which corresponds to the total number of documents shown to the click model. As we evaluate Bpref on the TripClick relevance judgments, it is a proxy measure of how well the system finds previously clicked documents. Similar to the evaluations of the previous subsection, we parameterize each click model with 100 sessions and simulate interleaving experiments with 50 head queries.
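The two-stage pipeline can be expressed compactly in PyTerrier's operator notation; in the sketch below, the index path is an assumption and the SPECTER scorer is a placeholder stand-in, since the actual reranker is not reproduced here.

```python
# Minimal PyTerrier sketch of a first-stage BM25 ranking with rank cutoff,
# reranked by a stand-in scorer; "./tripclick_index" is an assumed path.
import pyterrier as pt
pt.init()

index = pt.IndexFactory.of("./tripclick_index")
bm25 = pt.BatchRetrieve(index, wmodel="BM25", metadata=["docno", "text"])

# Placeholder for SPECTER: any transformer that rescores the candidates.
# A trivial length-based score keeps the sketch self-contained.
specter_scorer = pt.apply.doc_score(lambda row: float(len(row["text"])))

def pipeline(cutoff):
    """Rerank only the top-`cutoff` first-stage candidates (cf. Figure 10)."""
    return (bm25 % cutoff) >> specter_scorer
```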
For the cutoffs at 30 to 60, the click models do not agree on the better-performing system. Based on the outcomes of the DCTR model, the reranking is indistinguishable from the BM25 baseline. As the cutoffs increase, the click models agree on SPECTER as the better-performing system, i.e., above a cutoff level of 70, SPECTER is considered more effective. In general, different click models make it feasible to evaluate the benefits for different kinds of user behaviors. If the click models disagree on the relative system performance, there is a higher risk of harming the users' search experience than in the case of agreement between the models. With special regard to the cutoff level at 20, we see that the click models agree on SPECTER as the more effective system, which can be explained by the fact that, in this case, the SPECTER ranking is purely an (improved) reranking of the top 20 documents by BM25. For higher cutoffs, the reranker could bring up other documents, unfamiliar to the click model, among the first 20 ranking positions that are less likely to be clicked, which is the case for cutoff levels between 30 and 60. As the cutoff further increases above 60, SPECTER can rely on more relevant candidates in the BM25 ranking that are brought to the top 20 positions. This circumstance is further underlined by the fact that the Bpref scores increase, and the RBO scores decrease, over the cutoffs. The increasing Bpref scores show that the SPECTER rerankings indeed benefit from an increase in the first-stage ranking's depth. Similarly, the RBO shows that increasing first-stage cutoffs leads to different document orderings in the top 20 positions. Even though the click models correctly identify SPECTER as the better system, a relative order is not evident from the outcome scores, i.e., there is no clear preference for any cutoff from 70 and above. From these results, we conclude that it is generally possible to identify better-performing Transformer rankings, e.g., in this particular case, having a reranking depth of at least 70 documents retrieved with BM25 is recommended. However, this experiment also demonstrates the limitations of the evaluation approach. It has to be considered that these Transformer-based rankings could bring up many documents that were not seen by the click models, which limits the estimates of the relative system effectiveness, especially for higher cutoff levels.

## 5. Answers to the research questions

In the following, we recapitulate the main findings of the experimental evaluations in the previous Section 4 by giving answers to our three research questions.

### RQ1: Can click models reproduce system rankings?

In our experimental evaluations, all click models can reproduce the system rankings if enough click logs are available, which is fundamental to our proposed methodology. We defined the simulation quality by how well the click model's click probabilities can reproduce the correct system ranking that is known in advance. The simulation quality improves depending on how much session data are available to parameterize the click model. In environments where user interaction data are sparse, keeping the required amount of user interaction data low becomes critical. In this regard, the DCTR model is able to distinguish reliably between the LRM systems by the Log-Likelihood with as few as 20 logged sessions if 50 queries are used in our experimental setup.
In direct comparison, the IRM ranking can be reproduced with fewer data, which can be explained by a smaller pool of documents for which interaction data has to be logged.

### RQ2: Do continuation and satisfaction probabilities in click models improve the simulation quality?

In our experimental setup, it is not recommended to use the DCM and SDBN for the Log-Likelihood in an interactive data-sparse setting. In the corresponding evaluations, DCM and SDBN result in overall lower scores in comparison to the DCTR model, which can be explained by the rank-biased discount of the attractiveness due to the examination probability. This is not critical when large amounts of session logs are available. For instance, if we can use 100 sessions per query, it is enough for adequate parameterization. However, compared to the DCTR, 20 sessions per query are not enough to let the DCM and SDBN reproduce the correct system ranking. On the other hand, the DCM and SDBN system rankings are a better choice when simulating interleaving experiments as they are implemented in living labs. In this case, the estimates of the LRM system ranking are much more robust, and the continuation and satisfaction probabilities of DCM and SDBN can indeed improve the simulation quality in our experimental setting.

### RQ3: How does the type of system ranking impact the outcomes of simulated interleaving experiments?

While all of the models can determine the correct ordering of the LRM system ranking reasonably well in the simulated interleaving experiments, it is impossible to reproduce the correct IRM ranking. However, one can still distinguish between better and worse-performing IRM systems and separate them from the baseline. In our experimental setting, it is generally harder to reproduce the IRM ranking as there are deciding queries that either let the IRM system win or lose against the baseline system, depending on the interpolation weight. Once the interpolation parameter gives a higher weight to the bad ranking criterion, most of the queries that formerly let the system win against the baseline are the deciding queries that let the system lose against it. This finding is critical for search platform operators, as different parameterizations of the same retrieval method may result in measurable differences in system-oriented experiments, while they are not reproducible in click model-based simulations.

## 6. Discussion and Conclusion

Living labs are a special type of human-in-the-loop environment that facilitates the evaluation of IR systems in real-world experiments. However, previous work has highlighted that user interaction data in living labs is usually sparse, and it is desirable not to damage a search platform's reputation with bad search results. These circumstances lead to the two requirements of 1) inferring relevance information from as little interaction data as possible and 2) keeping the online time of highly experimental systems short. As a solution, it is possible to evaluate experimental systems with synthetic usage data based on simulations instead of risking the exposure of possibly poor results to real users. However, it remains unclear when a user simulator can be reliably used to simulate real user behavior by generating meaningful synthetic data. To this end, we introduced an evaluation approach for validating a click model's simulation quality in human-in-the-loop environments like living labs.
Earlier living labs primarily logged user interaction data in the form of clicks that were used to evaluate the systems directly in interleaving experiments, but likewise, the clicks could be used to parameterize click models. However, it is often unclear if the click model received enough click logs for adequate parameterization. Our evaluation methodology aims at letting the click model decide about a relative system performance that is known in advance, either with high confidence or based on some reasonable heuristics. In the literature, this approach was recently introduced as the Tester-based approach (Tester and Denton, 2015). The click model's system ranking is compared to the reference system ranking, and the rank correlation, determined by Kendall's \(\tau\), is an indicator of the simulation quality.

In our experiments, we compared two different types of system rankings to validate the plausibility of the proposed evaluation method. The first ranking was composed of different lexical retrieval methods. In contrast, the second ranking was composed of a single ranking approach with different interpolations between a reasonable and a less effective retrieval method. While these retrieval methods are rather simple compared to other state-of-the-art approaches, they are better candidates to validate the general plausibility of our approach. More specifically, the two types of system rankings cover the decision scenarios of platform operators. While the first system ranking corresponds to a scenario in which it is unclear what retrieval method to use in general, and a diverse set of methods should be evaluated, the second system ranking corresponds to a scenario in which a previously chosen retrieval method should be fine-tuned.

Our experiments have shown how the DCTR, DCM, and SDBN click models can be used in combination with the Log-Likelihood and the outcomes of simulated interleaving experiments for the assessment of retrieval methods, and how much session data are required for reliable performance estimates. Overall, it is possible to reproduce the system rankings in simulations based on click models, confirming our methodology's general plausibility. Regarding the evaluations based on the Log-Likelihood, the DCTR click model is a better choice if only a few sessions are logged. Our experiments showed that the DCTR could perfectly reproduce the system ranking with 20 logged sessions for 50 queries, while the DCM and SDBN could not. However, as more session logs become available, the DCM and SDBN click models are equally well-suited for this type of evaluation. While these outcomes are promising, it must be pointed out that the evaluation's focus is only on the attractiveness of the search results, which rests on a simplified, more abstract model of the users. The rank-biased discount that better approximates real user behavior is not beneficial in this evaluation setting. This leaves open the question of how the examination probabilities of the DCM and SDBN models benefit the user simulations. For a better understanding, we simulated living lab experiments and let the click models decide about the preference for one of two competing systems in interleavings. The corresponding system rankings were based on the outcome measure and showed that, once again, DCTR is a better choice when only a small amount of session data are available.
However, as more session logs became available, the DCM and SDBN gave better, i.e., more robust, estimates about the system rankings. When comparing the DCM to the SDBN model, there were no substantial differences in our experiments. The rank-biased discount of the DCM model is determined by a rank-dependent continuation probability, which is determined over all available sessions, while the SDBN introduces an additional satisfaction probability specific to the query-document pair. We conclude that for the underlying TripClick dataset, the consideration of the satisfaction probability did not make much of a difference in comparison to the continuation probability.

We note that the decisions behind clicking on a snippet and annotating a document with a positive editorial label are fundamentally different. However, we think that click signal-based evaluations are a promising alternative when a curated test collection is not available, and click models can be used to evaluate the relative system performance when editorial relevance judgments are missing. For instance, click models could be used in a pre-assessment, similar to the idea of pseudo-relevance judgments [86], to identify more promising systems for online experiments. Especially for small- and mid-scale search platforms that often partnered with living labs in the past, it would be a viable solution to use click signals instead of curating a costly test collection.

Finally, the simulated interleaving experiments with Transformer-based rankings revealed some limitations of the proposed methodology. More specifically, we compared rerankings based on SPECTER with different cutoff levels to the BM25 baseline ranking. While the click models identified SPECTER as the more effective ranking method, it was impossible to derive a relative system ordering from the interleaving experiments. In principle, a higher cutoff level of the first-stage ranking should result in better retrieval performance, as the Transformer-based method can rely on more relevant items that are possibly reranked to higher positions in the list. However, our experiments showed that there is no preference for any cutoff level once the baseline ranking returns ranking lists with adequate depth. This circumstance can very likely be explained by the fact that SPECTER-based rerankings brought up many previously unclicked items, which are consequently unknown to the click model, while still keeping enough clicked items at higher positions to outperform the baseline. Generally, it is recommended to deploy different types of retrieval systems when collecting click feedback data, similar to relying on system diversity in the pooling when constructing a test collection. Nonetheless, the experiments could demonstrate how click models can at least be used to determine the required cutoff level. In practice, this method could help platform operators who aim for better estimates of the required cutoff level for balancing effectiveness and efficiency.

Lastly, click data are biased [96]. To a certain extent, the click models address the bias that would emerge from using single clicks as relevance indicators, i.e., the probabilistic models capture the behavior and preferences of the average user. However, there are other biases related to the click signals. For instance, a position or system bias was introduced by the unknown production system of the Trip database, which we could not remove from the session logs.
As part of future work, it should be analyzed to what extent these kinds of evaluations are insightful pre-assessments of the real system performance by deploying them in living labs [33, 81, 83]. In this way, the fidelity of the click models can be further investigated with real users.

## Appendix
2304.05812
Cost-damage analysis of attack trees
Attack trees (ATs) are a widely deployed modelling technique to categorize potential attacks on a system. An attacker of such a system aims at doing as much damage as possible, but might be limited by a cost budget. The maximum possible damage for a given cost budget is an important security metric of a system. In this paper, we find the maximum damage given a cost budget by modelling this problem with ATs, both in deterministic and probabilistic settings. We show that the general problem is NP-complete, and provide heuristics to solve it. For general ATs these are based on integer linear programming. However, when the AT is tree-structured, one can instead use a faster bottom-up approach. We also extend these methods to other problems related to the cost-damage tradeoff, such as the cost-damage Pareto front.
Milan Lopuhaä-Zwakenberg, Mariëlle Stoelinga
2023-04-12T12:40:58Z
http://arxiv.org/abs/2304.05812v1
# Cost-damage analysis of attack trees

###### Abstract

Attack trees (ATs) are a widely deployed modelling technique to categorize potential attacks on a system. An attacker of such a system aims at doing as much damage as possible, but might be limited by a cost budget. The maximum possible damage for a given cost budget is an important security metric of a system. In this paper, we find the maximum damage given a cost budget by modelling this problem with ATs, both in deterministic and probabilistic settings. We show that the general problem is NP-complete, and provide heuristics to solve it. For general ATs these are based on integer linear programming. However, when the AT is tree-structured, one can instead use a faster bottom-up approach. We also extend these methods to other problems related to the cost-damage tradeoff, such as the cost-damage Pareto front.

Attack trees, Pareto front, cost-damage analysis, integer linear programming

## I Introduction

**Attack trees.** Attack trees (ATs) are a prominent methodology in security analysis. They aid security specialists in identifying, analyzing and prioritizing (cyber)risks. ATs are included in several popular system engineering frameworks, e.g., _UMLsec_ [1] and _SysMLsec_ [2], and are supported by industrial tools such as Isograph's _AttackTree_ [3]. ATs have been used in many scenarios, such as military information infrastructure [4], electronic voting [5], and IoT insider threats [6]. Their popularity is owed to their simplicity, which allows for a range of applications, and their analyzability.

An AT is a hierarchical diagram that describes a system's vulnerabilities to attacks. Despite the name, an AT is a rooted directed acyclic graph (DAG). Its root represents the adversary's goal, while leaves represent basic attack steps (BASs) undertaken by the adversary. Other nodes represent intermediate attack goals and are labeled with an OR-gate or AND-gate, determining how their activation depends on that of their children. An example is given in Fig. 1.

Fig. 1: Attack tree for a factory. Production can be stopped by a cyberattack or by destroying the production robot, for which an attacker forces their way inside and places a bomb. Damage values (in 1000 USD) are inscribed in the nodes, and cost values are below the BASs.

**Quantitative analysis.** Besides describing possible attacks on a system, ATs can also be used to analyze quantitative information about such attacks. Many _attack metrics_ exist, such as the damage, required cost, or required skill of an attack. Such metrics are key performance indicators that formalize a system's security performance. These metrics do not exist in isolation, and their interplay is important for quantitative security analysis. For instance, one attack may be cheaper than another, but require more time, or a more skilled attacker. Therefore, it is essential to understand the tradeoff between different security metrics. To understand and quantify such tradeoffs, one considers the _Pareto front_ of multiple metrics [7], which includes all attacks that are not dominated by another attack in all metrics. For instance, in Fig. 1 the attack {ca} does damage 200 for cost 1, which is preferable over {fd}, which does 10 damage for cost 2.

**Cost-damage analysis.** In this paper we consider the interplay between two important attack metrics: _attack cost_ [8], describing an attacker's budget in, e.g., money or time; and _attack damage_ [9], representing the damage done to the system, e.g., in terms of monetary value. The larger the cost budget available to an attacker, the more damaging an attack can be. While damage is the most relevant metric to the system owner, knowing the cost of an attack helps them understand the likelihood of such an attack. This fits within the perspective that likelihood and impact both play an important role in risk analysis [10]. For a comprehensive risk assessment of a system's security, it is therefore paramount to solve the following problems:

**Problem statement.** Given an attack tree \(T\), solve the following problems:

* DgC: Find the most **D**amaging attack **g**iven a **C**ost budget.
* CgD: Find the **C**heapest attack **g**iven a **D**amage threshold.
* CDPF: Find the **C**ost-**D**amage **P**areto **F**ront.

Existing approaches to calculating the Pareto front of multiple AT metrics [7, 11, 12] cannot be applied to cost-damage problems for two reasons: First, existing methods assume that only BASs are assigned metric values. For damage, this assumption is not realistic, as the internal nodes often represent disabled subsystems, which also have an associated damage value. For instance, in Fig. 1, the attacks {ca} and {pb,fd} both shut down production, but the latter does so by destroying the production robot, leading to greater monetary loss. Second, existing methods only consider _successful attacks_, i.e., attacks that activate the top node of the AT. In the case of cost-damage analysis, however, attacks not reaching the top node can still do quite some damage on intermediate nodes, and should be considered in the analysis. For instance, an attacker can try to rob an ATM by forcing it with explosives. Even if the attacker fails in stealing the money, the explosives still cause significant damage to the ATM owner. Thus existing work cannot solve cost-damage problems in the generality required to model realistic scenarios. For these reasons, new approaches and algorithms need to be developed.

**Approach.** This paper introduces three novel methods to solve the problems stated above. We first consider a deterministic setting, where BASs always succeed. We then consider a probabilistic setting, where BASs may fail with a given probability.

_NP-completeness:_ We first prove two important negative results, showing that even the simplest cost-damage problems do not have 'easy' solutions. Cost-damage problems are similar to binary knapsack problems [13]; we use this to prove that even the simplest type of cost-damage analysis is NP-complete. Unfortunately, this similarity cannot be exploited to apply heuristics for knapsack problems or their many extensions [14, 15, 16] to cost-damage problems: all extensions assume properties of the damage function (i.e., the function assigning a damage value to each attack) that are not met in our setting. In fact, we prove that the damage function can be any nondecreasing function. This highlights the need for completely new methods for cost-damage analysis in ATs. As is common, our algorithms distinguish between tree- and DAG-shaped attack trees. Further, we consider deterministic versus probabilistic failure behaviour in the leaves.

_Bottom-up algorithm for treelike ATs:_ Existing approaches to the Pareto front of two metrics work bottom-up, discarding non-optimal attacks at every node [7]. This does not work for damage, as intermediate nodes also carry damage values. Hence attacks that are non-optimal at a certain node may do more damage at a higher node, becoming optimal there.
To solve problem CDPF above, we describe a new bottom-up method for finding the Pareto front in both the deterministic and probabilistic setting. The key insight is to perform a bottom-up Pareto analysis in an _extended cost-damage domain_, by adding a dimension for the current top node's activation (or activation probability in the probabilistic setting); this dimension signifies an attack's 'potential' to do more damage at higher nodes. As shown in our experiments, these bottom-up methods drastically reduce computation time from multiple hours to less than 0.1 second. For the single-objective problems DgC and CgD we cannot use a 'simpler' bottom-up approach in which only the optimal attack is propagated, as one needs the overview of the full AT to decide which attack is optimal. Instead, we still need to propagate (part of) the Pareto front, and we obtain our solution for DgC and CgD from minor adaptations to the CDPF approach.

_Integer linear programming for DAG-like ATs:_ It is well-known [12, 17] that bottom-up algorithms do not work for DAG-like ATs: since nodes may have multiple parents, their cost/damage would be counted twice. We introduce a novel method for the deterministic setting by translating cost-damage problems into the _bi-objective integer linear programming_ (BILP) framework [18]; we can then apply existing BILP solvers to solve them [19]. This translation is nontrivial, as damage is a nonlinear function of the adversary's attack, as we will show in Section V. The key insights behind our algorithm are that (1) damage is linear in terms of the _structure function_ that describes which AT nodes are reached by an attack, and (2) the constraints defining the structure function can be phrased as linear constraints. We use existing biobjective methods and solvers to solve CDPF [20], and single-objective solvers to solve DgC and CgD [21]. This does not extend to the probabilistic setting, where the equations become nonlinear; we leave the analysis of probabilistic DAG-like ATs as an open problem.

Finally, in experiments we show that our methods can be used for risk analysis by applying them to two systems: a wireless sensor device tracking wildlife in a giant panda reservation, and a data server in a network behind a firewall. The ATs of these systems are taken from the literature [22, 23]. We use the cost-damage Pareto front to assess the vulnerabilities of these systems. Furthermore, we also measure the computing time in the case studies and on 500 random ATs: both bottom-up and BILP methods vastly outperform the existing enumerative approach. This shows that our methods present an enormous speedup compared to the status quo.

**Contributions.** Summarized, our contributions are:

1. A formal definition of cost-damage problems in ATs;
2. A proof that these problems are NP-complete (Sec. V);
3. A proof that cost-damage problems cannot be reduced to common extensions of the binary knapsack problem (Sec. V);
4. A bottom-up method to solve the deterministic and probabilistic cost-damage problems for treelike attack trees (Sec. VI & IX);
5. An integer linear programming-based method to solve the deterministic cost-damage problems for DAG-like attack trees (Sec. VII);
6. An experimental evaluation of the above methods on two realistic cases from the literature (Sec. X).

The Matlab code for the experiments can be found at [24].
\begin{table}
\begin{tabular}{c|c c}
 & Tree & DAG \\
\hline
Deterministic & bottom-up (Theorem 4) & BILP (Theorem 6) \\
Probabilistic & bottom-up (Theorem 9) & _open problem_ \\
\end{tabular}
\end{table}
TABLE I: Overview of this paper's algorithmic contributions.

## II Related work

In the literature, there are multiple approaches to decorating an AT with cost and damage values. Existing work concerning damage (also called _impact_) on ATs can be divided into three categories: works in which only BASs have a damage attribute [9, 11, 25, 26], works in which only the root node has a damage attribute [27], and works in which every node can have a damage attribute [28]. In the same manner, in some works intermediate nodes are allowed to have an associated cost [11, 29], while in others only BASs have costs [11, 12, 30]. In this paper, every node has a damage attribute, while only BASs have a cost attribute. We choose this because it is the simplest model with the most expressivity; as we will show in Section IV, cost values on internal nodes can be modeled by adding dummy BASs, but damage values cannot.

Most of the work listed above only considers one metric at a time. For instance, in [25] binary decision diagrams (BDDs) are used to calculate both the minimal cost of a successful attack and the maximal damage, but the tradeoff between the two metrics is not investigated. Other methods for calculating single metrics include bottom-up methods for treelike ATs [12] and priced-timed automata [29]. Of the works that consider cost-damage tradeoffs, some focus on modeling rather than algorithms [9, 28]. One approach to the Pareto front is via priced-timed automata [11]; however, we cannot directly apply this to our setting, as in that work only BASs have a damage attribute. In [27], cost and damage are used to define a single attack parameter _outcome_, which is optimized heuristically. Other works on ATs consider the Pareto front between two generic metrics. A bottom-up method for calculating Pareto fronts for treelike ATs, and under some additional assumption for DAG-like ATs, is given in [7]. Furthermore, a BDD-based approach for DAG-like ATs is developed in [12]. However, damage does not satisfy the conditions for either of these two approaches, and these cannot be used for our CgD, DgC and CDPF problems. Overall, we can conclude that none of the existing literature is able to solve cost-damage problems in the general model discussed in this paper.

Another approach to multi-objective optimization is to approximate the Pareto front, for example using genetic algorithms [31, 32]. This has also been applied to ATs with cost [26]. While such an approach would be interesting for cost-damage ATs, in this paper we instead focus on provably optimal solutions, corresponding to provable security guarantees.

## III Preliminaries

Let \(\mathbb{B}\) be the set \(\{0,1\}\), with logical operators \(\wedge,\vee\).

**Definition 1**.: _An attack tree is a rooted directed acyclic graph \(T=(N,E)\) where each node \(v\in N\) has a type \(\gamma(v)\in\{\mathrm{BAS},\mathrm{OR},\mathrm{AND}\}\), such that \(\gamma(v)=\mathrm{BAS}\) if and only if \(v\) is a leaf._

Contrary to the terminology, an AT is not necessarily a tree. When the DAG \(T\) is actually a tree, it is called _treelike_; the general case is referred to as _DAG-like_. The root of \(T\) is denoted \(\mathrm{R}_{T}\).
For a node \(v\) we denote its set of children by \(\mathrm{Ch}(v)=\{w\mid(v,w)\in E\}\); we also say that \(v\) is an _ancestor_ of \(w\), and \(w\) a _descendant_ of \(v\), if there is a path \(v\to w\) in \(T\). When \(\mathrm{Ch}(v)=\{v_{1},\ldots,v_{n}\}\), we write \(v=\mathrm{OR}(v_{1},\ldots,v_{n})\) or \(v=\mathrm{AND}(v_{1},\ldots,v_{n})\) depending on \(\gamma(v)\). The set of BASs a.k.a. leaves is denoted by \(B\). For instance, in the AT \(T\) from Fig. 1 one has \(B=\{\mathtt{ca},\mathtt{pb},\mathtt{fd}\}\), \(\mathtt{dr}=\mathrm{AND}(\mathtt{pb},\mathtt{fd})\), and \(\mathrm{R}_{T}=\mathtt{ps}=\mathrm{OR}(\mathtt{ca},\mathtt{dr})\). Note that \(T\) is treelike. An attacker performs an attack by activating a chosen set of BASs, represented by a _status vector_\(\mathbf{x}\in\mathbb{B}^{B}\); the status \(x_{v}\) of a BAS \(v\) equals \(1\) if \(v\) is activated, and \(0\) if it is not. Such a status vector can also be regarded as a subset of \(B\). Transposing the partial order \(\subseteq\) to status vectors yields a partial order \(\preceq\). **Definition 2**.: _An attack on \(T\) is a vector \(\mathbf{x}\in\mathbb{B}^{B}\); we let \(\mathcal{A}=\mathbb{B}^{B}\) be the set of all attacks. This has a partial order \(\preceq\) given by \(\mathbf{x}\preceq\mathbf{y}\) iff \(x_{v}\leq y_{v}\) for all \(v\in B\)._ An attack propagates upwards from the BASs. A node is reached by an attack depending on its type \(\mathrm{OR}\) or \(\mathrm{AND}\), and whether any/all of its children are reached by the attack. This idea is formalized by the structure function \(\mathrm{S}\). Given an attack vector \(\mathbf{x}\), and a node \(v\), \(\mathrm{S}(\mathbf{x},v)\) indicates whether \(v\) is reached by \(\mathbf{x}\), i.e., if \(\mathrm{S}(\mathbf{x},v)=1\). 
**Definition 3**.: _The structure function \(\mathrm{S}\colon\mathcal{A}\times N\to\mathbb{B}\) of \(T\) is defined recursively:_ \[\mathrm{S}(\mathbf{x},v)=\begin{cases}x_{v}&\text{ if }\gamma(v)=\mathrm{BAS},\\ \bigvee_{v^{\prime}\in\mathrm{Ch}(v)}\mathrm{S}(\mathbf{x},v^{\prime})&\text{ if }\gamma(v)=\mathrm{OR},\\ \bigwedge_{v^{\prime}\in\mathrm{Ch}(v)}\mathrm{S}(\mathbf{x},v^{\prime})&\text{ if }\gamma(v)=\mathrm{AND}.\end{cases}\]

\begin{table} \begin{tabular}{c c} Notation & Explanation \\ \hline \(\mathbb{B}\) & \(\{0,1\}\) \\ \(T=(N,E)\) & Attack tree \\ \(B\) & BASs of \(T\) \\ \(\gamma(v)\) & Type of node \(v\) \\ \(\mathrm{Ch}(v)\) & Children of node \(v\) \\ \((\mathcal{A},\preceq)\) & Poset of attacks \\ \(\mathrm{S}(\mathbf{x},v)\) & Structure function of \(T\) \\ \(\mathrm{c}(v)\) & Cost of BAS \(v\) \\ \(\mathrm{d}(v)\) & Damage of node \(v\) \\ \(\hat{\mathrm{c}}(\mathbf{x})\) & Cost of attack \(\mathbf{x}\) \\ \(\hat{\mathrm{d}}(\mathbf{x})\) & Damage of attack \(\mathbf{x}\) \\ \(\underline{\min}_{\preceq}X\) & Set of minima of \(X\) \\ \((\mathbb{R}^{2}_{\geq 0},\sqsubseteq)\) & Poset of attribute pairs \\ \(\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}}\) & Attribution map \\ \(\mathrm{PF}(T)\) & Pareto front of \(T\) \\ CDPF & Cost-damage Pareto front \\ DgC & Maximal damage given cost \\ CgD & Minimal cost given damage \\ \((\mathtt{DTrip},\sqsubseteq)\) & Deterministic attribute triples \\ \(\min_{U}\) & Cost-restricted min \\ \(\mathcal{C}^{\mathrm{D}}_{U}(v)\) & Incomplete deterministic PF at \(v\) \\ \(\mathrm{p}(v)\) & Probability of BAS \(v\) \\ \(Y_{\mathbf{x}}\) & Actualized attack \\ \(\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})\) & Expected damage of attack \(\mathbf{x}\) \\ CEDPF & Cost-expected damage Pareto front \\ EDgC & Expected damage given cost \\ CgED & Cost given expected damage \\ \(\mathrm{PS}(\mathbf{x},v)\) & Probabilistic structure function \\ \((\mathtt{PTrip},\sqsubseteq)\) & Probabilistic attribute triples \\ \(\mathcal{C}^{\mathrm{P}}_{U}(v)\) & Incomplete probabilistic PF at \(v\) \\ \end{tabular} \end{table} TABLE II: Notation used in this paper.

## IV Deterministic cost-damage problems for ATs

In this section we formulate this paper's problems; solutions are presented in Sections VI and VII. This section deals with a deterministic setting, where a BAS's success is guaranteed; its probabilistic equivalent is presented in Section VIII. The attacker's goal is to disrupt the system as much as possible, which is measured by a _damage_ value representing financial cost, downtime, etc. Each node \(v\) has a damage value \(\mathrm{d}(v)\), and an attack's total damage \(\hat{\mathrm{d}}(\mathbf{x})\) is the sum of the damage values of all nodes reached by \(\mathbf{x}\). At the same time, an attacker may have only limited resources. Each BAS \(v\) has a _cost_ value \(\mathrm{c}(v)\) representing, e.g., the money, time or resources the attacker has to spend to activate it. The total cost \(\hat{\mathrm{c}}(\mathbf{x})\) of an attack is the sum of the costs of the activated BASs.
**Definition 4**.: _A cd-AT is a triple \((T,\mathrm{c},\mathrm{d})\) of an AT \(T\) and maps \(\mathrm{c}\colon B\to\mathbb{R}_{\geq 0}\) and \(\mathrm{d}\colon N\to\mathbb{R}_{\geq 0}\). Define the total cost and damage functions \(\hat{\mathrm{c}},\hat{\mathrm{d}}\colon\mathcal{A}\to\mathbb{R}_{\geq 0}\) by_ \[\hat{\mathrm{c}}(\mathbf{x})=\sum_{v\in B}x_{v}\,\mathrm{c}(v),\qquad\hat{\mathrm{d}}(\mathbf{x})=\sum_{v\in N}\mathrm{S}(\mathbf{x},v)\,\mathrm{d}(v).\]

As opposed to other works in quantitative analysis on ATs [7, 12], we do not only consider so-called _successful_ attacks, i.e., \(\mathbf{x}\) for which \(\mathrm{S}(\mathbf{x},\mathrm{R}_{T})=1\). The reason is that in our model damage can be done at any level, not just at the top node. It is therefore important to know the damaging capabilities of an attacker, even when that attacker's limited resources mean that they cannot damage the top node. Furthermore, an attacker may try different avenues towards success, and while a given path may be discarded without reaching the top node, side effects may remain. We therefore assign damage values not only to the top node, but also to internal nodes.

**Example 1**.: _Consider the AT \(T\) from Fig. 1 and its cost and damage functions. Then the functions \(\hat{\mathrm{c}}\) and \(\hat{\mathrm{d}}\) are calculated as in the following table._

\begin{tabular}{c|c c c c c c c c} \(x_{\mathtt{ca}}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \(x_{\mathtt{pb}}\) & \(0\) & \(0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(1\) & \(1\) \\ \(x_{\mathtt{fd}}\) & \(0\) & \(1\) & \(0\) & \(1\) & \(0\) & \(1\) & \(0\) & \(1\) \\ \hline \(\hat{\mathrm{c}}(\mathbf{x})\) & \(0\) & \(2\) & \(3\) & \(5\) & \(1\) & \(3\) & \(4\) & \(6\) \\ \(\hat{\mathrm{d}}(\mathbf{x})\) & \(0\) & \(10\) & \(0\) & \(310\) & \(200\) & \(210\) & \(200\) & \(310\) \\ \end{tabular}

Some works also assign cost values to internal nodes [11, 29], the interpretation being that an internal node is only activated if enough of its children are activated and its cost is paid. However, this can be simulated by adding a dummy BAS which holds the associated cost, as in Fig. 2. By contrast, the same cannot be done for damage: moving the damage to the dummy BAS leads to a situation where _only_ the dummy needs to be activated to do the damage. For full expressivity we thus allow internal nodes to have damage values, but not cost values.

### _Cost-damage problems_

In ATs, there is a tradeoff between resource utilization and damage: the higher the cost budget an attacker has at their disposal, the more damage they may cause. This tradeoff can be analyzed via the _Pareto front_: the cost and damage values of all attacks that are not dominated by other attacks, where \(\mathbf{x}\) dominates \(\mathbf{y}\) if \(\mathbf{x}\) costs at most as much as \(\mathbf{y}\) while doing at least as much damage. An attack \(\mathbf{x}\) in the Pareto front is called _Pareto optimal_, and it is the most damaging attack if the attacker cannot exceed cost \(\hat{\mathrm{c}}(\mathbf{x})\). Thus the Pareto front gives a full overview of the system's vulnerability to any attacker. For a general poset \((X,\preceq)\), we define its set of minimal elements as \[\underline{\min}_{\preceq}\ X=\{x\in X\mid\forall x^{\prime}\in X.\,x^{\prime}\not\prec x\}.\] We drop the subscript \(\preceq\) if it is clear from the context.
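To make Definitions 3 and 4 concrete, the following Python sketch encodes the AT of Fig. 1 and reproduces the table of Example 1. The node encoding and helper names are our own illustration, not the paper's (Matlab) implementation.

```python
# Minimal sketch of Definitions 3 and 4 for the AT of Fig. 1:
# ps = OR(ca, dr), dr = AND(pb, fd), with Example 1's cost/damage values.
from itertools import product

NODES = {  # node -> (type, children); BASs have no children
    "ca": ("BAS", []), "pb": ("BAS", []), "fd": ("BAS", []),
    "dr": ("AND", ["pb", "fd"]), "ps": ("OR", ["ca", "dr"]),
}
COST = {"ca": 1, "pb": 3, "fd": 2}                            # c: B -> R>=0
DAMAGE = {"ca": 0, "pb": 0, "fd": 10, "dr": 100, "ps": 200}   # d: N -> R>=0

def S(x, v):
    """Structure function of Definition 3; x maps each BAS to 0 or 1."""
    typ, children = NODES[v]
    if typ == "BAS":
        return x[v]
    vals = [S(x, w) for w in children]
    return max(vals) if typ == "OR" else min(vals)  # OR = any, AND = all

def cost(x):    # total cost: sum over activated BASs
    return sum(COST[v] for v in COST if x[v])

def damage(x):  # total damage: sum over all reached nodes
    return sum(DAMAGE[v] for v in NODES if S(x, v))

# Prints the eight columns of Example 1's table, e.g. (1,0,0) -> cost 1, damage 200.
for bits in product([0, 1], repeat=3):
    x = dict(zip(["ca", "pb", "fd"], bits))
    print(bits, cost(x), damage(x))
```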
We consider the domain of _attribute pairs_, i.e., the set \(\mathbb{R}_{\geq 0}^{2}\) with a partial order \(\sqsubseteq\) given by \((a,a^{\prime})\sqsubseteq(b,b^{\prime})\) if and only if \(a\leq b\) and \(a^{\prime}\geq b^{\prime}\). For a cd-AT \((T,\mathrm{c},\mathrm{d})\), we define the evaluation map \(\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}}\colon\mathcal{A}\to\mathbb{R}_{\geq 0}^{2}\) by \(\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}}(\mathbf{x})=\binom{\hat{\mathrm{c}}(\mathbf{x})}{\hat{\mathrm{d}}(\mathbf{x})}\) (we represent elements of \(\mathbb{R}_{\geq 0}^{2}\) as column vectors). Note that \(\mathbf{x}\) dominates \(\mathbf{y}\) if and only if \(\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}}(\mathbf{x})\sqsubseteq\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}}(\mathbf{y})\). The aim of this paper is to find the cost-damage Pareto front, as well as to solve two related single-objective problems. Mathematically, these are formulated as follows:

**Problems.** Given a cd-AT \((T,\mathrm{c},\mathrm{d})\), solve the following problems:

**CDPF**: Cost-damage Pareto front: find \(\underline{\min}_{\sqsubseteq}\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}}(\mathcal{A})\subseteq\mathbb{R}_{\geq 0}^{2}\).

**DgC**: Maximal damage given cost constraint: Given \(U\in\mathbb{R}_{\geq 0}\), find \(d_{\mathrm{opt}}=\max_{\mathbf{x}\colon\hat{\mathrm{c}}(\mathbf{x})\leq U}\hat{\mathrm{d}}(\mathbf{x})\).

**CgD**: Minimal cost given damage constraint: Given \(L\in\mathbb{R}_{\geq 0}\), find \(c_{\mathrm{opt}}=\min_{\mathbf{x}\colon\hat{\mathrm{d}}(\mathbf{x})\geq L}\hat{\mathrm{c}}(\mathbf{x})\).

From CDPF one can solve DgC and CgD via \[d_{\mathrm{opt}}=\max\{d\in\mathbb{R}_{\geq 0}\mid\exists c\in[0,U].\left(\begin{smallmatrix}c\\ d\end{smallmatrix}\right)\in\mathrm{PF}(T)\},\tag{1}\] \[c_{\mathrm{opt}}=\min\{c\in\mathbb{R}_{\geq 0}\mid\exists d\in\mathbb{R}_{\geq L}.\left(\begin{smallmatrix}c\\ d\end{smallmatrix}\right)\in\mathrm{PF}(T)\}.\tag{2}\]

Fig. 2: An example showing that damage values on internal nodes are necessary, but cost values on internal nodes are not. The cost value on the internal node in the AT on the left is replaced by a dummy BAS in the middle AT, which is equivalent: both ATs require cost 2 to perform 1 damage. In the right AT, the damage is also moved to the dummy BAS, but the result is not equivalent: 1 cost already yields 1 damage.

These problems are relevant in security analysis: DgC can be used to determine the damaging capabilities of different attacker profiles [11, 26]. CDPF can be used to give an overview over all attacker profiles. For a security operations center monitoring a network, a cost-damage analysis (with cost measured in time) provides insight into whether the response time is sufficient to stop damaging attacks.

**Example 2**.: _In Example 1, \(\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}}(\mathcal{A})\) is given by the lower two rows of the table. A number of these attacks are not Pareto optimal: we have \(\left(\begin{smallmatrix}1\\ 200\end{smallmatrix}\right)\sqsubset\left(\begin{smallmatrix}2\\ 10\end{smallmatrix}\right),\left(\begin{smallmatrix}3\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}4\\ 200\end{smallmatrix}\right)\), and furthermore \(\left(\begin{smallmatrix}5\\ 310\end{smallmatrix}\right)\sqsubset\left(\begin{smallmatrix}6\\ 310\end{smallmatrix}\right)\). It follows that (see Fig.
3):_ \[\mathrm{PF}(T)=\left\{\left(\begin{smallmatrix}0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 200\end{smallmatrix}\right),\left(\begin{smallmatrix}3\\ 210\end{smallmatrix}\right),\left(\begin{smallmatrix}5\\ 310\end{smallmatrix}\right)\right\}.\tag{3}\] _From this we find, for instance, that the solution to DgC for \(U=2\) is given by \(d_{\mathrm{opt}}=200\)._

In what follows, we present novel methods to solve CDPF, DgC and CgD. As in many problems related to calculating AT metrics, an important factor in the complexity of solutions is whether the AT is treelike or not [12]. We introduce a bottom-up method for treelike ATs in Section VI, and a method based on integer linear programming for DAG-like ATs in Section VII.

## V Relation to knapsack problems and NP-completeness

In this section, we prove two important negative results, based on the similarity of cost-damage problems to binary knapsack problems. First, we show that even the simplest cost-damage problem is NP-complete. Second, we show that cost-damage problems are considerably more general than (extended) knapsack problems, which means that existing heuristics for knapsack problems cannot be applied to our situation. Both results emphasize the importance of finding new heuristics for cost-damage problems.

DgC is a generalisation of the binary knapsack problem [13], which is \[\operatorname{minimize}_{\mathbf{x}\in\mathbb{B}^{n}}\ f(\mathbf{x})\quad\text{subject to}\quad g(\mathbf{x})\leq b,\] where \(b\in\mathbb{R}\) and \(n\in\mathbb{N}\) are constants and the objective and constraint functions \(f\) and \(g\) are _linear_, i.e., \(f(\mathbf{x})=\sum_{i=1}^{n}f_{i}x_{i}\) for some constants \(f_{i}\in\mathbb{R}\). In DgC, \(n=|B|\), \(b=U\), and the objective and constraint functions are \(-\hat{\mathrm{d}}\) and \(\hat{\mathrm{c}}\). Although \(\hat{\mathrm{c}}\) is linear, \(-\hat{\mathrm{d}}\) is not; for instance, in the AT \(\mathrm{AND}(a,b)\), one has \(\hat{\mathrm{d}}(\mathbf{x})=\mathrm{d}(a)x_{a}+\mathrm{d}(b)x_{b}+\mathrm{d}(\mathrm{R}_{T})(x_{a}\wedge x_{b})\).

To show NP-completeness, consider the _decision problem_ associated to CDPF, DgC and CgD:

**Problem** (Cost-damage decision problem (CDDP)).: _Given a cd-AT \((T,\mathrm{c},\mathrm{d})\), a cost upper bound \(U\) and a damage lower bound \(L\), decide whether there exists an attack \(\mathbf{x}\in\mathcal{A}\) such that \(\hat{\mathrm{c}}(\mathbf{x})\leq U\) and \(\hat{\mathrm{d}}(\mathbf{x})\geq L\)._

CDDP can be reduced to CDPF, DgC or CgD. Theorem 1 shows that the knapsack decision problem can be reduced to CDDP (in fact, a _treelike_ AT with \(n\) BASs and a root suffices). Since the binary knapsack decision problem is known to be NP-complete [33, 34] and it is straightforward to show that CDDP is in NP, we find the following result:

**Theorem 1**.: _CDDP is NP-complete, even when restricted to treelike ATs._

Given this hardness result, it should come as no surprise that we do not give polynomial-time methods to solve CDPF, DgC, and CgD, but instead introduce heuristic methods. These methods discard infeasible solutions throughout the computation instead of at the end, making them faster than the naive approach.
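For intuition, the naive approach can itself be sketched in a few lines of Python (our own illustration): enumerate all attacks' attribute pairs and keep the \(\sqsubseteq\)-minimal ones. The (cost, damage) pairs below are those of Example 1, and the output is exactly the Pareto front (3).

```python
# Naive CDPF: exhaustive enumeration plus Pareto minimization w.r.t. the
# partial order (c, d) <= (c', d') iff c <= c' and d >= d'.
# The pairs are the eight (cost, damage) columns of Example 1's table.
pairs = [(0, 0), (2, 10), (3, 0), (5, 310), (1, 200), (3, 210), (4, 200), (6, 310)]

def dominates(p, q):
    """p dominates q: p costs no more than q and does at least as much damage."""
    return p[0] <= q[0] and p[1] >= q[1]

def pareto_front(points):
    # Keep points that are not dominated by any other, distinct point.
    return sorted({p for p in points
                   if not any(dominates(q, p) and q != p for q in points)})

print(pareto_front(pairs))  # [(0, 0), (1, 200), (3, 210), (5, 310)], i.e. (3)
```

This takes \(2^{|B|}\) evaluations of \(\hat{\mathrm{c}}\) and \(\hat{\mathrm{d}}\), which is exactly the blow-up that the structured methods of Sections VI and VII are designed to mitigate.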
In the literature, many extensions of the binary knapsack problem have been considered that allow less restrictive types of objective functions, such as quadratic [14], cubic [15] and submodular [16] objective functions. However, the following theorem shows that objective functions \(\hat{\mathrm{d}}\) arising from cd-ATs form the even larger class of _nondecreasing_ functions (i.e., \(\mathbf{x}\preceq\mathbf{y}\) implies \(f(\mathbf{x})\leq f(\mathbf{y})\), see Definition 2).

**Theorem 2**.: _Let \(X\) be a finite set, and let \(f\colon\mathbb{B}^{X}\to\mathbb{R}_{\geq 0}\) be any nondecreasing function. Then there is a cd-AT \((T,\mathrm{c},\mathrm{d})\) with \(B=X\) and \(\hat{\mathrm{d}}=f\)._

It follows that we cannot use existing binary knapsack approaches to solve DgC, since these approaches [14, 15, 16] put some assumptions on \(\hat{\mathrm{d}}\). Instead, we develop new techniques, based on bottom-up methods and integer linear programming. These techniques exploit the structure of the cd-AT from which the objective \(\hat{\mathrm{d}}\) originates.

## VI Treelike ATs, deterministic setting

For treelike ATs in the deterministic setting we focus on CDPF; DgC and CgD then follow from (1) and (2), respectively. These single-objective problems cannot be solved more cheaply because, as we will demonstrate below, we need to propagate (part of) the Pareto front bottom-up, rather than a single damage/cost value, to solve these problems.

Fig. 3: CDPF for Examples 1 and 2. Filled nodes are Pareto-optimal attacks.

### _CDPF_

A naive way to solve CDPF (and with it DgC and CgD) is by calculating \(\hat{\mathrm{c}}(\mathbf{x})\) and \(\hat{\mathrm{d}}(\mathbf{x})\) for each \(\mathbf{x}\in\mathcal{A}\). Since \(|\mathcal{A}|=2^{|B|}\), this is impractical for large ATs, and new heuristics are needed. We solve CDPF via a bottom-up approach in which only a small set of attacks is handled at each node, and infeasibility is determined at each node rather than at the end. The key insight to make this work is that at intermediate nodes, we perform Pareto analysis in an extended domain \(\mathtt{DTrip}\), and we only project to \(\mathbb{R}^{2}_{\geq 0}\) at the root.

For a node \(v\), we let \(T_{v}\) be the sub-AT of \(T\) with root \(v\), and we let \(B_{v}\) be its set of BASs. At the node \(v\), we are interested in the cost and damage of attacks on \(T_{v}\), which are elements of \(\mathbb{B}^{B_{v}}\). Suppose that \(\mathrm{Ch}(v)=\{v_{1},v_{2}\}\). Since \(T\) is treelike, one has \(B_{v_{1}}\cap B_{v_{2}}=\varnothing\). So \(\mathbb{B}^{B_{v}}=\mathbb{B}^{B_{v_{1}}}\times\mathbb{B}^{B_{v_{2}}}\), and an attack \(\mathbf{x}\) on \(T_{v}\) can be written \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2})\) for attacks \(\mathbf{x}_{1}\) on \(T_{v_{1}}\) and \(\mathbf{x}_{2}\) on \(T_{v_{2}}\).
With regards to cost and damage, we find \[\hat{\mathrm{c}}(\mathbf{x})=\hat{\mathrm{c}}(\mathbf{x}_{1})+\hat{\mathrm{c}}(\mathbf{x}_{2}),\tag{4}\] \[\hat{\mathrm{d}}(\mathbf{x})=\hat{\mathrm{d}}(\mathbf{x}_{1})+\hat{\mathrm{d}}(\mathbf{x}_{2})+\mathrm{S}(\mathbf{x},v)\,\mathrm{d}(v),\tag{5}\] where we recall that \(\mathrm{S}(\mathbf{x},v)\) is defined as \[\mathrm{S}(\mathbf{x},v)=\begin{cases}x_{v},&\text{ if }\gamma(v)=\mathrm{BAS},\\ \mathrm{S}(\mathbf{x}_{1},v_{1})\vee\mathrm{S}(\mathbf{x}_{2},v_{2}),&\text{ if }\gamma(v)=\mathrm{OR},\\ \mathrm{S}(\mathbf{x}_{1},v_{1})\wedge\mathrm{S}(\mathbf{x}_{2},v_{2}),&\text{ if }\gamma(v)=\mathrm{AND}.\end{cases}\]

Thus, in order to correctly calculate the cost and damage of attacks as we combine them, we need to store each attack \(\mathbf{x}\) as an _attribute triple_ in the _deterministic attribute triple domain_: \[\left(\begin{smallmatrix}\hat{\mathrm{c}}(\mathbf{x})\\ \hat{\mathrm{d}}(\mathbf{x})\\ \mathrm{S}(\mathbf{x},v)\end{smallmatrix}\right)\in\mathtt{DTrip}:=\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\times\mathbb{B}.\]

**Example 3**.: _Consider the AT of Example 1. Each BAS has only two possible attacks (activating that BAS or not), so for \(\mathtt{pb}\) we have \(\left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}3\\ 0\\ 1\end{smallmatrix}\right)\right\}\subset\mathtt{DTrip}\), and \(\left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}2\\ 10\\ 1\end{smallmatrix}\right)\right\}\subset\mathtt{DTrip}\) for \(\mathtt{fd}\). Combining these, we have four possible attacks on the \(\mathrm{AND}\)-gate \(\mathtt{dr}\), which is the set_ \[\left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}3\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}2\\ 10\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}5\\ 110\\ 1\end{smallmatrix}\right)\right\}\subset\mathtt{DTrip}.\]

After finding the values of all attacks on \(v\) by combining those on \(v_{1}\) and on \(v_{2}\), we discard the infeasible ones. Infeasibility is based on two conditions:

1. In DgC, if \(\hat{\mathrm{c}}(\mathbf{x})>U\), then \(\mathbf{x}\) is infeasible.
2. Other than that, feasibility is determined by Pareto optimality on the poset \((\mathtt{DTrip},\sqsubseteq)\), where \(\left(\begin{smallmatrix}c\\ d\\ b\end{smallmatrix}\right)\sqsubseteq\left(\begin{smallmatrix}c^{\prime}\\ d^{\prime}\\ b^{\prime}\end{smallmatrix}\right)\) if and only if \(c\leq c^{\prime}\), \(d\geq d^{\prime}\) and \(b\geq b^{\prime}\).

The first two inequalities are to be expected from cost-damage optimality. The third inequality is introduced for the following reason: if \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) are two attacks on \(v\) corresponding to \((c,d,0)^{\intercal}\) and \((c^{\prime},d^{\prime},1)^{\intercal}\), respectively, then potentially \(\mathbf{x}^{\prime}\) can reach nodes higher up in \(T\), and thereby eventually do more damage than \(\mathbf{x}\). However, whether this happens or not cannot be detected at the level of \(v\), and therefore we need to keep both triples.

**Example 4**.: _We continue Example 3.
At \(\mathtt{dr}\), we have \(\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right)\sqsubseteq\left(\begin{smallmatrix}3\\ 0\\ 0\end{smallmatrix}\right)\), so the latter is infeasible and discarded, leaving us with the Pareto front_ \[\left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}2\\ 10\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}5\\ 110\\ 1\end{smallmatrix}\right)\right\}\subset\mathtt{DTrip}.\]

_This example shows why we need the third dimension: if not, we would have discarded the attack \(\left(\begin{smallmatrix}3\\ 0\\ 1\end{smallmatrix}\right)\) at \(\mathtt{pb}\) for being infeasible: \(\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right)\) does the same damage at lower cost. However, had we done so at \(\mathtt{pb}\), we would have concluded that it is always optimal not to activate \(\mathtt{pb}\), thereby missing out on the attack \(\left(\begin{smallmatrix}5\\ 110\\ 1\end{smallmatrix}\right)\) at \(\mathtt{dr}\). By also storing the top node's activation, we ensure that activating \(\mathtt{pb}\) is still considered feasible._

This approach can be formally defined as follows. Let \(U\in[0,\infty]\). For each \(v\in N\), we define a Pareto front \(\mathcal{C}^{\mathrm{D}}_{U}(v)\subseteq\mathtt{DTrip}\) (for _deterministic_) of feasible attacks on \(v\). To do this, we define a map \(\min_{U}\colon\mathcal{P}(\mathtt{DTrip})\to\mathcal{P}(\mathtt{DTrip})\) given by \[\min_{U}(X)=\underline{\min}_{\sqsubseteq}\left\{\left(\begin{smallmatrix}c\\ d\\ b\end{smallmatrix}\right)\in X:c\leq U\right\},\] which returns the Pareto-optimal elements (w.r.t. the partial order \(\sqsubseteq\) of \(\mathtt{DTrip}\)) of a set \(X\) that do not exceed the cost constraint. From now on we assume that \(T\) is _binary_, i.e., \(|\,\mathrm{Ch}(v)|\in\{0,2\}\) for all \(v\). Since every AT is equivalent to a binary one, this assumption is purely to simplify notation. We then recursively define the Pareto front \(\mathcal{C}^{\mathrm{D}}_{U}(v)\) of attribute triples, by combining elements of \(\mathcal{C}^{\mathrm{D}}_{U}(v_{1})\) and \(\mathcal{C}^{\mathrm{D}}_{U}(v_{2})\) via (4) and (5) and then discarding the nonfeasible triples: \[\mathcal{C}^{\mathrm{D}}_{U}(v)=\begin{cases}\left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}\mathrm{c}(v)\\ \mathrm{d}(v)\\ 1\end{smallmatrix}\right)\right\},&\text{if }\gamma(v)=\mathrm{BAS}\text{ and }\mathrm{c}(v)\leq U,\\ \left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right)\right\},&\text{if }\gamma(v)=\mathrm{BAS}\text{ and }\mathrm{c}(v)>U,\end{cases}\] \[\mathcal{C}^{\mathrm{D}}_{U}(\mathrm{AND}(v_{1},v_{2}))=\min_{U}\left\{\left(\begin{smallmatrix}c_{1}+c_{2}\\ d_{1}+d_{2}+(b_{1}\wedge b_{2})\cdot\mathrm{d}(v)\\ b_{1}\wedge b_{2}\end{smallmatrix}\right)\in\mathtt{DTrip}\,\middle|\,\left(\begin{smallmatrix}c_{i}\\ d_{i}\\ b_{i}\end{smallmatrix}\right)\in\mathcal{C}^{\mathrm{D}}_{U}(v_{i})\right\},\] \[\mathcal{C}^{\mathrm{D}}_{U}(\mathrm{OR}(v_{1},v_{2}))=\min_{U}\left\{\left(\begin{smallmatrix}c_{1}+c_{2}\\ d_{1}+d_{2}+(b_{1}\vee b_{2})\cdot\mathrm{d}(v)\\ b_{1}\vee b_{2}\end{smallmatrix}\right)\in\mathtt{DTrip}\,\middle|\,\left(\begin{smallmatrix}c_{i}\\ d_{i}\\ b_{i}\end{smallmatrix}\right)\in\mathcal{C}^{\mathrm{D}}_{U}(v_{i})\right\}.\]
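The recursion can be sketched compactly in Python (our own encoding; helper names are hypothetical). Running it on the AT of Fig. 1 reproduces the fronts of Examples 3 and 4 and, at the root, the triples behind the Pareto front (3).

```python
# Bottom-up computation of C^D_U(v) on a binary treelike AT, with
# attribute triples (c, d, b) in DTrip.
NODES = {"ca": ("BAS", []), "pb": ("BAS", []), "fd": ("BAS", []),
         "dr": ("AND", ["pb", "fd"]), "ps": ("OR", ["ca", "dr"])}
COST = {"ca": 1, "pb": 3, "fd": 2}
DAMAGE = {"ca": 0, "pb": 0, "fd": 10, "dr": 100, "ps": 200}

def min_U(triples, U):
    """Keep triples with c <= U that are minimal w.r.t. the DTrip order."""
    feas = [t for t in triples if t[0] <= U]
    def dom(p, q):  # p below q: cheaper, at least as damaging, bit at least as high
        return p[0] <= q[0] and p[1] >= q[1] and p[2] >= q[2]
    return sorted({p for p in feas
                   if not any(dom(q, p) and q != p for q in feas)})

def C_D(v, U=float("inf")):
    typ, ch = NODES[v]
    if typ == "BAS":
        return [(0, 0, 0), (COST[v], DAMAGE[v], 1)] if COST[v] <= U else [(0, 0, 0)]
    gate = (lambda a, b: a & b) if typ == "AND" else (lambda a, b: a | b)
    L1, L2 = C_D(ch[0], U), C_D(ch[1], U)
    combined = [(c1 + c2, d1 + d2 + gate(b1, b2) * DAMAGE[v], gate(b1, b2))
                for (c1, d1, b1) in L1 for (c2, d2, b2) in L2]
    return min_U(combined, U)

# At the root: [(0, 0, 0), (1, 200, 1), (3, 210, 1), (5, 310, 1)];
# projecting onto (c, d) gives PF(T) of (3).
print(C_D("ps"))
```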
The following theorems show the validity of this approach.

**Theorem 3**.: _The solution to DgC is given by \(\max\{d\in\mathbb{R}_{\geq 0}\mid\left(\begin{smallmatrix}c\\ d\\ b\end{smallmatrix}\right)\in\mathcal{C}^{\mathrm{D}}_{U}(\mathrm{R}_{T})\text{ for some }c,b\}\)._

**Theorem 4**.: _The solution to CDPF is given by \(\underline{\min}\,\pi(\mathcal{C}^{\mathrm{D}}_{\infty}(\mathrm{R}_{T}))\), where \(\pi\colon\mathtt{DTrip}\to\mathbb{R}^{2}_{\geq 0}\) is the projection map onto the first two coefficients._

[Figure: bottom-up computation of the incomplete Pareto fronts \(\mathcal{C}^{\mathrm{D}}_{\infty}(v)\) for the AT of Fig. 1, propagating attribute triples from the BASs up to the root.]

### _DgC and CgD_

For DgC we still have to compute a Pareto front at every node \(v\), instead of taking the most damaging attack satisfying the cost constraint \(\hat{\mathrm{c}}(\mathbf{x})\leq U\), for the following reason. Suppose \(\left(\begin{smallmatrix}c\\ d\\ 0\end{smallmatrix}\right)\) and \(\left(\begin{smallmatrix}c^{\prime}\\ d^{\prime}\\ 1\end{smallmatrix}\right)\) are both feasible triples at \(v\) with \(d\geq d^{\prime}\). Although the first triple is the more damaging one at \(v\), the second one may still lead to strictly more damage at the root, since its activation bit can trigger the damage values of ancestors of \(v\). Hence the most damaging feasible triple at an intermediate node does not necessarily extend to the most damaging feasible attack overall, and the entire front \(\mathcal{C}^{\mathrm{D}}_{U}(v)\) must be propagated.

### _CgD and DgC_

We solve CgD and DgC by deriving _constrained single-objective optimization problems_ from (7). Associated to a BILP problem (6), one has the following single-objective problems: \[\begin{array}{llll}\operatorname{minimize}_{\mathbf{y}\in\mathbb{Z}^{n}}&c_{1}\cdot\mathbf{y}&\text{subject to}&A\cdot\mathbf{y}\leq 0,\\ &&&c_{2}\cdot\mathbf{y}\leq C_{2},\\ \operatorname{minimize}_{\mathbf{y}\in\mathbb{Z}^{n}}&c_{2}\cdot\mathbf{y}&\text{subject to}&A\cdot\mathbf{y}\leq 0,\\ &&&c_{1}\cdot\mathbf{y}\leq C_{1}.\end{array}\] These are standard integer linear programming (ILP) problems, for which efficient solvers exist [35]. By applying this to (7), we can formulate DgC and CgD as single-objective ILP problems, which can be fed to a solver.
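As an illustration of Theorem 7 below, the following sketch solves DgC for the AT of Fig. 1 as a single-objective ILP. The linearization of the structure function used here (AND: \(y_{v}\leq y_{w}\) for every child \(w\); OR: \(y_{v}\leq\sum_{w}y_{w}\)) is one standard encoding of the shape \(A\cdot\mathbf{y}\leq 0\); it is our own reconstruction and may differ in detail from the system (7). The PuLP library and all helper names are assumptions of this sketch, not part of the paper's implementation.

```python
# Hedged sketch: DgC as a single-objective ILP. With nonnegative damage
# values, upper-bound constraints on the node variables suffice, since the
# maximizer pushes every reachable y_v to 1. Requires: pip install pulp
import pulp

NODES = {"ca": ("BAS", []), "pb": ("BAS", []), "fd": ("BAS", []),
         "dr": ("AND", ["pb", "fd"]), "ps": ("OR", ["ca", "dr"])}
COST = {"ca": 1, "pb": 3, "fd": 2}
DAMAGE = {"ca": 0, "pb": 0, "fd": 10, "dr": 100, "ps": 200}
U = 2  # cost budget

prob = pulp.LpProblem("DgC", pulp.LpMaximize)
y = {v: pulp.LpVariable(f"y_{v}", cat="Binary") for v in NODES}

prob += pulp.lpSum(DAMAGE[v] * y[v] for v in NODES)    # maximize total damage
prob += pulp.lpSum(COST[v] * y[v] for v in COST) <= U  # cost constraint (Thm. 7)

for v, (typ, ch) in NODES.items():
    if typ == "AND":
        for w in ch:
            prob += y[v] - y[w] <= 0  # y_v <= y_w for every child w
    elif typ == "OR":
        prob += y[v] - pulp.lpSum(y[w] for w in ch) <= 0  # y_v <= sum of children

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(prob.objective))  # 200.0 = d_opt for U = 2, as in Example 2
```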
**Theorem 7**.: _DgC and CgD are solved by solving the constrained single-objective optimization problems derived from (7) with respective added constraints_ \[\sum_{v\in B}\mathrm{c}(v)y_{v}\leq U,\qquad-\sum_{v\in N}\mathrm{d}(v)y_{v}\leq-L.\]

Note that to solve DgC and CgD via Theorem 7, one does not need to first solve the BILP problem (7); one can directly solve the single-objective problem.

## VIII Probabilistic cost-damage Pareto front

So far, we have assumed that any BAS undertaken by the attacker will succeed. However, in reality an attempted BAS may or may not succeed. Following earlier work [36, 12], we now assume a _probabilistic setting_ in which each BAS \(v\) has a success probability \(\mathrm{p}(v)\). More precisely, we assume:

1. The activation of the BASs may or may not succeed;
2. The successes of different BASs are independent;
3. The attacker pays the cost of a BAS, whether its activation succeeds or not;
4. All BASs are attempted simultaneously and paid for in advance;
5. Each BAS can only be attempted once.

The independence assumption is standard [36, 12], while the other assumptions lead to the most straightforward setting. Extensions are possible: for instance, the attacker might recoup some of the costs of failed activations, or BASs are attempted one by one and the attacker may choose to reallocate their budget based on which BASs have succeeded or failed their activation thus far. Such extensions lead to more complicated models, and are left to future work.

**Definition 5**.: _A cdp-AT is a tuple \((T,\mathrm{c},\mathrm{d},\mathrm{p})\) of an AT \(T\) and maps \(\mathrm{c}\colon B\to\mathbb{R}_{\geq 0}\), \(\mathrm{d}\colon N\to\mathbb{R}_{\geq 0}\), and \(\mathrm{p}\colon B\to[0,1]\)._

In a cdp-AT, the damage done by an attack is a random variable: its value depends on the _actualized attack_, i.e., the BASs that succeed. Therefore, an attacker is interested in the _expected damage_ of an attack.

**Definition 6**.: _Let \((T,\mathrm{c},\mathrm{d},\mathrm{p})\) be a cdp-AT. For \(\mathbf{x}\in\mathcal{A}\), define the actualized attack to be the random variable \(Y_{\mathbf{x}}\) on \(\mathcal{A}\) given by_ \[\mathbb{P}(Y_{\mathbf{x}}=\mathbf{y})=\begin{cases}\prod_{v\colon x_{v}=1}\mathrm{p}(v)^{y_{v}}(1-\mathrm{p}(v))^{1-y_{v}},&\text{if }\mathbf{y}\preceq\mathbf{x},\\ 0,&\text{otherwise}.\end{cases}\] _We define the expected damage of an attack to be \(\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})=\mathbb{E}[\hat{\mathrm{d}}(Y_{\mathbf{x}})]\in\mathbb{R}_{\geq 0}\)._

**Example 8**.: _We return to the setting of Example 1. We extend the cd-AT \((T,\mathrm{c},\mathrm{d})\) with a probability map \(\mathrm{p}\colon B\to[0,1]\) given by \(\mathrm{p}(\mathtt{ca})=0.2\), \(\mathrm{p}(\mathtt{pb})=0.4\) and \(\mathrm{p}(\mathtt{fd})=0.9\). We use this to calculate the function \(\hat{\mathrm{d}}_{\mathrm{E}}\); we write an attack \(\mathbf{x}\) as the vector \((x_{\mathtt{ca}},x_{\mathtt{pb}},x_{\mathtt{fd}})\). Then the random variable \(Y_{(0,1,1)}\) is given by_ \[\mathbb{P}[Y_{(0,1,1)}=(0,0,0)]=0.6\cdot 0.1=0.06,\] \[\mathbb{P}[Y_{(0,1,1)}=(0,0,1)]=0.6\cdot 0.9=0.54,\] \[\mathbb{P}[Y_{(0,1,1)}=(0,1,0)]=0.4\cdot 0.1=0.04,\] \[\mathbb{P}[Y_{(0,1,1)}=(0,1,1)]=0.4\cdot 0.9=0.36.\]

Similar to \(\hat{\mathrm{d}}_{\mathrm{E}}\), we also define \(\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}_{\mathrm{E}}}(\mathbf{x})=(\hat{\mathrm{c}}(\mathbf{x}),\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x}))\in\mathbb{R}_{\geq 0}^{2}\).
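Definition 6 can be evaluated directly by enumerating all actualized attacks, as in the following Python sketch (our own encoding and helper names). It reproduces the distribution of Example 8 and the expected damage computed in Example 9 below.

```python
# Expected damage by brute-force enumeration of actualized attacks y <= x.
# AT, damages and probabilities are those of Examples 1 and 8.
from itertools import product

NODES = {"ca": ("BAS", []), "pb": ("BAS", []), "fd": ("BAS", []),
         "dr": ("AND", ["pb", "fd"]), "ps": ("OR", ["ca", "dr"])}
DAMAGE = {"ca": 0, "pb": 0, "fd": 10, "dr": 100, "ps": 200}
PROB = {"ca": 0.2, "pb": 0.4, "fd": 0.9}
BAS = ["ca", "pb", "fd"]

def S(x, v):
    typ, ch = NODES[v]
    return x[v] if typ == "BAS" else (max if typ == "OR" else min)(S(x, w) for w in ch)

def damage(x):
    return sum(DAMAGE[v] for v in NODES if S(x, v))

def expected_damage(x):
    """d_E(x) = E[d(Y_x)]; exponential in the number of activated BASs."""
    total, active = 0.0, [v for v in BAS if x[v]]
    for bits in product([0, 1], repeat=len(active)):
        y = {v: 0 for v in BAS}
        prob = 1.0
        for v, b in zip(active, bits):
            y[v] = b
            prob *= PROB[v] if b else 1 - PROB[v]
        total += prob * damage(y)
    return total

print(expected_damage({"ca": 0, "pb": 1, "fd": 1}))  # ~117.0, cf. Example 9
```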
We then have the following probabilistic counterparts of CDPF, DgC, and CgD:

**Problems.** Given a cdp-AT \((T,\mathrm{c},\mathrm{d},\mathrm{p})\), solve the following problems:

**CEDPF** Cost-expected damage Pareto front: find \(\underline{\min}_{\sqsubseteq}\binom{\hat{\mathrm{c}}}{\hat{\mathrm{d}}_{\mathrm{E}}}(\mathcal{A})\subseteq\mathbb{R}_{\geq 0}^{2}\).

**EDgC** Maximal expected damage given cost constraint: Given \(U\in\mathbb{R}_{\geq 0}\), find \(d_{\mathrm{E},\mathrm{opt}}=\max_{\mathbf{x}\colon\hat{\mathrm{c}}(\mathbf{x})\leq U}\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})\).

**CgED** Minimal cost given expected damage constraint: Given \(L\in\mathbb{R}_{\geq 0}\), find \(c_{\mathrm{E},\mathrm{opt}}=\min_{\mathbf{x}\colon\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})\geq L}\hat{\mathrm{c}}(\mathbf{x})\).

**Example 9**.: _We continue Example 8. Using the definition of \(\hat{\mathrm{d}}\) from the table in Example 1, we find \(\hat{\mathrm{d}}_{\mathrm{E}}(0,1,1)=0.06\cdot 0+0.54\cdot 10+0.04\cdot 0+0.36\cdot 310=117\)._

Solving CEDPF naively is more involved than CDPF: not only do we have to calculate \(\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})\) for exponentially many \(\mathbf{x}\), but a single \(\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})\) also requires summing \(\mathbb{P}(Y_{\mathbf{x}}=\mathbf{y})\cdot\hat{\mathrm{d}}(\mathbf{y})\) over exponentially many \(\mathbf{y}\). Therefore, we introduce new methods to solve CEDPF for treelike ATs in Section IX, by adapting the deterministic method of Section VI to account for probabilities. For DAG-like ATs, we cannot simply adapt the BILP method of Section VII, as (7) becomes nonlinear, and CEDPF, EDgC and CgED for DAG-like ATs are left to future work.

## IX Treelike ATs, probabilistic setting

EDgC and CEDPF for treelike ATs can be solved similarly to the approach of Section VI. The main difference is that instead of working with the structure function \(\mathrm{S}(\mathbf{x},v)\), we work with the _probabilistic structure function_ \(\mathrm{PS}(\mathbf{x},v):=\mathbb{P}(\mathrm{S}(Y_{\mathbf{x}},v)=1)\). With this notation we can write \[\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})=\sum_{v\in N}\mathrm{PS}(\mathbf{x},v)\,\mathrm{d}(v).\]

Let \(v\) be a node with children \(v_{1},v_{2}\), and let \(\mathbf{x}\in\mathcal{A}\). Since \(T\) is treelike, \(v_{1}\) and \(v_{2}\) do not have shared BASs. Since the truth values of the BASs in \(Y_{\mathbf{x}}\) are independent of each other, this means that the random variables \(\mathrm{S}(Y_{\mathbf{x}},v_{1})\) and \(\mathrm{S}(Y_{\mathbf{x}},v_{2})\) are independent, and so we find \[\mathrm{PS}(\mathbf{x},\mathrm{OR}(v_{1},v_{2}))=\mathrm{PS}(\mathbf{x},v_{1})+\mathrm{PS}(\mathbf{x},v_{2})-\mathrm{PS}(\mathbf{x},v_{1})\,\mathrm{PS}(\mathbf{x},v_{2}),\tag{8}\] \[\mathrm{PS}(\mathbf{x},\mathrm{AND}(v_{1},v_{2}))=\mathrm{PS}(\mathbf{x},v_{1})\,\mathrm{PS}(\mathbf{x},v_{2}).\tag{9}\]

On the other hand, we can express \(\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})\) as \[\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x})=\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x}_{1})+\hat{\mathrm{d}}_{\mathrm{E}}(\mathbf{x}_{2})+\mathrm{PS}(\mathbf{x},v)\,\mathrm{d}(v).\tag{10}\]

Combining this with (8) and (9), we can calculate the attributes \(\hat{\mathrm{c}}\), \(\hat{\mathrm{d}}_{\mathrm{E}}\), \(\mathrm{PS}\) of attacks on \(v\) from their constituent attacks on \(v_{1}\) and \(v_{2}\). From here, we continue akin to Section VI.
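For treelike ATs, (8)-(10) make \(\mathrm{PS}\) and \(\hat{\mathrm{d}}_{\mathrm{E}}\) computable in a single traversal, avoiding the exponential sum of Definition 6. A minimal sketch (our own encoding, with the same example values as before):

```python
# Recursive PS(x, v) via independence, (8)-(9), and expected damage via
# d_E(x) = sum_v PS(x, v) * d(v); valid for treelike ATs only.
NODES = {"ca": ("BAS", []), "pb": ("BAS", []), "fd": ("BAS", []),
         "dr": ("AND", ["pb", "fd"]), "ps": ("OR", ["ca", "dr"])}
DAMAGE = {"ca": 0, "pb": 0, "fd": 10, "dr": 100, "ps": 200}
PROB = {"ca": 0.2, "pb": 0.4, "fd": 0.9}

def PS(x, v):
    typ, ch = NODES[v]
    if typ == "BAS":
        return PROB[v] if x[v] else 0.0
    p1, p2 = PS(x, ch[0]), PS(x, ch[1])
    return p1 * p2 if typ == "AND" else p1 + p2 - p1 * p2  # (9) resp. (8)

def expected_damage(x):
    return sum(PS(x, v) * DAMAGE[v] for v in NODES)

print(expected_damage({"ca": 0, "pb": 1, "fd": 1}))  # ~117.0, matching Example 9
```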
More precisely, we consider the _probabilistic attribute triple domain_, which is the poset \((\mathtt{PTrip},\sqsubseteq)\) given by \(\mathtt{PTrip}=\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\times[0,1]\) and \((c,d,p)\sqsubseteq(c^{\prime},d^{\prime},p^{\prime})\) if and only if \(c\leq c^{\prime}\), \(d\geq d^{\prime}\) and \(p\geq p^{\prime}\). For every node \(v\) we define a set \(\mathcal{C}^{\mathrm{P}}_{U}(v)\subseteq\mathtt{PTrip}\) of attribute triples. Just as in the deterministic case, we add the requirement \(p\geq p^{\prime}\) in determining feasibility because a greater activation probability of a node may lead to more damage higher up in the AT. As in Section VI, we define a map \(\min_{U}\colon\mathcal{P}(\mathtt{PTrip})\to\mathcal{P}(\mathtt{PTrip})\) by \[\min_{U}(X)=\underline{\min}_{\sqsubseteq}\left\{\left(\begin{smallmatrix}c\\ d\\ p\end{smallmatrix}\right)\in X:c\leq U\right\}.\] Define \(\star\colon[0,1]^{2}\to[0,1]\) by \(p\star p^{\prime}=p+p^{\prime}-pp^{\prime}\). Then we again assume that \(T\) is binary, and we define \(\mathcal{C}^{\mathrm{P}}_{U}(v)\) recursively by \[\mathcal{C}^{\mathrm{P}}_{U}(v)=\begin{cases}\left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}\mathrm{c}(v)\\ \mathrm{p}(v)\,\mathrm{d}(v)\\ \mathrm{p}(v)\end{smallmatrix}\right)\right\},&\text{if }\gamma(v)=\mathrm{BAS}\text{ and }\mathrm{c}(v)\leq U,\\ \left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right)\right\},&\text{if }\gamma(v)=\mathrm{BAS}\text{ and }\mathrm{c}(v)>U,\end{cases}\tag{11}\] \[\mathcal{C}^{\mathrm{P}}_{U}(\mathrm{AND}(v_{1},v_{2}))=\min_{U}\left\{\left(\begin{smallmatrix}c_{1}+c_{2}\\ d_{1}+d_{2}+p_{1}p_{2}\,\mathrm{d}(v)\\ p_{1}p_{2}\end{smallmatrix}\right)\in\mathtt{PTrip}\,\middle|\,\left(\begin{smallmatrix}c_{i}\\ d_{i}\\ p_{i}\end{smallmatrix}\right)\in\mathcal{C}^{\mathrm{P}}_{U}(v_{i})\right\},\tag{12}\] \[\mathcal{C}^{\mathrm{P}}_{U}(\mathrm{OR}(v_{1},v_{2}))=\min_{U}\left\{\left(\begin{smallmatrix}c_{1}+c_{2}\\ d_{1}+d_{2}+(p_{1}\star p_{2})\,\mathrm{d}(v)\\ p_{1}\star p_{2}\end{smallmatrix}\right)\in\mathtt{PTrip}\,\middle|\,\left(\begin{smallmatrix}c_{i}\\ d_{i}\\ p_{i}\end{smallmatrix}\right)\in\mathcal{C}^{\mathrm{P}}_{U}(v_{i})\right\}.\tag{13}\]

Then similar to the results in Section VI one can prove:

**Theorem 8**.: _The solution to EDgC is given by \(\max\{d\in\mathbb{R}_{\geq 0}\mid\left(\begin{smallmatrix}c\\ d\\ p\end{smallmatrix}\right)\in\mathcal{C}^{\mathrm{P}}_{U}(\mathrm{R}_{T})\text{ for some }c,p\}\)._

**Theorem 9**.: _The solution to CEDPF is given by \(\underline{\min}\,\pi(\mathcal{C}^{\mathrm{P}}_{\infty}(\mathrm{R}_{T}))\), where \(\pi\colon\mathtt{PTrip}\to\mathbb{R}^{2}_{\geq 0}\) is the projection map onto the first two coefficients._

In the worst case, the complexity of this approach will be the same as in Section VI; the Pareto frontier can still be of exponential size. Typically, however, \(\mathcal{C}^{\mathrm{P}}_{U}(v)\) will be larger than \(\mathcal{C}^{\mathrm{D}}_{U}(v)\); in the deterministic model, it is often nonoptimal to add BASs with no damage but with extra costs to an attack, when that attack already activates their parent nodes. However, in the probabilistic model, attempting extra BASs that are not needed in the deterministic model typically leads to a higher probability of activating the parent nodes, giving another way of increasing the cost of an attack to increase its expected damage.

**Example 10**.: _Consider the AT with \(w=\mathrm{R}_{T}=\mathrm{OR}(v_{1},v_{2})\), with \(\gamma(v_{i})=\mathrm{BAS}\), \(\mathrm{c}(v_{i})=1\), \(\mathrm{d}(v_{i})=0\), \(\mathrm{p}(v_{i})=0.5\) for \(i=1,2\), and \(\mathrm{d}(w)=1\)._
For \(U\geq 2\) the incomplete Pareto fronts \(\mathcal{C}^{\mathrm{D}}_{U}(w)\) and \(\mathcal{C}^{\mathrm{P}}_{U}(w)\) are given in the table below:_

\begin{tabular}{c|c} \(\mathcal{C}^{\mathrm{D}}_{U}(w)\) & \(\left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 1\\ 1\end{smallmatrix}\right)\right\}\) \\ \(\mathcal{C}^{\mathrm{P}}_{U}(w)\) & \(\left\{\left(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix}\right),\left(\begin{smallmatrix}1\\ 0.5\\ 0.5\end{smallmatrix}\right),\left(\begin{smallmatrix}2\\ 0.75\\ 0.75\end{smallmatrix}\right)\right\}\) \\ \end{tabular}

In the deterministic case one \(v_{i}\) suffices to reach \(w\), and activating the other comes with extra costs without benefit, which is infeasible. In the probabilistic case attempting both \(v_{i}\) instead comes at the same extra cost, but it increases the expected damage because it increases the probability of \(w\) being reached.

For DAG-like ATs in the probabilistic setting one cannot transpose our BILP approach of Section VII, because the associated equations become nonlinear. For instance, if we introduce a vector \(\vec{y}\in[0,1]^{N}\) where \(y_{v}\) represents \(\mathrm{PS}(\mathbf{x},v)\), then for \(v=\mathrm{AND}(v_{1},v_{2})\) we get a constraint \(y_{v}=y_{v_{1}}\cdot y_{v_{2}}\), which is nonlinear. In Section VII, this issue was circumvented because this equation can be linearized if one knows \(y_{v}\in\{0,1\}\), but in general this is not possible. Therefore, we leave CEDPF, CgED and EDgC for DAG-like ATs as an open problem.

## X Experiments

We tested the validity of our methods by executing them on two established ATs from the literature; these model the attacks on private information of valuable assets in a wireless sensor network [22] and on a data server on a network behind a firewall [23]. We also evaluate computation time on a suite of randomly generated ATs. As discussed in Section II, existing approaches cannot be applied to solve the Cg(E)D, (E)DgC and C(E)DPF problems; instead, we compare computation time to an enumerative method that goes through all attacks to find the Pareto-optimal ones. The methods are implemented in Matlab and executed on a PC with an Intel Core i7-10750HQ 2.8GHz processor and 16GB memory. The source code can be found at [24]. The BILP problems are solved by translating them into single-objective problems via the methods of [18] in the YALMIP environment [21], which passes them to the Gurobi solver [19], a state-of-the-art optimizer that can handle ILP problems. We find that our methods compute C(E)DPF considerably faster than the naive method, and that the resulting Pareto front provides valuable insight into the weak points of the system.

### _IoT sensor network for wildlife observation_

The first AT [22] is treelike (Fig. 4). It shows attacks on a wireless IoT sensor network that have the goal of obtaining the location information of valuable assets; in this case, giant pandas in a reservation in China [22]. The costs of BASs are given in [22] as unitless values 1-5. Detection probability is also given as a value 1-5; we take this as the BAS's success probability by converting it to a value 0.1-0.9. The work [22] does not contain damage values; instead, we estimate these from the economic value of giant pandas and the average reservation size [37]. The top event (the location information of one giant panda) only does minor damage compared to some of the internal nodes; e.g., if the base station is compromised, all pandas' location information is leaked.

On this AT, we first disregard probability and calculate the cost-damage Pareto front bottom-up via Theorem 4. The resulting Pareto front is shown in Fig. 6(a), and the corresponding Pareto-optimal attacks are listed as subsets of \(B\) (where \(b_{i}\) is the BAS numbered \(i\) in Fig. 4). As we can see, only a few of the \(2^{22}\) possible attacks are Pareto optimal.
Furthermore, every optimal attack contains at least one of the minimal attacks \(\{b_{18}\},\{b_{19},b_{20}\}\) and \(\{b_{21},b_{22}\}\), and many contain two of them. These three minimal attacks do a lot of damage at relatively small cost; indeed, after these the curve tapers off, and extra cost beyond this has less damage impact. Thus, these attacks require the most defense, and security improvements should focus on location information leakage by internal sources (\(b_{18}\)) and base station compromise by either physical theft (\(b_{19},b_{20}\)) or code theft (\(b_{21},b_{22}\)). After defenses are put in place, a new cost-damage analysis is needed to see whether attack risks have been mitigated satisfactorily.

We also calculate the cost-expected damage Pareto front via Theorem 9. It has 31 Pareto-optimal attacks; this increase compared to the deterministic situation comes from the fact that in the probabilistic case it is beneficial to activate multiple children of an OR-gate, as in Example 10. Again the attack \(\{b_{18}\}\) is Pareto-optimal at \((3,18)\); however, \(\{b_{19},b_{20}\}\) and \(\{b_{21},b_{22}\}\) have expected damage \(10.5\) and \(4.5\), respectively, and at cost \(4\) are no longer Pareto-optimal. Instead, the next Pareto-optimal attack is \(\{b_{18},b_{19},b_{20}\}\), which targets two valuable low-level nodes. In this probabilistic setting, we see that internal location information leakage (\(b_{18}\)) is part of every Pareto-optimal attack, which suggests this is the most important attack to defend against.

Fig. 4: Attack tree for privacy attacks on a giant panda preservation IoT monitoring system [22]. Nonzero damage values (in million USD) are in **bold**; BASs have cost values (unitless) and probabilities inscribed.

Fig. 5: Attack tree for a data server on a network [23]. Nonzero damage values (unitless) are in **bold**; BASs have cost (in seconds) inscribed.

### _Data server on a network_

The second AT we consider represents the attack on a data server through a firewall using known exploits [23]. Since it is DAG-like, we only consider the deterministic case. The damage values are from [23] and represent unitless composites aggregating lost revenue, non-productive downtime, damage recovery, public embarrassment, and legal penalties. The cost is measured in time spent by the attacker, and the values are taken from [38], where the time taken by similar attacks is modeled via exponential distributions; we take the expected value as each node's duration. The rates in [38] are unitless, so we assume they are in units of \(\frac{1}{100\,\mathrm{s}}\); this does not affect the Pareto front except for scaling. We have slightly changed the AT compared to [23], as the presentation there focused on vulnerabilities rather than attacks. Note that some nodes, such as UserAccessToTerminal, are superfluous if one only cares about activating the top node, since they require UserAccessToSMPServer, but they do play a role in cost-damage analysis since they carry damage values.

The results are depicted in Fig. 6(c). There are 5 nonzero Pareto-optimal attacks. Furthermore, every Pareto-optimal attack contains the previous one. This implies that FTP buffer overflow attacks on the FTP server (\(b_{6},b_{8}\)) are the most important BASs to defend against, followed by \(b_{11}\) and \(b_{12}\), etc. Note that of these Pareto-optimal attacks only \(A_{2}\) would have been found by a minimal attack analysis.
### _Computation time: Case studies_

We also measure the computation time of both our bottom-up and BILP methods for our analyses where applicable, and compare it to an enumerative approach in which we calculate the cost and damage for each possible attack, and keep only the Pareto-optimal ones. For Fig. 4, this amounts to \(2^{22}\approx 4\cdot 10^{6}\) attacks. The bottom-up method is about \(10\times\) as fast as BILP, and both outperform the enumerative method by an enormous margin, especially for the larger AT of Fig. 4.

To check the robustness of our timing results, we also evaluate our methods on the same ATs, but with random \(\mathrm{c},\mathrm{d},\mathrm{p}\) values on each node (\(\mathrm{c}(v)\in\{1,\ldots,10\}\), \(\mathrm{d}(v)\in\{0,\ldots,10\}\), \(\mathrm{p}(v)\in\{0.1,0.2,\ldots,1.0\}\)). The average computation times and standard deviations are given in Table III. For the bottom-up methods, the results conform to our earlier results, but BILP is slower; this may be because the random ATs contain considerably more nonzero values. The enumerative method is skipped because it is a lot slower than our new approaches; we compare it to our methods more comprehensively below.

Fig. 6: Pareto fronts for the example ATs, together with the corresponding attacks as subsets of \(B\). Except for \(A_{1}\) of (c), all optimal attacks reach the top node.

### _Computation time: Randomly generated ATs_

We also apply our methods to a suite of ATs, randomly generated through a method adapted from [39]. More specifically, we generate ATs by taking literature ATs (see Table IV) and combining them in one of the three following ways (see [39]):

1. We take a random BAS from the first AT and replace it with the root of the second AT, thus joining the two ATs;
2. We give the roots of the two ATs a common parent with a random type;
3. Same as the previous, but we also identify two randomly chosen BASs, one from each AT.

For each integer \(1\leq n\leq 100\), we combine ATs from Table IV via a method randomly drawn from the three above until the resulting AT satisfies \(|N|\geq n\). We do this five times for each \(n\), yielding a testing suite \(\mathcal{T}_{\mathrm{DAG}}\) of 500 DAG-like ATs, with random \(\mathrm{c},\mathrm{d},\mathrm{p}\) as above. To test our bottom-up methods, we also create a suite \(\mathcal{T}_{\mathrm{tree}}\) of treelike ATs, using the first two combining methods above and only the treelike ATs from Table IV. We evaluate computation times and average the results over groups of ATs grouped by \(\lfloor|N|/10\rfloor\); see Fig. 7. We only evaluated the enumerative method for the first 3 groups. Again, BU is faster than BILP, and both are considerably faster than the enumerative approach. For large ATs probabilistic BU is slower than deterministic BU, which is not yet seen in the case study (\(|N|=38\)). This is probably because not only are there exponentially many attacks to consider, but each attack also requires evaluating exponentially many actualized attacks to calculate its expected damage; see Example 8.

## XI Conclusion

This paper introduced two novel methods to solve cost-damage problems for attack trees, both by optimizing damage (resp. cost) under a cost (resp. damage) constraint, and by calculating the cost-damage Pareto front. For treelike ATs, this is done via bottom-up methods, both in the deterministic and the probabilistic case. For DAG-like ATs in the deterministic case, we introduce a method based on integer linear programming. There are multiple avenues for further research.
An obvious one is the probabilistic case for DAG-like ATs, which is not discussed in this paper. One approach would be to use a bottom-up approach, but in a polynomial ring with formal variables for nodes that occur multiple times, rather than in the real numbers. In that way, one can keep track of which nodes occur twice, and tweak addition to prevent double counting. Another extension is to compare the formal, provably optimal approach presented in this paper with a genetic algorithm approach to multiobjective optimization that approximates the Pareto front [32]. From experiments it could be established to what extent the performance gain (if any) from using genetic algorithms comes at an accuracy cost. Finally, the cost and damage values may not be precisely known, but may carry some uncertainty. A more elaborate analysis can incorporate this uncertainty, for example using fuzzy numbers, to obtain a robust version of the cost-damage Pareto front.

\begin{table} \begin{tabular}{l l l|l l l} Source & \(|N|\) & treelike & Source & \(|N|\) & treelike \\ \hline [11] Fig. 1 & 12 & no & [40] Fig. 3 & 8 & yes \\ [11] Fig. 8 & 20 & no & [40] Fig. 5 & 21 & yes \\ [11] Fig. 9 & 12 & no & [40] Fig. 7 & 25 & yes \\ [8] Fig. 1 & 16 & no & [41] Fig. 2 & 20 & yes \\ & & & [17] Fig. 1 & 15 & yes \\ \end{tabular} \end{table} TABLE IV: ATs from the literature used as building blocks. The trees from [41] and [17] are attack-defense trees; only the root component of the attack part was used for these trees.

Fig. 7: Computation time on randomly generated ATs. Means over subsets grouped by \(\lfloor|N|/10\rfloor\).

\begin{table} \begin{tabular}{l|l l l|l l l} AT & \multicolumn{3}{c|}{True \(\mathrm{c},\mathrm{d},\mathrm{p}\)} & \multicolumn{3}{c}{Random \(\mathrm{c},\mathrm{d},\mathrm{p}\)} \\ & time (BU) & time (BILP) & time (enumerative) & time (BU) & time (BILP) & time (enumerative) \\ \hline Fig. 4 deterministic & 0.044s & 0.438s & 34h & 0.037s\(\pm\)0.004s & 3.144s\(\pm\)0.526s & — \\ Fig. 4 probabilistic & 0.047s & n/a & 49h & 0.051s\(\pm\)0.012s & n/a & — \\ Fig. 5 deterministic & n/a & 0.380s & 79.53s & n/a & 1.558s\(\pm\)0.252s & 84.19s\(\pm\)4.79s \\ \end{tabular} \end{table} TABLE III: Computation time for C(E)DPF for the given ATs using bottom-up methods (Theorems 4 & 9), BILP (Theorem 6) and enumerative methods, for their given \(\mathrm{c},\mathrm{d},\mathrm{p}\) values, and averages and standard deviations over \(100\) random \(\mathrm{c},\mathrm{d},\mathrm{p}\) values.
2306.13809
Demonstrating the Merits of Integrating Multipath Signals into 5G LoS-Based Positioning Systems for Navigation in Challenging Environments
Constrained environments, such as indoor and urban settings, present a significant challenge for accurate moving object positioning due to the diminished line-of-sight (LoS) communication with the wireless anchor used for positioning. The 5th generation new radio (5G NR) millimeter wave (mmWave) spectrum promises high multipath resolvability in the time and angle domain, enabling the utilization of multipath signals for such problems rather than mitigating their effects. This paper investigates the benefits of integrating multipath signals into 5G LoS-based positioning systems with onboard motion sensors (OBMS). We provide a comprehensive analysis of the positioning system's performance in various conditions of erroneous 5G measurements and outage scenarios, which offers insights into the system's behavior in challenging environments. To validate our approach, we conducted a road test in downtown Toronto, utilizing actual OBMS measurements gathered from sensors installed in the test vehicle. The results indicate that the utilization of multipath signals for wireless positioning in multipath-rich environments (e.g., urban and indoor) can bridge 5G LoS signal outages, thus enhancing the reliability and accuracy of the positioning solution. The redundant measurements obtained from the multipath signals can enhance the system's robustness, particularly when low-cost 5G receivers with a limited resolution for angle or range measurements are present. This holds true even when only considering the utilization of single-bounce reflections (SBRs).
Qamar Bader, Sharief Saleh, Mohamed Elhabiby, Aboelmagd Noureldin
2023-06-23T23:02:49Z
http://arxiv.org/abs/2306.13809v1
Demonstrating the Merits of Integrating Multipath Signals into 5G LoS-Based Positioning Systems for Navigation in Challenging Environments

###### Abstract

Constrained environments, such as indoor and urban settings, present a significant challenge for accurate moving object positioning due to the diminished line-of-sight (LoS) communication with the wireless anchor used for positioning. The 5th generation new radio (5G NR) millimeter wave (mmWave) spectrum promises high multipath resolvability in the time and angle domain, enabling the utilization of multipath signals for such problems rather than mitigating their effects. This paper investigates the benefits of integrating multipath signals into 5G LoS-based positioning systems with onboard motion sensors (OBMS). We provide a comprehensive analysis of the positioning system's performance in various conditions of erroneous 5G measurements and outage scenarios, which offers insights into the system's behavior in challenging environments. To validate our approach, we conducted a road test in downtown Toronto, utilizing actual OBMS measurements gathered from sensors installed in the test vehicle. The results indicate that the utilization of multipath signals for wireless positioning in multipath-rich environments (e.g., urban and indoor) can bridge 5G LoS signal outages, thus enhancing the reliability and accuracy of the positioning solution. The redundant measurements obtained from the multipath signals can enhance the system's robustness, particularly when low-cost 5G receivers with a limited resolution for angle or range measurements are present. This holds true even when only considering the utilization of single-bounce reflections (SBRs).

5G NR; Autonomous Vehicles (AVs); Kalman Filter (KF); mmWave; Non-line-of-sight (NLoS); Positioning.

## I Introduction

Accurate and reliable positioning of moving objects in indoor environments is a critical challenge for various applications, such as asset tracking, indoor navigation, and emergency response. Over the past decade, several indoor positioning systems have been proposed and implemented, using various technologies such as Wi-Fi [1], Bluetooth [2], and Ultra-Wideband (UWB) [3]. However, each technology has its limitations in terms of accuracy, reliability, scalability, and cost [4]. Recently, the emergence of 5G cellular networks has provided a new opportunity for indoor positioning, thanks to the support of advanced features such as directional antennas, beamforming, and multi-antenna arrays [5]. In addition, 5G networks have higher multipath resolvability, enabling the utilization of multipath signals that were previously difficult to extract in such environments. This makes 5G a promising candidate for indoor positioning systems, as it can provide more accurate and robust positioning solutions. The exploitation of multipath signals in indoor positioning systems leads to a decrease in 5G signal outages, thus enhancing the reliability and accuracy of the positioning solution. Furthermore, the additional measurements provided by the multipath signals can improve the robustness of the system, particularly in the presence of low-cost 5G receivers with a limited resolution for angle or range measurements. The objective of this work is to examine the advantages of incorporating multipath signals into 5G-based integrated positioning systems for indoor environments.
The integrated positioning system being analyzed in this paper is based on our previous work [6], which proposed the integration of 5G mmWave-based positioning solutions from both LoS and NLoS measurements, along with onboard motion sensors such as accelerometers, gyroscopes, and an odometer. To assess the system's robustness, our investigation will include analyzing the system under various conditions of erroneous 5G measurements. Additionally, we will conduct outage analysis by introducing artificial 5G LoS outages of varying lengths and object dynamics. This study aims to demonstrate the benefits of incorporating multipath signals in such multipath-rich environments, and how they can improve the overall robustness and reliability of the integrated positioning system. Although the measurements used were acquired from a dense outdoor urban environment, our analysis is equally important for indoor environments, as both are filled with obstacles and rich in multipath. Our contributions can be summarized in the following aspects:

* Conducting a comprehensive analysis of the positioning system's performance under various conditions of erroneous 5G measurements and outage scenarios, which provides insights into the system's behavior in challenging environments.
* For validation, a road test was conducted in downtown Toronto (Ontario, Canada) using actual OBMS measurements collected from sensors installed in the test vehicle. The road tests were carried out in a simulation environment that accurately emulates the complex urban environment of Toronto's downtown area.
* Examining the advantages of incorporating multipath signals for positioning in multipath-rich environments.

The remaining sections of this paper are structured as follows: In Section II, we present the system overview, outlining the key components of the positioning system under analysis. Section III describes the experimental setup used in the road test. Section IV presents the framework for robustness analysis, along with the results and discussions. Finally, Section V offers our conclusions and closing remarks.

## II System Overview

The evaluated system [6] aims to provide an accurate and reliable positioning solution for mobile objects in constrained environments, leveraging a fusion of 5G cellular measurements and OBMS. The system comprises three principal modules, namely, an inertial navigation system (INS) module, a Line-of-Sight (LoS) positioning module, and a multipath positioning module. A comprehensive depiction of the overall system architecture is presented in Fig. 1. The INS module provides the object's position and velocity estimates, obtained by integrating accelerometer and gyroscope measurements, and additionally provides an estimate of the orientation of the moving object [7]. The LoS-based positioning module, on the other hand, leverages 5G LoS measurements such as Round-Trip Time (RTT) and Angle of Departure (AoD) to estimate the moving object's position and velocity [8]. The multipath positioning module leverages reflected signals from obstructions such as buildings to estimate the position utilizing channel parameters such as time of arrival (ToA), angle of arrival (AoA), and AoD [9]. Before the positioning estimation, a measurement exclusion process is performed to filter out Non-Line-of-Sight (NLoS) signals, allowing only LoS signals to be utilized by the LoS-based positioning module.
This exclusion process is based on the distinction in distance computation between the User Equipment (UE) and the Base Station (BS) through the utilization of time-based and received-signal-strength-based calculations [10]. Furthermore, when multipath signals are used for positioning, channel parameters are passed to an Out-of-Reflection-Region Identifier (OoRI) module [11], which filters out higher-order reflections by allowing only single-bounce reflections to be passed on to the multipath positioning module. The OoRI module is based on machine learning, trained on a dataset comprising 5G channel parameters, and achieved a classification accuracy of \(99.8\%\). The position computations resulting from multipath positioning undergo a second stage of validation, which is contingent upon the vehicle's motion constraints. These constraints are determined using odometer measurements and posterior estimations from the previous epoch. Finally, the position estimates from the three modules are fused using an unscented Kalman filter (UKF) [12], which provides a robust and accurate estimate of the mobile object's position, velocity, and orientation. Fig. 1: Block diagram of the integrated positioning system under analysis [6]. ## III Road Test Setup For validation purposes, a quasi-real 5G simulation configuration from Siradel was utilized. As seen in Fig. 2, Siradel's 5G Channel suite comprises LiDAR-based maps of downtown city regions like Toronto to display the building structures, vegetation, and water bodies. As using a map of the actual downtown area offers accurate information on the physical features of the environment, it follows that captured multipath signals are likely to resemble realistic signal propagation. Examples of factors that affect the spread of 5G signals include building heights, roadway widths, and material compositions. The simulation tool generates positioning measurables including received signal strength (RSS), ToA, AoA, and AoD based on the UE reference positions and virtually connected BS positions by using its ray-tracing capabilities and propagation models. To collect the UE reference solution, a real car was equipped with a high-end integrated positioning system provided by Novatel, as seen in Fig. 4. The unit features a tactical-grade IMU (KVH 1750) along with a GNSS receiver with RTK capabilities. The 3GPP's Release 16 requirements were followed; therefore, BSs were spaced roughly \(250\) m apart throughout the driving path. The required 5G measurables were then acquired by Siradel utilizing the imported BS positions and the reference solution. The carrier frequency and bandwidth of Siradel's mmWave broadcasts were \(28\) GHz and \(400\) MHz, respectively. The BSs had \(8\times 1\) ULAs while the UE had an omnidirectional antenna. ## IV Robustness Analysis Results and Discussions ### _5G-LoS Outage Analysis_ In this section, we present an analysis of the impact of introducing artificial 5G LoS outages on the 3D positioning accuracy of our proposed integrated positioning system. We evaluate the performance of the system under various outage scenarios, with varying durations, distances, speeds, and total change in azimuth (heading) angle. The objective of this analysis is to assess the system's robustness in constrained settings like indoor and urban environments, where LoS signals are subject to blockages and interruptions due to the presence of obstacles and other environmental factors.
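To make the outage analysis below concrete, the following sketch shows one plausible harness for injecting artificial LoS outages and scoring a trajectory with the two metrics reported later (RMSE, and max error as a percentage of distance traveled). The function names and array layout are our own illustrative choices, not the paper's code:

```python
import numpy as np

def apply_outage(los_meas, t, t_start, duration):
    """Mask LoS measurements (set to None) inside [t_start, t_start + duration)."""
    return [None if t_start <= ti < t_start + duration else m
            for m, ti in zip(los_meas, t)]

def outage_metrics(est, ref):
    """RMSE [m] and max error as a percentage of the distance traveled,
    for estimated and reference position arrays of shape (T, 3)."""
    err = np.linalg.norm(est - ref, axis=1)
    dist = np.sum(np.linalg.norm(np.diff(ref, axis=0), axis=1))
    return np.sqrt(np.mean(err ** 2)), 100.0 * err.max() / dist
```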
Additionally, we investigate the effect of single-bounce reflections on the system's accuracy by comparing the positioning errors with and without considering these reflections. Table I presents the characteristics of 10 introduced 5G LoS outages, while Table II depicts the root mean square error (RMSE) and the max error as a percentage of distance traveled during a 5G LoS outage. The outages' duration ranges from \(20\) seconds to \(1000\) seconds, and their distances range from \(3.4\) meters to \(6255\) meters. The average speed of the object during the outages ranges from \(0.17\) m/s to \(9.8\) m/s, while the total cumulative change in azimuth angle ranges from \(3.3\) degrees to \(29324\) degrees. \begin{table} \begin{tabular}{c c c c c} \hline **Outage \#** & **Duration** [s] & **Distance** [m] & **Avg. Speed** [m/s] & \(\Delta\) **Azimuth** [deg] \\ \hline 1 & 20 & 196 & 9.8 & 72 \\ 2 & 20 & 174 & 8.7 & 3.1 \\ 3 & 20 & 3.4 & 0.17 & 1.3 \\ 4 & 40 & 374 & 9.4 & 152 \\ 5 & 40 & 192 & 4.7 & 14.7 \\ 6 & 40 & 25.6 & 0.6 & 3.8 \\ 7 & 60 & 547 & 9.1 & 156 \\ 8 & 200 & 1137 & 5.6 & 300 \\ 9 & 400 & 2615 & 6.5 & 674 \\ 10 & 1000 & 6255 & 6.3 & 29324 \\ \hline \end{tabular} \end{table} TABLE I: Artificial 5G-LoS Outages Characteristics TABLE II: 3D Positioning Errors With and Without SBRs (RMSE and max error as a percentage of distance traveled). Overall, it can be seen that the presence of SBRs significantly reduces the positioning error in all cases. The observation that the positioning error tends to increase with the duration of the outage, with the exception of outage 6, warrants further analysis. Several factors may have contributed to this outcome. Notably, errors in the estimation of the azimuth angle could lead to inaccuracies in the determination of the object's position, which, in turn, are modulated by the object's speed; this is referred to as non-stationary error [13]. Fig. 5 provides a close-up view of the positioning solution during outage #9, both before and after incorporating SBRs. The plot demonstrates that, as a result of the drifting errors of the INS, the positioning solution without multipath assistance drifts away from the reference solution. On the other hand, the multipath-assisted measurements continue to closely follow the reference solution. Fig. 2: Downtown Toronto, ON, Google Earth (Top) vs Siradel simulation tool (Bottom). Fig. 3: Downtown Toronto Trajectory (Red), and 5G gNBs (Black circles). Fig. 4: NavINST testbed for multi-sensor positioning. Fig. 5: Close-up scenario of 5G positioning solution during LoS outage (Shaded area) with and without SBRs. ### _Effect of Noisy Range and Angle Measurements_ In this section, we introduce normally distributed errors in the angle and range measurements with varying variances to analyze the robustness of the positioning system before and after incorporating multipath signals. Fig. 6 showcases the cumulative distribution function (CDF) of the positioning accuracy under noisy range measurements of variances \(0.5\) m\({}^{2}\), \(1\) m\({}^{2}\), and \(2\) m\({}^{2}\). In general, the utilization of multipath measurements appears to result in superior positioning accuracy. Specifically, it has been observed that the positioning accuracy with multipath assistance at a variance of \(1\) m\({}^{2}\) is consistently higher than that achieved with a variance of \(0.5\) m\({}^{2}\) when relying solely on LoS signals. Figs. 7 and 8 depict the positioning solution prior to and after the integration of SBRs under the influence of noisy range measurements with a variance of \(2\) m\({}^{2}\). It is observed that the presence of noise in the measurements during the 5G LoS outage, represented by the obstruction of the BS, leads to large errors in the positioning solution. This can potentially be attributed to erroneous corrections to the INS mechanization process, compounded by a complete loss of the LoS measurements. Consequently, the computed position based on dead reckoning would be subject to substantial bias until an LoS signal becomes available. The integration with SBRs led to a notable enhancement in the positioning solution. This improvement can be attributed to the augmented redundancy of measurements thanks to the increased availability of SBRs compared to LoS communication. As a result, the solution was able to track the reference solution more accurately. This highlights the importance of incorporating redundant measurements, particularly in scenarios where the primary measurement source, such as the LoS signal, may not be available. Fig. 9, in contrast, illustrates the CDF of positioning accuracy affected by noisy angle measurements. It is worth noting that the computation of position using multipath signals was impacted by two types of noisy measurements, namely, AoA and AoD, while only AoA measurements affected the LoS signals. Despite this, the use of SBRs resulted in superior positioning accuracy. Examining the sub-\(20\) cm level of accuracy, it can be observed that utilizing multipath signals with a variance of \(0.01\) deg\({}^{2}\) produced better positioning accuracy than utilizing LoS signals with a variance of \(0.001\) deg\({}^{2}\). Fig. 10 demonstrates the effect of noisy angle measurements with a variance of \(0.05\) deg\({}^{2}\) on the positioning solution. As previously observed in Figs. 7 and 8, the integration of SBRs successfully reduced the large errors associated with noisy measurements during LoS outages. Fig. 6: Positioning CDF due to noisy range measurements. Fig. 7: Close-up scenario of the positioning solution with and without SBRs under noisy range measurements. Fig. 8: Close-up scenario of the positioning solution with and without SBRs under noisy range measurements. Fig. 9: Positioning CDF due to noisy angle measurements. ## V Conclusion In conclusion, this study has demonstrated the potential benefits of utilizing multipath signals in 5G-based indoor positioning systems. The high accuracy of 5G mmWave range and angle measurements, combined with its ability to resolve multipath signals, presents a promising opportunity for achieving accurate and reliable indoor positioning. Through our experiments, we have shown that incorporating multipath-based measurements into our positioning system can significantly reduce RMS errors during 5G signal outages, improving the robustness and accuracy of the system. Moreover, the system was also evaluated under noisy range and angle measurements with varying variances.
The results demonstrate that incorporating multipath measurements leads to improved positioning accuracy, even though the multipath-based computation is affected by two types of noisy angle measurements (AoA and AoD) instead of one, compared to the system that does not use multipath assistance. These findings suggest that incorporating multipath signals into 5G-based indoor positioning systems can enhance their performance and open up new opportunities for a wide range of applications. Further research is required to explore the full potential of multipath-based positioning systems in indoor environments utilizing higher-order reflections.
2303.10506
Neural Operators of Backstepping Controller and Observer Gain Functions for Reaction-Diffusion PDEs
Unlike ODEs, whose models involve system matrices and whose controllers involve vector or matrix gains, PDE models involve functions in those roles--functional coefficients, dependent on the spatial variables, and gain functions dependent on space as well. The designs of gains for controllers and observers for PDEs, such as PDE backstepping, are mappings of system model functions into gain functions. These infinite-dimensional nonlinear operators are given in an implicit form through PDEs, in spatial variables, which need to be solved to determine the gain function for each new functional coefficient of the PDE. The need for solving such PDEs can be eliminated by learning and approximating the said design mapping in the form of a neural operator. Learning the neural operator requires a sufficient number of prior solutions for the design PDEs, offline, as well as the training of the operator. In recent work, we developed the neural operators for PDE backstepping designs for first-order hyperbolic PDEs. Here we extend this framework to the more complex class of parabolic PDEs. The key theoretical question is whether the controllers are still stabilizing, and whether the observers are still convergent, if they employ the approximate functional gains generated by the neural operator. We provide affirmative answers to these questions, namely, we prove stability in closed loop under gains produced by neural operators. We illustrate the theoretical results with numerical tests and publish our code on github. The neural operators are three orders of magnitude faster in generating gain functions than PDE solvers for such gain functions. This opens up the opportunity for the use of this neural operator methodology in adaptive control and in gain scheduling control for nonlinear PDEs.
Miroslav Krstic, Luke Bhan, Yuanyuan Shi
2023-03-18T21:55:44Z
http://arxiv.org/abs/2303.10506v1
# Neural Operators of Backstepping Controller and Observer Gain Functions for Reaction-Diffusion PDEs ###### Abstract Unlike ODEs, whose models involve system matrices and whose controllers involve vector or matrix gains, PDE models involve functions in those roles--functional coefficients, dependent on the spatial variables, and gain functions dependent on space as well. The designs of gains for controllers and observers for PDEs, such as PDE backstepping, are mappings of system model functions into gain functions. These infinite-dimensional nonlinear operators are given in an implicit form through PDEs, in spatial variables, which need to be solved to determine the gain function for each new functional coefficient of the PDE. The need for solving such PDEs can be eliminated by learning and approximating the said design mapping in the form of a neural operator. Learning the neural operator requires a sufficient number of prior solutions for the design PDEs, offline, as well as the training of the operator. In recent work, we developed the neural operators for PDE backstepping designs for first-order hyperbolic PDEs. Here we extend this framework to the more complex class of parabolic PDEs. The key theoretical question is whether the controllers are still stabilizing, and whether the observers are still convergent, if they employ the approximate functional gains generated by the neural operator. We provide affirmative answers to these questions, namely, we prove stability in closed loop under gains produced by neural operators. We illustrate the theoretical results with numerical tests and publish our code on github. The neural operators are three orders of magnitude faster in generating gain functions than PDE solvers for such gain functions. This opens up the opportunity for the use of this neural operator methodology in adaptive control and in gain scheduling control for nonlinear PDEs. ## 1 Introduction ML as a tool for learning control methodologies. In the recent manuscript [10] we introduced a learning-based control _framework_ which devises a new role for machine learning (ML): learn an entire control design _methodology_, in the form of a mapping from the plant model to the controller gains, or even to the control inputs. This framework is neither model-free nor methodology-agnostic. On the contrary, it is method-specific. For a particular method (LQR, pole placement, MPC, backstepping, etc.), after a large number of training calculations of the controller gains on a sample set of plant models, an ML-approximated mapping of the method is learned. Once learned as a plant-to-gains mapping, the control design for a _new_/next plant (outside of the training set) does not require another solution of the design equations (Riccati, Bezout, etc.) but merely entails an "evaluation of the learnt map" to obtain the control gains. One would argue that no dire need exists for LQR or other linear finite-dimensional designs for such a learning-based capability, where an entire methodology is "encoded" into a neural mapping. The cost of the solution of a design problem (say, a Riccati equation, even of a high dimension) is not prohibitive, even online with current technology. Indeed, we are not motivated by design challenges in finite dimensions but by those for PDEs. In PDE control, the design problems are not matrix equations. They are PDEs themselves (or harder problems, such as operator Riccati equations).
Since the infinite-dimensional state of a PDE is a function of _spatial_ variables, the controller gain is also a function of spatial variables. Finding the gain typically entails solving a PDE in space (but not in time). It is therefore of interest, in PDE control, to have a capability where producing the control gain functions is just an evaluation of a neural mapping that has already learned the design methodology on a large set of previously offline-solved control design problems for a sample set of PDEs in a certain class. Neural operators for approximating mappings of functions into functions. Just as the control designs for linear finite-dimensional systems are matrix-to-matrix mappings (\(A,B\) into gain \(K\)), control designs for PDEs are function-to-function mappings (spatially-dependent coefficients into gains). Our inspiration for encoding PDE control methodologies into machine learning comes from recent advances in the mathematics of machine learning. Motivated by the tasks of finding solution/flow maps (from the initial conditions into future states) for physical PDEs (such as the difficult Navier-Stokes PDEs), research teams led by George Karniadakis [55, 56], Anima Anandkumar and Andrew Stuart [53, 54], and George Pappas and Manfred Morari [41, 73], have developed neural approximation methods, termed "neural operators," with provable properties for nonlinear operators acting on functions and producing functions. These approaches are not simply discretizing PDEs and finding solution maps to the resulting large ODE solution problems. In the language of distributed parameter systems, they are not "early lumping" methods of learning solution maps. They approximate (non-discretized) function-to-function nonlinear operators and provide guarantees of the accuracy of approximation in terms of the required sizes of the training sets and neural networks. The value of such a capability in PDE control cannot be overstated. With a theoretically rigorous and numerically powerful capability like this, specific PDE control methods, for specific classes of PDEs, can be learned once and encoded as neural operators, ready to produce the control gain functions for any new functional coefficients of the same classes of PDEs. In a theoretically rigorous field like PDE control, a computational capability with rigorous approximation guarantees has a value primarily if it allows the retention of the theoretical properties proven for the "exact design". This is indeed what we show in the paper [10] in which we introduce the framework: approximate neural operator representations of a particular PDE control method--PDE backstepping--preserve its stability guarantees in spite of the control gains not being generated by solving the design PDEs but by the gains being generated from the learned "neural model" of PDE backstepping. Extension of PDE backstepping neural operators from hyperbolic [10] to parabolic PDEs. Hyperbolic PDEs involve only the first derivatives in space and time. This makes them (all else being equal) the "simplest" PDE class for control. Delay systems combine ODEs with delays--the simplest form of a PDE. While the simplest among PDEs, hyperbolic PDEs are not necessarily easy to control. They can be unstable, with many unstable eigenvalues, and only one input acting at the boundary of a domain.
This mix of simplicity within the PDE family, with the non-triviality for control, makes hyperbolic PDEs the ideal entry point for any new study in PDE control, including the introduction of a new framework for learning-based control in our [10]. The framework is depicted in Figure 1. The _learning_ and _implementation_ portions of the framework in Figure 1 are depicted in Figure 2. Control design problems for hyperbolic PDEs are hyperbolic PDEs themselves, namely, equations with only first derivatives in multiple spatial variables. Parabolic PDEs, with their first derivative in time but second derivatives in space, are the natural next challenge for learning the PDE backstepping methodology using neural operators. This is what we undertake in this paper. The chief difficulty with learning backstepping kernel operators for parabolic PDEs is that the kernels are governed by second-order PDEs, which raises the difficulty for solving such PDEs and for proving the sufficient smoothness of their solutions so that the neural operator (NO) approximations have a guarantee of sufficient accuracy for preserving stabilization. At the intuitive level, with more derivatives, and more boundary conditions, the nonlinear operator from the reaction function to the gain in parabolic PDEs is a more complex operator than the nonlinear operator from the recirculation function to the gain in hyperbolic PDEs. There are hyperbolic cases where the kernel mapping can be written (though not solved) using the Laplace transform. That is never the case with parabolic PDEs; the design problem is never of spatial dimension lower than two. Figure 1: An algorithmic representation of our design paradigm of employing neural operators in boundary control of PDEs. Three major step clusters are performed: (1) derivation of the integral equations for the backstepping kernels, performed only once; (2) learning of the mapping from the plant parameter functions into the backstepping kernel functions, also performed only once; and (3) implementation of the controller for specific plant parameters. The task in the top box has been completed in [46, 77]. In this paper, the task in the middle box is introduced and stability guarantees for the task in the bottom box are provided. We consider parabolic PDE systems of the form \[u_{t}(x,t) =u_{xx}(x,t)+\lambda(x)u(x,t),\qquad x\in[0,1) \tag{1}\] \[u(0,t) =0\] (2) \[u(1,t) =U(t). \tag{3}\] Our goal is the design of a PDE backstepping boundary control \[U(t)=\int_{0}^{1}k(1,y)u(y,t)dy, \tag{4}\] as well as an observer with the (collocated) boundary sensing of \(u_{x}(1,t)\). By "design" we mean to find the gain function \(k\) in the control law (4), namely, to find the output \(k\) of the function-to-function mapping \(\mathcal{K}:\lambda\mapsto k\), depicted in Figure 3. This paper's objective is to learn the design operator \(\mathcal{K}\) with a neural operator approximation \(\hat{\mathcal{K}}\) (top of Figure 2) and to employ the resulting approximate gain \(\hat{k}\) in the control law (bottom of Figure 2). Since parabolic PDEs in one dimension have two boundary conditions, and also boundary actuation and boundary sensing can be employed at either boundary, a total of sixteen combinations of boundary actuation, boundary sensing, and boundary condition on the unactuated boundary are possible. Taking the symmetry between the boundaries \(x=0\) and \(x=1\) into account, the total number of truly distinct combinations is eight. They are listed in Table 1.
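Before turning to those combinations, a minimal sketch of the closed loop formed by the plant (1)-(3) and the feedback (4) may help fix ideas. It uses an explicit finite-difference discretization of our own choosing (it is not the paper's simulation code), with the gain \(k(1,\cdot)\) supplied as a grid function:

```python
import numpy as np

def simulate_closed_loop(lam, gain, T=1.0, N=100, u0=None):
    """Explicit Euler / centered-difference simulation of
    u_t = u_xx + lambda(x) u, u(0,t) = 0, u(1,t) = U(t) = int_0^1 gain(y) u(y,t) dy.
    lam and gain sample lambda(x) and k(1, x) on the grid x_j = j / N."""
    h = 1.0 / N
    dt = 0.4 * h * h  # explicit scheme requires dt <= h^2 / 2 for stability
    x = np.linspace(0.0, 1.0, N + 1)
    u = 10.0 * np.ones(N + 1) if u0 is None else u0.copy()
    for _ in range(int(T / dt)):
        u[0] = 0.0                     # Dirichlet boundary condition (2)
        u[-1] = np.trapz(gain * u, x)  # boundary feedback (3)-(4)
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / (h * h)
        u[1:-1] += dt * (lap + lam[1:-1] * u[1:-1])
    return u
```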
We are able to solve all eight problems but, in this paper, pursue the simplest of the eight combinations for pedagogical reasons. The case with Dirichlet boundary conditions, \(u(0,t)=0,u(1,t)=U(t)\) is, notationally, the simplest case. It allows the reader to most quickly grasp the utility and the technical steps in employing neural operators in the control of parabolic PDEs. All the results in the paper--a full-state controller, an observer, and an output-feedback law (as well as the seven additional combinations not pursued in the paper)--can be extended to the more general class of parabolic PDE systems, \[v_{t}(x,t) =\epsilon(x)v_{xx}(x,t)+b(x)v_{x}(x,t)\] \[\quad+\lambda(x)v(x,t)+g(x)v(0,t)\] \[\quad+\int_{0}^{x}f(x,y)v(y,t)dy,\quad x\in(0,L). \tag{5}\] The domain \([0,L]\) is easily normalized to \([0,1]\), the diffusion \(\epsilon(x)\) is easily normalized to unity, and the advection term \(b(x)v_{x}(x,t)\) is easily eliminated with a scaling transformation. We forego pursuing this myriad of alternatives for pedagogical reasons--they are overwhelming but standard. Likewise, we forgo the treatment of the terms \(g(x)v(0,t)+\int_{0}^{x}f(x,y)v(y,t)dy\) because it adds complications but it is standard as well. The parabolic PDE with unity diffusion and with spatially-varying reaction \(\lambda(x)\) is a perfect introduction to the possibilities that neural operators present for PDE backstepping control, where the computation of the gain kernel \(k(x,y)\) for each new \(\lambda(x)\) can be avoided by developing a neural operator approximation of the functional mapping (nonlinear operator) \(\mathcal{K}:\lambda\mapsto k\), which is depicted in Figure 3. \begin{table} \begin{tabular}{|c|c|c|c|} \hline actuation & opposite boundary & sensing & \\ \hline \hline \(u(1,t)=U(t)\) & \(u(0,t)=0\) & \(u_{x}(0,t)\) & anti-col \\ \hline \(u(1,t)=U(t)\) & \(u(0,t)=0\) & \(u_{x}(1,t)\) & **col** \\ \hline \(u(1,t)=U(t)\) & \(u_{x}(0,t)=0\) & \(u(0,t)\) & anti-col \\ \hline \(u(1,t)=U(t)\) & \(u_{x}(0,t)=0\) & \(u(1,t)\) & col \\ \hline \(u_{x}(1,t)=U(t)\) & \(u(0,t)=0\) & \(u_{x}(0,t)\) & anti-col \\ \hline \(u_{x}(1,t)=U(t)\) & \(u(0,t)=0\) & \(u_{x}(1,t)\) & col \\ \hline \(u_{x}(1,t)=U(t)\) & \(u_{x}(0,t)=0\) & \(u(0,t)\) & anti-col \\ \hline \(u_{x}(1,t)=U(t)\) & \(u_{x}(0,t)=0\) & \(u(1,t)\) & col \\ \hline \end{tabular} \end{table} Table 1: Eight possible combinations of boundary actuation, sensing, and boundary condition at the opposite end of \([0,1]\). We focus on the simplest combination—in the second row. Figure 2: Stages (2) and (3) of the framework in Figure 1. TOP: The process of learning the PDE backstepping design operator \(\mathcal{K}:\lambda\mapsto k\) involves many solutions of a kernel PDE \(k_{xx}-k_{yy}=\lambda k\) in the Goursat form, for different functions \(\lambda_{i}(x)\) and then training of a neural operator \(\hat{\mathcal{K}}:\lambda\mapsto\hat{k}\). BOTTOM: Feedback implementation of PDE backstepping control with gain kernel \(\hat{k}(1,x)\) generated by the DeepONet \(\hat{\mathcal{K}}\). We opt to present in the paper the results for the combination in the second row of Table 1 because this combination allows us to "kill two birds with one stone" in our exposition. For this particular actuator-sensor combination, which is collocated (and the simplest of the four collocated combinations), the same kernel is used to obtain the gain functions for both the controller and the observer.
This relieves the reader of the burden of following multiple approximations of the kernel, multiple neural operators, multiple training processes for those operators, and multiple theorems that guarantee the approximability of those multiple operators. The concept of encoding the methodologies of controller, observer, and output-feedback design into a neural operator is grasped through a single operator \(\mathcal{K}:\lambda\mapsto k\). And the duplication in the exposition is avoided. A reader who possesses the skills in calculations and the stamina can work out the remaining combinations in Table 1. PDE Backstepping. Even though PDE backstepping was first developed for parabolic systems [77], it is best to begin its study from the easier, hyperbolic case [46]. Control of hyperbolic PDEs has grown into a rich area because, in the hyperbolic case, one can stabilize a coupled system with fewer inputs than PDEs. A _pair_ of coupled hyperbolic PDEs was stabilized with a single boundary input in [18], an extension to \(n+1\) hyperbolic PDEs with a single input was introduced in [28], an extension to \(n+m\) PDEs with boundary actuation on \(m\) "homodirectional" PDEs in [35, 36], an extension to cascades with ODEs in [29], and an extension to "sandwiched" ODE-PDE-ODE systems in [94, 95]. Redesigns robust to delays are provided in [3]. PDE backstepping-based output-feedback regulation with disturbances is proposed in [26, 27]. For parabolic PDEs, backstepping for full-state feedback stabilization was developed in [77] and for observer design in [78]. A complex extension from linear to nonlinear parabolic PDEs, using infinite Volterra series, was provided in [88, 89]. Backstepping was combined with differential flatness in [61]. The first solutions to the null-controllability problem for parabolic PDEs were provided, using backstepping, in [19, 31]. Sampled-data and event-triggered versions of backstepping for parabolic PDEs appeared in [30, 38, 39, 71]. Work on cascades of parabolic PDEs with other systems has included heat-ODE cascades [5, 43], delay-parabolic cascades [44], and ODE-heat-ODE sandwich systems [93]. A backstepping design for a moving-boundary PDE-ODE Stefan system was presented in [42]. Coupled parabolic PDEs introduce special challenges and have been tackled in [4, 65, 90]. Extensions from multiple 1D parabolic PDEs to PDEs in 2D and higher dimensions, such as in the book [60] are arguably even more challenging and have been pursued for Navier-Stokes and magnetohydrodynamic systems in [87, 92] on channel domains, as well as for reaction-diffusion systems on balls of arbitrary dimensions [91]. Adaptive control designs for parabolic PDEs were introduced in [45, 79, 80], extended in [40], and extended to the hyperbolic case in [8]. For coupled hyperbolic PDEs with unknown parameters, the parabolic designs in [45, 79, 80] inspired a comprehensive collection of adaptive control designs in the book [1]. Applications of backstepping to PDE models of traffic are introduced in [96, 97]. Figure 3: The PDE backstepping design operator \(\mathcal{K}:\lambda\mapsto k\), where \(\lambda(x)\) is the spatially-varying reaction coefficient of the PDE, whereas \(k(x,y)\) is the kernel function of the backstepping transformation, producing the feedback gain function \(k(1,y)\) in the feedback law \(U(t)=\int_{0}^{1}k(1,y)u(y,t)dy\). Advances in learning-based control. What we present here is one more among many directions in learning-based control.
For the benefit of the reader from PDE control, we highlight a few results from this vast and growing literature. Stability of learning-based MPC was established in [2, 72] and followed, for nonlinear systems, by efforts on joint learning of the controller and(or) Lyapunov functions [13, 14, 15, 21, 22]. In addition, [64, 83] have explored how learning-based control affects systems with known Lyapunov functions, [23, 68, 12] studied learning of stability certificates and stable controllers from data, and [6] developed a provably stable data-driven algorithm based on system measurements and prior system knowledge. For reinforcement learning (RL) [9], the focus has been on learning the system dynamics and providing closed-loop guarantees in _finite-time_ for both linear [16, 48, 4] and nonlinear systems [7, 37, 49, 76]. For model-free RL, [66, 32, 62, 100] proved the convergence of policy optimization to the optimal controller for LTI systems, [63, 67] for LTV systems, [82] for partially observed linear systems. For a review of policy optimization (PO) methods for LQR, \(H_{\infty}\) control, risk-sensitive control, LQG, and output feedback synthesis, see [34]. For nonlinear systems, [75, 20, 17] investigated PO with stability guarantees from CLFs. In addition to PO, [84, 85, 11, 51] proved stability and convergence of actor-critic methods [85, 51] and Q-learning [84]. In CPS, learning-based control was developed for partially observable systems [57]. Learning-based control in games and for MAS is pursued in [86, 99, 38, 59, 90, 98, 99, 35]. Convergence is shown to Nash equilibria in zero-sum linear quadratic games [99], continuous games [59], Stackelberg games [33], Markov games [98, 58], and multi-agent learning over networked systems [69, 70]. We focus on learning-based control for PDE systems. In our previous work [74], we demonstrate the empirical success of using NOs for accelerating PDE backstepping observers. Our recent work [10] represents the first step towards using NOs for _provably_ bypassing gain computations and directly learning the controller with closed-loop stabilization guarantee, in hyperbolic PDE systems. Neural operators--a brief summary. Neural operators are neural network-parameterized maps for learning relationships between function spaces. They consist of three components: an encoder, an approximator, and a reconstructor [50]. The encoder is an interpolation from an infinite-dimensional function space to a finite-dimensional vector representation. The approximator aims to mimic the infinite map using a finite-dimensional representation of both the domain function space and the target function space. The reconstructor then transforms the approximation output into the infinite-dimensional target function space. The implementation of both the approximator and the reconstructor is generally coupled and can take many forms. For example, the original DeepONet [56] contains a "branch" net that represents the approximation network and a "trunk" net that builds a basis for the target function space. The outputs of the two networks are then taken in linear combination with each other to form the operator. FNO [54] utilizes the approximation network in the Fourier domain where the reconstruction is done on a basis of the trigonometric polynomials. LOCA [41] integrates the approximation network and reconstruction step with a unified attention mechanism.
NOMAD [73] extends the linear reconstructor map in DeepONet to a nonlinear map that is capable of learning on nonlinear submanifolds in function spaces. With the basic notions and notation for NOs given in Appendix A, we state next the key technical result that enables our use of NOs to learn the PDE backstepping kernel mappings. The result is quoted in its general/abstract form. It is specialized to the PDE control setting in our Theorem 4. **Theorem 1** (DeepONet universal approximation theorem [25, Theorem 2.1]).: _Let \(X\subset\mathbb{R}^{d_{x}}\) and \(Y\subset\mathbb{R}^{d_{y}}\) be compact sets of vectors \(x\in X\) and \(y\in Y\), respectively. Let \(\mathcal{U}:X\to U\subset\mathbb{R}^{d_{u}}\) and \(\mathcal{V}:Y\to V\subset\mathbb{R}^{d_{v}}\) be sets of continuous functions \(u(x)\) and \(v(y)\), respectively. Let \(\mathcal{U}\) be also compact. Assume the operator \(\mathcal{G}:\mathcal{U}\rightarrow\mathcal{V}\) is continuous. Then, for all \(\varepsilon>0\), there exist \(m^{*},p^{*}\in\mathbb{N}\) such that for each \(m\geq m^{*}\), \(p\geq p^{*}\), there exist \(\theta^{(k)},\vartheta^{(k)}\), neural networks \(f^{\mathcal{N}}(\cdot;\theta^{(k)}),g^{\mathcal{N}}(\cdot;\vartheta^{(k)}),k=1,\ldots,p\), and \(x_{j}\in X,j=1,\ldots,m\), with corresponding \(\mathbf{u}_{m}=(u(x_{1}),u(x_{2}),\cdots,u(x_{m}))^{\mathsf{T}}\), such that_ \[|\mathcal{G}(u)(y)-\mathcal{G}_{\mathbb{N}}(\mathbf{u}_{m})(y)|<\varepsilon \tag{6}\] _for all functions \(u\in\mathcal{U}\) and all values \(y\in Y\) of \(\mathcal{G}(u)\in\mathcal{V}\)._ In the sequel, we denote the DeepONet neural operator values \(\mathcal{G}_{\mathbb{N}}(\mathbf{u}_{m})(y)\) compactly as \(\mathcal{G}(u)(y)\) and the operators themselves as \(\mathcal{G}\). Paper outline and contributions. In Section 2 we recap the basic PDE backstepping approach from [77]. Recalling in Section 3 the twice continuous differentiability of the backstepping kernel function, we establish the existence of a neural operator with an arbitrary accuracy for a set of continuously differentiable reaction coefficients not exceeding a certain size in the supremum norm. Sections 4 and 5 contain our main results. In Section 4 we prove the stability of a feedback law employing a DeepONet approximation of the backstepping gain. In Section 5 we prove the convergence of a backstepping observer that employs a DeepONet approximation of the observer gain. In Section 6 we combine the DeepONet-based full-state feedback and observer, to obtain a DeepONet-based output feedback controller with an actuator-sensor pair collocated at the \(x=1\) boundary. In Section 7 we illustrate the theoretical results with numerical tests. This paper's contribution relative to the inaugural work on backstepping for parabolic PDEs [77] is in providing a methodology for capturing the backstepping design in the form of a neural operator and avoiding the need for the solution of kernel PDEs, after the neural operator is once synthesized. This capability is highly valuable in future work in the adaptive control of parabolic PDEs and gain-scheduling for semilinear parabolic PDEs. In relation to our recent work on neural operator approximated backstepping control of hyperbolic PDEs, this paper extends this methodology, including stability guarantees, to a more difficult class of PDE systems and kernel operators.
Additionally, compared to [10] where only full-state feedback is considered, in this paper, we solve problems in observer design and output-feedback control, with a convergence guarantee for the DeepONet-approximated backstepping observer. ## 2 Basic Backstepping Design for Reaction-Diffusion PDE We employ the following backstepping transformation, \[w(x,t)=u(x,t)-\int_{0}^{x}k(x,y)u(y,t)dy, \tag{7}\] to convert (1), (2), (3) into the target system \[w_{t} =w_{xx} \tag{8}\] \[w(0,t) =0\] (9) \[w(1,t) =0 \tag{10}\] with the help of feedback (4). We could as well pursue the target system \(w_{t}=w_{xx}-cw,c>0\), but we forego this design flexibility for the sake of simplicity. To convert (1), (2), (3) into (8), (9), (10), \(k\) needs to satisfy \[k_{xx}(x,y)-k_{yy}(x,y) =\lambda(y)k(x,y),\quad\forall(x,y)\in\mathcal{J} \tag{11}\] \[k(x,0) =0\] (12) \[k(x,x) =-\frac{1}{2}\int_{0}^{x}\lambda(y)dy \tag{13}\] where \(\mathcal{J}=\{0<y\leq x<1\}\) and \(\mathcal{T}=\{0\leq y\leq x\leq 1\}\). The following assumption is important. **Assumption 2**: \(\lambda\in C^{1}([0,1])\)_._ ## 3 Accuracy of Approximation of Backstepping Kernel Operator with DeepONet **Theorem 3**: (proven in [77, 81]) _For every \(\lambda\in C^{1}([0,1])\), the PDE system (11), (12), (13) has a unique \(C^{2}(\mathcal{T})\) solution with the property_ \[|k(x,y)|\leq\bar{\lambda}\mathrm{e}^{2\bar{\lambda}x}, \tag{14}\] _for all \(x\in[0,1]\), where \(\bar{\lambda}=\sup_{x\in[0,1]}|\lambda(x)|\)._ This theorem is proven by representing the PDE system (11), (12), (13) as an integral equation \[G(\xi,\eta) =-\frac{1}{4}\int_{\eta}^{\xi}\lambda\left(\frac{s}{2}\right)ds\] \[\quad+\frac{1}{4}\int_{\eta}^{\xi}\int_{0}^{\eta}\lambda\left(\frac{\sigma-s}{2}\right)G(\sigma,s)dsd\sigma, \tag{15}\] where \[\xi=x+y,\ \ \eta=x-y,\quad x=\frac{\xi+\eta}{2},\ \ y=\frac{\xi-\eta}{2} \tag{16}\] \[G(\xi,\eta) =k(x,y)=k\left(\frac{\xi+\eta}{2},\frac{\xi-\eta}{2}\right). \tag{17}\] The change of variables (16) converts the domain \(\mathcal{T}\) for \((x,y)\) into the larger triangular domain \(\mathcal{T}_{1}=\{0\leq\eta\leq\xi\leq 1\}\cup\{1\leq\xi\leq 2-\eta\leq 2\}\) for \((\xi,\eta)\). The integral equation (15) is one of the useful approaches in generating solutions for \(k(x,y)\) for the purpose of training the neural approximation of the operator \(\lambda\mapsto k\). Next, denote the set of functions \[\underline{K}=\left\{\left.k\in C^{2}(\mathcal{T})\right|k(x,0)=0,\forall x\in[0,1]\right\} \tag{18}\] and let the operator \(\mathcal{K}:C^{1}[0,1]\rightarrow\underline{K}\) be defined by \[k(x,y)=:\mathcal{K}(\lambda)(x,y). \tag{19}\] Additionally, let the operator \(\mathcal{M}:C^{1}[0,1]\rightarrow\underline{K}\times C^{1}[0,1]\times C^{0}(\mathcal{T})\) be defined by \[(k(x,y),\kappa_{1}(x),\kappa_{2}(x,y))=:\mathcal{M}(\lambda)(x,y), \tag{20}\] where \[\kappa_{1}(x) =2\frac{d}{dx}\left(k(x,x)\right)+\lambda(x) \tag{21}\] \[\kappa_{2}(x,y) =k_{xx}(x,y)-k_{yy}(x,y)-\lambda(y)k(x,y). \tag{22}\] Based on Theorem 3, \(\mathcal{M}\) is a continuous operator. By applying Theorem 1, we get the following key result for the approximation of a backstepping kernel by a DeepONet (top of Figure 2).
**Theorem 4**: _For all \(B_{\lambda},B_{\lambda^{\prime}}>0\) and \(\epsilon>0\), there exists a neural operator \(\hat{\mathcal{M}}\) such that, for all \((x,y)\in\mathcal{T}\),_ \[\left|\mathcal{M}(\lambda)(x,y)-\hat{\mathcal{M}}(\lambda)(x,y)\right|<\epsilon \tag{23}\] _holds for all Lipschitz \(\lambda\) with the properties that \(\|\lambda\|_{\infty}\leq B_{\lambda},\|\lambda^{\prime}\|_{\infty}\leq B_{\lambda^{\prime}}\), namely, there exists a neural operator \(\hat{\mathcal{M}}\)_ _such that \(\hat{\mathcal{K}}(\lambda)(x,0)\equiv 0\) and_ \[\left|\mathcal{K}(\lambda)(x,y)-\hat{\mathcal{K}}(\lambda)(x,y)\right|\] \[+\left|2\frac{d}{dx}\left(\mathcal{K}(\lambda)(x,x)-\hat{\mathcal{K}}(\lambda)(x,x)\right)\right|\] \[+\left|(\partial_{xx}-\partial_{yy})\left(\mathcal{K}(\lambda)(x,y)-\hat{\mathcal{K}}(\lambda)(x,y)\right)\right.\] \[\left.-\lambda(y)\left(\mathcal{K}(\lambda)(x,y)-\hat{\mathcal{K}}(\lambda)(x,y)\right)\right|<\varepsilon. \tag{24}\] ## 4 Stabilization under DeepONet Gain Feedback The following theorem establishes the properties of the feedback system at the bottom of Figure 2. **Theorem 5**.: _Let \(B_{\lambda},B_{\lambda^{\prime}}>0\) be arbitrarily large and consider the system (1), (2), (3) with any \(\lambda\in C^{1}([0,1])\) whose derivative \(\lambda^{\prime}\) is Lipschitz and which satisfies \(\|\lambda\|_{\infty}\leq B_{\lambda}\) and \(\|\lambda^{\prime}\|_{\infty}\leq B_{\lambda^{\prime}}\). There exists a sufficiently small \(\varepsilon^{*}(B_{\lambda},B_{\lambda^{\prime}})>0\) such that the feedback law_ \[U(t)=\int_{0}^{1}\hat{k}(1,y)u(y,t)dy, \tag{25}\] _with all NO gain kernels \(\hat{k}=\hat{\mathcal{K}}(\lambda)\) of approximation accuracy \(\varepsilon\in(0,\varepsilon^{*})\) in relation to the exact backstepping kernel \(k=\mathcal{K}(\lambda)\) ensures that the closed-loop system satisfies the exponential stability bound_ \[\|u(t)\|\leq M\mathrm{e}^{-(t-t_{0})/2}\|u_{0}\|,\quad\forall t\geq t_{0}, \tag{26}\] _where_ \[M(\varepsilon,\bar{\lambda})=\left(1+\bar{\lambda}e^{2\bar{\lambda}}\right)\left(1+\bar{\lambda}e^{2\bar{\lambda}}+\varepsilon\right)\mathrm{e}^{\bar{\lambda}e^{2\bar{\lambda}}+\varepsilon}. \tag{27}\] **PROOF.**_Approximate backstepping transform and perturbed target system._ Take the backstepping transformation \[\hat{w}(x,t)=u(x,t)-\int_{0}^{x}\hat{k}(x,y)u(y,t)dy, \tag{28}\] where \(\hat{k}=\hat{\mathcal{K}}(\lambda)\). With the control law (25), the target system becomes \[\hat{w}_{t}(x,t) =\hat{w}_{xx}(x,t)+\delta_{k0}(x)u(x,t)\] \[+\int_{0}^{x}\delta_{k1}(x,y)u(y,t)dy \tag{29}\] \[\hat{w}(0,t) =0\] (30) \[\hat{w}(1,t) =0, \tag{31}\] with \[\delta_{k0}(x)=2\frac{d}{dx}\left(\hat{k}(x,x)\right)+\lambda(x)\] \[=-2\frac{d}{dx}\left(\tilde{k}(x,x)\right) \tag{32}\] \[\delta_{k1}(x,y) =\partial_{xx}\hat{k}(x,y)-\partial_{yy}\hat{k}(x,y)-\lambda(y)\hat{k}(x,y)\] \[=-\partial_{xx}\tilde{k}(x,y)+\partial_{yy}\tilde{k}(x,y)+\lambda(y)\tilde{k}(x,y), \tag{33}\] where \[\tilde{k}=k-\hat{k}=\mathcal{K}(\lambda)-\hat{\mathcal{K}}(\lambda). \tag{34}\] With (24), we get \[\|\delta_{k0}\|_{\infty}\leq\varepsilon \tag{35}\] \[\|\delta_{k1}\|_{\infty}\leq\varepsilon. \tag{36}\] _Inverse approximate backstepping transformation._ Since the state \(u\) appears under the integral in the \(\hat{w}\)-system (29), in the Lyapunov analysis we need the inverse backstepping transformation \[u(x,t)=\hat{w}(x,t)+\int_{0}^{x}\hat{l}(x,y)\hat{w}(y,t)dy.
\tag{37}\] It is shown in [47] that the direct and inverse backstepping kernels satisfy in general the relationship \[\hat{l}(x,y)=\hat{k}(x,y)+\int_{y}^{x}\hat{k}(x,\xi)\hat{l}(\xi,y)d\xi. \tag{38}\] The inverse kernel satisfies the following conservative bound \[\|\hat{l}\|_{\infty}\leq\|\hat{k}\|_{\infty}\mathrm{e}^{\|\hat{k}\|_{\infty}}. \tag{39}\] Since \(\|k-\hat{k}\|_{\infty}<\varepsilon\), we have that \(\|\hat{k}\|_{\infty}\leq\|k\|_{\infty}+\varepsilon\). With (14) we get \[\|\hat{k}\|_{\infty}\leq\bar{k}+\varepsilon \tag{40}\] \[\bar{k}(\bar{\lambda}):=\bar{\lambda}\mathrm{e}^{2\bar{\lambda}}, \tag{41}\] and hence \[\|\hat{l}\|_{\infty}\leq\left(\bar{\lambda}\mathrm{e}^{2\bar{\lambda}}+\varepsilon\right)\mathrm{e}^{\bar{\lambda}e^{2\bar{\lambda}}+\varepsilon}. \tag{42}\] _Lyapunov analysis._ The Lyapunov functional \[V=\frac{1}{2}\|\hat{w}\|^{2} \tag{43}\] has a derivative \[\dot{V}=-\|\hat{w}_{x}\|^{2}+\Delta_{0}+\Delta_{1}, \tag{44}\] where \[\Delta_{0}(t) =\int_{0}^{1}\hat{w}(x,t)\delta_{k0}(x)u(x,t)dx \tag{45}\] \[\Delta_{1}(t) =\int_{0}^{1}\hat{w}(x,t)\int_{0}^{x}\delta_{k1}(x,y)u(y,t)dydx. \tag{46}\] With several straightforward majorizations, we get \[\Delta_{0} \leq\|\delta_{k0}\|_{\infty}\left(1+\|\hat{l}\|_{\infty}\right)\|\hat{w}\|^{2}\] \[=\|\delta_{k0}\|_{\infty}\left(1+\|\hat{l}\|_{\infty}\right)2V. \tag{47}\] and \[\Delta_{1} =\int_{0}^{1}\hat{w}(x)\int_{0}^{x}\delta_{k1}(x,y)\hat{w}(y)dydx\] \[\quad+\int_{0}^{1}\hat{w}(x)\int_{0}^{x}\delta_{k1}(x,y)\int_{0}^{y}\hat{l}(y,\sigma)\hat{w}(\sigma)d\sigma dydx\] \[\leq\|\delta_{k1}\|_{\infty}\left(1+\|\hat{l}\|_{\infty}\right)\|\hat{w}\|^{2}\] \[=\|\delta_{k1}\|_{\infty}\left(1+\|\hat{l}\|_{\infty}\right)2V. \tag{48}\] From (44), (47), (48), (42), and Poincaré's inequality, we get \[\dot{V}\leq-\frac{1}{2}(1-\delta^{*})V, \tag{49}\] where \[\delta^{*}(\epsilon,\bar{\lambda})=2\epsilon\left(1+\bar{\lambda}\mathrm{e}^{2\bar{\lambda}}+\epsilon\right)\mathrm{e}^{\bar{\lambda}\mathrm{e}^{2\bar{\lambda}}+\epsilon} \tag{50}\] is an increasing function of \(\epsilon,\bar{\lambda}\), with the property that \(\delta^{*}(0,\bar{\lambda})=0\). Hence, there exists \(\epsilon^{*}(\bar{\lambda})\) such that, for all \(\epsilon\in[0,\epsilon^{*}]\), \[\dot{V}\leq-\frac{1}{4}V, \tag{51}\] namely, \(V(t)\leq V_{0}\mathrm{e}^{-(t-t_{0})/4}\). From the direct and inverse backstepping transformations it follows that \[\frac{1}{1+\|\hat{l}\|_{\infty}}\|u\|\leq\sqrt{2V}\leq\left(1+\|\hat{k}\|_{\infty}\right)\|u\|. \tag{52}\] In conclusion, \[\|u(t)\|\leq\left(1+\|\hat{l}\|_{\infty}\right)\left(1+\|\hat{k}\|_{\infty}\right)\mathrm{e}^{-(t-t_{0})/2}\|u_{0}\|. \tag{53}\] With (40), (41), (42), the proof is completed. \(\Box\) ## 5 Observer Design State estimators (observers) with boundary measurements can be formulated with four measurement choices on the interval \([0,1]\): the measured quantities can be \(u(0,t),u_{x}(0,t),u(1,t),u_{x}(1,t)\). That leads to many possible problem formulations. The possibilities multiply once we note that, on the opposite boundary from the one at which measurement is conducted, one can have either a Dirichlet or Neumann (or even Robin) boundary condition. Our objective in this paper is not to solve all the possible problems. We are concerned only with illustrating how NOs can be combined with PDE observers. Hence, our choice among the many possibilities is the simplest choice, of the highest pedagogical value.
Since our goals with observers are twofold--to estimate the unmeasured state but also to use it in output-feedback control for stabilization--our choice of measurement needs to be consistent with the actuation choice we have already pursued in this note, namely, Dirichlet actuation of \(u(1,t)=U(t)\). So, we cannot use \(u(1,t)\) for measurement but we can use \(u(0,t),u_{x}(0,t),u_{x}(1,t)\). We make the last among these three choices. We let the output \(u_{x}(1,t)\) be measured. Our choice of \(u_{x}(1,t)\) for measurement, as indicated in the observer diagram in Figure 4, is motivated by the fact that, with this measurement, an observer can be built using the same kernel \(k(x,y)\) as for the control law. In other words, with a single training of a neural operator \(\hat{\mathcal{K}}\), we obtain gains for both a controller and an observer--we kill two birds (pedagogically speaking) with one stone. We don't have to expend an undue amount of the reader's effort on the verification of the conditions of the DeepONet approximation theorem. It is enough for the reader to see once how this is done. The rest of the effort is better spent on illustrating the uses of this approximation capability in observers and output-feedback controllers. Figure 4: The PDE backstepping observer (54), (55), (56) uses boundary measurement of the flux \(u_{x}(1,t)\). The gain \(\hat{k}(1,x)\) is produced with the DeepONet \(\hat{\mathcal{K}}\). **Theorem 6**.: _Let \(B_{\lambda},B_{\lambda^{\prime}}>0\) be arbitrarily large and consider the system (1), (2), (3) with any \(\lambda\in C^{1}([0,1])\) whose derivative \(\lambda^{\prime}\) is Lipschitz and which satisfies \(\|\lambda\|_{\infty}\leq B_{\lambda}\) and \(\|\lambda^{\prime}\|_{\infty}\leq B_{\lambda^{\prime}}\). There exists a sufficiently small \(\epsilon^{*}(B_{\lambda},B_{\lambda^{\prime}})>0\) such that the observer_ \[\hat{u}_{t}(x,t) =\hat{u}_{xx}(x,t)+\lambda(x)\hat{u}(x,t)\] \[\quad-\hat{k}(1,x)\left[u_{x}(1,t)-\hat{u}_{x}(1,t)\right] \tag{54}\] \[\hat{u}(0,t) =0\] (55) \[\hat{u}(1,t) =U(t), \tag{56}\] _with all NO gain kernels \(\hat{k}=\hat{\mathcal{K}}(\lambda)\) of approximation accuracy \(\epsilon\in(0,\epsilon^{*})\) in relation to the exact backstepping kernel \(k=\mathcal{K}(\lambda)\) ensure that the observer error \(\tilde{u}=u-\hat{u}\) is exponentially convergent to zero._ The proof employs the inverse backstepping transformation of the observer error, \[\tilde{u}(x,t)=\tilde{w}(x,t)+\int_{x}^{1}\hat{l}(y,x)\tilde{w}(y,t)dy \tag{70}\] where \(\hat{l}\) satisfies the bound (42). \(\Box\) ## 6 Collocated Output-Feedback Stabilization In this section we put together the observer (54), (55), (56), along with the observer-based controller \[U(t)=\int_{0}^{1}\hat{k}(1,x)\hat{u}(x,t)dx \tag{71}\] to stabilize the system (1), (2), (3) by output feedback. The backstepping transformations \[\ddot{w}(x,t) =\hat{u}(x,t)-\int_{0}^{x}\hat{k}(x,y)\hat{u}(y,t)dy \tag{84}\] \[\tilde{u}(x,t) =\omega(x,t)-\int_{x}^{1}\hat{k}(y,x)\omega(y,t)dy.
\tag{85}\] transform the overall system into the cascade \[\ddot{w}_{t}(x,t) =\ddot{w}_{xx}(x,t)+\delta_{k0}(x)\ddot{w}(x,t)\] \[\quad+\delta_{k0}(x)\int_{0}^{x}\hat{l}(x,y)\ddot{w}(y,t)dy\] \[\quad+\int_{0}^{x}\delta_{k1}(x,y)\ddot{w}(y,t)dy\] \[\quad+\int_{0}^{x}\delta_{k1}(x,y)\int_{0}^{y}\hat{l}(y,\eta)\ddot{w}(\eta,t)d\eta dy\] \[\quad-\left(\hat{k}(1,x)-\int_{0}^{x}\hat{k}(x,y)\hat{k}(1,y)dy\right)\omega_{x}(1,t) \tag{86}\] \[\ddot{w}(0,t) =0\] (87) \[\ddot{w}(1,t) =0\] (88) \[\omega_{t}(x,t) =\omega_{xx}(x,t)+\delta_{k0}(x)\omega(x,t)\] \[\quad+\int_{x}^{1}\hat{l}(y,x)\delta_{k0}(y)\omega(y,t)dy\] \[\quad+\int_{x}^{1}\left(\delta_{k1}(y,x)\omega(y,t)\right.\] \[\quad\left.+\hat{l}(y,x)\int_{y}^{1}\delta_{k1}(s,y)\omega(s,t)ds\right)dy\] (89) \[\omega(0,t) =0\] (90) \[\omega(1,t) =0. \tag{91}\] Both the \(\omega\)-subsystem (89)-(91), which is autonomous, and the \(\ddot{w}\)-subsystem (86)-(88), which is driven by the output \(\omega_{x}(1,t)\) of the \(\omega\)-subsystem, are exponentially stable in \(L^{2}[0,1]\) and higher norms for sufficiently small \(\varepsilon\). However, because the trace term \(\omega_{x}(1,t)\) in the last line of (86) cannot be easily bounded even by an \(H^{2}\) norm of \(\omega\), we do not pursue a stability analysis of the composite system, i.e., we leave the "separation principle" unproven for the observer-based feedback (54), (55), (56), (83) acting on the system (1), (2), (3). The technical challenge has nothing to do with the NO implementation of the kernel \(\hat{k}\), as the challenge does not arise due to the perturbation kernels \(\delta_{k0},\delta_{k1}\). The challenge is due to the unbounded nature of the _output mapping_ \(\omega(t)\mapsto\omega_{x}(1,t)\), a challenge not encountered in ODEs but only in PDE control with boundary sensing or actuation. The result given next, which is of a slightly more complicated form, is provable but we give it without a proof because the calculations are very, very lengthy and partly duplicate the calculations in the previous sections. The actuation-sensing setup is from the last row of Table 1, namely, a collocated Neumann actuation and Dirichlet sensing. Stability is established in the \(H^{1}\) norm \(\|u(t)\|_{H^{1}}+\|\tilde{u}(t)\|_{H^{1}}\). **Theorem 7**.: _Consider the system_ \[u_{t}(x,t) =u_{xx}(x,t)+\lambda(x)u(x,t),\qquad x\in[0,1) \tag{92}\] \[u(0,t) =0\] (93) \[u_{x}(1,t) =U(t) \tag{94}\] _with a measured Dirichlet output \(u(1,t)\), along with the collocated observer-based Neumann-actuated controller_ \[\hat{u}_{t}(x,t) =\hat{u}_{xx}(x,t)+\lambda(x)\hat{u}(x,t)\] \[\quad+\kappa(x)\left[u(1,t)-\hat{u}(1,t)\right] \tag{95}\] \[\hat{u}(0,t) =0\] (96) \[\hat{u}_{x}(1,t) =U(t)-\hat{k}(1,1)\left(u(1,t)-\hat{u}(1,t)\right)\] (97) \[U(t) =\hat{k}(1,1)u(1,t)+\int_{0}^{1}\kappa(x)\hat{u}(x,t)dx, \tag{98}\] _where the gain function of both the controller and the observer is given by_ \[\kappa(x):=\hat{k}_{\xi}(\xi,x)\big{|}_{\xi=1}\,.
\tag{99}\] _For all \(B_{\lambda},B_{\lambda^{\prime}}>0\) and for all \(\lambda\in C^{1}([0,1])\) whose derivative is Lipschitz and which satisfies \(\|\lambda\|_{\infty}\leq B_{\lambda}\) and \(\|\lambda^{\prime}\|_{\infty}\leq B_{\lambda^{\prime}}\), there exists a sufficiently small \(\varepsilon^{*}(B_{\lambda},B_{\lambda^{\prime}})>0\) such that for all NO gain kernels \(\hat{k}=\hat{\mathcal{K}}(\lambda)\) of approximation accuracy \(\varepsilon\in(0,\varepsilon^{*})\) in relation to the exact backstepping kernel \(k=\mathcal{K}(\lambda)\) there exists sufficiently large \(M(\varepsilon,\lambda)>0\) such that the above observer-based feedback ensures that the closed-loop system, for all \(u_{0},\hat{u}_{0}\in H^{1}[0,1]\), satisfies the exponential stability bound_ \[\|u(t)\|_{H^{1}}+\|\hat{u}(t)\|_{H^{1}}\leq M\mathrm{e}^{-(t-t_{0})/4}\left(\|u_{0}\|_{H^{1}}+\|\hat{u}_{0}\|_{H^{1}}\right) \tag{100}\] _for all \(t\geq t_{0}\)._ The proof is based on the backstepping transformations (84), (85) into a perturbed version of the target system \[\ddot{w}_{t}(x,t) =\ddot{w}_{xx}(x,t)\] \[\quad+\left(\kappa(x)-\int_{0}^{x}\kappa(y)\hat{k}(x,y)dy\right)\omega(1,t) \tag{101}\] \[\ddot{w}(0,t) =0\] (102) \[\ddot{w}(1,t) =0\] (103) \[\omega_{t}(x,t) =\omega_{xx}(x,t)\] (104) \[\omega(0,t) =0\] (105) \[\omega(1,t) =0. \tag{106}\] The perturbation terms are as in (86) and (89), employing the functions \(\delta_{k0}\) and \(\delta_{k1}\) (which are uniformly bounded by \(\varepsilon\)). The Lyapunov analysis employs the \(H^{1}\) norms of \(\ddot{w}\) and \(\omega\), along with Agmon's inequality to bound the perturbation term \(\omega(1,t)\) in the \(\ddot{w}\)-system using the norm \(\|\omega_{x}\|\). ## 7 Numerical Results: Full-State Feedback, Observer, and Output Feedback In Figure 5, we show that the system is open-loop unstable for the reaction term \(\lambda(x)=50\cos(\gamma\cos^{-1}(x))\) for \(\gamma=5,8\). The increased oscillation in larger \(\gamma\) yields a lower rate of instability as shown on the right. We simulate the PDE and its control using the finite difference scheme in Appendix B. Figure 5: Open-loop instability (top) for the two respective reaction coefficients \(\lambda(x)\) shown on the bottom row. In Figure 6, we demonstrate both the analytical and learned DeepONet kernels for the two \(\gamma\) values corresponding to Figure 5. To learn the mapping \(\mathcal{K}:\lambda(x)\mapsto k(x,y)\), we construct a dataset of 900 different \(\lambda(x)\) as the Chebyshev polynomials defined above with \(\gamma\sim\) uniform \((4,9)\). We choose \(\lambda\) of this form due to the rich set of kernel functions generated by varying only a single parameter. To effectively utilize the DeepONet without modifying the grid, we stack \(\lambda(x)\) repeatedly \(n_{y}\) times over the \(y\) axis to make a 2D input to the network. Then, we capitalize on the 2D mapping by implementing a CNN for the DeepONet branch network. In the future, one can explore neural operators on irregular grids along the direction of [52]. For training, the relative \(L_{2}\) error is \(3.5e-2\) and the testing error is \(3.6e-2\). With the learned neural operator, we achieve speedups on the magnitude of \(10^{3}\) compared to an efficient finite difference implementation. Figure 6: Examples of the kernel \(k(x,y)\) (top row), learned kernel \(\hat{k}(x,y)\) (middle row), and the kernel error \(k(x,y)-\hat{k}(x,y)\) (bottom row). The two respective \(\lambda(x)\) values correspond to the same respective values as in Fig 5. In Figure 7, we demonstrate closed-loop stability with the neural operator approximated gain function for the control feedback law. Additionally, we see the error is largest at the beginning, achieving a maximum in both cases of approximately 10%. In Figure 8, we test the observer (54), (55), (56) with a DeepONet-approximated kernel trained as above with \(\gamma\sim\) uniform \((4,9)\), using \(\lambda(x)=20\cos(5\cos^{-1}(x))\). Additionally, we apply a boundary signal of \(U(t)=7\sin(16\pi t)+10\cos(2\pi t)\) to generate a challenging and rich PDE motion for estimation. The true system state begins with initial conditions \(u(x,0)=10\) while the DeepONet observer has initial conditions of \(\hat{u}_{NO}(x,0)=20\). Despite this, the observer approximates the PDE well with a peak error of less than 5% compared to the analytical observer while maintaining the same \(10^{3}\)x speedup over the finite difference scheme.
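As an illustration of how such a training set of reaction coefficients can be sampled, the sketch below draws the 900 Chebyshev-type \(\lambda(x)\) profiles with \(\gamma\sim\mathrm{uniform}(4,9)\); the paired kernel targets \(k(x,y)\) would then come from a solver for (11)-(13), such as the transcription of Appendix B given after that appendix. The names here are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
x = np.linspace(0.0, 1.0, N + 1)

def chebyshev_lambda(gamma, amp=50.0):
    # lambda(x) = amp * cos(gamma * arccos(x)); a Chebyshev polynomial when gamma is an integer
    return amp * np.cos(gamma * np.arccos(np.clip(x, -1.0, 1.0)))

# 900 training reaction coefficients, one per sampled gamma
lambdas = [chebyshev_lambda(g) for g in rng.uniform(4.0, 9.0, size=900)]
```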
In Figure 8, we test the observer (55), (55), (56) with a DeepONet-approximated kernel trained as above using \(\lambda(x)=20\cos(5\cos^{-1}(x))\) with \(\gamma\sim\) uniform \((4,9)\). Additionally, we apply a boundary signal of \(U(t)=7\sin(16\pi t)+10\cos(2\pi t)\) to generate a challenging and rich PDE motion for estimation. The true system state begins with initial conditions \(u(x,0)=10\) while the DeepONet observer has initial conditions of \(\hat{u}_{NO}(x,0)=20\). Despite this, the observer approximates the PDE well with a peak Figure 5: Open-loop instability (top) for the two respective reaction coefficients \(\lambda(x)\) shown on the bottom row. Figure 6: Examples of the kernel \(k(x,y)\) (top row), learned kernel \(\hat{k}(x,y)\) (middle row), and the kernel error \(k(x,y)-\hat{k}(x,y)\) (bottom row). The two respective \(\lambda(x)\) values correspond to the same respective values as in Fig 5. error of less than 5% compared to the analytical observer while maintaining the same \(10^{3}\)x speedup over the finite difference scheme. ## 8 Conclusions In this paper, we build on the framework introduced in [10], and depicted in Figures 1 and 2, and extend the neural operator-supported PDE backstepping methodology from the hyperbolic to the harder parabolic case. We limit ourselves to the reaction-diffusion parabolic class for the clarity of exposition. With the foundation laid for hyperbolic and parabolic PDE backstepping designs, which free the user from having to solve kernel PDEs in real time and result in a _thousandfold speedup_, the road is open to developing this methodology for two important control domains in which the backstepping kernels constantly evolve in the course of implementation: (1) gain scheduling for nonlinear PDEs, where the kernel depends on the current state of the PDE; and (2) adaptive control of PDEs whose functional coefficients are unknown, have to be adaptively estimated online, and the kernel has to be continuously updated. In both applications, the solving of the \(k\)-PDE online is eliminated with the aid of the pre-determined neural operator \(\hat{\mathcal{K}}\). ## Appendix A Neural networks notation An \(n\)-layer neural network (NN) \(f^{\mathcal{N}}:\mathbb{R}^{d_{1}}\rightarrow\mathbb{R}^{d_{n}}\) is given by \[f^{\mathcal{N}}(x,\theta):=(l_{n}\circ l_{n-1}\circ...\circ l_{2}\circ l_{1})( x,\theta)\] (A.1) where layers \(l_{i}\) start with \(l_{0}=x\in\mathbb{R}^{d_{1}}\) and continue as \[l_{i+1}(l_{i},\theta_{i+1}):=\sigma(W_{i+1}l_{i}+b_{i+1}),\quad i=1,\ldots,n-1\] (A.2) Figure 7: For the two respective \(\lambda(x)\) values as in Fig 5, the top row showcases closed-loop solutions with the learned kernel \(\hat{k}(x,y)\), whereas the bottom row shows the closed-loop PDE error between applying the original kernel \(k(x,y)\) and the learned kernel \(\hat{k}(x,y)\). \(\sigma\) is a nonlinear activation function, and weights \(W_{i+1}\in\mathbb{R}^{d_{i+1}\times d_{i}}\) and biases \(b_{i+1}\in\mathbb{R}^{d_{i+1}}\) are parameters to be learned, collected into \(\theta_{i}\in\mathbb{R}^{d_{i+1}(d_{i}+1)}\), and then into \(\theta=[\theta_{1}^{\mathsf{T}},\ldots,\theta_{n}^{\mathsf{T}}]^{\mathsf{T}} \in\mathbb{R}^{\sum_{i=1}^{n-1}d_{i+1}(d_{i}+1)}\). Let \(\vartheta^{(k)},\theta^{(k)}\in\mathbb{R}^{\sum_{i=1}^{n-1}d_{i+1}(d_{i}+1)}\) denote a sequence of NN weights. 
A neural operator (NO) for approximating a nonlinear operator \(\mathcal{G}:\mathcal{U}\mapsto\mathcal{V}\) is defined as \[\mathcal{G}_{\mathbb{N}}(\mathbf{u}_{m})(y)=\sum_{k=1}^{p}g^{\mathcal{N}}(\mathbf{u}_{m};\vartheta^{(k)})f^{\mathcal{N}}(y;\theta^{(k)})\] (A.3) where \(\mathcal{U},\mathcal{V}\) are function spaces of continuous functions \(u\in\mathcal{U},v\in\mathcal{V}\). \(\mathbf{u}_{m}\) is the evaluation of the function \(u\) at the points \(x_{1},\ldots,x_{m}\), \(p\) is the number of chosen basis components in the target space, \(y\in Y\) is the location of the output function \(v(y)\) evaluations, and \(g^{\mathcal{N}}\), \(f^{\mathcal{N}}\) are NNs termed branch and trunk networks. Note that \(g^{\mathcal{N}}\) and \(f^{\mathcal{N}}\) are not limited to feedforward NNs (A.1), but can also be convolutional or recurrent. ## Appendix B FD Scheme for Goursat-Form Kernel PDE For the PDE in (1), (2), (3), we utilize the following finite difference scheme adapted from [81]: \[k_{j}^{i+1} =-k_{j}^{i-1}+k_{j+1}^{i}+k_{j-1}^{i}+h^{2}\lambda_{j}\frac{k_{j +1}^{i}+k_{j-1}^{i}}{2}\] (B.1) \[k_{i}^{i+1} =k_{i}^{i}+\frac{h}{2}\lambda_{i}\] (B.2) \[k_{i+1}^{i+1}=k_{i}^{i}-\frac{h}{4}(\lambda_{i}+\lambda_{i+1}),\qquad k_{1}^{j +1}=0\] (B.3) with \(k_{i}^{j}=k((i-1)h,(j-1)h),i=2,...,N,j=2,...,i-1,\lambda_{i}=\lambda((i- 1)h),h=1/N\) where \(N\) is the number of spatial steps.
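A minimal Python sketch of the marching scheme (B.1)-(B.3) follows. The grid indexing (K[i, j] approximating k((i-1)h, (j-1)h)) and the sweep order are our reading of the printed recursions, so this is a sketch under those assumptions rather than the authors' implementation:

```python
import numpy as np

def solve_kernel(lam, N):
    # lam: callable reaction coefficient on [0, 1]; N: number of spatial steps.
    h = 1.0 / N
    lam_g = np.zeros(N + 2)
    for i in range(1, N + 2):
        lam_g[i] = lam((i - 1) * h)          # lambda_i = lambda((i-1)h)
    K = np.zeros((N + 2, N + 2))             # K[i, j] ~ k((i-1)h, (j-1)h)
    for i in range(1, N + 1):
        K[i + 1, 1] = 0.0                    # boundary value from (B.3)
        K[i + 1, i] = K[i, i] + 0.5 * h * lam_g[i]                        # (B.2)
        K[i + 1, i + 1] = K[i, i] - 0.25 * h * (lam_g[i] + lam_g[i + 1])  # (B.3)
        for j in range(2, i):                # interior points, (B.1)
            K[i + 1, j] = (-K[i - 1, j] + K[i, j + 1] + K[i, j - 1]
                           + 0.5 * h * h * lam_g[j] * (K[i, j + 1] + K[i, j - 1]))
    return K
```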
2310.08271
Variant Codes Based on A Special Polynomial Ring and Their Fast Computations
Binary array codes are widely used in storage systems to prevent data loss, such as the Redundant Array of Independent Disks~(RAID). Most designs for such codes, such as Blaum-Roth~(BR) codes and Independent-Parity~(IP) codes, are carried out on the polynomial ring F_2[x]/<\sum_{i=0}^{p-1}x^i >, where F_2 is a binary field, and p is a prime number. In this paper, we consider the polynomial ring F_2[x]/<\sum_{i=0}^{p-1}x^{i\tau}>, where p>1 is an odd number and \tau \geq 1 is any power of two, and explore variant codes from codes over this polynomial ring. Particularly, the variant codes are derived by mapping parity-check matrices over the polynomial ring to binary parity-check matrices. Specifically, we first propose two classes of variant codes, termed V-ETBR and V-ESIP codes. To make these variant codes binary maximum distance separable~(MDS) array codes that achieve optimal storage efficiency, this paper then derives the connections between them and their counterparts over polynomial rings. These connections are general, making it easy to construct variant MDS array codes from various forms of matrices over polynomial rings. Subsequently, some instances are explicitly constructed based on Cauchy and Vandermonde matrices. In the proposed constructions, both V-ETBR and V-ESIP MDS array codes can have any number of parity columns and have the total number of data columns of exponential order with respect to $p$. In terms of computation, two fast syndrome computations are proposed for the Vandermonde-based V-ETBR and V-ESIP MDS array codes, both meeting the lowest known asymptotic complexity among MDS codes. Due to the fact that all variant codes are constructed from parity-check matrices over simple binary fields instead of polynomial rings, they are attractive in practice.
Leilei Yu, Yunghsiang S. Han, Jiasheng Yuan, Zhongpei Zhang
2023-10-12T12:19:56Z
http://arxiv.org/abs/2310.08271v2
# Variant Codes Based on A Special Polynomial Ring and Their Fast Computations ###### Abstract In this paper, we propose two new classes of binary array codes, termed V-ETBR and V-ESIP codes, by reformulating and generalizing the variant technique of deriving the well-known generalized row-diagonal parity (RDP) codes [1] from shortened independent parity (IP) codes [2]. The V-ETBR and V-ESIP codes are both based on binary parity-check matrices and are essentially variants of two classes of codes over a special polynomial ring (termed ETBR and ESIP codes in this paper). To explore the conditions that make the variant codes binary Maximum Distance Separable (MDS) array codes that achieve optimal storage efficiency, this paper derives the connections between V-ETBR/V-ESIP codes and ETBR/ESIP codes. These connections are beneficial for constructing various forms of the variant codes. By utilizing these connections, this paper also explicitly presents the constructions of V-ETBR and V-ESIP MDS array codes with any number of parity columns \(r\), along with their fast syndrome computations. In terms of construction, all proposed MDS array codes have an exponentially growing total number of data columns with respect to the column size, while alternative codes achieve only linear order. In terms of computation, the proposed syndrome computations make the corresponding encoding/decoding asymptotically require \(\lfloor\lg r\rfloor+1\) XOR (exclusive OR) operations per data bit, when the total number of data columns approaches infinity. This is also the lowest known asymptotic complexity in MDS codes [3]. Binary array codes, generalized RDP codes, IP codes, syndrome computation, storage systems. ## I Introduction Modern distributed storage systems require data redundancy to maintain data reliability and durability in the presence of unpredictable failures. Replications and erasure codes are two typical redundancy mechanisms [4]. Compared to the former, erasure codes need less data redundancy to attain the same level of data protection [5]. One well-known class of erasure codes is _binary array codes_[1, 2, 6, 7]. Their coding procedures involve only XOR (exclusive OR) and cyclic shift operations, which enables simple and efficient implementations in both software and hardware [8]. This paper focuses on such codes. Binary array codes have been widely used in storage systems, such as RAID (Redundant Array of Independent Disks) [9]. With the development of distributed storage systems in recent years, they have also been used as the basis for developing other erasure codes, such as locally repairable codes [4, 8, 10, 11] and regenerating codes [12, 13, 14]. For an \(\ell\times(k+r)\) binary array code, any codeword can be viewed as an \(\ell\times(k+r)\) array of bits, where \(k\) columns store all information bits to form \(k\) information columns, and the remaining columns store all the parity bits encoded from information bits to form \(r\) parity columns. The column size \(\ell\) generally depends on the code construction. In coding theory, maximum distance separable (MDS) codes reach optimal storage efficiency [15] and each of their codewords consists of information and parity symbols, such that any subset of codeword symbols whose size equals the number of information symbols suffices to recover the entire codeword. _Binary MDS array codes_ have the same property by treating each column as a symbol.
More precisely, for an \(\ell\times(k+r)\) binary MDS array code, any \(k\) out of \(k+r\) columns suffice to decode (reconstruct) all \(k\) information columns. Some well-known examples of binary array codes are EVENODD [16], row-diagonal parity (RDP) [17], STAR [18], and triple-fault-tolerance codes [19]. These codes are all binary MDS array codes for the case of two or three parity columns. Examples of binary array codes with more parity columns are Blaum-Roth (BR) [6], Independent-Parity (IP) [2], generalized RDP codes [1], and the codes in [20]. Although they are not always binary MDS array codes, the conditions that render them such codes can be found in the corresponding literature. The new binary array codes proposed in this paper target an arbitrary number of parity columns, and their constructions are closely related to the above-mentioned BR, IP, and generalized RDP codes. Note that BR and IP codes are both constructed by parity-check matrices over an all-one polynomial (AOP) ring \(\frac{\mathbb{F}_{2}[x]}{\langle\sum_{i=0}^{p-1}x^{i}\rangle}\), where \(\mathbb{F}_{2}\) denotes a binary field and \(p\) is a prime number [2, 6]. Generalized RDP codes can be regarded as a variant of shortened IP codes [1], and they possess lower encoding/decoding complexity [21]. In this paper, we reformulate the generalized RDP codes, so that one can intuitively understand why the generalized RDP codes are computationally superior. Briefly, when computing syndromes (which dominates the overall computational complexity of encoding/decoding when the code rate \(k/(k+r)\) is close to one), codes over \(\frac{\mathbb{F}_{2}[x]}{\langle\sum_{i=0}^{p-1}x^{i}\rangle}\) are first calculated in an auxiliary polynomial ring \(\frac{\mathbb{F}_{2}[x]}{\langle x^{p}+1\rangle}\), where multiplying \(x\) only requires performing a simple cyclic-shift operation, and then all results are reduced to the original ring [2, 6]. As a variant, the generalized RDP codes are derived from the shortened IP codes in such a way that bits exceeding the number of bits in each symbol over the original ring are never processed. Therefore, two operations of the shortened IP codes are eliminated in the generalized RDP codes. One is the processing for one fixed bit in each symbol, and the other is the modulo operation for reducing to the original ring. A binary parity-check matrix for the generalized RDP codes is explicitly provided in this paper (see (11)), which is not given in other literature. This paper generalizes the above variant technique so that the new codes based on binary parity-check matrices can be easily obtained from the codes over the special polynomial ring \(\frac{\mathbb{F}_{2}[x]}{\langle\sum_{i=0}^{p-1}x^{i\tau}\rangle}\), where \(p\) is an odd number and \(\tau\) is a power of two. The generalization means that \(\tau\) is not limited to one, and the parity-check matrices of codes over polynomial rings can be determined not only by the Vandermonde matrices containing only monomials (such as BR, IP codes) but also by a wider range of parameters. In this paper, the codes defined in \(\frac{\mathbb{F}_{2}[x]}{\langle\sum_{i=0}^{p-1}x^{i\tau}\rangle}\) are referred to as ETBR and ESIP codes, which can be regarded as extensions of BR and shortened IP codes, respectively. Correspondingly, the variants of ETBR and ESIP codes are referred to as V-ETBR and V-ESIP codes, respectively. The main contributions of this paper are enumerated as follows: 1.
This paper reformulates and generalizes the variant technique of deriving generalized RDP codes from shortened IP codes, and then proposes two new classes of binary array codes (i.e., V-ETBR and V-ESIP codes), which are both based on binary parity-check matrices. 2. To obtain the conditions for the new codes to be binary MDS array codes, this paper presents the connections between them and the codes over special polynomial rings (i.e., ETBR and ESIP codes). Note that these connections are built on the foundation that all parity-check matrices have a sufficiently flexible form. This provides convenience for constructing binary MDS array codes. 3. This paper explicitly presents the constructions for the V-ETBR and V-ESIP MDS array codes, both with any number of parity columns \(r\). This leads to the corresponding MDS codes over polynomial rings being directly available as by-products. In particular, compared to existing binary MDS array codes, the constructed codes have significantly more data columns for a given design parameter \(p\), as well as a more flexible column size \(\ell\). 4. This paper also proposes two fast syndrome computations, which respectively correspond to the constructed V-ETBR MDS array codes with any \(r\geq 2\) and the constructed V-ESIP MDS array codes with \(r=4\). Both of them make the corresponding encoding/decoding procedure require asymptotically \(\lfloor\lg r\rfloor+1\) XORs per data bit when the total number of data columns approaches infinity. This is also the lowest known asymptotic computational complexity in MDS codes [3]. Note that the fast syndrome computations proposed in this paper can be easily adjusted to be suitable for the corresponding codes over polynomial rings. To avoid tediousness, this paper will not repeat the presentation. It is worth mentioning that compared to the ETBR/ESIP codes over polynomial rings, the variant codes (i.e., V-ETBR/V-ESIP codes) have the following two advantages in calculations. One is that the variant technique brings strictly fewer operations in syndrome computations, as described earlier regarding generalized RDP codes. The other is that all variant codes are based on binary parity-check matrices, leading to easy implementation through the use of existing open-source libraries for matrix operations over \(\mathbb{F}_{2}\), such as M4RI [22]. Furthermore, it is possible that many existing scheduling algorithms for matrix operations over \(\mathbb{F}_{2}\) can be utilized to improve computational efficiency, such as those proposed in [23, 24]. In contrast, the arithmetic implementation of polynomial rings is complex. Recently, [8, 25] proposed some binary array codes that are essentially V-ETBR/V-ESIP codes. However, there are significant differences between their work and that of this paper: 1. This paper clearly reveals the relationship between V-ETBR/V-ESIP codes and the well-known generalized RDP codes, as the former is a generalization of the variant technique implied by the latter. This is not pointed out in [8, 25]. 2. In [8, 25], the proposed V-ETBR/V-ESIP codes consider only parity-check matrices determined by Vandermonde matrices. In contrast, the parity-check matrices for V-ETBR/V-ESIP codes have a more flexible form in this paper, of which the Vandermonde matrix is just a special instance. This can facilitate the construction of more variant codes. 3. [8] and [25] focus on V-ETBR/V-ESIP codes alone, without discussing their connections with the codes over polynomial rings.
In this paper, we consider these connections and show that, based on them, some new MDS codes over polynomial rings can be directly obtained as by-products. 4. In terms of construction, all V-ETBR and V-ESIP MDS array codes proposed in [8, 25] and this paper can have a total number of data columns far exceeding the design parameter \(p\). However, the feasible number of parity columns \(r\) for the V-ESIP MDS array codes in [8, 25] is three, while that in this paper is any size. 5. In terms of computation, the fast syndrome computation proposed in [8, 25] is for \(2\leq r\leq 3\), whereas that proposed in this paper is for arbitrary \(r\geq 2\). The former is actually a special case of the latter. The remainder of this paper is organized as follows. Section II introduces all necessary preliminaries, including some existing well-known binary array codes and important notations. Section III provides the specific definitions of V-ETBR/V-ESIP and ETBR/ESIP codes. Section IV first presents the connections for V-ETBR/V-ESIP and ETBR/ESIP codes, and then constructs the V-ETBR/V-ESIP MDS array codes. In Section V, some fast syndrome computations for the constructed codes are proposed. Section VI concludes this paper. ## II Preliminaries This section describes some existing well-known classes of array codes, i.e., BR codes [6], IP codes [2], and generalized RDP codes [1]. To begin with, let \[\mathbb{R}_{p,\tau}:=\frac{\mathbb{F}_{2}[x]}{\langle f_{p,\tau}(x)\rangle} \tag{1}\] denote a binary polynomial ring, where \[f_{p,\tau}(x)=1+x^{\tau}+\cdots+x^{(p-1)\tau} \tag{2}\] with two positive integers \(p,\tau\). The identity \(x^{p\tau}+1=(x^{\tau}+1)\cdot f_{p,\tau}(x)\) implies that operations in \(\mathbb{R}_{p,\tau}\) can be performed first in the polynomial ring \[\mathbb{R}:=\frac{\mathbb{F}_{2}[x]}{\langle x^{p\tau}+1\rangle}, \tag{3}\] and then, all results should be reduced modulo \(f_{p,\tau}(x)\). Since multiplying by \(x\) in \(\mathbb{R}\) is equivalent to performing a one-bit cyclic shift on a vector with \(p\tau\) bits, the above realization for the operations in \(\mathbb{R}_{p,\tau}\) is simple and efficient [2, 6]. ### _BR codes_ BR codes are constructed in the polynomial ring \(\mathbb{R}_{p,1}\)[6], where \(p\) is a prime number. Given the value of \(p\), the \(\text{BR}(p,r<p)\) is defined as the set of \((p-1)\times p\) arrays (denoted by \([x_{i,j}]\), where \(x_{i,j}\in\{0,1\}\), the first \(p-r\) data columns are information columns and the others are parity columns). For \(\ell=0,1,\cdots,p-1\), the \(\ell\)-th column of a \((p-1)\times p\) array can be viewed as a binary polynomial \(D_{\ell}=\sum_{i=0}^{p-2}x_{i,\ell}\cdot x^{i}\in\mathbb{R}_{p,1}\). The \(\text{BR}(p,r)\) requires that \(0=H_{BR}\cdot(D_{0},D_{1},\cdots,D_{p-1})^{\text{T}}\), where \(H_{BR}\in\mathbb{R}_{p,1}^{r\times p}\) is the Vandermonde parity-check matrix given by \[H_{BR}=\left(\begin{array}{ccccc}1&1&1&\cdots&1\\ 1&x&x^{2}&\cdots&x^{p-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&x^{r-1}&x^{2(r-1)}&\cdots&x^{(r-1)(p-1)}\end{array}\right). \tag{4}\] BR codes have an intuitive graphical representation. Figure 1 provides an example of \(\text{BR}(5,3)\) to demonstrate it. In Figure 1, the last row is imaginary to facilitate operations, the leftmost two data columns are information columns of the \(\text{BR}(5,3)\), and the rightmost three data columns are all parity columns.
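The realization of \(\mathbb{R}_{p,\tau}\) arithmetic described above, namely working in \(\mathbb{R}\) via cyclic shifts and then reducing modulo \(f_{p,\tau}(x)\), is easy to prototype. Here is a minimal Python sketch; the integer-bitmask representation (bit \(d\) holding the coefficient of \(x^{d}\)) is our choice for the example, not something the paper prescribes:

```python
def cyc_shift(a, s, m):
    # Multiply a in R = F2[x]/(x^m + 1) by x^s: an s-bit cyclic shift.
    mask = (1 << m) - 1
    s %= m
    return ((a << s) | (a >> (m - s))) & mask

def ring_mul(a, b, m):
    # Multiplication in R: XOR together cyclic shifts of a, one per set bit of b.
    r = 0
    for s in range(m):
        if (b >> s) & 1:
            r ^= cyc_shift(a, s, m)
    return r

def reduce_mod_f(a, p, tau):
    # Reduce a result from R to R_{p,tau} by polynomial division by f_{p,tau}(x).
    f = sum(1 << (i * tau) for i in range(p))   # f_{p,tau}(x) = sum_i x^{i*tau}
    deg_f = (p - 1) * tau
    for d in range(p * tau - 1, deg_f - 1, -1):
        if (a >> d) & 1:
            a ^= f << (d - deg_f)
    return a

# e.g. x * x^2 = x^3 = 1 in R with m = 3 (p = 3, tau = 1):
assert ring_mul(0b100, 0b010, 3) == 0b001
```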
According to the identity \(0=H_{BR}\cdot(D_{0},D_{1},\cdots,D_{p-1})^{\text{T}}\), the result obtained by bit-wise XORing all data columns is an all-zero column. By following the realization in \(\mathbb{R}_{p,1}\) mentioned above, which involves first performing operations in \(\mathbb{R}\) and then reducing to \(\mathbb{R}_{p,1}\), the above result is either an all-zero column or an all-one column if each column has been subjected to down-cyclic shifts according to the corresponding column index size. This is also true if the number of down-cyclic shifts is twice the size of the corresponding column index. One can know from [6] that BR codes are always binary MDS array codes. ### _IP codes_ IP codes are also constructed in \(\mathbb{R}_{p,1}\), but all parity columns are independent of each other, leading to a minimization of the number of parity updates when a data bit is updated [2, 10, 16]. Precisely, given the prime number \(p\) and a positive integer \(r\), the \(\text{IP}(p+r,r)\) is defined as the set of \((p-1)\times(p+r)\) arrays of bits. In the same way as the BR codes, each column of the array forms a binary polynomial; then the parity-check matrix of the \(\text{IP}(p+r,r)\) is \(H_{IP}=(H_{BR}|I_{r})\,,\) where \(H_{BR}\) is shown in (4) and \(I_{r}\) is an \(r\times r\) identity matrix. The matrix \(H_{IP}\) implies that IP codes also have an intuitive graphical representation similar to that shown for BR codes. Contrary to BR codes, IP codes are not always binary MDS array codes. The conditions for making IP codes MDS array codes can be found in [2, 7]. ### _Generalized RDP codes_ In [17], the authors presented a binary MDS array code with two parity columns, i.e., RDP codes. This code was generalized to support more parity columns in [1]. Generalized RDP codes are not directly constructed by parity-check matrices over \(\mathbb{R}_{p,1}\) like the two codes introduced above. Given a prime number \(p\) and a positive integer \(r\), the generalized \(\text{RDP}(p+r-1,r)\) code is defined as the set of \((p-1)\times(p+r-1)\) arrays (denoted by \([x_{i,j}]\), where \(x_{i,j}\in\{0,1\}\), the first \(p-1\) data columns are information columns, and the others are parity columns). From [1], it satisfies the following encoding equations: \[x_{i,p-1}=\sum_{j=0}^{p-2}x_{i,j}\text{ for }0\leq i\leq p-2, \tag{5}\] \[\text{and }x_{i,p-1+j}=\sum_{\ell=0}^{p-1}x_{i-j\ell,\ell}\text{ for } \begin{array}{l}0\leq i\leq p-2,\\ 1\leq j\leq r-1\end{array}, \tag{6}\] where addition is performed through XOR, all subscripts in the right-hand side of the equal signs are modulo \(p\), and \(x_{p-1,j}=0\) for \(j=0,1,\cdots,p-1\). Similar to BR and IP codes, the generalized RDP codes have an intuitive graphical representation. Figure 2 shows an example of \(p=5\) and \(r=3\), where the leftmost four data columns are information columns and the last row is imaginary. Clearly, the first parity column, i.e., \(\{x_{i,4}\}_{i=0}^{4}\), is obtained by bit-wise XORing the first 4 columns. The second parity column, i.e., \(\{x_{i,5}\}_{i=0}^{4}\), is obtained by bit-wise XORing the first 5 columns after each column has been subjected to down-cyclic shifts according to the corresponding column index size. The third parity column is similar to the second, but the number of down-cyclic shifts in each column becomes twice the corresponding column index size. The three parity columns of the generalized \(\text{RDP}(7,3)\) code are obtained by directly deleting the imaginary row.
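Equations (5) and (6) translate directly into code. The following is a minimal Python sketch (the function and variable names are ours) that computes the \(r\) parity columns of the generalized \(\text{RDP}(p+r-1,r)\) from its \(p-1\) information columns, with the imaginary row \(x_{p-1,\ell}=0\):

```python
def grdp_encode(info_cols, p, r):
    # info_cols: p-1 information columns, each a list of p-1 bits (0/1 ints).
    # Returns the r parity columns, computed directly from (5) and (6).
    assert len(info_cols) == p - 1
    x = [list(col) + [0] for col in info_cols]            # imaginary bit x_{p-1,l} = 0
    row_par = [0] * p
    for i in range(p - 1):                                # (5): row parity
        row_par[i] = sum(x[j][i] for j in range(p - 1)) % 2
    cols = x + [row_par]                                  # columns 0 .. p-1
    parities = [row_par[:p - 1]]
    for j in range(1, r):                                 # (6): diagonal parities
        pj = [sum(cols[l][(i - j * l) % p] for l in range(p)) % 2
              for i in range(p - 1)]
        parities.append(pj)
    return parities
```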
Generalized RDP codes are not always binary MDS array codes [1], as is the case for IP codes. Conditions that make generalized RDP codes binary MDS array codes can be found in [1]. In particular, there is a connection between generalized RDP and shortened IP codes as follows [1]:

Fig. 1: Diagram of the BR code with \(p=5\) and \(r=3\).

Fig. 2: Diagram of the generalized RDP code with \(p=5\) and \(r=3\).

**Theorem 1**.: _([1]) The generalized \(\text{RDP}(p+r-1,r)\) is a binary MDS array code if the shortened \(\text{IP}(p+r-1,r)\) with the following parity-check matrix over \(\mathbb{R}_{p,1}\) is a binary MDS array code_ \[H_{SIP}=\left(H_{BR}\ \left|\begin{array}{cccc}0&0&\cdots&0\\ 1&0&\cdots&0\\ 0&1&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&1\end{array}\right.\right). \tag{7}\] The encoding of the shortened IP code with (7) can be analogized from the BR code in Section II-A. It is easy to see that the process is similar to that in the generalized \(\text{RDP}(p+r-1,r)\). The only difference is that the latter does not need to calculate the last bit in each parity column and modulo \(f_{p,1}(x)\). ### _Notations_ Throughout this paper, the set \(\{0,1,2,3,\cdots\}\) is denoted by \(\mathbb{N}\) and the set \(\{i,i+1,\cdots,j-1\}\) is denoted by \([i,j)\), where \(i\in\mathbb{N},j\in\mathbb{N}\) with \(i<j\). The transpose of a matrix or vector is marked with the notation T in the upper right-hand corner. Unless otherwise stated, suppose that \[m=p\tau,\qquad\tau=2^{s}, \tag{8}\] where \(p>1\) is an odd number and \(s\in\mathbb{N}\). Note that \(p\) and \(\tau\) are determined if \(m\) is given. In addition, \(f_{p,\tau}(x)=f_{p,1}^{\tau}(x)\). Some special mappings are defined below. For any \(i\in\mathbb{N},j\in\mathbb{N}\) and \(a=\sum_{i=0}^{m-1}a_{i}\cdot x^{i}\in\mathbb{R}\), define a mapping \(\mathcal{A}_{i,j}:\mathbb{R}\rightarrow\mathbb{F}_{2}^{(m-i)\times(m-j)}\) by letting \(\mathcal{A}_{i,j}(a)\) be the resultant \((m-i)\times(m-j)\) binary matrix after deleting the last \(i\) rows and last \(j\) columns of the following \(m\times m\) binary circulant matrix \[\left(\begin{array}{cccc}a_{0}&a_{1}&a_{2}&\cdots&a_{m-1}\\ a_{m-1}&a_{0}&a_{1}&\cdots&a_{m-2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ a_{1}&a_{2}&a_{3}&\cdots&a_{0}\end{array}\right). \tag{9}\] From [26], one can see that \(\mathcal{A}_{0,0}\) is an isomorphic mapping. Moreover, for \(i\in\mathbb{N}\), \(\mathcal{A}_{i,i}(0)\) is the \((m-i)\times(m-i)\) zero matrix and \(\mathcal{A}_{i,i}(1)\) is an \((m-i)\times(m-i)\) identity matrix. For any \(\ell_{0}\in\mathbb{N}\) and \(\ell_{1}\in\mathbb{N}\), we define a mapping from the set consisting of all \(\ell_{0}\times\ell_{1}\) matrices over \(\mathbb{R}\) to the set consisting of all \(\ell_{0}(m-\tau)\times\ell_{1}(m-\tau)\) matrices over \(\mathbb{F}_{2}\), i.e., \[\mathcal{T}_{\ell_{0},\ell_{1},m}:M_{\ell_{0}\times\ell_{1}}(\mathbb{R}) \to M_{\ell_{0}(m-\tau)\times\ell_{1}(m-\tau)}(\mathbb{F}_{2}) \tag{10}\] by letting \(\mathcal{T}_{\ell_{0},\ell_{1},m}(B)=\overline{B}\), where \(B=[b_{i,j}]\in\mathbb{R}^{\ell_{0}\times\ell_{1}}\) and \(\overline{B}=[\mathcal{A}_{\tau,\tau}(b_{i,j})]\in\mathbb{F}_{2}^{\ell_{0}(m- \tau)\times\ell_{1}(m-\tau)}\). In this paper, the code with \(\mathcal{T}_{\ell_{0},\ell_{1},m}(B)\) as the parity-check matrix has a binary codeword of size \(\ell_{1}\cdot(m-\tau)\), where \(\ell_{0}<\ell_{1}\).
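A small Python sketch (ours) of the mapping \(\mathcal{A}_{i,j}\): build the circulant (9) from the coefficient vector of \(a\) and delete the trailing rows and columns. The block map \(\mathcal{T}_{\ell_{0},\ell_{1},m}\) then amounts to replacing every ring entry \(b_{i,j}\) by the \((m-\tau)\times(m-\tau)\) block \(\mathcal{A}_{\tau,\tau}(b_{i,j})\) and assembling the blocks (e.g., with np.block):

```python
import numpy as np

def A(coeffs, i, j):
    # A_{i,j}(a): the m x m binary circulant of (9), with the last i rows
    # and last j columns deleted; coeffs = (a_0, ..., a_{m-1}).
    m = len(coeffs)
    C = np.empty((m, m), dtype=np.uint8)
    for r in range(m):
        C[r] = np.roll(coeffs, r)  # row r is a cyclically shifted copy of a
    return C[:m - i, :m - j]

# A_{0,0} is the ring isomorphism: e.g., a = 1 maps to the identity matrix.
assert (A([1, 0, 0], 0, 0) == np.eye(3, dtype=np.uint8)).all()
```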
By default, the codeword is arranged in an \((m-\tau)\times\ell_{1}\) array of bits in column-first order, and we refer to this code as a binary array code. Also, we refer to this code as a binary MDS array code if each codeword can be restored from any \(\ell_{1}-\ell_{0}\) columns. ## III Definitions of Variant codes This section defines two new classes of binary array codes (i.e., V-ETBR and V-ESIP codes), which both are variants of codes over the polynomial ring \(\mathbb{R}_{p,\tau}\). All variant codes are based on binary parity-check matrices. One can see that the generalized RDP codes in Section II-C are a special case of the V-ESIP codes. To begin with, we define two codes over the polynomial ring \(\mathbb{R}_{p,\tau}\) as follows: **Definition 1**.: **(ETBR Codes)** _Let \(2\leq r<n\), and \(H=[h_{i,j}]_{0\leq i<r,0\leq j<n}\in\mathbb{R}^{r\times n}\). Define \(\text{ETBR}(n,r,m=p\tau,H)\) as a code over \(\mathbb{R}_{p,\tau}\) determined by the parity-check matrix obtained by reducing \(H\) to \(\mathbb{R}_{p,\tau}\), where each element in \(H\) over \(\mathbb{R}\) is taken modulo \(f_{p,\tau}(x)\) to become an element over \(\mathbb{R}_{p,\tau}\)._ **Remark 1**.: _In Definition 1, the form of \(H\) is not fixed and covers \(H_{BR}\) in (4), so we refer to \(\text{ETBR}(n,r,m,H)\) as an extended BR code._ **Definition 2**.: **(ESIP Codes)** _Let \(n\geq 2,r\geq 2\), and \(H^{\prime}=[H|\widehat{I}]\in\mathbb{R}^{r\times(n+r-1)}\), where the definition of \(H\) is the same as in Definition 1 and \(\widehat{I}\) is the matrix after removing the first column of the \(r\times r\) identity matrix. Define \(\text{ESIP}(n,r,m=p\tau,H^{\prime})\) as a code over \(\mathbb{R}_{p,\tau}\) determined by the parity-check matrix obtained by reducing \(H^{\prime}\) to \(\mathbb{R}_{p,\tau}\), where each element in \(H\) over \(\mathbb{R}\) is taken modulo \(f_{p,\tau}(x)\) to become an element over \(\mathbb{R}_{p,\tau}\)._ **Remark 2**.: _In Definition 2, the \(\text{ESIP}(n,r,m=p,H^{\prime})\) is exactly the shortened IP code given by (7) if \(H=H_{BR}\). Obviously, \(H^{\prime}\) has a wider range of parameters so that the ESIP codes can be regarded as an extension of shortened IP codes._ The variant codes corresponding to ETBR and ESIP codes, i.e., V-ETBR and V-ESIP codes, are defined below. When we refer to ETBR/ESIP codes and V-ETBR/V-ESIP codes as corresponding, it means that they are determined by the same matrix \(H\) or \(H^{\prime}\) over \(\mathbb{R}\). **Definition 3**.: **(V-ETBR Codes)** _Define V-ETBR\((n,r,m=p\tau,H)\) as a binary array code whose parity-check matrix is \(\mathcal{T}_{r,n,m}(H)\), where \(2\leq r<n\), \(\mathcal{T}_{r,n,m}\) is defined in (10), and the definition of \(H\) is the same as that in Definition 1._ **Definition 4**.: **(V-ESIP Codes)** _Define V-ESIP\((n,r,m=p\tau,H^{\prime})\) as a binary array code whose parity-check matrix is \(\mathcal{T}_{r,n+r-1,m}(H^{\prime})\), where \(n\geq 2,r\geq 2\), \(\mathcal{T}_{r,n+r-1,m}\) is defined in (10), and the definition of \(H^{\prime}\) is the same as that in Definition 2._ Conventionally, the last \(r\) columns of the array corresponding to the codeword in the above codes are referred to as parity columns and all other columns are referred to as information columns. We have the following relationship. **Lemma 1**.: _Let \(H^{\prime}\) in Definition 4 be determined by a Vandermonde matrix \(H\) such that \(h_{1,j}=x^{p-j}\) and \(h_{i,j}=h_{1,j}^{i}\) for \(2\leq i<r,0\leq j<p\), and \(p\) is a prime number.
Then V-ESIP\((p,r,m=p,H^{\prime})\) is exactly the generalized \(\text{RDP}(p+r-1,r)\) described in Sec. II-C._ Proof.: From Definition 4, \(\mathcal{T}_{r,p+r-1,p}(H^{\prime})\) is the parity-check matrix of the V-ESIP\((p,r,m=p,H^{\prime})\) and is given by (11) at the top of this page. Let \(\mathbf{b}_{0},\mathbf{b}_{1},\cdots,\mathbf{b}_{p-2}\in\mathbb{F}_{2}^{p-1}\) denote all \(p-1\) information columns in the codeword. We next show that any parity column generated by the V-ESIP code is the same as that in the generalized RDP code described in Sec. II-C. Let \(\mathbf{b}_{p-1},\mathbf{b}_{p},\cdots,\mathbf{b}_{p+r-2}\in\mathbb{F}_{2}^{p-1}\) denote all \(r\) parity columns of the V-ESIP code. One can easily know from (11) that \(\mathbf{b}_{p-1}\) is obtained by bit-wise XORing all \(p-1\) information columns. For any \(i\in[1,r)\), the \(i\)-th parity column of the V-ESIP code is obtained by \[\mathbf{b}_{p-1+i}^{\mathrm{T}}=\sum_{j=0}^{p-1}\mathcal{A}_{1,1}(x^{(p-j)i}) \cdot\mathbf{b}_{j}^{\mathrm{T}}=\sum_{j=0}^{p-1}\mathcal{A}_{1,0}( x^{(p-j)i})\cdot(\mathbf{b}_{j},0)^{\mathrm{T}}. \tag{12}\] We remark that calculating \(\mathcal{A}_{1,0}(x^{(p-j)i})\cdot(\mathbf{b}_{j},0)^{\mathrm{T}}\) is equivalent to removing the last element from the result of \(\mathcal{A}_{0,0}(x^{(p-j)i})\cdot(\mathbf{b}_{j},0)^{\mathrm{T}}\). Furthermore, \(\mathcal{A}_{0,0}(x^{p-j})\) can be regarded as the operator of performing \(j\) times down-cyclic shift on a vector and \(\mathcal{A}_{0,0}(x^{(p-j)i})=\left(\mathcal{A}_{0,0}(x^{p-j})\right)^{i}\). Assume that each data column has an imaginary bit attached at the end; thus, (12) indicates that each \(\mathbf{b}_{p-1+i},i\in[1,r)\) can be obtained by bit-wise XORing the first \(p\) columns after each column has been subjected to down-cyclic shifts according to \(i\) times the corresponding column index size. Each parity column needs to remove the last bit in the result. The above is consistent with the graphical representation of the generalized RDP code, as shown in Fig. 2. This completes the proof. Lemma 1 explicitly provides a binary parity-check matrix for the generalized RDP\((p+r-1,r)\), which was not given in other literature. From the perspective of the binary parity-check matrix, all fast computations for the generalized RDP codes, such as those proposed in [21, 27], can be regarded as scheduling schemes for matrix operations over binary fields. Furthermore, any existing scheduling algorithm for general matrix operations over binary fields can be used to accelerate the computation of generalized RDP codes. Recall that Theorem 1 established a connection between generalized RDP codes and shortened IP codes, which can be viewed as a special case of the connection between V-ESIP codes and ESIP codes. It remains an open problem whether there exists a general connection between V-ESIP and ESIP codes, as well as between V-ETBR and ETBR codes. The next section is devoted to these issues. ## IV Connections & explicit constructions This section presents the general connections between the variant codes defined above and their corresponding codes over polynomial rings. Some explicit constructions for the V-ETBR/V-ESIP MDS array codes are then proposed. ### _General connections_ This subsection starts by exploring the conditions for V-ETBR/V-ESIP codes to be binary MDS array codes. We first analyze the rank of the square matrix \(\mathcal{T}_{\ell,\ell,m}(V)\). The following lemma is useful.
**Lemma 2**.: _Assume that \(\ell_{0}\geq 2,\ell_{1}\geq 2\), and \(x^{\tau}+1|a_{i,j}+a_{i,k}\) with \(a_{i,j},a_{i,k}\in\mathbb{R},i\in[0,\ell_{0}),j,k\in[0,\ell_{1})\). Let each \(\mathbf{a}_{i,j}\) denote the binary coefficient vector of \(a_{i,j}\), i.e., \(a_{i,j}=\mathbf{a}_{i,j}\cdot(1,x,\cdots,x^{m-1})^{\mathrm{T}}\), and let \(\overline{\mathbf{a}}_{i,j}\) denote the vector after \(\mathbf{a}_{i,j}\) deletes the last \(\tau\) elements. If the vectors \((\overline{\mathbf{a}}_{i,0},\cdots,\overline{\mathbf{a}}_{i,\ell_{1}-1})\in\mathbb{F}_{2}^{1\times(m-\tau)\ell_{1}}\), \(i\in[0,\ell_{0})\), satisfy \(\sum_{i=0}^{\ell_{0}-1}c_{i}\cdot(\overline{\mathbf{a}}_{i,0},\cdots,\overline{\mathbf{a}}_{i,\ell_{1}-1})=\mathbf{0}_{1\times(m-\tau)\ell_{1}}\) for some \(c_{0},\cdots,c_{\ell_{0}-1}\in\mathbb{F}_{2}\), then \(\sum_{i=0}^{\ell_{0}-1}c_{i}\cdot(\mathbf{a}_{i,0},\cdots,\mathbf{a}_{i,\ell_{1}-1})=(\mathbf{0}_{1\times(m-\tau)},\mathbf{u}|\cdots|\mathbf{0}_{1\times(m-\tau)},\mathbf{u})\) for some \(\mathbf{u}\in\mathbb{F}_{2}^{\tau}\)._

**Lemma 3**.: _Let \(V=[v_{i,j}]\in\mathbb{R}^{\ell\times\ell}\) with \(\ell\geq 2\). Then \(\mathcal{T}_{\ell,\ell,m}(V)\) has full rank over \(\mathbb{F}_{2}\) if the following conditions hold: 1) \(V\) is invertible over \(\mathbb{R}_{p,\tau}\); 2) each \(\mathcal{B}_{\tau}(i,\ell),i\in[0,\ell)\), has full row rank over \(\mathbb{F}_{2}\); 3) \(x^{\tau}+1|v_{i,j}\) for all \(i\in[0,\ell),j\in[0,\ell)\), or \(v_{0,j}=1\) for all \(j\in[0,\ell)\) and \(x^{\tau}+1|v_{i,j}+v_{i,k}\) for all \(i\in[1,\ell)\) and \(j\neq k\). (Here \(\mathcal{B}_{\tau}(i,\ell)\), \(\mathbf{v}_{i,\ell}\), \(\overline{\mathbf{v}}_{i,\ell}\), and \(q_{i,\ell}\) are as specified in TABLE I and Remark 3.)_

Proof.: According to TABLE I, \(\mathcal{T}_{\ell,\ell,m}(V)\) is composed of \(\mathcal{B}_{\tau}(0,\ell),\mathcal{B}_{\tau}(1,\ell),\cdots,\mathcal{B}_{\tau} (\ell-1,\ell)\). Since each \(\mathcal{B}_{\tau}(i,\ell),i\in[0,\ell)\), has full row rank, we only need to prove that \(\overline{\mathbf{v}}_{0,\ell},\overline{\mathbf{v}}_{1,\ell},\cdots,\overline{\mathbf{v}}_{\ell-1,\ell}\) are \(\mathbb{F}_{2}\)-linearly independent.
By contradiction, assume that \(\overline{\mathbf{v}}_{0,\ell},\overline{\mathbf{v}}_{1,\ell},\cdots, \overline{\mathbf{v}}_{\ell-1,\ell}\) are \(\mathbb{F}_{2}\)-linearly dependent, i.e., \(\sum_{i=0}^{\ell-1}c_{i}\overline{\mathbf{v}}_{i,\ell}=\mathbf{0}_{1\times(m- \tau)\ell}\), where each \(c_{i}\in\mathbb{F}_{2}\) and \(c_{0},c_{1},\cdots,c_{\ell-1}\) are not all zero. We first consider the third condition of \(x^{\tau}+1|v_{i,j}\). According to Remark 3 and the fact that \(x^{\tau}+1|v_{i,j}\), each \(\mathbf{v}_{i,\ell}\) is the vector consisting of the binary coefficient vectors in \(q_{i,\ell}\cdot(v_{i,0},v_{i,1},\cdots,v_{i,\ell-1})\), where \(q_{i,\ell}\in\mathbb{R}\setminus\{0\}\) and \(\deg(q_{i,\ell})<m-\tau\). Then \(\sum_{i=0}^{\ell-1}c_{i}\mathbf{v}_{i,\ell}=(\mathbf{0}_{1\times m},\cdots, \mathbf{0}_{1\times m})\). Therefore, we have \(\sum_{i=0}^{\ell-1}c_{i}q_{i,\ell}\cdot v_{i,j}=0\mod x^{m}+1\) for \(j\in[0,\ell)\). By taking \(j=0,1,\cdots,\ell-1\), the above equations can be converted into \[\Gamma_{0}\cdot\begin{pmatrix}c_{0}\cdot q_{0,\ell}\\ c_{1}\cdot q_{1,\ell}\\ \vdots\\ c_{\ell-1}\cdot q_{\ell-1,\ell}\end{pmatrix}=\mathbf{0}^{\mathrm{T}}, \tag{14}\] where \(\mathbf{0}\) is a zero-row vector and \[\Gamma_{0}=\left(\begin{array}{cccc}v_{0,0}&v_{1,0}&\cdots&v_{\ell-1,0}\\ v_{0,1}&v_{1,1}&\cdots&v_{\ell-1,1}\\ \vdots&\vdots&\ddots&\vdots\\ v_{0,\ell-1}&v_{1,\ell-1}&\cdots&v_{\ell-1,\ell-1}\end{array}\right). \tag{15}\] In (14), all operations are performed in \(\mathbb{R}\). Noting that each \(c_{i}\in\mathbb{F}_{2}\) and \(\deg(q_{i,\ell})<m-\tau\), we can solve the above linear equations in \(\mathbb{R}_{p,\tau}\). Since \(|\Gamma_{0}|=|V|\) is invertible over \(\mathbb{R}_{p,\tau}\), the entries \(c_{0}q_{0,\ell},\cdots,c_{\ell-1}q_{\ell-1,\ell}\) in (14) must all be zero according to Cramer's rule. Moreover, each \(q_{i,\ell}\neq 0\) with \(\deg(q_{i,\ell})<m-\tau\), so that \(c_{0}=c_{1}=\cdots=c_{\ell-1}=0\). This contradicts the assumption at the beginning. Consider the third condition of \(x^{\tau}+1|v_{i,j}+v_{i,k}\) instead; then Lemma 2 gives \(\sum_{i=0}^{\ell-1}c_{i}\mathbf{v}_{i,\ell}=(\mathbf{0}_{1\times(m-\tau)}, \mathbf{u}|\cdots|\mathbf{0}_{1\times(m-\tau)},\mathbf{u})\), where \(\mathbf{u}\in\mathbb{F}_{2}^{\tau}\). Since \(v_{0,j}=1,\forall j\in[0,\ell)\), we have \[\begin{split}\sum_{i=1}^{\ell-1}c_{i}\mathbf{v}_{i,\ell}\\ = c_{0}\cdot\mathbf{v}_{0,\ell}+(\mathbf{0}_{1\times(m-\tau)}, \mathbf{u}|\cdots|\mathbf{0}_{1\times(m-\tau)},\mathbf{u})=(\mathbf{u}^{ \prime}|\cdots|\mathbf{u}^{\prime}),\end{split} \tag{16}\] where \(\mathbf{u}^{\prime}\in\mathbb{F}_{2}^{m}\). Based on the fact that any two blocks of the form in (16) sum to zero, we have \(\sum_{i=1}^{\ell-1}c_{i}q_{i,\ell}\cdot(v_{i,j}+v_{i,k})=0\mod x^{m}+1\) for \(0\leq j<k<\ell\). By taking \((j,k)=(0,1),(0,2),\cdots,(0,\ell-1)\), the above equations can be converted into \[\Gamma_{1}\cdot\begin{pmatrix}c_{1}\cdot q_{1,\ell}\\ c_{2}\cdot q_{2,\ell}\\ \vdots\\ c_{\ell-1}\cdot q_{\ell-1,\ell}\end{pmatrix}=\mathbf{0}^{\mathrm{T}}, \tag{17}\] where \(\mathbf{0}\) is a zero-row vector and \[\Gamma_{1}=\left(\begin{array}{cccc}v_{1,0}+v_{1,1}&\cdots&v_{\ell-1,0}+v_{ \ell-1,1}\\ v_{1,0}+v_{1,2}&\cdots&v_{\ell-1,0}+v_{\ell-1,2}\\ \vdots&\ddots&\vdots\\ v_{1,0}+v_{1,\ell-1}&\cdots&v_{\ell-1,0}+v_{\ell-1,\ell-1}\end{array}\right). \tag{18}\] In (17), all operations are performed in \(\mathbb{R}\).
Noting that each \(c_{i}\in\mathbb{F}_{2}\) and \(\deg(q_{i,\ell})<m-\tau\), we next solve the above linear equations in \(\mathbb{R}_{p,\tau}\). One can know that the determinant of \(\Gamma_{1}\) is equal to \[\left|\begin{array}{cccc}1&0&\cdots&0\\ 1&v_{1,0}+v_{1,1}&\cdots&v_{\ell-1,0}+v_{\ell-1,1}\\ \vdots&\vdots&\ddots&\vdots\\ 1&v_{1,0}+v_{1,\ell-1}&\cdots&v_{\ell-1,0}+v_{\ell-1,\ell-1}\end{array}\right| \tag{19}\] \[= \left|\begin{array}{cccc}1&v_{1,0}&\cdots&v_{\ell-1,0}\\ 1&v_{1,1}&\cdots&v_{\ell-1,1}\\ \vdots&\vdots&\ddots&\vdots\\ 1&v_{1,\ell-1}&\cdots&v_{\ell-1,\ell-1}\end{array}\right|=|V|,\] where the first equality is obtained by subtracting the appropriate multiple of the first column from all other columns. Thus, \(|\Gamma_{1}|=|V|\) is also invertible over \(\mathbb{R}_{p,\tau}\). Then \(c_{1}q_{1,\ell},\cdots,c_{\ell-1}q_{\ell-1,\ell}\) in (17) must all be zero according to Cramer's rule. Similarly, note that each \(q_{i,\ell}\neq 0\) with \(\deg(q_{i,\ell})<m-\tau\), so we must have \(c_{1}=\cdots=c_{\ell-1}=0\), and then \(c_{0}=0\). This contradicts the assumption at the beginning. This completes the proof. In Lemma 3, the latter two conditions are easily met. More precisely, the third condition only requires that any \(v_{i,j}\) is a multiple of \(x^{\tau}+1\), and the second condition can be satisfied by the following lemma, which is easily obtained through the proof of Proposition 6 in [25]. **Lemma 4**.: _([25]) If \(\gcd(a,x^{m}+1)=x^{\tau}+1\) or \(\gcd(a+b,x^{m}+1)=x^{\tau}+1\), where \(a,b\in\mathbb{R}\), then \(\mathcal{A}_{\tau}(a,b)\) has full row rank over \(\mathbb{F}_{2}\)._ We now present the connection between V-ETBR/V-ESIP codes and ETBR/ESIP codes. **Corollary 1**.: _(_**Connection**_) V-ETBR\((n,r,m=p\tau,H)\) is a binary MDS array code if_ 1. _The corresponding ETBR_\((n,r,m,H)\) _is an MDS code over_ \(\mathbb{R}_{p,\tau}\)_._ 2. _For any_ \(0\leq i<r,0\leq j<n\)_,_ \(\gcd(h_{i,j},x^{m}+1)=x^{\tau}+1\)_._ _When \(h_{0,j}=1,\forall j\in[0,n)\), the above last condition is replaced with_ 2'. _For any_ \(1\leq i<r,0\leq j,k<n\) _and_ \(j\neq k\)_,_ \(\gcd(h_{i,j},x^{m}+1)=x^{\tau}+1\) _or_ \(\gcd(h_{i,j}+h_{i,k},x^{m}+1)=x^{\tau}+1\)_._ Proof.: We only need to prove that any \(\mathcal{T}_{\ell,\ell,m}(V)\) with \(\ell=r\) has full rank, where all elements in \(V\) are determined by \(H\). This is easily derived from Lemmas 3 and 4. **Corollary 2**.: _(_**Connection**_) When the first row of \(H\) in \(H^{\prime}\) is an all-one row, the V-ESIP\((n,r,m=p\tau,H^{\prime})\) is a binary MDS array code if_ 1. _The corresponding ESIP_\((n,r,m,H^{\prime})\) _is an MDS code over_ \(\mathbb{R}_{p,\tau}\)_._ 2. _For any_ \(1\leq i<r\) _and_ \(0\leq j<k<n\)_,_ \(x^{\tau}+1|h_{i,j}+h_{i,k}\)_._ Proof.: Without loss of generality, we only need to prove that \(\mathcal{T}_{\ell,\ell,m}(V)\) for any \(1<\ell\leq r\) has full rank, where the elements in \(\mathcal{T}_{\ell,\ell,m}(V)\) are determined by \(H^{\prime}\) and \(\{v_{0,j}=1\}_{j=0}^{\ell-1}\). We prove this via Lemma 3. First, the first condition of Lemma 3 is satisfied since the corresponding ESIP\((n,r,m,H^{\prime})\) is an MDS code over \(\mathbb{R}_{p,\tau}\). Furthermore, the fact that every \(V\) with \(\ell=2\) is invertible over \(\mathbb{R}_{p,\tau}\) leads to \(\gcd(h_{i,j}+h_{i,k},f_{p,\tau}(x))=1,1\leq i<r,0\leq j<k<n\). Recalling the condition \(x^{\tau}+1|h_{i,j}+h_{i,k}\), we then have \(\gcd(h_{i,j}+h_{i,k},x^{m}+1)=x^{\tau}+1\).
One can easily see from Lemma 4 that the second condition of Lemma 3 is thus satisfied. The latter third condition of Lemma 3 obviously holds. This completes the proof. **Remark 4**.: _Now, the correctness of Theorem 1 can be readily proven by Corollary 2, simply by setting \(\tau=1\). Theorem 1 requires \(p\) to be an odd prime number for shortened IP codes. Corollary 2 provides additional clarification by demonstrating that \(p\) only needs to be odd._ In Corollary 2, the rightmost end of \(H^{\prime}\) is not necessarily an identity matrix. Since the existence of an identity matrix can simplify encoding, we consider the following case. Note that the following case does not require the first row of \(H\) to be an all-one row, only the last column to be constrained. **Corollary 3**.: _(_**Connection**_) Assuming the rightmost end of \(H^{\prime}\) is an \(r\times r\) identity matrix, i.e., the last column of \(H\) is \((1,0,0,\cdots,0)^{\text{T}}\), then the V-ESIP\((n,r,m=p\tau,H^{\prime})\) is a binary MDS array code if_ 1. _The corresponding ESIP_\((n,r,m,H^{\prime})\) _is an MDS code over_ \(\mathbb{R}_{p,\tau}\)_._ 2. _For any_ \(0\leq i<r\) _and_ \(0\leq j<n-1\)_,_ \(\gcd(h_{i,j},x^{m}+1)=x^{\tau}+1\)_._ Proof.: Without loss of generality, we only need to prove that \(\mathcal{T}_{\ell,\ell,m}(V)\) for any \(1\leq\ell\leq r\) has full rank, where the elements in \(\mathcal{T}_{\ell,\ell,m}(V)\) are determined by \(H\) after removing the last column. Similarly, we prove this via Lemma 3. First, the first condition of Lemma 3 is satisfied since the corresponding ESIP\((n,r,m,H^{\prime})\) is an MDS code over \(\mathbb{R}_{p,\tau}\). Lemma 4 and \(\gcd(h_{i,j},x^{m}+1)=x^{\tau}+1\) imply that the second condition of Lemma 3 holds. Finally, the former third condition of Lemma 3 obviously holds. This completes the proof. ### _Explicit constructions_ We next present some explicit constructions for the V-ETBR/V-ESIP MDS array codes. To begin with, suppose that \(f_{p,1}(x)\) in (2) can be completely factorized into \(f_{p,1}(x)=f_{0}(x)\cdot f_{1}(x)\cdots f_{u-1}(x),\) where each \(f_{i}(x)\) is an irreducible polynomial in \(\mathbb{F}_{2}[x]\) and \(\lambda=\deg(f_{0}(x))\leq\deg(f_{1}(x))\leq\cdots\leq\deg(f_{u-1}(x))\). Note that \(\lambda=p-1\) if \(2\) is a primitive element in the \(p\)-ary finite field \(\mathbb{F}_{p}\)[2]. Then we have the following construction for the V-ESIP MDS array codes with any number of parity columns, based on Cauchy matrices. **Construction 1**.: _(_**V-ESIP MDS**_ array codes with \(r\geq 2\)) Let \(\{a_{0},\cdots,a_{r-1}\}\) and \(\{b_{0},\cdots,b_{n-2}\}\) be two sets of elements from \(\mathbb{R}\), where \(\deg(a_{i})<\lambda,\deg(b_{j})<\lambda\) and \(a_{i}\neq b_{j}\) for any \(i,j\). Then the V-ESIP\((n,r\geq 2,m=p\tau,H^{\prime}=[H_{I}|I_{r\times r}])\) is a binary MDS array code, where \(H_{I}=[(x^{\tau}+1)\cdot g_{i,j}]\in\mathbb{R}^{r\times(n-1)}\) and \(g_{i,j}\) denotes the inverse of \(a_{i}+b_{j}\) over \(\mathbb{R}_{p,\tau}\), which always exists since the degree of \(a_{i}+b_{j}\) is less than \(\lambda\)._ Proof.: We prove this via Corollary 3. Let \(H^{\prime}_{I}=[g_{i,j}=\frac{1}{a_{i}+b_{j}}]\in\mathbb{R}_{p,\tau}^{r\times(n-1)}\) be a Cauchy matrix.
Obviously, the determinant of any square sub-matrix of \(H^{\prime}_{I}\) is invertible over \(\mathbb{R}_{p,\tau}\), since it is the product of some elements in the sets \(\{a_{i}+a_{j}\}_{i\neq j},\{b_{i}+b_{j}\}_{i\neq j},\{\frac{1}{a_{i}+b_{j}}\}\)[28], where any \(a_{i},b_{j}\) has degree less than \(\lambda\). Note that the determinant of the corresponding square sub-matrix of \(H_{I}\) is a power of \(x^{\tau}+1\) times the above result, and \(\gcd(x^{\tau}+1,f_{p,\tau}(x))=1\). This results in the determinant of any square sub-matrix of \(H_{I}\) also being invertible over \(\mathbb{R}_{p,\tau}\). Thus, the first condition of Corollary 3 is satisfied. Furthermore, we have \(\gcd((x^{\tau}+1)g_{i,j},f_{p,\tau}(x))=1\), leading to the second condition of Corollary 3 being satisfied. This completes the proof. Based on Vandermonde matrices, the following provides the construction of the V-ETBR MDS array codes with any number of parity columns. Note that this construction can also be found in [25]. For completeness, a different proof is provided based on the connection between V-ETBR and ETBR codes in Corollary 1. From the proof of any construction proposed in this paper, one can see that the codes over \(\mathbb{R}_{p,\tau}\) corresponding to the variant codes are also MDS codes. In addition, [25] provided a fast scheduling scheme for the syndrome computation of the Vandermonde-based V-ETBR MDS array codes with \(r\leq 3\). The next section of this paper will propose the generalization of this scheme suitable for any size of \(r\). To simplify the representation of elements in the Vandermonde matrix \(H\), we let \(h_{0,i}=1,\forall i\in[0,n),\) and \(h_{i}:=h_{1,i},\forall i\in[0,n),\) such that \(h_{j,i}=h_{i}^{j},\forall j\in[1,r),i\in[0,n)\). **Construction 2**.: _(V-ETBR MDS array codes with \(r\geq 2\)) Let \(H\in\mathbb{R}^{r\times n}\) be a Vandermonde matrix, \(n=2^{n_{0}},n_{0}\leq\lambda\), and \(h_{i}=(1+x^{\tau})\cdot h_{i}^{\prime},\forall i\in[0,n)\), where \(\{h_{i}^{\prime}\}_{0\leq i<n}\) is given by \(h_{0}^{\prime}=0\) and \(h_{i+2^{j}}^{\prime}=h_{i}^{\prime}+x^{j},0\leq j<n_{0},0\leq i<2^{j}\). Then the V-ETBR\((n,2\leq r<n,m=p\tau,H)\) is a binary MDS array code._ Proof.: We prove this via Corollary 1. Since the degree of any \(h_{i}^{\prime}\) is less than \(\lambda\), we have \(\gcd(h_{i}^{j},x^{m}+1)=(x^{\tau}+1)\cdot\gcd((h_{i}^{\prime})^{j},f_{p,1}^{\tau}(x))=x^{\tau}+1\), where \(1\leq i<n\) and \(1\leq j<r\). This means that the latter second condition of Corollary 1 holds. For the first condition of Corollary 1, we have \(\gcd(h_{j}+h_{k},f_{p,\tau}(x))=\gcd(h_{j}^{\prime}+h_{k}^{\prime},f_{p,1}^{\tau}(x))=1\), where \(0\leq j<k<n\). Then any \(r\times r\) Vandermonde sub-matrix of \(H\) is invertible over \(\mathbb{R}_{p,\tau}\), leading to the ETBR\((n,r,m=p\tau,H)\) being an MDS code over \(\mathbb{R}_{p,\tau}\). This completes the proof. According to Corollary 2, it is not difficult to check that the V-ESIP\((n,r=3,m=p\tau,H^{\prime})\) is also a binary MDS array code if \(H^{\prime}\) is determined by \(H\) in Construction 2. The code has a systematic parity-check matrix that contains a \(3\times 3\) identity matrix and has been focused on in [25]. The following provides the Vandermonde matrix-based construction for the V-ESIP MDS array code with \(r=4\).
**Construction 3**.: _(V-ESIP MDS array codes with \(r=4\)) Let \(H\in\mathbb{R}^{r\times n}\) be a Vandermonde matrix, \(n=2^{n_{1}}+1,n_{1}\leq w=\lfloor\frac{\lambda-1}{2}\rfloor\), \(h_{n-1}=0\), and \(h_{i}=(h_{i}^{\prime}+x^{w})\cdot(1+x^{\tau})\), where \(i\in[0,2^{n_{1}})\) and \(\{h_{i}^{\prime}\}_{0\leq i<2^{n_{1}}}\) is given by \(h_{0}^{\prime}=0,h_{i+2^{j}}^{\prime}=h_{i}^{\prime}+x^{j},0\leq j<n_{1},0 \leq i<2^{j}\). Then the V-ESIP\((n,r=4,m=p\tau,H^{\prime})\) is a systematic binary MDS array code._ Proof.: We prove this via Corollary 2. The second condition in Corollary 2 obviously holds. For the first condition in Corollary 2, we only need to prove that any \(4\times 4\) sub-matrix of \(H^{\prime}\) is invertible over \(\mathbb{R}_{p,\tau}\). Specifically, we first consider any \(4\times 4\) sub-matrix of \(H\) in \(H^{\prime}\), which is a Vandermonde square matrix. Clearly, for any \(0\leq i<j<n-1\), we have that \(\gcd(h_{i}+h_{j},f_{p,\tau}(x))=\gcd(h_{i}^{\prime}+h_{j}^{\prime},f_{p,1}^{\tau}(x))\) and \(\gcd(h_{i}+h_{n-1},f_{p,1}^{\tau}(x))=\gcd(h_{i}^{\prime}+x^{w},f_{p,1}^{\tau}(x))\). Since each \(\deg(h_{i}^{\prime})<w<\lambda\), then \(\gcd(h_{i}+h_{j},f_{p,\tau}(x))=1,\forall 0\leq i<j<n\). This indicates that any \(4\times 4\) sub-matrix of \(H\) is invertible over \(\mathbb{R}_{p,\tau}\). Next, we focus on the remaining cases. We only need to determine if the following matrices are invertible over \(\mathbb{R}_{p,\tau}\), \[\begin{pmatrix}1&1\\ h_{i}^{3}&h_{j}^{3}\end{pmatrix},\quad\begin{pmatrix}1&1&1\\ h_{i}&h_{j}&h_{k}\\ h_{i}^{3}&h_{j}^{3}&h_{k}^{3}\end{pmatrix},\quad\begin{pmatrix}1&1&1\\ h_{i}^{2}&h_{j}^{2}&h_{k}^{2}\\ h_{i}^{3}&h_{j}^{3}&h_{k}^{3}\end{pmatrix}, \tag{20}\] where \(0\leq i<j<k<n\). According to generalized Vandermonde determinants [29], the determinants of the above three matrices are respectively (let \(h_{n-1}^{\prime}=0\)) \[\begin{split} h_{i}^{3}+h_{j}^{3}&=(h_{i}+h_{j})(1+x^{\tau})^{2} \cdot\left((h_{i}^{\prime})^{2}+h_{i}^{\prime}h_{j}^{\prime}+(h_{j}^{\prime})^{ 2}\right.\\ &\left.+(h_{i}^{\prime}+h_{j}^{\prime})x^{w}+x^{2w}\right), \\ h_{i}+h_{j}+h_{k}&=(1+x^{\tau})(h_{i}^{\prime}+h_{j}^{\prime}+h_{k}^{ \prime}+x^{w}),\\ h_{i}h_{j}+h_{i}h_{k}+h_{j}h_{k}&=(1+x^{\tau})^{2}\\ &\qquad\left.(h_{i}^{ \prime}h_{j}^{\prime}+h_{i}^{\prime}h_{k}^{\prime}+h_{j}^{\prime}h_{k}^{\prime }+x^{2w}),\right.\end{split} \tag{21}\] where \(0\leq i<j<k<n\). Since all \(h_{i}^{\prime},h_{j}^{\prime},h_{k}^{\prime}\) have degree less than \(w\), the above three values are not zero. Furthermore, due to \(2w<\lambda\), it is not difficult to see that they are all coprime with \(f_{p,\tau}(x)\). Then all the matrices in (20) are invertible over \(\mathbb{R}_{p,\tau}\). This completes the proof. **Remark 5**.: _It is clear that all the codes in Constructions 1, 2 and 3 allow the total number of data columns to reach exponential size with respect to the design parameter \(p\). This is suitable for the needs of large-scale storage systems [25]. In addition, it is possible to construct the new codes using other matrices, such as Moore matrices [30] and some matrices searched by computers. All proposed connections in Section IV-A offer great flexibility in constructing the variant codes._ ## V Fast Computations This section presents fast computations for the V-ETBR MDS array codes in Construction 2 and the V-ESIP MDS array codes in Construction 3.
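Before turning to the syndrome computations, note how the evaluation points of Construction 2 look in code: unrolling the recursion \(h^{\prime}_{i+2^{j}}=h^{\prime}_{i}+x^{j}\) shows that \(h^{\prime}_{i}\) contains \(x^{j}\) exactly for the positions \(j\) of the non-zero bits of \(i\), so the coefficient mask of \(h^{\prime}_{i}\) is the binary representation of \(i\) itself. A minimal Python sketch (the bitmask representation is our assumption):

```python
def construction2_points(n0, tau):
    # h_i = (1 + x^tau) * h'_i for i in [0, 2^{n0}), polynomials as int
    # bitmasks (bit d <-> coefficient of x^d); the mask of h'_i equals i.
    return [i ^ (i << tau) for i in range(1 << n0)]

# e.g. n0 = 2, tau = 1: h'_3 = 1 + x, so h_3 = (1 + x)(1 + x) = 1 + x^2.
assert construction2_points(2, 1)[3] == 0b101
```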
One can know from coding theory that the product of any parity-check matrix and its corresponding codeword is zero [31]. Formally, \(\mathbf{0}^{\mathrm{T}}=\widehat{H}\cdot\widehat{\mathbf{x}}^{\mathrm{T}}\), where \(\mathbf{0}\) denotes a zero vector, \(\widehat{H}\) denotes a binary parity-check matrix, and \(\widehat{\mathbf{x}}\) denotes the corresponding codeword. It follows that \(\widehat{H}\cdot\mathbf{x}^{\mathrm{T}}=\widehat{H}_{e}\cdot\mathbf{e}^{ \mathrm{T}}\), where \(\mathbf{x}\) denotes the codeword after all erased symbols are set to zero, \(\mathbf{e}\) denotes the vector consisting of all erased symbols, and \(\widehat{H}_{e}\) denotes the sub-matrix of \(\widehat{H}\) corresponding to \(\mathbf{e}\). The above leads to the following common framework for encoding and decoding procedures [3, 32]: (when encoding, all parity symbols can be regarded as erased symbols.) Step 1. Compute syndrome \(\mathbf{s}^{\mathrm{T}}:=\widehat{H}\cdot\mathbf{x}^{\mathrm{T}}\). Step 2. Solve linear equations \(\mathbf{s}^{\mathrm{T}}=\widehat{H}_{e}\cdot\mathbf{e}^{\mathrm{T}}\). ### _For Construction 2_ Here, \(\widehat{H}=\mathcal{T}_{r,n,m}(H)\), so the syndrome computation is \(\mathbf{s}^{\mathrm{T}}=\mathcal{T}_{r,n,m}(H)\cdot\mathbf{x}^{\mathrm{T}}\). Let \(\mathbf{x}=(\mathbf{x}_{0},\cdots,\mathbf{x}_{n-1})\) and \(\mathbf{s}=(\mathbf{s}_{0},\cdots,\mathbf{s}_{r-1})\) with each \(\mathbf{x}_{i}\in\mathbb{F}_{2}^{m-\tau},\mathbf{s}_{i}\in\mathbb{F}_{2}^{m-\tau}\). #### V-A1 Syndrome computation For any \(i\in[0,r)\), we have \[\begin{split}\mathbf{s}_{i}^{\mathrm{T}}&=\sum_{j=0}^{2^{n_{0}}-1} \mathcal{A}_{\tau,\tau}(h_{j}^{i})\cdot\mathbf{x}_{j}^{\mathrm{T}}\\ &=\sum_{j=0}^{2^{n_{0}}-1}\mathcal{A}_{\tau,\tau}\left((h_{j}^{\prime} )^{i}\cdot(1+x^{\tau})^{i}\right)\cdot\mathbf{x}_{j}^{\mathrm{T}}.\end{split} \tag{22}\] The following is dedicated to demonstrating that when \(n_{0}\) approaches infinity, \(\mathbf{s}\) can be calculated with the asymptotic complexity of \(\lfloor\lg r\rfloor+1\) XORs per data bit. We first focus on the auxiliary calculation, i.e., \((\mathbf{s}_{i}^{*})^{\mathrm{T}}=\sum_{j=0}^{2^{n_{0}}-1}\mathcal{A}_{0,0}\left((h_{j}^{\prime})^{i}\cdot(1+x^{\tau})^{i}\right)\cdot(\mathbf{x}_{j}^{*})^{\mathrm{T}}\), where each \(\mathbf{x}_{j}^{*}=(\mathbf{x}_{j},0,\cdots,0)\in\mathbb{F}_{2}^{m}\). Since circulant matrices commute, \(\mathbf{s}_{i}\) consists of the first \(m-\tau\) entries of \(\mathbf{s}_{i}^{*}\), that is, \[\mathbf{s}_{i}^{\mathrm{T}}=\mathcal{A}_{\tau,0}\left((1+x^{\tau})^{i}\right)\cdot P(i,\{\mathbf{x}_{j}^{*}\}_{j=0}^{2^{n_{0}}-1}), \tag{23}\]
In (25), \(b(i)\) is the number of 1's in the binary representation of \(i\),_ \[\delta(i;t;\ell_{0},\ell_{1},\cdots,\ell_{t-1}) \tag{26}\] \[= \begin{cases}x^{\ell_{0}i_{0}},&\text{if }t=1,\\ \sum_{i_{1}\in\mathcal{C}_{i_{0}}^{\ell_{0}}:i_{2}\in\mathcal{C}_{i_{1}}^{ \ell_{1}}:\atop\cdots\cdot\cdot\cdot\ell_{t-1}\in\mathcal{C}_{i_{t-2}}^{\ell_ {t}}}\!\!x^{\ell_{0}i_{0}+\sum_{\xi=1}^{t-1}(\ell_{\xi}-\ell_{\xi-1})\cdot i_{ \xi}},&\text{if }t>1,\end{cases}\] _where \(i_{0}=i\), \(C_{i}\) is the set consisting of the positions of non-zero bits in the binary representation of \(i\), i.e., \(i=\sum_{j\in C_{i}}2^{j}\), and \(C_{i}^{\prime}=\{j|0<j<i,C_{j}\subset C_{i}\}.\)_ Proof.: Please see Appendix A for the details. From Lemma 5, the syndrome computation in (23) can be completed through the following steps (pre-processing: calculate all required parameters \(\delta\) according to (26)): 1. Calculate \(\{\mathbf{y}_{i}|i\in[0,2^{n_{0}}),b(i)\leq[\lg r]\}\) via the Reed-Muller transform with input \((\mathbf{x}_{0}^{*},\cdots,\mathbf{x}_{2^{n_{0}}-1}^{*})\) and transform matrix \(R_{n_{0}}\otimes I_{m}\). Note that \(\lfloor\lg r\rfloor=\max\{b(i)|i\in[1,r)\}\). 2. Calculate \(\{P(i,\{\mathbf{x}_{j}^{*}\}_{j}\}_{j=0}^{2^{n_{0}}-1})\}_{i=0}^{r-1}\) according to (25). 3. Calculate \(\{\mathbf{s}_{i}\}_{i=0}^{r-1}\) according to (23). In Step 2, the operation of multiplying \(\mathcal{A}_{0,0}(\delta(\bullet))\) by a vector can be simplified when \(r<8\) due to the fact that \(\delta(\bullet)\) is the sum of at most two monomials in this case. Specifically, we have \(\mathcal{A}_{0,0}(x^{i}+x^{j})=\mathcal{A}_{0,0}(x^{i})+\mathcal{A}_{0,0}(x^{j})\) for all \(i,j\in\mathbb{N}\), and each \(\mathcal{A}_{0,0}(x^{i})\) can be viewed as a cyclic-shift operator that shifts a vector by \(i\) bits. Therefore, when \(r<8\), the multiplication of \(\mathcal{A}_{0,0}(\delta(\bullet))\) by a vector can be easily implemented using at most one vector addition and one circular shift. When \(r\geq 8\), it is best to use matrix-vector multiplication for this operation. This is because \(\delta(\bullet)\) contains many monomials that need to be summed, and there exist scheduling algorithms that can reduce the complexity of matrix-vector multiplications, such as those described in [23, 24]. In Step 3, we have \(\mathcal{A}_{\tau,0}\left((1+x^{\tau})^{i}\right)=\mathcal{A}_{\tau,0}(1)\cdot \Pi_{j\in C_{i}}\mathcal{A}_{0,0}(1+x^{\tau 2^{j}})\), leading to Step 3 being completed in the same way as above. #### Iii-B2 Complexity analysis In the above process, Step 1 requires only a portion of the Reed-Muller transform, and one can know from [3] that it produces XORs with the number of \((m-\tau)\cdot((\lfloor\lg r\rfloor+1)n+o(n))\)[3], where little-o notation is used to describe an upper bound that cannot be tight. Step 2 produces matrix-vector multiplications and vector additions that are both \(\sum_{i=1}^{r-1}\sum_{t=1}^{b(i)}\binom{n_{0}}{t}-r+1\). When \(r\) is a constant, it is not difficult to check that \(\lim_{m_{0}\rightarrow\infty}\frac{\sum_{i=1}^{r-1}\sum_{t=0}^{b(i)}\binom{n_{0 }}{t}}{2^{n_{0}}/m_{0}}=0\). Thus, the total number of XORs required for Step 2 is \(m^{2}\cdot o(2^{n_{0}}/n_{0})\). Step 3 produces \(r-1\) matrix-vector multiplications. In summary, when \(r\) and \(\tau\) are constants and \(n=2^{n_{0}}\) approaches infinity, the asymptotic complexity of the above syndrome computation is dominated by Step 1, and requires \(\lfloor\lg r\rfloor+1\) XORs per data bit. 
Note that \(m=p\tau\) and \(p=\Theta(n_{0})\), where big-\(\Theta\) notation is used to describe a bound within a constant factor. For visualization, TABLE II lists the total computational complexities required for the proposed syndrome computation with different parameters. It can be observed that the numerical results are close to the theoretical ones, especially when \(n_{0}\) is large enough. Indeed, the syndrome computation proposed in [25], which reaches an asymptotic complexity of two XORs per data bit, is a special case of the above scheme at \(r=3\). ### _For Construction 3_ Here, let \(\mathbf{x}=(\mathbf{x}_{0},\cdots,\mathbf{x}_{n+3})\), with each \(\mathbf{x}_{i}\in\mathbb{F}_{2}^{m-\tau}\), be a codeword, and let \(\mathbf{s}=(\mathbf{s}_{0},\cdots,\mathbf{s}_{3})\), with each \(\mathbf{s}_{i}\in\mathbb{F}_{2}^{m-\tau}\), be the corresponding syndrome. Note that in Construction 3, \(n=2^{n_{1}}+1\) and the parity-check matrix \(\mathcal{T}_{r,n,m}(H^{\prime})\) is systematic. #### V-B1 Syndrome computation For any \(i\in[0,4)\), we have \[\begin{split}\mathbf{s}_{i}^{\mathrm{T}}&=\mathbf{x}_{2^{n_{1}}+i}^{\mathrm{T}}+\sum_{j=0}^{2^{n_{1}}-1}\mathcal{A}_{\tau,\tau}(h_{j}^{i})\cdot\mathbf{x}_{j}^{\mathrm{T}}\\ &=\mathbf{x}_{2^{n_{1}}+i}^{\mathrm{T}}+\sum_{j=0}^{2^{n_{1}}-1}\mathcal{A}_{\tau,0}\left((h_{j}^{\prime}+x^{w})^{i}(1+x^{\tau})^{i}\right)\cdot(\mathbf{x}_{j}^{*})^{\mathrm{T}}\\ &=\mathbf{x}_{2^{n_{1}}+i}^{\mathrm{T}}+\mathcal{A}_{\tau,0}\left((1+x^{\tau})^{i}\right)\sum_{j=0}^{2^{n_{1}}-1}\mathcal{A}_{0,0}\left((h_{j}^{\prime}+x^{w})^{i}\right)\cdot(\mathbf{x}_{j}^{*})^{\mathrm{T}},\end{split} \tag{28}\] where each \(\mathbf{x}_{j}^{*}=(\mathbf{x}_{j},0,0,\cdots,0)\in\mathbb{F}_{2}^{m}\). From the above formula, we can derive (27), which is shown at the bottom of the previous page. This means that this syndrome computation can also be accelerated by the calculation of (24). From the above, the syndrome computation can be completed through the following steps: 1. Calculate \(\{\mathbf{y}_{i}\,|\,i\in[0,2^{n_{1}}),\,b(i)\leq 2\}\) via the Reed-Muller transform with input \((\mathbf{x}_{0}^{*},\cdots,\mathbf{x}_{2^{n_{1}}-1}^{*})\) and transform matrix \(R_{n_{1}}\otimes I_{m}\). 2. Calculate \(\{P(i,\{\mathbf{x}_{j}^{*}\}_{j=0}^{2^{n_{1}}-1})\}_{i=0}^{3}\) according to (25). 3. Calculate (28) according to (27). #### V-B2 Complexity analysis It is clear that Step 3 only requires a few vector additions and circular shifts. When \(r,\tau\) are constants and \(n_{1}\) approaches infinity, the asymptotic complexity of the above is dominated by the first two steps, and it is the same as that in Sec. V-A2, i.e., \(\lfloor\lg r\rfloor+1=3\) XORs per data bit. ### _Comparison with existing codes_ We have proposed fast syndrome computations for the new binary MDS array codes. In the remaining step of the encoding/decoding process, i.e., solving linear equations, the inverse of the coefficient matrix only needs to be calculated once in practice to handle a large amount of data. This results in the computational complexity of solving linear equations being dominated by matrix-vector multiplication, which requires at most \(r^{2}(m-\tau)^{2}\) XORs.1 If \(r,\tau\) are constants, then \(\lim_{n\rightarrow\infty}\frac{r^{2}(m-\tau)^{2}}{(m-\tau)n}=0\), where \(m=p\tau\) and \(p=\Theta(\lg n)\). Thus, in asymptotic analysis, the total computational complexity of encoding/decoding is dominated by syndrome computation. 
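As a complement, the "solve linear equations" step of the common framework recalled at the start of this section is a plain binary linear solve. Below is a minimal numpy sketch under the assumption of an invertible coefficient matrix, which the MDS property guarantees for any correctable erasure pattern; the 3×3 system is a toy example, and a production decoder would instead precompute the inverse once and reuse it, as noted above:

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination.

    A: (k, k) uint8 matrix playing the role of H_e; b: (k,) uint8 syndrome
    vector. A is assumed to be invertible over GF(2).
    """
    k = A.shape[0]
    M = np.concatenate([A % 2, (b % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    for col in range(k):
        pivot = col + int(np.flatnonzero(M[col:, col])[0])  # first row with a 1
        M[[col, pivot]] = M[[pivot, col]]                   # row swap
        for row in range(k):
            if row != col and M[row, col]:
                M[row] ^= M[col]                            # eliminate column
    return M[:, -1]

# toy usage: recover the erased symbols e from s = H_e . e
H_e = np.array([[1, 1, 0],
                [0, 1, 1],
                [0, 0, 1]], dtype=np.uint8)
e_true = np.array([1, 0, 1], dtype=np.uint8)
s = H_e.dot(e_true) % 2
assert (solve_gf2(H_e, s) == e_true).all()
```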
TABLE III lists the asymptotic complexities of different MDS array codes. The fourth column shows the maximum number of data columns for each code, and the fifth column shows the asymptotic complexities of encoding and decoding, both of which are equal. It can be observed that the proposed codes not only have a more flexible column size and design parameter \(p\), but also have an exponentially growing total number of data columns with respect to \(p\), as well as minimal asymptotic encoding/decoding complexity. Footnote 1: In fact, this computational complexity can be reduced by scheduling algorithms for matrix-vector multiplication in the binary field, such as the “four Russians” algorithm [33] or other heuristic algorithms in [23, 24]. It is worth mentioning that the asymptotic complexities of the proposed codes are the same as those of the RS codes proposed in [3], which are the lowest asymptotic complexities known in the literature for MDS codes. However, the proposed codes require only XORs and circular shifts, whereas the RS codes require inefficient field multiplications. Furthermore, in the proposed codes, the operation of cyclically shifting an \(m\)-element vector and then XORing it bit-wise with another \(m\)-element vector can be accomplished with only \(m\) XORs. As a result, the proposed syndrome computation requires a small number of XORs when \(r<8\). For instance, with the number of information and parity columns being 128 and 4, respectively, the proposed V-ETBR and V-ESIP MDS array codes require 3.22 and 3.17 XORs per data bit to complete the syndrome computations. In contrast, even if field multiplication is implemented with the same efficiency as field addition, the corresponding RS codes in [3] require 3.35 XORs per data bit. ## VI Conclusion This paper reformulates and generalizes the variant technique of deriving generalized RDP codes from shortened IP codes, and then proposes two new classes of binary array codes, termed V-ETBR and V-ESIP codes. In particular, the connections between V-ETBR/V-ESIP codes and the codes over a special polynomial ring are presented. Based on these connections, V-ETBR and V-ESIP MDS array codes with any number of parity columns are explicitly provided. To improve encoding/decoding efficiency, this paper also proposes two fast syndrome computations that correspond to the constructed V-ETBR and V-ESIP MDS array codes. Compared to existing MDS array codes, the proposed codes not only have a significantly larger total number of data columns when the design parameters \(r,\tau\) are given, but also have the lowest asymptotic encoding/decoding complexity. More precisely, the asymptotic encoding/decoding complexity is \(\lfloor\lg r\rfloor+1\) XORs per data bit when \(r,\tau\) are constants and the total number of data columns approaches infinity. This is also the lowest known asymptotic complexity among MDS codes [3]. ## Appendix A Proof of Lemma 5 The sum in (24) can be divided into two parts, i.e., \[\begin{split}& P(i,\{\mathbf{x}_{j}^{*}\}_{j=0}^{2^{n_{0}}-1})=\sum_{j=0}^{2^{n_{0}-1}-1}\mathcal{A}_{0,0}\left((h_{j}^{\prime})^{i}\right)\cdot(\mathbf{x}_{j}^{*})^{\mathrm{T}}\\ &+\sum_{j=0}^{2^{n_{0}-1}-1}\mathcal{A}_{0,0}\left((h_{2^{n_{0}-1}+j}^{\prime})^{i}\right)\cdot(\mathbf{x}_{2^{n_{0}-1}+j}^{*})^{\mathrm{T}}. 
\end{split} \tag{30}\] From Construction 2, \((h_{2^{n_{0}-1}+j}^{\prime})^{i}\) in (30) can be converted into \[\begin{split}(h_{2^{n_{0}-1}+j}^{\prime})^{i}&=(h_{j}^{\prime}+x^{n_{0}-1})^{\sum_{\xi\in C_{i}}2^{\xi}}\\ &=\Pi_{\xi\in C_{i}}(h_{j}^{\prime}+x^{n_{0}-1})^{2^{\xi}}\\ &=\Pi_{\xi\in C_{i}}\left((h_{j}^{\prime})^{2^{\xi}}+x^{(n_{0}-1)\cdot 2^{\xi}}\right)\\ &=(h_{j}^{\prime})^{i}+x^{(n_{0}-1)i}+\sum_{\xi\in C_{i}^{\prime}}\left((h_{j}^{\prime})^{\xi}\cdot x^{(n_{0}-1)\cdot(i-\xi)}\right),\end{split} \tag{31}\] where \(C_{i}^{\prime}\) is defined in Lemma 5. Using the above formula, (30) can be reformulated as \[\begin{split}& P(i,\{\mathbf{x}_{j}^{*}\}_{j=0}^{2^{n_{0}}-1})=P(i,\{\mathbf{x}_{j}^{*}+\mathbf{x}_{2^{n_{0}-1}+j}^{*}\}_{j=0}^{2^{n_{0}-1}-1})\\ &+\mathcal{A}_{0,0}(x^{(n_{0}-1)\cdot i})\cdot\sum_{j=0}^{2^{n_{0}-1}-1}(\mathbf{x}_{2^{n_{0}-1}+j}^{*})^{\mathrm{T}}\\ &+\sum_{i_{1}\in C_{i}^{\prime}}\mathcal{A}_{0,0}(x^{(n_{0}-1)(i-i_{1})})\sum_{j=0}^{2^{n_{0}-1}-1}\mathcal{A}_{0,0}((h_{j}^{\prime})^{i_{1}})\cdot(\mathbf{x}_{2^{n_{0}-1}+j}^{*})^{\mathrm{T}}\\ &=P(i,\{(\mathbf{x}_{j}^{*})^{(1)}\}_{j=0}^{2^{n_{0}-1}-1})\\ &+\mathcal{A}_{0,0}\left(x^{(n_{0}-1)\cdot i}\right)\cdot\sum_{j=0}^{2^{n_{0}-1}-1}(\mathbf{x}_{2^{n_{0}-1}+j}^{*})^{\mathrm{T}}\\ &+\sum_{i_{1}\in C_{i}^{\prime}}\mathcal{A}_{0,0}\left(x^{(n_{0}-1)\cdot(i-i_{1})}\right)\cdot P(i_{1},\{\mathbf{x}_{2^{n_{0}-1}+j}^{*}\}_{j=0}^{2^{n_{0}-1}-1}),\end{split} \tag{32}\] where \(0\leq i<r\) and each \((\mathbf{x}_{j}^{*})^{(1)}=\mathbf{x}_{j}^{*}+\mathbf{x}_{n/2+j}^{*}\). In the above formula, the first and third terms can be calculated recursively. In particular, \(\sum_{j=0}^{2^{n_{0}-1}-1}(\mathbf{x}_{2^{n_{0}-1}+j}^{*})^{\mathrm{T}}\) in (32) can be calculated by the Reed-Muller transform with input \(\mathbf{x}^{*}=(\mathbf{x}_{0}^{*},\mathbf{x}_{1}^{*},\cdots,\mathbf{x}_{2^{n_{0}}-1}^{*})\). Let the Reed-Muller transform be \(\mathbf{y}^{\mathrm{T}}=(\mathbf{y}_{0},\mathbf{y}_{1},\cdots,\mathbf{y}_{2^{n_{0}}-1})^{\mathrm{T}}=(R_{n_{0}}\otimes I_{m})\cdot(\mathbf{x}^{*})^{\mathrm{T}}\); then \(P(0,\{\mathbf{x}_{j}^{*}\}_{j=0}^{2^{n_{0}}-1})=\mathbf{y}_{0}^{\mathrm{T}}\), and (32) for \(i\geq 1\) can be completely expanded as (29), which is given at the bottom of this page. In (29), \(b(i)\) and \(\delta(i;t;\ell_{0},\ell_{1},\cdots,\ell_{t-1})\) are defined in Lemma 5. This completes the proof.
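As a final implementation note on the building blocks used in the proof and in Steps 2–3 of Sec. V-A1: the cyclic-shift reading of \(\mathcal{A}_{0,0}(x^{i})\) is easy to sketch in a few lines of Python (np.roll stands in for the circular shift, whose direction is a convention in this sketch):

```python
import numpy as np

def shift_op(i, v):
    """A_{0,0}(x**i) acting on an m-bit vector: a cyclic shift by i bits."""
    return np.roll(v, i)

def two_term_op(i, j, v):
    # A_{0,0}(x**i + x**j) . v costs one circular shift plus one vector XOR,
    # which is why the r < 8 case (at most two monomials in delta) is cheap
    return shift_op(i, v) ^ shift_op(j, v)

v = np.array([1, 0, 1, 1, 0], dtype=np.uint8)  # m = 5
# multiplicativity: x**2 . (x**3 . v) = x**5 . v, with shifts wrapping modulo m
assert (shift_op(2, shift_op(3, v)) == shift_op(5, v)).all()
```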
2302.07353
Dead or alive: Distinguishing active from passive particles using supervised learning
A longstanding open question in the field of dense disordered matter is how precisely structure and dynamics are related to each other. With the advent of machine learning, it has become possible to agnostically predict the dynamic propensity of a particle in a dense liquid based on its local structural environment. Thus far, however, these machine-learning studies have focused almost exclusively on simple liquids composed of passive particles. Here we consider a mixture of both passive and active (i.e. self-propelled) Brownian particles, with the aim to identify the active particles from minimal local structural information. We compare a state-of-the-art machine learning approach for passive systems with a new method we develop based on Voronoi tessellation. Both methods accurately identify the active particles based on their structural properties at high activity and low concentrations of active particles. Our Voronoi method is, however, substantially faster to train and deploy because it requires fewer, and easy-to-compute, input features. Notably, both become ineffective when the activity is low, suggesting a fundamentally different structural signature for dynamic propensity and non-equilibrium activity. Ultimately, these efforts might also find relevance in the context of biological active glasses such as confluent cell layers, where subtle changes in the microstructure can hint at pathological changes in cell dynamics.
Giulia Janzen, Xander L. J. A. Smeets, Vincent E. Debets, Chengjie Luo, Cornelis Storm, Liesbeth M. C. Janssen, Simone Ciarella
2023-02-14T21:34:59Z
http://arxiv.org/abs/2302.07353v2
# Dead or alive: Distinguishing active from passive particles using supervised learning ###### Abstract A longstanding open question in the field of dense disordered matter is how precisely structure and dynamics are related to each other. With the advent of machine learning, it has become possible to agnostically predict the dynamic propensity of a particle in a dense liquid based on its local structural environment. Thus far, however, these machine learning studies have focused almost exclusively on simple liquids composed of passive particles. Here we consider a mixture of both passive and active (i.e. self-propelled) Brownian particles, with the aim to identify the active particles from minimal local structural information. We find that the established machine learning approaches for passive systems are ineffective for our goal, implying that dynamic propensity and non-equilibrium activity carry a fundamentally different structural signature. To distinguish passive from active particles, we instead develop a pseudo-static machine learning method that uses both local structural order parameters and their averaged fluctuations as input. Our final neural network is able to detect with almost 100% accuracy which particles are active and which ones are not. Hence, our machine learning model can identify distinct dynamical single-particle properties with minimal dynamical information. Ultimately, these efforts might also find relevance in the context of biological active glasses such as confluent cell layers, where subtle changes in the microstructure can hint at pathological changes in cell dynamics. + Footnote †: These authors contributed equally to this work. ## 1 Introduction A central notion in the study of active particulate matter--systems of discrete entities which consume energy to perform work and move autonomously--is that the presence of activity can dramatically alter both spatial organization and (collective) dynamics [1]. The fact that active matter is intrinsically out-of-equilibrium renders the standard tools of statistical physics of limited use, and leaves open the question which quantifiers most accurately characterize and predict the dynamics of active matter [2]. This is a profound issue especially in densely disordered phases such as liquids and glasses, where the relation between spatial structure and emergent dynamics is notoriously obscure [1, 3]. A better understanding of structure-dynamics relations in active matter would be highly desirable both from a fundamental and a more applied perspective. Notably, in the context of biological tissues and confluent cell layers, subtle structural changes can correlate with the motile properties of the individual cells, with relevance in processes such as cancer metastasis and embryonic development [4, 5]. Recent efforts have demonstrated that machine learning (ML) approaches are extremely effective in finding simple structural indicators that predict dynamical properties in densely disordered (near-)equilibrium systems [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. These findings firmly establish a correlation between local structure and the propensity of passive particles to move in a crowded environment. 
While ML has also been applied with considerable success in purely active systems [25, 26, 27, 28, 29, 30, 31, 32], in practice and particularly in biological settings the system of interest will generally contain actors whose activity parameters are different, and distributed. In tumor tissue, for instance, there may be a distribution of mesenchymal (more motile, active) and epithelial (more stationary, passive) phenotypes, where generically the presence of mesenchymal cells is associated with greater metastatic potential [5, 33]. Thus, developing the ability to reliably distinguish active from passive elements in dense collectives holds promising diagnostic and prognostic potential. Inspired by this challenge, our work addresses a seemingly simple issue: Can we identify the active species in a binary model system of active and passive particles? Naturally, this would be most conveniently done based on dynamical information (i.e. a 'movie') of the system, but in biological settings such moving images, appropriately time-resolved, may be difficult to obtain. This is why we complicate the challenge considerably, and demand that the identification is performed based solely on static images ('snapshots') of the system. Below we demonstrate that existing ML techniques based purely on static information, such as those initially developed for simulated supercooled liquids [9, 10, 11, 13, 14, 16]--approaches that are very effective in predicting the dynamic propensity of passive particles from snapshots--are not effective in our settings. This suggests that dynamic propensity and activity are fundamentally different in systems with heterogeneous activity. We then introduce a modified ML approach that can correctly classify the active and passive particles using multiple _unordered_ snapshots of the system as input (Fig. 1). Our approach does require multiple configurations of the system, but importantly does not require any information about the _time-ordering_ of those configurations. Therefore, the input contains significantly less dynamical information than an actual movie of the system. We call this approach pseudo-static. Briefly, we use a multilayer perceptron neural network that takes as input the local environment of each particle described by the statistics of its bond order parameters. We demonstrate that our supervised learning approach generalizes well to different compositions and activity parameters, and the perceptron predictions are explainable. Most importantly, the results of the neural network are accurate, achieving 95%-100% fidelity in its active/passive labeling over a broad range of compositions and activity parameters. The remainder of this paper is organized as follows. In the Methods section we first describe our computer simulation model of mixed active and passive particles, the ML perceptron model, and the processed data (structural features) used as input for the perceptron. For the processed data, we make a distinction between the purely static approach established previously for dynamic propensities in passive systems, and our novel pseudo-static approach which also includes _fluctuations_ of the relevant structural quantities. We then report our main results, demonstrating that the pseudo-static approach accurately identifies the active particles, while the purely static method fails. We also demonstrate the good generalizability of our neural network to other parameter regimes, and infer how the trained perceptron distinguishes between active and passive particles using Local Interpretable Model-Agnostic Explanations (LIME). We end with brief concluding remarks. ## 2 Methods ### Simulation model For our model system we take the three-dimensional Kob-Andersen Lennard-Jones mixture [34, 35, 36], which in the context of passive particles has been extensively studied and consists of \(N_{\rm A}=800\) and \(N_{\rm B}=200\) spheres of type A and B, respectively. We extend the model to an active-passive mixture by adding an intrinsic self-propulsion force to a fraction \(\Phi_{a}\) of particles (the same proportion for each particle type). This gives the following equation of motion for each particle \(i\) [37, 38, 39]: \[\dot{\mathbf{r}}_{i}=\zeta^{-1}\left(\mathbf{F}_{i}+\mathbf{f}_{i}\right)+\boldsymbol{\xi}_{i}. \tag{1}\] Here, \(\mathbf{r}_{i}\) denotes the position of particle \(i\), \(\zeta\) the friction coefficient, and \(\mathbf{F}_{i}\) and \(\mathbf{f}_{i}\) the interaction and self-propulsion force acting on particle \(i\), respectively. Moreover, \(\boldsymbol{\xi}_{i}\) represents a Gaussian noise with zero mean and variance \(\langle\boldsymbol{\xi}_{i}(t)\boldsymbol{\xi}_{j}(t^{\prime})\rangle_{\rm noise}=2k_{B}T\zeta^{-1}\mathbf{I}\delta_{ij}\delta(t-t^{\prime})\), with \(k_{B}T\) the thermal energy, \(T\) the temperature, \(k_{B}\) the Boltzmann constant (we set \(k_{B}=1\)), \(t\) the time, and \(\mathbf{I}\) the \(3\times 3\) unit matrix. The total interaction force on particle \(i\) (of type \(\alpha={\rm A,B}\)) due to all other particles \(j\) (of type \(\beta={\rm A,B}\)) is \(\mathbf{F}_{i}=-\sum_{j\neq i}\nabla_{i}V_{\alpha\beta}(r_{ij})\), where \(r_{ij}=|\boldsymbol{r}_{ij}|=|\boldsymbol{r}_{j}-\boldsymbol{r}_{i}|\) is the radial distance between particles \(i\) and \(j\). For the interaction potential we use a standard Lennard-Jones potential \[V_{\alpha\beta}(r)=\begin{cases}4\epsilon_{\alpha\beta}\left[\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{12}-\left(\frac{\sigma_{\alpha\beta}}{r}\right)^{6}+C_{\alpha\beta}\right],&r\leq r_{\alpha\beta}^{c},\\ 0,&r>r_{\alpha\beta}^{c}.\end{cases}\]
We also demonstrate the good generalizability of our neural network to other parameter regimes, and infer how the trained perceptron distinguishes between active and passive particles using Local Interpretable Model-Agnostic Explanations (LIME). We end with brief concluding remarks. ## 2 Methods ### Simulation model For our model system we take the three-dimensional Kob-Andersen Lennard-Jones mixture [34, 35, 36], which in the context of passive particles has been extensively studied and consists of \(N_{\rm A}=800\) and \(N_{\rm B}=200\) spheres of type A and B, respectively. We extend the model to an active-passive mixture by adding an intrinsic self-propulsion force to a fraction \(\Phi_{a}\) of particles (the same proportion for each particle type). This gives the following equation of motion for each particle \(i\)[37, 38, 39]: \[\dot{\mathbf{r}}_{i}=\zeta^{-1}\left(\mathbf{F}_{i}+\mathbf{f}_{i}\right)+ \boldsymbol{\xi}_{i}. \tag{1}\] Here, \(\mathbf{r}_{i}\) denotes the position of particle \(i\), \(\zeta\) the friction coefficient, and \(\mathbf{F}_{i}\) and \(\mathbf{f}_{i}\) the interaction and self-propulsion force acting on particle \(i\), respectively. Moreover, \(\boldsymbol{\xi}_{i}\) represents a Gaussian noise with zero mean and variance \((\boldsymbol{\xi}_{i}(t)\boldsymbol{\xi}_{j}(t^{\prime}))_{\rm noise}=2k_{B}T \zeta^{-1}\boldsymbol{\xi}_{ij}\delta(t-t^{\prime})\), with \(k_{B}T\) the thermal energy, \(T\) the temperature, \(k_{B}\) the Boltzmann constant (we set \(k_{B}=1\)), \(t\) the time, and \(\boldsymbol{\mathbf{I}}\) the \(3\times 3\) unit matrix. The total interaction force on particle \(i\) (of type \(\alpha={\rm A,B}\)) due to all other particles \(j\) (of type \(\beta={\rm A,B}\)) is \(\mathbf{F}_{i}=-\sum_{j\neq i}\nabla_{i}V_{\alpha\beta}(r_{ij})\), where \(r_{ij}=|\boldsymbol{r}_{ij}|=|\boldsymbol{r}_{j}-\boldsymbol{r}_{i}|\) is the radial distance between particles \(i\) and \(j\). For the interaction potential we use a standard Lennard-Jones potential \[V_{\alpha\beta}(r)=\begin{cases}4\epsilon_{\alpha\beta}\left[\left(\frac{ \sigma_{\alpha\beta}}{r}\right)^{12}-\left(\frac{\sigma_{\alpha\beta}}{r} \right)^{6}+C_{\alpha\beta}\right]\,\ r\leq r_{\alpha\beta}^{c}\,\\ reached steady state conditions. Note that all results are presented in reduced units where \(\sigma_{\rm AA}\), \(\epsilon_{\rm AA}\), \(\epsilon_{\rm AA}/k_{\rm B}\), and \(\zeta\sigma_{\rm AA}^{2}/\epsilon_{\rm AA}\) represent the units of length, energy, temperature, and time, respectively [35]. Afterward, we save the configuration of the particles every 13 time units. This time interval is an order of magnitude larger than the relaxation time of the fully passive reference system and therefore allows each configuration to be considered statistically independent of the previously saved one [45]. In total we retrieve \(10\,000\) different independent configurations for each studied setting. Perceptron modelWe treat the identification of the active particles as a binary classification problem. For this we use a multilayer perceptron neural network consisting of one hidden layer of 200 neurons. The configuration of our neural network was optimized by performing a grid search; i.e. a scan over the hyperparameter space in order to select the ones providing the best performance. The optimal hyperparameters that we used in the final model are reported in Table 1. 
More information, including visual representations of the comparative scoring of models with different hyperparameters, is reported in the supplementary material. To quantify the network's performance we use two indicators: (i) accuracy and (ii) f1-score. Accuracy is the most basic metric for classification, defined as the number of correct predictions divided by the total number of predictions. This metric, however, is not optimal when the classes are unbalanced, which is the case when the fraction of active particles \(\Phi_{a}\neq 0.5\). Hence, when we have a different number of active and passive particles we evaluate the model with the f1-score \[\text{f1-score}=2\,\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}},\] where the precision is the sum of true positives across all classes divided by the sum of both true and false positives over all classes, and the recall is the sum of true positives across all classes divided by the sum of true and false negatives across all classes. The f1-score reaches its largest value of 1 when the model has perfect precision and recall and its lowest value of 0 if either the precision or the recall is equal to zero. Briefly, during our optimization routine, we found that standard solvers such as stochastic gradient descent or adam are outperformed by limited-memory BFGS (lbfgs) [46, 47], and the optimal activation function was found to be the hyperbolic tangent (tanh). We then regularized the model using Ridge regression [48] with a prefactor alpha = 0.05. After this, the scores of the model were consistently above 0.99, so we have not performed any further optimization of the remaining parameters, and instead selected the locally optimal values reported in Table 1. In the end, after calculating all the relevant structural input features (discussed below), the training of the model described here takes only several minutes on a standard laptop. \begin{table} \begin{tabular}{|c|c|} \hline **Parameter** & **Value** \\ \hline solver & ’lbfgs’ \\ \hline activation & ’tanh’ \\ \hline alpha & 0.05 \\ \hline max\_iter & 10 ** 9 \\ \hline max\_fun & 10 ** 5 \\ \hline validation\_fraction & 0.1 \\ \hline early\_stopping & True \\ \hline hidden\_layer\_sizes & (200,) \\ \hline n\_iter\_no\_change & 10 \\ \hline tol & 0.0001 \\ \hline \end{tabular} \end{table} Table 1: The hyperparameters used for the multilayer perceptron. The parameter names refer to the inputs for the multilayer perceptron implementation provided by the scikit-learn [49] library. Figure 1: Sketch of our pseudo-static machine learning approach for the identification of active particles in an active/passive mixture. _Static approach._ As a first test, we seek to distinguish the active particles from the passive ones using only instantaneous static information, i.e. a single snapshot. For this we use local, particle-resolved structural properties as input for our machine learning model. In particular we calculate the structural features introduced in Ref. [16] that have been able to effectively predict the dynamic propensity of each particle [16, 17] in purely passive systems. Since the dynamic propensity quantifies the average squared displacement of a particle from its initial condition [50, 51], and active particles have an additional self-propelling force, it is natural to expect that active particles have a larger dynamic propensity. We also confirm this by measuring the mean-squared displacement, reported in the supplementary material [52, 53]. 
This means that if the dynamical information remains encoded in the structure even for active systems, then this static approach should be able to pick it up and identify the active particles from their enhanced propensity. To proceed with the static approach we compute the radial and angular descriptors defined in Ref. [16]. The 0-th order radial descriptors \[G_{i}^{(0)}(r,\delta,\alpha)=\sum_{j\neq i;\alpha_{j}=\alpha}e^{-\frac{(r_{ij}-r)^{2}}{2\delta^{2}}} \tag{4}\] measure, for each particle \(i\) of species \(\alpha=\{A,B\}\), the density of particles of type \(\alpha_{j}=\alpha\) only, at distance \(r\) in a shell of width \(2\delta\). The 0-th order angular descriptors \[q_{i}^{(0)}(l,r,\delta)=\sqrt{\frac{4\pi}{2l+1}\sum_{m=-l}^{l}|\hat{q}_{i}^{(0)}\left(l,m,r,\delta\right)|^{2}} \tag{5}\] are defined as an expansion of the local density in terms of spherical harmonics using \[\hat{q}_{i}^{(0)}(l,m,r,\delta)=\frac{1}{Z}\sum_{j\neq i}e^{-\frac{(r_{ij}-r)^{2}}{2\delta^{2}}}Y_{l}^{m}(\mathbf{r}_{ij}), \tag{6}\] where \(Y_{l}^{m}\) are the spherical harmonics of order \(l\) and \(Z\) is a normalization constant. For each particle \(i\), we also compute the higher-order descriptors \(G_{i}^{(n)}(r,\delta,\alpha)\) and \(q_{i}^{(n)}(l,r,\delta)\) defined from \[X_{i}^{(n)}=\frac{1}{C}\sum_{j:r_{ij}<r_{c}}e^{-r_{ij}/r_{c}}X_{j}^{(n-1)}, \tag{7}\] where \(X=G\) or \(q\) is averaged in a shell of radius \(r_{c}/\sigma_{AA}=2.3\) that approximately corresponds to the second minimum in the radial distribution function. Overall, the input set for the static approach consists of \(G_{i}^{(n)}(r,\delta,\alpha)\) and \(q_{i}^{(n)}(l,r,\delta)\) with \(n=0,1,2\). For \(n=0\), we compute 50 radial descriptors \(G_{i}^{(0)}(r,\delta,\alpha)\) per species with \(r/\sigma_{AA}\in[0,5]\) and \(\delta=0.1\), and 192 angular descriptors \(q_{i}^{(0)}(l,r,\delta)\) with \(\delta=0.1\), \(l\in[1,12]\) and \(r/\sigma_{AA}\in[1,2.5]\). In sum, we use a total of 876 static quantities as input in the static approach. _Pseudo-static approach._ To perform the pseudo-static approach, we use somewhat simpler structural order parameters that are faster to calculate and easier to interpret in comparison to the purely static approach. For this we use the \(l\)-th Voronoi weighted order parameters [54] \[Q_{l}(i)=\left(\frac{4\pi}{2l+1}\sum_{m=-l}^{l}\left|\sum_{f\in\mathcal{F}(i)}\frac{A(f)}{A(i)}Y_{l}^{m}(\theta_{f},\phi_{f})\right|^{2}\right)^{\frac{1}{2}}, \tag{8}\] where the inner sum is taken over all shared boundaries (facets) \(f\) of the Voronoi cell containing particle \(i\), \(A(f)\) is the area of the facet, \(A(i)\) is the total area of the Voronoi cell, and \(\theta_{f}\) and \(\phi_{f}\) are the spherical angles of the outer normal vector of the facet \(f\). These order parameters have previously also been shown to describe local crystal structures, e.g., cubic (\(q_{4,6}\)), BCC (\(q_{8}\)), and FCC (\(q_{12}\)) [54]. In addition, we use the \(l\)-th averaged Voronoi weighted order parameters \[\bar{Q}_{l}(i)=\left(\frac{4\pi}{2l+1}\sum_{m=-l}^{l}\left|\frac{1}{\tilde{N}(i)}\sum_{k=0}^{\tilde{N}(i)}\sum_{f\in\mathcal{F}(k)}\frac{A(f)}{A(k)}Y_{l}^{m}(\theta_{f},\phi_{f})\right|^{2}\right)^{\frac{1}{2}}, \tag{9}\] where the middle sum is taken over all the \(\tilde{N}\) neighbors of the particle \(i\), including particle \(i\) itself. The difference between the parameters is that those in Eq. (8) only describe the first shell of particles around particle \(i\), while the parameters in Eq.
(9) also describe the second shell of particles around particle \(i\). Importantly, since these are stochastic quantities, we perform one additional processing step and calculate, for each particle \(i\), the mean, median, minimum, maximum, and standard deviation of the distribution of \(Q_{l}(i)\) over the configurations in the training set. To capture more static information, we also calculate the 5th, 25th, 75th, and 95th percentiles of these distributions. It will be these statistics, which characterize the distribution of \(Q_{l}(i)\) with \(l=2,\ldots,12\), as well as the particle type (A or B), that are used as input for the pseudo-static ML model. In total this model considers 199 input quantities; note that this feature space is significantly smaller than that of the static approach. The statistics are computed over 10 000 independent configurations. ## 3 Results and Discussion _Dynamic propensity and self-propelled motion do not carry the same structural signature._ We first establish that the purely static approach, i.e. a perceptron trained with the structural properties \(\{G_{i}^{(n)}(r,\delta,\alpha),q_{i}^{(n)}(l,r,\delta)\}\) with \(n=0,1,2\) as input, cannot distinguish between active and passive particles. The dashed line in Fig. 2 shows that with a 50/50 active/passive mixture the static approach yields an accuracy of approximately 50%, which is more or less the same accuracy as a random coin flip in the identification of the active particles. This is the case even if the activity is small (\(F_{a}=1\)) or very large (\(F_{a}=20\)). In Fig. 3 (dashed line) we show that the performance measured by the f1-score does not improve when there are fewer active particles (small \(\Phi_{a}\)) or more (large \(\Phi_{a}\)). Therefore, we conclude that this static approach cannot adequately distinguish between active and passive particles. However, the same static features have been used to compute the dynamic particle propensity in passive systems [12, 13, 16], identifying clear correlations between local static structure and local dynamics. Active particles have faster dynamics compared to passive particles [52, 53], and hence their dynamic propensity is intrinsically larger. Notice that similar indicators also predict localized plastic events [55, 56]. Given that this static approach fails to reliably distinguish between active and passive particles, we conclude that if a local structure-dynamics relation exists in active systems, then it is significantly different from that in passive systems. _Pseudo-static approach: distinguishing dead from alive._ Since a purely static approach fails to identify active particles in a passive/active mixture, we employ a pseudo-static method. This approach does not require a fully time-resolved dynamical trajectory, from which it would be trivial to identify active particles based on their displacements, but instead it requires some knowledge of the averaged statistical fluctuations of the local structure. We obtain these statistics from a collection of snapshots that do not have to be time-ordered. Figure 2 reports the accuracy that our pseudo-static ML model achieves in the classification of a 50/50 active/passive mixture, for different values of the active force \(F_{a}\). We compare the accuracy that we get when the model is trained and tested at a single specific value of \(F_{a}\) (black circles), with the accuracy of a model trained when fixing \(F_{a}=10\) (red circles). 
When the active force \(F_{a}\) is very small, it is difficult to distinguish between motion related to active forces and passive Brownian motion, so the pseudo-static approach fails as much as the static approach, once again confirming that the statics-dynamics connection is very subtle in active systems. For \(F_{a}\geq 5\) the pseudo-static approach achieves \(>90\%\) accuracy, clearly outperforming the purely static method. As expected, we also see that the accuracy of the pseudo-static approach increases when the activity is strong, because the difference between passive and active particles becomes more significant. Furthermore, a single model trained at an intermediate value of \(F_{a}=10\) is able to produce good predictions for \(F_{a}>F_{a}^{\rm train}\), thus showing reasonable generalizability to unseen parameter regimes, although the accuracy gets lower for \(F_{a}<F_{a}^{\rm train}\). Lastly, noting that the static approach does not even reach 60% accuracy for very large activity, this suggests that activity does not leave a clear, simple signature in the instantaneous local structure. In Figure 3 we evaluate the performance of the active/passive classifiers as a function of the percentage of active particles \(\Phi_{a}\) at fixed \(F_{a}=10\). Here the pseudo-static approach achieves very good predictive power, quantified by the very large f1-score of \(\sim 1\), even when the model is trained only at \(\Phi_{a}=0.5\) (red circles). Thus, the model also generalizes well to other active/passive stoichiometries. Once again, the static approach (dashed line) is not effective for any value of \(\Phi_{a}\). When the fraction of active particles approaches 1, we highlight in the inset that the model becomes slightly less accurate, though the f1-score still remains above 0.97. Our interpretation of this small score drop is that it is easier to identify a single particle that is moving due to activity (small \(\Phi_{a}\), black circles) rather than identifying a single passive particle with many active neighbors (large \(\Phi_{a}\)), since the activity of the neighbors usually disrupts the local environment. Figure 2: Model accuracy as a function of the active force \(F_{a}=F_{a}^{\rm test}\), with \(\Phi_{a}=0.5\). The black points represent scores obtained from separate models, where each one was trained using \(F_{a}^{\rm train}=F_{a}^{\rm test}\). The red points represent scores obtained from a single global model trained with data from \(F_{a}=10\). Figure 3: The f1-score as a function of the fraction of active particles \(\Phi_{a}=\Phi_{a}^{\rm test}\), with \(F_{a}=10\). The black points represent scores obtained from separate models, where each model was trained using \(\Phi_{a}^{\rm train}=\Phi_{a}^{\rm test}\). The red points represent scores obtained from a single model trained with data from \(\Phi_{a}=0.5\). In the inset we highlight the small score drop that happens for \(\Phi_{a}^{\rm test}\to 1\). _Model explanation._ Having thus established an effective pseudo-static ML model that can accurately distinguish active from passive particles, let us now seek to gain more insight into the decisions made by the model. That is, rather than using it as a black box [58, 59] we calculate the model _explanations_ using LIME [60]. In brief, these explanations describe the correlation between a given input parameter and the probability of predicting a given particle to be active, and can be considered local approximations to the model. 
The correlation values given by the explanations are used to assess the relevance of certain parameters to the overall predictions made by the model; the stronger the deviation from 0, the more relevant a feature is for determining whether a particle is active or not. Figure 4 presents an aggregation of the explanations over all particles in an active/passive mixture with \(\Phi_{a}=0.5\) and \(F_{a}=10\). The boxplots in Fig. 4(a) show the distribution of the correlation between a given parameter and the probability that a given particle is predicted to be active by the model. For clarity we show only the 23 most important features, and we note that the ordering may vary somewhat depending on the initialization settings of the LIME algorithm. While there is no single dominant feature, overall we see that the features related to \(Q_{5}\) (e.g. its mean and its 50th, 75th, and 95th percentiles) are relatively strongly correlated to the model prediction. In Fig. 4(b) we average all the statistical features related to the same physical \(Q_{l}\) observable, also confirming that \(Q_{5}\) is a relatively important structural property. This finding is consistent with recent work on densely disordered passive Lennard-Jones particles, which found that \(Q_{5}\) produces the largest contribution in a principal component analysis [22]. Thus, local 5-fold symmetries constitute relevant descriptors of passive particles, and if we incorporate their fluctuations using our pseudo-static technique, they also provide informative structural signatures of active particles in a disordered mixture. However, our results indicate that active/passive mixtures generally have a broad spectrum of structural features with no single dominant signal, and hence we conclude that active particles do not assume well-defined local structures, even at relatively small activity. This lack of a single, unique feature underlying the structure-dynamics relation is also similar to the case of fully passive disordered systems [22, 61]. Figure 4: Importance of the structural features in the identification of active and passive particles, as determined by LIME [57]. The parameters are ordered according to their mean importance. In (a) we report the 23 most important features, while in (b) we average the features relative to the same physical observable [Eq. (8) or (9)]. In (a) the notation \(\texttt{Qn\_m\%}\) refers to the \(m\)th percentile of the distribution of \(Q_{n}\), with \(\texttt{Qn\_50\%}\) the median. The data was produced from the pseudo-static ML model trained for a mixture with \(\Phi_{a}=0.5\) and \(F_{a}=10\). Lastly, we notice the importance of the particle type, i.e. the species label \(A\) or \(B\) [the last feature listed in Fig. 4(b)]. While on average the particle type is weakly correlated to the identification of active particles, there are some significant outliers (black circles) with a strong correlation. Our interpretation is that, consistently with passive Kob-Andersen mixtures [62], type-B particles are smaller, which increases their mobility. However, type-B particles only constitute 20% of the mixture [63]. We hypothesize that the model implicitly knows that the particle identity is not important for 80% of the cases (the majority of large particles), while it is significant to differentiate the 20% of small particles. 
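For reference, explanations of this kind can be generated with the lime package roughly as follows. A sketch: X_train, X_test, feature_names, and the fitted classifier clf are placeholders, and the exact LIME settings behind Fig. 4 are not reproduced here.

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                           # pseudo-static feature matrix
    feature_names=feature_names,       # e.g. "Q5_mean", "Q5_95%", "type"
    class_names=["passive", "active"],
    mode="classification",
)
# local linear surrogate for a single particle; the weights play the role
# of the per-feature correlations aggregated in Fig. 4
exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=23)
print(exp.as_list())
```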
## 4 Conclusions This work addresses the issue of identifying active entities in a densely disordered mixture of active and passive particles based on minimal local structural information. We propose a pseudo-static machine learning method that is both efficient and precise. While it is easy to observe that the relation between local structure and particle dynamics in active systems is different from the one in passive disordered systems, we have found that it is impossible to identify active particles using the same descriptors that are exceptionally good [9, 10, 11, 13, 14, 16] at predicting the dynamic propensity of particles in passive dense liquids. In our pseudo-static approach, the information about local structure is complemented by statistical measurements of the same features at different times. This information on the structural _fluctuations_ is generally much easier to obtain experimentally than a fully time-resolved dynamical trajectory, and it allows us to identify the active particles very precisely in a computationally efficient manner. Overall, our machine learning model achieves good performance for a wide range of active forces and active/passive mixture compositions. Furthermore, the model also performs well when extrapolating to other active forces and compositions outside its training range, rendering the approach fairly robust. This transferability can be very useful in experiments, where the active force or the precise number of active species may be difficult to quantify. Lastly, we explain the decisions of our ML model in order to understand how it is able to identify the active entities from the pseudo-static input. We find that, consistent with the passive case [22], there is no single dominant structural feature but rather a broad spectrum of features contributing to the final model outcome. In the future we plan to use additional structural descriptors such as SOAP parameters [64], directional information [32] and higher-order correlations [65, 66] to evaluate whether a clear connection can be established between local static structure and dynamics in active systems. In summary, we have introduced a simple machine learning method that is able to identify active particles in a crowded environment using a small collection of statistically independent snapshots. This work serves as a step to better understand the elusive structure-dynamics relation in densely disordered non-equilibrium systems, but we also believe that our method can be a useful tool to study experimental systems such as biological cells, where the most active entities are hard to identify with the naked eye. It is thus our hope that the here presented approach can help to process large experimental datasets and contribute to the discovery of new connections between structural and dynamical properties at both the single-particle and collective level. ###### Acknowledgements. This work has been financially supported by the Dutch Research Council (NWO) through a START-UP grant (VED, CL, and LMCJ), Physics Projectruimte grant (GJ and LMCJ), ENW-XL grant (CS and LMCJ), and Vidi grant (LMCJ).
2310.17351
The generalized characteristic polynomial, corresponding resolvent and their application
We introduced previously the generalized characteristic polynomial defined by $P_C(\lambda)={\rm det}\,C(\lambda),$ where $C(\lambda)=C+{\rm diag}\big(\lambda_1,\dots,\lambda_n\big)$ for $C\in {\rm Mat}(n,\mathbb C)$ and $\lambda=(\lambda_k)_{k=1}^n\in \mathbb C^n$ and gave the explicit formula for $P_C(\lambda)$. In this article we define an analogue of the resolvent $C(\lambda)^{-1}$, calculate it and the expression $(C(\lambda)^{-1}a,a)$ for $a\in \mathbb C^n$ explicitly. The obtained formulas and their variants were applied to the proof of the irreducibility of unitary representations of some infinite-dimensional groups.
A. V. Kosyak
2023-10-26T12:36:37Z
http://arxiv.org/abs/2310.17351v1
# The generalized characteristic polynomial, corresponding resolvent and their application ###### Abstract We introduced previously the generalized characteristic polynomial defined by \(P_{C}(\lambda)=\det C(\lambda)\), where \(C(\lambda)=C+\operatorname{diag}\bigl{(}\lambda_{1},\ldots,\lambda_{n}\bigr{)}\) for \(C\in\operatorname{Mat}(n,\mathbb{C})\) and \(\lambda=(\lambda_{k})_{k=1}^{n}\in\mathbb{C}^{n}\), and gave the explicit formula for \(P_{C}(\lambda)\). In this article we define an analogue of the resolvent \(C(\lambda)^{-1}\), and calculate it and the expression \((C(\lambda)^{-1}a,a)\) for \(a\in\mathbb{C}^{n}\) explicitly. The obtained formulas and their variants were applied to the proof of the irreducibility of unitary representations of some infinite-dimensional groups. keywords: characteristic polynomial, generalized characteristic polynomial, generalized resolvent, estimates, infinite-dimensional group, irreducible representation, Ismagilov's conjecture Msc: [2020] 22E65
###### Contents * 1 Summary of the key formulas * 2 Characteristic polynomials * 3 The generalized characteristic polynomial and its properties * 4 Gram determinants and Gram matrices * 4.1 How far is a vector from a hyperplane * 5 The explicit expression for \(C^{-1}(\lambda)\) and \((C^{-1}(\lambda)a,a)\) * 5.1 The case where \(C\) is the Gram matrix * 6 Some estimates * 7 Application * 7.1 The general idea * 7.2 The Ismagilov conjecture * 7.3 Group \(B_{0}^{\mathbb{N}}\), arbitrary measure \(\mu\) * 7.3.1 Group \(B_{0}^{\mathbb{N}}\), Gaussian centered measure * 7.4 Koopman's representation * 7.5 Group \(\mathrm{GL}_{0}(2\infty,\mathbb{R})\) acting on \(m\) infinite rows * 7.5.1 Case \(m=3\) ## 1 Summary of the key formulas For the generalized characteristic polynomial \(P_{C}(\lambda)\!=\!\det\Big{(}C\!+\!\mathrm{diag}\big{(}\lambda_{1},\ldots,\lambda_{n}\big{)}\!\Big{)}\) we have (for notations see Definitions 3.2, 5.1 and Remark 3.1): \[P_{C}(\lambda)=\det C(\lambda)=\sum_{\emptyset\subseteq\alpha\subseteq\{1,2,\ldots,n\}}\lambda_{\alpha}A_{\alpha}^{\alpha}(C)=\Big{(}\prod_{k=1}^{n}\lambda_{k}\Big{)}\sum_{\emptyset\subseteq\alpha\subseteq\{1,2,\ldots,n\}}\frac{M_{\alpha}^{\alpha}(C)}{\lambda_{\alpha}},\] \[C(\lambda)^{-1}=\frac{1}{P_{C}(\lambda)}\Big{(}\prod_{k=1}^{n}\lambda_{k}\Big{)}\sum_{\emptyset\neq\alpha\subseteq\{1,2,\ldots,n\}}\frac{A^{T}(C_{\alpha})}{\lambda_{\alpha}},\] \[\big{(}C(\lambda)^{-1}a,a\big{)}=\frac{1}{P_{C}(\lambda)}\Big{(}\prod_{k=1}^{n}\lambda_{k}\Big{)}\sum_{\emptyset\neq\alpha\subseteq\{1,2,\ldots,n\}}\frac{\big{(}A^{T}(C_{\alpha})a_{\alpha},a_{\alpha}\big{)}}{\lambda_{\alpha}},\] \[1+(C(\lambda)^{-1}a,a)=\frac{\det\bigl{(}C(\lambda)+a\otimes a\bigr{)}}{\det C(\lambda)},\quad\text{where}\quad a\otimes a=(a_{k}a_{r})_{k,r=1}^{n}.\] Another presentation of \(1+(C(\lambda)^{-1}a,a)\), which we will use, is given in Theorem 5.3. ## 2 Characteristic polynomials Consider an \(n\times n\) matrix \(C\). The _characteristic polynomial_ of \(C\), denoted by \(p_{C}(t)\), is the polynomial defined by \(p_{C}(t)=\det(tI-C)\), where \(I\) denotes the \(n\times n\) identity matrix. 
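As a quick numerical sanity check of the first summary formula above, one can compare the direct determinant with the subset expansion by brute force. A Python/numpy sketch; the random matrix is an arbitrary example:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
C = rng.standard_normal((n, n))
lam = rng.standard_normal(n)

# direct evaluation: P_C(lambda) = det(C + diag(lambda))
direct = np.linalg.det(C + np.diag(lam))

# subset expansion: sum over alpha of lambda_alpha * A_alpha^alpha(C), where
# A_alpha^alpha(C) is the principal minor of C on the complementary index set
# (the empty minor is taken to be 1)
expansion = 0.0
for r in range(n + 1):
    for alpha in itertools.combinations(range(n), r):
        comp = [k for k in range(n) if k not in alpha]
        minor = np.linalg.det(C[np.ix_(comp, comp)]) if comp else 1.0
        expansion += np.prod(lam[list(alpha)]) * minor

assert np.isclose(direct, expansion)
```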
By the _Cayley-Hamilton theorem_ we have \(p_{C}(C)=0\). Some authors define the characteristic polynomial as \(p_{C}(t)=\det(C-tI)\). For a \(2\times 2\) matrix \(C\), the characteristic polynomial is thus given by \[p_{C}(t)=t^{2}-t\operatorname{tr}(C)+\det C.\] Using the language of _exterior algebras_, the characteristic polynomial of an \(n\times n\) matrix \(C\) may be expressed as \[p_{C}(t)=\sum_{k=0}^{n}t^{n-k}(-1)^{k}\mathrm{tr}\left(\bigwedge^{k}C\right)=\sum_{k=0}^{n}t^{n-k}(-1)^{k}c_{k}, \tag{2.1}\] where \(c_{k}=\mathrm{tr}\left(\bigwedge^{k}C\right)\) is the trace of the \(k^{th}\) _exterior power_ of \(C\), which has dimension \(\binom{n}{k}\). This trace may be computed as _the sum of all principal minors of \(C\) of size \(k\)_ (see Definition 3.2 and Remark 3.1): \[c_{k}=\sum_{\alpha\subseteq\{1,2,\ldots,n\},\,|\alpha|=k}M_{\alpha}^{\alpha}(C). \tag{2.2}\] The recursive _Faddeev-LeVerrier algorithm_ computes these coefficients more efficiently [5]. When the characteristic of the field of the coefficients is \(0\), each such trace may alternatively be computed as a single determinant, that of the \(k\times k\) matrix, \[c_{k}=\mathrm{tr}\left(\bigwedge^{k}C\right)=\frac{1}{k!}\left|\begin{array}{ccccc}\mathrm{tr}\,C&k-1&0&\cdots&0\\ \mathrm{tr}\,C^{2}&\mathrm{tr}\,C&k-2&\cdots&0\\ \vdots&\vdots&&\ddots&\vdots\\ \mathrm{tr}\,C^{k-1}&\mathrm{tr}\,C^{k-2}&&\cdots&1\\ \mathrm{tr}\,C^{k}&\mathrm{tr}\,C^{k-1}&&\cdots&\mathrm{tr}\,C\end{array}\right|. \tag{2.3}\] Theorem 5.1, formula (5.4), gives the expression for \(\left(C+\mathrm{diag}\big{(}\lambda_{1},\ldots,\lambda_{n}\big{)}\right)^{-1}\). In particular, for the _resolvent_ \((tI-C)^{-1}\) we have \[(tI-C)^{-1}=\frac{1}{p_{C}(t)}\Big{[}\sum_{k=1}^{n}t^{n-k}(-1)^{k+1}\sum_{\alpha\subseteq\{1,2,\ldots,n\},\,|\alpha|=k}A^{T}(C_{\alpha})\Big{]}, \tag{2.4}\] where the notation \(A(C_{\alpha})\) is defined in Definition 5.1. For a \(3\times 3\) matrix \(C\) we have for example \[(tI-C)^{-1}=\frac{1}{p_{C}(t)}\Big{[}t^{2}\sum_{k=1}^{3}A^{T}(C_{k})-t\sum_{1\leq k<r\leq 3}A^{T}(C_{kr})+A^{T}(C_{123})\Big{]}. \tag{2.5}\] ## 3 The generalized characteristic polynomial and its properties **Definition 3.1**.: For a matrix \(C\in\operatorname{Mat}(n,\mathbb{C})\) and \(\lambda=(\lambda_{k})_{k=1}^{n}\in\mathbb{C}^{n}\) define the _generalization of the characteristic polynomial_ \(p_{C}(t)=\det{(tI-C)}\), \(t\in\mathbb{C}\), as follows: \[P_{C}(\lambda)=\det{C(\lambda)},\quad\text{where}\quad C(\lambda)=\operatorname{diag}\bigl{(}\lambda_{1},\ldots,\lambda_{n}\bigr{)}+C. \tag{3.1}\] **Definition 3.2**.: For a matrix \(C\in\operatorname{Mat}(n,\mathbb{C})\), \(a\in\mathbb{C}^{n}\), and fixed rows \(1\leq i_{1}<i_{2}<\ldots<i_{r}\leq n\) and columns \(1\leq j_{1}<j_{2}<\ldots<j_{r}\leq n\), \(1\leq r\leq n\), denote by \[M_{j_{1}j_{2}\ldots j_{r}}^{i_{1}i_{2}\ldots i_{r}}(C)\quad\text{and}\quad A_{j_{1}j_{2}\ldots j_{r}}^{i_{1}i_{2}\ldots i_{r}}(C)\] the corresponding _minors_ and _cofactors_ of the matrix \(C\). **Lemma 3.1**.: ([9, Ch.1.4.3]) _For the generalized characteristic polynomial \(P_{C}(\lambda)\) of \(C\!\in\!\operatorname{Mat}(n,\mathbb{C})\) and \(\lambda=(\lambda_{1},\lambda_{2},...,\lambda_{n})\in\mathbb{C}^{n}\) we have_ \[P_{C}(\lambda)=\det{C}+\sum_{r=1}^{n}\sum_{1\leq i_{1}<i_{2}<\ldots<i_{r}\leq n}\lambda_{i_{1}}\lambda_{i_{2}}...\lambda_{i_{r}}A_{i_{1}i_{2}\ldots i_{r}}^{i_{1}i_{2}\ldots i_{r}}(C). 
\tag{3.2}\] **Remark 3.1**.: If we set \(\lambda_{\alpha}=\lambda_{i_{1}}\lambda_{i_{2}}\cdots\lambda_{i_{r}}\), where \(\alpha=\{i_{1},i_{2},\ldots,i_{r}\}\) and \(A_{\alpha}^{\alpha}(C)=A_{i_{1}i_{2}\ldots i_{r}}^{i_{1}i_{2}\ldots i_{r}}(C)\), \(M_{\alpha}^{\alpha}(C)=M_{i_{1}i_{2}\ldots i_{r}}^{i_{1}i_{2}\ldots i_{r}}(C)\), \(\lambda_{\emptyset}=1\), \(A_{\emptyset}^{\emptyset}(C)=\det{C}\) and \(|\alpha|=r\) (see Definition 5.1), we may write (3.2) as follows: \[P_{C}(\lambda)=\det{C(\lambda)}=\sum_{\emptyset\subseteq\alpha\subseteq\{1,2,\ldots,n\}}\lambda_{\alpha}A_{\alpha}^{\alpha}(C). \tag{3.3}\] Writing \(\widehat{\alpha}=\{1,2,\ldots,n\}\setminus\alpha\), we have \(A_{\alpha}^{\alpha}(C)=M_{\widehat{\alpha}}^{\widehat{\alpha}}(C)\), hence \[P_{C}(\lambda)=\det{C(\lambda)}=\Big{(}\prod_{k=1}^{n}\lambda_{k}\Big{)}\sum_{\emptyset\subseteq\alpha\subseteq\{1,2,\ldots,n\}}\frac{M_{\alpha}^{\alpha}(C)}{\lambda_{\alpha}}. \tag{3.4}\] ## 4 Gram determinants and Gram matrices **Definition 4.1**.: Gram determinants were introduced in 1879 by J.P. Gram [4]. For vectors \(x_{1},x_{2},\ldots,x_{n}\) in some Hilbert space \(H\) the _Gram matrix_ \(\gamma(x_{1},x_{2},\ldots,x_{n})\) is defined by the formula (see also [3], Chap. IX, §5) \[\gamma(x_{1},x_{2},\ldots,x_{n})=(x_{k},x_{m})_{k,m=1}^{n}.\] The determinant of this matrix is called the _Gram determinant_ for the vectors \(x_{1},x_{2},...,x_{n}\) and is denoted by \(\Gamma(x_{1},x_{2},\ldots,x_{n})\): \[\Gamma(x_{1},x_{2},\ldots,x_{n}):=\det\gamma(x_{1},x_{2},\ldots,x_{n}). \tag{4.1}\] Some authors use the notation \(G(x_{1},x_{2},\ldots,x_{n})\). **Remark 4.1**.: A Gram determinant is equal to the square of the \(n\)-dimensional volume of the _parallelotope_ constructed on \(x_{1},x_{2},...,x_{n}\). Fix some notation: \[X=X_{mn}=\left(\begin{array}{cccc}x_{11}&x_{12}&...&x_{1n}\\ x_{21}&x_{22}&...&x_{2n}\\ ...&...&...&...\\ x_{m1}&x_{m2}&...&x_{mn}\end{array}\right), \tag{4.2}\] \[x_{k}=(x_{1k},x_{2k},\ldots,x_{mk})\in\mathbb{R}^{m},\quad y_{r}=(x_{r1},x_{r2},\ldots,x_{rn})\in\mathbb{R}^{n}. \tag{4.3}\] Then, obviously, we get \[X^{*}X=\left(\begin{array}{cccc}(x_{1},x_{1})&(x_{1},x_{2})&\ldots&(x_{1},x_{n})\\ (x_{2},x_{1})&(x_{2},x_{2})&\ldots&(x_{2},x_{n})\\ \ldots&\ldots&\ldots&\ldots\\ (x_{n},x_{1})&(x_{n},x_{2})&\ldots&(x_{n},x_{n})\end{array}\right)=\gamma(x_{1},x_{2},\ldots,x_{n}), \tag{4.4}\] \[XX^{*}=\left(\begin{array}{cccc}(y_{1},y_{1})&(y_{1},y_{2})&\ldots&(y_{1},y_{m})\\ (y_{2},y_{1})&(y_{2},y_{2})&\ldots&(y_{2},y_{m})\\ \ldots&\ldots&\ldots&\ldots\\ (y_{m},y_{1})&(y_{m},y_{2})&\ldots&(y_{m},y_{m})\end{array}\right)=\gamma(y_{1},y_{2},...,y_{m}). \tag{4.5}\] Therefore, we have \[\Gamma(x_{1},x_{2},...,x_{n})=\det(X^{*}X)=\det(XX^{*})=\Gamma(y_{1},y_{2},...,y_{m}). \tag{4.6}\] ### 4.1 How far is a vector from a hyperplane We start with a classical result, see, e.g., [3]. Consider the hyperplane \(V_{n}\) generated by \(n\) arbitrary vectors \(f_{1},\ldots,f_{n}\) in some Hilbert space \(H\). **Lemma 4.1** ([3; 1]).: _The square of the distance \(d(f_{0},V_{n})\) of a vector \(f_{0}\) from the hyperplane \(V_{n}\) is given by the ratio of two Gram determinants (see Definition 4.1)_ \[d^{2}(f_{0},V_{n})=\frac{\Gamma(f_{0},f_{1},f_{2},\ldots,f_{n})}{\Gamma(f_{1},f_{2},\ldots,f_{n})}. \tag{4.7}\] Proof.: We follow closely the book by Akhiezer and Glazman [1]. Set \(f=\sum_{k=1}^{n}t_{k}f_{k}\in V_{n}\) and \(h=f-f_{0}\). 
Since \(h\) should be orthogonal to \(V_{n}\) we conclude that \(f_{r}\perp h\), i.e., \((f_{r},h)=0\) for all \(r\), or \[\sum_{k=1}^{n}t_{k}(f_{r},f_{k})=(f_{r},f_{0}),\quad 1\leq r\leq n. \tag{4.8}\] Set \(A=\gamma(f_{1},f_{2},\ldots,f_{n})\) and \(b=(f_{k},f_{0})_{k=1}^{n}\in\mathbb{R}^{n}\). By definition we have \[d^{2}=\min_{f\in V_{n}}\|f-f_{0}\|^{2},\qquad\|f-f_{0}\|^{2}=(At,t)-2(t,b)+(f_{0},f_{0}). \tag{4.9}\] Since \(d^{2}=(h,h)=-(f_{0},h)\) (because \((f,h)=0\)), we conclude that \(d^{2}=(f_{0},f_{0})-\sum_{k=1}^{n}t_{k}(f_{0},f_{k})\), or \[\sum_{k=1}^{n}t_{k}(f_{0},f_{k})=(f_{0},f_{0})-d^{2}. \tag{4.10}\] So we have the system of equations: \[\left\{\begin{array}{ccccc}t_{1}(f_{1},f_{1})+t_{2}(f_{1},f_{2})+\cdots+t_{n}(f_{1},f_{n})&=&(f_{1},f_{0})\\ t_{1}(f_{2},f_{1})+t_{2}(f_{2},f_{2})+\cdots+t_{n}(f_{2},f_{n})&=&(f_{2},f_{0})\\ \cdots\\ t_{1}(f_{n},f_{1})+t_{2}(f_{n},f_{2})+\cdots+t_{n}(f_{n},f_{n})&=&(f_{n},f_{0})\\ t_{1}(f_{0},f_{1})+t_{2}(f_{0},f_{2})+\cdots+t_{n}(f_{0},f_{n})&=&(f_{0},f_{0})-d^{2}\end{array}\right.. \tag{4.11}\] Eliminating \(t_{k}\) from the system we get \(d^{2}=\frac{\Gamma(f_{0},f_{1},f_{2},\ldots,f_{n})}{\Gamma(f_{1},f_{2},\ldots,f_{n})}\). Formula (4.7) also follows from Remark 4.1. **Remark 4.2**.: From the system (4.11) we conclude that \(At=b\), where \(b=(f_{k},f_{0})_{k=1}^{n}\in\mathbb{R}^{n}\), hence \(t=A^{-1}b\). By (4.9) we get \[d^{2}=(f_{0},f_{0})-(A^{-1}b,b)=\frac{\Gamma(f_{0},f_{1},f_{2},\ldots,f_{n})}{\Gamma(f_{1},f_{2},\ldots,f_{n})}. \tag{4.12}\] See also [9, Chap. 4.3, Lemma 4.3.2]. ## 5 The explicit expression for \(C^{-1}(\lambda)\) and \((C^{-1}(\lambda)a,a)\) Fix \(C\in\operatorname{Mat}(n,\mathbb{C})\), \(a\in\mathbb{C}^{n}\) and \(\lambda\in\mathbb{C}^{n}\). Our aim is to find the explicit formulas for \(C(\lambda)^{-1}\) and \((C(\lambda)^{-1}a,a)\), where \(C(\lambda)\) is defined by (3.1). **Definition 5.1**.: For \(\alpha=\{i_{1},i_{2},\ldots,i_{r}\}\subset\{1,2,\ldots,n\}\) set \(M(\alpha)(C)=M_{\alpha}^{\alpha}(C)\). Let also \(C_{\alpha}=C_{i_{1}i_{2}\ldots i_{r}}\) be the corresponding _submatrix_ of the matrix \(C\) and \(a_{\alpha}=(a_{i_{1}},a_{i_{2}},\ldots,a_{i_{r}})\). The elements of the matrix \(C_{\alpha}\) lie on the intersections of the rows \(i_{1},i_{2},\ldots,i_{r}\) and the columns \(i_{1},i_{2},\ldots,i_{r}\) of the matrix \(C\). Denote by \(A(C_{i_{1}i_{2}\ldots i_{r}})\) the matrix of the cofactors of the first order of the matrix \(C_{i_{1}i_{2}\ldots i_{r}}\); another name is the _adjugate matrix_, occasionally known as the _adjunct matrix_: \[A(C_{i_{1}i_{2}\ldots i_{r}})=(A_{j}^{i}(C_{i_{1}i_{2}\ldots i_{r}}))_{1\leq i,j\leq r}. \tag{5.1}\] The minor of order zero is often defined to be \(1\), and therefore we set \(A(C_{k})=1\) for \(1\leq k\leq n\). As usual, denote by \(B^{T}\) the _matrix transposed_ to \(B\). Let \(n=3\); then \(A(C_{123})=A(C)\) is the following matrix: \[A(C)=A(C_{123})=\left(\begin{array}{ccc}A_{1}^{1}&A_{2}^{1}&A_{3}^{1}\\ A_{1}^{2}&A_{2}^{2}&A_{3}^{2}\\ A_{1}^{3}&A_{2}^{3}&A_{3}^{3}\end{array}\right)=\left(\begin{array}{ccc}M_{23}^{23}&-M_{13}^{23}&M_{12}^{23}\\ -M_{23}^{13}&M_{13}^{13}&-M_{12}^{13}\\ M_{23}^{12}&-M_{13}^{12}&M_{12}^{12}\end{array}\right), \tag{5.2}\] where we write \(M_{rs}^{ij}\) instead of \(M_{rs}^{ij}(C)\) and \(A_{j}^{i}\) instead of \(A_{j}^{i}(C)\). **Remark 5.1**.: If \(\det C_{i_{1}i_{2}\ldots i_{r}}\neq 0\) we have \[A^{T}(C_{i_{1}i_{2}\ldots i_{r}})=\det C_{i_{1}i_{2}\ldots i_{r}}\Big(C_{i_{1}i_{2}\ldots i_{r}}\Big)^{-1}.
\tag{5.3}\] In what follows we need to consider the submatrix \(A^{T}\big{(}C_{i_{1}i_{2}\ldots i_{r}}\big{)},\ 1\leq r\leq n\), of the matrix \(C\in\operatorname{Mat}(n,\mathbb{C})\) as an _appropriate element of \(\operatorname{Mat}(n,\mathbb{C})\)_. **Theorem 5.1**.: _For the matrix \(C(\lambda)\) defined by (3.1), \(a\in\mathbb{C}^{n}\) and \(\lambda\in\mathbb{C}^{n}\) we have_ \[C(\lambda)^{-1}=\frac{1}{P_{C}(\lambda)}\Big{(}\prod_{k=1}^{n} \lambda_{k}\Big{)}\sum_{r=1}^{n}\sum_{1\leq i_{1}<i_{2}<\ldots i_{r}\leq n} \frac{A^{T}(C_{i_{1}i_{2}\ldots i_{r}})}{\lambda_{i_{1}}\lambda_{i_{2}}\ldots \lambda_{i_{r}}}, \tag{5.4}\] \[\big{(}C(\lambda)^{-1}a,a\big{)}=\frac{1}{P_{C}(\lambda)}\Big{(} \prod_{k=1}^{n}\lambda_{k}\Big{)}\sum_{r=1}^{n}\sum_{1\leq i_{1}<i_{2}<\cdots< i_{r}\leq n}\frac{(A^{T}(C_{i_{1}i_{2}\ldots i_{r}})a_{i_{1}i_{2}\ldots i_{r}},a_{i_{1}i_{2}\ldots i_{r}})}{\lambda_{i_{1}}\lambda_{i_{2}}\ldots\lambda_{i_{ r}}},\] (5.5) \[\big{(}C(\lambda)^{-1}a,a\big{)}=\frac{1}{P_{C}(\lambda)}\Big{(} \prod_{k=1}^{n}\lambda_{k}\Big{)}\sum_{\alpha\subseteq\{1,2,\ldots,n\},\,| \alpha|\geq 1}\frac{\big{(}A^{T}(C_{\alpha})a_{\alpha},a_{\alpha}\big{)}}{ \lambda_{\alpha}}. \tag{5.6}\] Proof. For \(n=2\) we have \[C(\lambda)\!=\!\left(\begin{array}{cc}c_{11}+\lambda_{1}&c_{12}\\ c_{21}&c_{22}+\lambda_{2}\end{array}\right),\,\,\,A^{T}(C_{12})\!=\!\left( \begin{array}{cc}A_{1}^{1}&A_{1}^{2}\\ A_{2}^{1}&A_{2}^{2}\end{array}\right)\!=\!\left(\begin{array}{cc}c_{22}&-c_{ 12}\\ -c_{21}&c_{11}\end{array}\right), \tag{5.7}\] \[C(\lambda)^{-1}\!=\!\frac{1}{P_{C}(\lambda)}\left(\begin{array}{ cc}c_{22}+\lambda_{2}&-c_{12}\\ -c_{21}&c_{11}+\lambda_{1}\end{array}\right)\!=\!\frac{1}{P_{C}(\lambda)} \left[\left(\begin{array}{cc}\lambda_{2}&0\\ 0&\lambda_{1}\end{array}\right)\!+\!A^{T}(C_{12})\right]=\] \[\frac{\lambda_{1}\lambda_{2}}{P_{C}(\lambda)}\left[\left( \begin{array}{cc}\lambda_{1}^{-1}&0\\ 0&\lambda_{2}^{-1}\end{array}\right)\!+\!\frac{A^{T}(C_{12})}{\lambda_{1} \lambda_{2}}\right]\!=\!\frac{\lambda_{1}\lambda_{2}}{P_{C}(\lambda)}\left[ \sum_{k=1}^{2}\frac{A^{T}(C_{k})}{\lambda_{k}}\!+\!\frac{A^{T}(C_{12})}{ \lambda_{1}\lambda_{2}}\right], \tag{5.8}\] recall that \(A^{T}(C_{1})=\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right),\,\,A^{T}(C_{2})=\left(\begin{smallmatrix}0&0\\ 0&1\end{smallmatrix}\right)\). Therefore, \[\left(C(\lambda)^{-1}a,a\right)=\frac{1}{P_{C}(\lambda)}\left[(c_ {22}+\lambda_{2})a_{1}^{2}-(c_{12}+c_{21})a_{1}a_{2}+(c_{11}+\lambda_{1})a_{2 }^{2}\right]\] \[=\frac{1}{P_{C}(\lambda)}\left[\lambda_{2}a_{1}^{2}+\lambda_{1}a_ {2}^{2}+c_{22}a_{1}^{2}+c_{11}a_{2}^{2}-(c_{12}+c_{21})a_{1}a_{2}\right]=\] \[\left(1\!+\!\frac{M(1)}{\lambda_{1}}+\frac{M(2)}{\lambda_{2}}+ \frac{M(12)}{\lambda_{1}\lambda_{2}}\right)^{-1}\!\!\left[\!\frac{a_{1}^{2}}{ \lambda_{1}}\!+\!\frac{a_{2}^{2}}{\lambda_{2}}\!+\frac{(A^{T}(C_{12})a_{12},a_ {12})}{\lambda_{1}\lambda_{2}}\!\right]. \tag{5.9}\] For \(n=3\) we have by (3.4) \[P_{C}(\lambda)\!=\!\lambda_{1}\lambda_{2}\lambda_{3}\Big{(}1\!+\! 
\sum_{k=1}^{3}\frac{M(k)}{\lambda_{k}}+\sum_{1\leq k<r\leq 3}\frac{M(kr)}{ \lambda_{k}\lambda_{r}}+\frac{M(123)}{\lambda_{1}\lambda_{2}\lambda_{3}}\Big{)}, \tag{5.10}\] \[C(\lambda)=\left(\begin{array}{cc}c_{11}+\lambda_{1}&c_{12}&c_ {13}\\ c_{21}&c_{22}+\lambda_{2}&c_{23}\\ c_{31}&c_{32}&c_{33}+\lambda_{3}\end{array}\right),\quad C(\lambda)^{-1}= \frac{1}{P_{C}(\lambda)}\times\] \[\left(\begin{array}{cc}\lambda_{2}\lambda_{3}\left(1\!+\!\frac{ M_{2}^{2}}{\lambda_{2}}\!+\!\frac{M_{3}^{3}}{\lambda_{3}}\!+\!\frac{M_{23}^{23}}{ \lambda_{2}\lambda_{3}}\right)&-M_{23}^{13}\!-\!\lambda_{3}M_{2}^{1}&M_{23}^{12 }\!-\!\lambda_{2}M_{3}^{1}\\ -M_{13}^{23}\!-\!\lambda_{3}M_{1}^{2}&\lambda_{1}\lambda_{3}\left(1\!+\!\frac {M_{1}^{2}}{\lambda_{1}}\!+\!\frac{M_{3}^{2}}{\lambda_{3}}\!+\!\frac{M_{13}^{ 13}}{\lambda_{1}\lambda_{3}}\right)&-M_{13}^{12}\!-\!\lambda_{1}M_{3}^{2}\\ -M_{12}^{23}\!-\!\lambda_{2}M_{1}^{3}&-M_{12}^{13}\!-\!\lambda_{1}M_{2}^{3}& \lambda_{1}\lambda_{2}\Big{(}1\!+\!\frac{M_{1}^{2}}{\lambda_{1}}\!+\!\frac{M_{ 2}^{2}}{\lambda_{2}}\!+\!\frac{M_{12}^{12}}{\lambda_{1}\lambda_{2}}\Big{)} \end{array}\right)=\] \[\frac{\lambda_{1}\lambda_{2}\lambda_{3}}{P_{C}(\lambda)}\left( \begin{array}{cc}\frac{1}{\lambda_{1}}\!+\!\frac{M_{2}^{2}}{\lambda_{1} \lambda_{2}}\!+\!\frac{M_{3}^{3}}{\lambda_{1}\lambda_{2}}\!+\!\frac{M_{23}^{23} }{\lambda_{1}\lambda_{2}\lambda_{3}}&-\frac{M_{23}^{13}}{\lambda_{1}\lambda_{2 }\lambda_{3}}\!-\!\frac{M_{2}^{1}}{\lambda_{1}\lambda_{2}}&\frac{M_{23}^{23}}{ \lambda_{1}\lambda_{2}\lambda_{3}}\!-\!\frac{M_{3}^{1}}{\lambda_{1}\lambda_{3}} \\ -\frac{M_{13}^{23}}{\lambda_{1}\lambda_{2}\lambda_{3}}\!-\!\frac{M_{1}^{2}}{ \lambda_{1}\lambda_{2}}&\frac{1}{\lambda_{2}}\!+\!\frac{M_{1}^{2}}{\lambda_{1} \lambda_{2}}\!+\!\frac{M_{2}^{3}}{\lambda_{2}\lambda_{3}}\!+\!\frac{M_{13}^{ 13}}{\lambda_{1}\lambda_{2}\lambda_{3}}&-\frac{M_{13}^{12}}{\lambda_{1}\lambda_{2 }\lambda_{3}}\frac{M_{2}^{2}}{\lambda_{2}\lambda_{3}}\\ -\frac{M_{12}^{23}}{\lambda_{1}\lambda_{2}\lambda_{3}}\!-\!\frac{M_{1}^{3}}{ \lambda_{1}\lambda_{3}}&-\frac{M_{12}^{13}}{\lambda_{1}\lambda_{2}\lambda_{3}} \!-\!\frac{M_{2}^{3}}{\lambda_{1}\lambda_{2}\lambda_{3}}\!-\!\frac{M_{2}^{3}}{ \lambda_{2}\lambda_{3}}&\frac{1}{\lambda_{3}}\!+\!\frac{M_{1}^{1}}{\lambda_{1} \lambda_{3}}\!+\!\frac{M_{2}^{2}}{\lambda_{2}\lambda_{3}}\!+\!\frac{M_{12}^{12 }}{\lambda_{1}\lambda_{2}\lambda_{3}}\end{array}\right).\] Finally, we get \[C(\lambda)^{-1}\!=\!\frac{\lambda_{1}\lambda_{2}\lambda_{3}}{P_{C}(\lambda)} \left[\sum_{k=1}^{3}\frac{A^{T}(C_{k})}{\lambda_{k}}+\!\!\sum_{1\leq r<s\leq 3} \frac{A^{T}(C_{rs})}{\lambda_{r}\lambda_{s}}+\frac{A^{T}(C_{123})}{\lambda_{1} \lambda_{2}\lambda_{3}}\right], \tag{5.11}\] we use (5.7) and (5.2). Therefore, \[\big{(}C(\lambda)^{-1}a,a\big{)}=\frac{\lambda_{1}\lambda_{2}\lambda_ {3}}{P_{C}(\lambda)}\Big{[}\frac{a_{1}^{2}}{\lambda_{1}}+\frac{a_{2}^{2}}{ \lambda_{2}}+\frac{a_{3}^{2}}{\lambda_{3}}+\frac{(A^{T}(C_{12})a_{12},a_{12})}{ \lambda_{1}\lambda_{2}}+\] \[\frac{(A^{T}(C_{13})a_{13},a_{13})}{\lambda_{1}\lambda_{3}}+\frac {(A^{T}(C_{23})a_{23},a_{23})}{\lambda_{2}\lambda_{3}}+\frac{(A^{T}(C_{123})a_ {123},a_{123})}{\lambda_{1}\lambda_{2}\lambda_{3}}\Big{]}. 
\tag{5.12}\] For \(n=4\) we have \[C(\lambda)=\left(\begin{array}{cccc}c_{11}+\lambda_{1}&c_{12}&c_{13}&c_{14} \\ c_{21}&c_{22}+\lambda_{2}&c_{23}&c_{24}\\ c_{31}&c_{32}&c_{33}+\lambda_{3}&c_{34}\\ c_{41}&c_{42}&c_{43}&c_{44}+\lambda_{4}\end{array}\right).\] The general formulas are as follows \[C(\lambda)^{-1}=\frac{1}{P_{C}(\lambda)}\Big{(}\prod_{k=1}^{n} \lambda_{k}\Big{)}\sum_{\alpha\subseteq\{1,2,...,n\},\,|\alpha|\geq 1}\frac{A^{T} (C_{\alpha})}{\lambda_{\alpha}},\] \[\big{(}C(\lambda)^{-1}a,a\big{)}=\frac{1}{P_{C}(\lambda)}\Big{(} \prod_{k=1}^{n}\lambda_{k}\Big{)}\sum_{\alpha\subseteq\{1,2,...,n\},\,|\alpha| \geq 1}\frac{\big{(}A^{T}(C_{\alpha})a_{\alpha},a_{\alpha}\big{)}}{\lambda_{ \alpha}}.\] that proves (5.4)-(5.6). We make convention in (5.5), that \(A(C_{k})\!=\!1\). **Example 5.1**.: For the matrix \(C(\lambda)\) we have by (3.4) \[C(\lambda)=\left(\begin{array}{cccc}1+\lambda_{1}&1&...&1\\ 1&1+\lambda_{2}&...&1\\ &&...&\\ 1&1&...&1+\lambda_{n}\end{array}\right), \tag{5.13}\] \[\det C(\lambda)=\Big{(}\prod_{k=1}^{n}\lambda_{k}\Big{)}\Big{(}1+ \sum_{k=1}^{n}\frac{1}{\lambda_{k}}\Big{)},\] (5.14) \[C(\lambda)^{-1}=\left(1+\sum_{k=1}^{n}\frac{1}{\lambda_{k}}\right) ^{-1}\Big{[}\sum_{k=1}^{n}\frac{A^{T}(C_{k})}{\lambda_{k}}+\sum_{1\leq k<r \leq n}\frac{A^{T}(C_{kr})}{\lambda_{k}\lambda_{r}}\Big{]},\] where \(A^{T}(C_{kr})=\big{(}\begin{smallmatrix}1&-1\\ -1&1\end{smallmatrix}\big{)}\) and \(A^{T}(C_{krs})=0\) for \(1\leq k<r<s\leq n\). ### The case where \(C\) is the Gram matrix Fix two natural numbers \(n,m\in\mathbb{N}\) with \(m\leq n\), two matrices \(A_{mn}\) and \(X_{mn}\), vectors \(g_{k}\in\mathbb{C}^{m-1},\ 1\leq k\leq n\) and \(a\in\mathbb{C}^{n}\) as follows \[A_{mn}\!=\!\left(\begin{array}{cccc}a_{11}&a_{12}&...&a_{1n}\\ a_{21}&a_{22}&...&a_{2n}\\ &&...&\\ a_{m1}&a_{m2}&...&a_{mn}\end{array}\right),\ g_{k}=\left(\begin{array}{c}a_{2k }\\ a_{3k}\\...\\ a_{mk}\end{array}\right)\in\mathbb{C}^{m-1},\ a=(a_{1k})_{k=1}^{n}\in\mathbb{C}^ {n}. \tag{5.15}\] Set \[C=\gamma(g_{1},g_{2},\ldots,g_{n})=\left(\begin{array}{cccc}(g_{1},g_{1})&( g_{1},g_{2})&\ldots&(g_{1},g_{n})\\ (g_{2},g_{1})&(g_{2},g_{2})&\ldots&(g_{2},g_{n})\\ &&\ldots&\\ (g_{n},g_{1})&(g_{n},g_{2})&\ldots&(g_{n},g_{n})\end{array}\right). \tag{5.16}\] We calculate \(P_{C}(\lambda),\ C^{-1}(\lambda)\) and \((C^{-1}(\lambda)a,a)\) for an arbitrary \(n\). Consider the matrix \[X_{mn}\!=\!\left(\begin{array}{cccc}x_{11}&x_{12}&...&x_{1n} \\ x_{21}&x_{22}&...&x_{2n}\\ &&...&\\ x_{m1}&x_{m2}&...&x_{mn}\end{array}\right),\quad\mbox{where}\quad x_{rk}\!= \!\frac{a_{rk}}{\sqrt{\lambda}_{k}}, \tag{5.17}\] \[\bar{x}_{k}\!=\!(x_{rk})_{r=2}^{m}=\frac{g_{k}}{\sqrt{\lambda_{k} }}\!\in\!\mathbb{C}^{m-1}. \tag{5.18}\] For \(k\in\mathbb{N}\) define \(\Delta(y_{1},y_{2},\ldots,y_{k})\) as follows: \[\Delta(y_{1},y_{2},\ldots,y_{k})=\frac{\det(I+\gamma(y_{1},y_{2},\ldots,y_{k}) )}{\det(I+\gamma(y_{2},\ldots,y_{k}))}-1. \tag{5.19}\] **Lemma 5.2**: _For \(A\in\mathrm{GL}(n,\mathbb{C})\) and \(a\in\mathbb{C}^{n}\) we have_ \[1+(A^{-1}a,a)=\frac{\det\bigl{(}A+a\otimes a\bigr{)}}{\det\left(A\right)}. \tag{5.20}\] Define \(a\otimes a\) as \((a_{k}a_{r})_{k,r=1}^{n}\in\mathrm{Mat}(n,\mathbb{C})\). Then by (2.1) we have \[\det\bigl{(}A+a\otimes a\bigr{)}=\det(A)\mathrm{det}\bigl{(}1+A^{-1}a\otimes a \bigr{)}=\det(A)\Bigl{(}1+(A^{-1}a,a)\Bigr{)},\] since \(\operatorname{tr}(A^{-1}a\otimes a)=(A^{-1}a,a)\) and \(c_{k}=\operatorname{tr}\Big{(}\bigwedge^{k}\big{(}A^{-1}a\otimes a\big{)}\Big{)}=0\) for all \(k>1\). 
To verify the last statement, by (2.3) it is sufficient to verify that \(\operatorname{tr}\!\left(D^{k}\right)=\big{(}\operatorname{tr}D\big{)}^{k}\) for \(D=A^{-1}a\otimes a\). Indeed, we have \(\operatorname{tr}D=(A^{-1}a,a)\) and \[D^{k}=(A^{-1}a,a)^{k-1}D\quad\text{therefore,}\quad\operatorname{tr}\!\left(D^{k }\right)=\big{(}\operatorname{tr}D\big{)}^{k}. \tag{5.21}\] \(\square\) If we take \(A=C(\lambda)\) we will get \[1+(C(\lambda)^{-1}a,a)=\frac{\det\!\left(C(\lambda)+a\otimes a\right)}{\det C (\lambda)}. \tag{5.22}\] **Theorem 5.3**.: _Let \(C\) be defined by (5.16) and \(a,\lambda\in\mathbb{C}^{n}\), then_ \[1+\big{(}C(\lambda)^{-1}a,a\big{)}=\frac{\det\!\left(I_{m}+\gamma(y_{1},y_{2},\ldots,y_{m})\right)}{\det\!\left(I_{m-1}+\gamma(y_{2},\ldots,y_{m})\right)} =1+\Delta(y_{1},y_{2},\ldots,y_{m}), \tag{5.23}\] _where \(y_{k}\) for \(1\leq k\leq m\) are defined by (4.3) and \(\Delta(y_{1},y_{2},\ldots,y_{m})\) is defined by (5.19)._ Proof. By Lemma 5.2 it is sufficient to show that \[\det\!\left(C(\lambda)\!+\!a\otimes a\right)\!=\!\Big{(}\prod_{k= 1}^{n}\lambda_{k}\Big{)}\!\det\!\left(I+\gamma(y_{1},\ldots,y_{m})\right)\!,\] \[\det C(\lambda)\!=\!\Big{(}\prod_{k=1}^{n}\lambda_{k}\Big{)}\! \det\!\left(I\!+\!\gamma(y_{2},\ldots,y_{m})\right)\!.\] Indeed, we have \[C(\lambda)+a\otimes a=\gamma(g_{1},\ldots,g_{n})+\operatorname{ diag}(\lambda_{k})_{k=1}^{n}+(a_{1k}a_{1r})_{k,r=1}^{n}=\] \[\big{(}(g_{k},g_{r})+a_{1k}a_{1r}\big{)}_{k,r=1}^{n}+\operatorname {diag}(\lambda_{k})_{k=1}^{n}\stackrel{{\eqref{eq:C(1)}}}{{=}} \big{(}(x_{k},x_{r})\sqrt{\lambda_{k}\lambda_{r}}\big{)}_{k,r=1}^{n}+\] \[\operatorname{diag}(\lambda_{k})_{k=1}^{n}=\operatorname{diag}( \sqrt{\lambda_{k}})_{k=1}^{n}\Big{(}I+\gamma(x_{1},\ldots,x_{n})\Big{)} \operatorname{diag}(\sqrt{\lambda_{k}})_{k=1}^{n}.\] Therefore, \[\det\big{(}C(\lambda)+a\otimes a\big{)}=\Big{(}\prod_{k=1}^{n} \lambda_{k}\Big{)}\!\det\!\left(I+\gamma(x_{1},\ldots,x_{n})\right)\stackrel{{ \eqref{eq:C(1)}}}{{=}}\] \[\Big{(}\prod_{k=1}^{n}\lambda_{k}\Big{)}\!\det\!\left(I+\gamma(y_{ 1},\ldots,y_{m})\right)\!.\] Further, \[\det C(\lambda)=\det\Bigl{(}\gamma(g_{1},\ldots,g_{n})+\text{diag}( 
\lambda_{k})_{k=1}^{n}\Bigr)\stackrel{(5.18)}{=}\Big(\prod_{k=1}^{n}\lambda_{k}\Big)\det\big(I+\gamma(\bar{x}_{1},\ldots,\bar{x}_{n})\big)\stackrel{(4.6)}{=}\Big(\prod_{k=1}^{n}\lambda_{k}\Big)\det\big(I_{m-1}+\gamma(y_{2},\ldots,y_{m})\big),\] since by (5.18) we have \(\gamma(g_{1},\ldots,g_{n})=\operatorname{diag}(\sqrt{\lambda_{k}})_{k=1}^{n}\,\gamma(\bar{x}_{1},\ldots,\bar{x}_{n})\,\operatorname{diag}(\sqrt{\lambda_{k}})_{k=1}^{n}\), and the vectors \(\bar{x}_{1},\ldots,\bar{x}_{n}\) are the columns of the matrix \(X_{mn}\) with its first row removed, so that (4.6) applies with the rows \(y_{2},\ldots,y_{m}\). This completes the proof. \(\square\)
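A quick numerical check of Theorem 5.1 and Theorem 5.3 can be instructive. The following sketch (Python with NumPy; illustrative only, not part of the original argument) verifies (5.6) and (5.23) for random data. The helper `adjugate` is our own: it computes \(A^{T}(C_{\alpha})\) entrywise from cofactors, so that singular submatrices are handled as well; the \(1\times 1\) case uses the convention \(A(C_{k})=1\).

```python
import numpy as np
from itertools import combinations

def adjugate(M):
    # A^T(C_alpha): entry (i, j) is the (j, i) cofactor of M,
    # computed from determinants of minors (no matrix inversion needed).
    r = M.shape[0]
    if r == 1:
        return np.array([[1.0]])          # convention A(C_k) = 1
    adj = np.empty_like(M)
    for i in range(r):
        for j in range(r):
            minor = np.delete(np.delete(M, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

rng = np.random.default_rng(0)
m, n = 3, 5

# Data as in (5.15)-(5.17): a is the first row of A_mn, the g_k are the
# columns of the remaining rows, and C = gamma(g_1,...,g_n) is their Gram matrix.
Amn = rng.standard_normal((m, n))
a, G = Amn[0], Amn[1:]
C = G.T @ G
lam = rng.uniform(0.5, 2.0, size=n)
Clam = np.diag(lam) + C                           # C(lambda), see (3.1)

# (5.6): (C(lambda)^{-1} a, a) as a sum over nonempty index sets alpha
lhs = a @ np.linalg.solve(Clam, a)
total = sum(
    (adjugate(C[np.ix_(al, al)]) @ a[list(al)]) @ a[list(al)] / np.prod(lam[list(al)])
    for r in range(1, n + 1) for al in combinations(range(n), r)
)
rhs = np.prod(lam) * total / np.linalg.det(Clam)  # P_C(lambda) = det C(lambda)
print(np.isclose(lhs, rhs))                       # True

# (5.23): 1 + (C(lambda)^{-1} a, a) as a ratio of determinants
X = Amn / np.sqrt(lam)                            # x_{rk} = a_{rk}/sqrt(lambda_k)
num = np.linalg.det(np.eye(m) + X @ X.T)          # det(I_m + gamma(y_1,...,y_m))
den = np.linalg.det(np.eye(m - 1) + X[1:] @ X[1:].T)
print(np.isclose(1 + lhs, num / den))             # True
```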
## 6 A minimization problem

**Lemma 6.1**.: _Let \(A\) be a positive definite operator in a Hilbert space \(H\) and let \(b\in D(A^{-1})\). Then_ \[\min_{x\in H,\ (b,x)=1}(Ax,x)=\frac{1}{(A^{-1}b,b)}.\]

Proof.: Consider a new scalar product in \(H\) defined as follows: \[(f,g)_{A}:=(Af,g),\quad f,g\in H. \tag{6.5}\] Since \[(Ax,x)=(x,x)_{A}=\|x\|_{A}^{2}\quad\text{and}\quad 1=(b,x)=(A^{-1}b,x)_{A},\] the minimum of \(\|x\|_{A}^{2}\) will be achieved on the vector \(x_{0}=sA^{-1}b\), proportional to the \((\cdot,\cdot)_{A}\)-normal \(A^{-1}b\) of the hyperplane \(1=(A^{-1}b,x)_{A}\) and lying on this hyperplane. We get \[1=(b,sA^{-1}b),\quad\text{therefore}\quad s=\frac{1}{(A^{-1}b,b)},\quad x_{0}=\frac{1}{(A^{-1}b,b)}A^{-1}b.\] Finally, we get \((Ax_{0},x_{0})=\frac{1}{(A^{-1}b,b)}\). **Counterexample 6.3**.: _For a positive definite operator \(A=\operatorname{diag}(\lambda_{k})_{k=1}^{\infty}\) in \(l_{2}(\mathbb{N})\) where \(\lambda_{k}=\frac{1}{k}\) and \(b=(b_{k})_{k\in\mathbb{N}}\in l_{2}(\mathbb{N})\) with \(b_{k}=\frac{1}{k}\) we have \(b\not\in D(A^{-1})\), since \((A^{-1}b)_{k}\equiv 1\) for all \(k\in\mathbb{N}\); hence \(A^{-1}b\not\in l_{2}(\mathbb{N})\). In this case \(\frac{1}{(A^{-1}b,b)}=0\). Indeed, for the corresponding projections \(A_{n},\ b_{n}\) on \(\mathbb{R}^{n}\) we have_ \[(A_{n}^{-1}b_{n},b_{n})=\sum_{k=1}^{n}\frac{1}{k}\to\infty.\] ## 7 Application ### The general idea In the concrete examples considered in [6]-[11], the possibility of approximating many functions in \(L^{\infty}(X_{m},\mu)\) using Lemma 6.1 follows from the fact that \[\lim_{n\to\infty}(C_{n}(\lambda)^{-1}a_{n},a_{n})=\infty. \tag{7.1}\] By Theorem 5.3 we have \[\big(C_{n}(\lambda)^{-1}a_{n},a_{n}\big)=\Delta(y_{1}^{(n)},y_{2}^{(n)},\ldots,y_{m}^{(n)})=\frac{\det\big(I_{m}+\gamma(y_{1}^{(n)},y_{2}^{(n)},\ldots,y_{m}^{(n)})\big)}{\det\big(I_{m-1}+\gamma(y_{2}^{(n)},\ldots,y_{m}^{(n)})\big)}-1. \tag{7.2}\] Finally, by Lemma 7.1 and Lemma 7.2, proved in [12], we have \[\lim_{n\to\infty}\frac{\det\big(I_{m}+\gamma(y_{1}^{(n)},y_{2}^{(n)},\ldots,y_{m}^{(n)})\big)}{\det\big(I_{m-1}+\gamma(y_{2}^{(n)},\ldots,y_{m}^{(n)})\big)}=\infty.\] **Lemma 7.1** ([12]).: _Let \(f_{r}=(f_{rk})_{k\in\mathbb{N}}\), \(0\leq r\leq m\), be \(m+1\) infinite real vectors such that for all \(\big(C_{0},\ldots,C_{m}\big)\in\mathbb{R}^{m+1}\setminus\{0\}\) we have_ \[\sum_{r=0}^{m}C_{r}f_{r}\not\in l_{2}(\mathbb{N}),\quad\text{i.e.,}\quad\sum_{k\in\mathbb{N}}\Big|\sum_{r=0}^{m}C_{r}f_{rk}\Big|^{2}=\infty. \tag{7.3}\] _Denote by \(f_{r}^{(n)}=(f_{rk})_{k=1}^{n}\in\mathbb{R}^{n}\) the projections of the vectors \(f_{r}\) on the subspace \(\mathbb{R}^{n}\). Then for all \(s\) with \(0\leq s\leq m\)_ \[\frac{\Gamma(f_{0},f_{1},\ldots,f_{m})}{\Gamma(f_{0},\ldots,\hat{f}_{s},\ldots,f_{m})}:=\lim_{n\to\infty}\frac{\Gamma(f_{0}^{(n)},f_{1}^{(n)},\ldots,f_{m}^{(n)})}{\Gamma(f_{0}^{(n)},\ldots,\widehat{f_{s}^{(n)}},\ldots,f_{m}^{(n)})}=\infty, \tag{7.4}\] _where \(\hat{f}_{s}\) means that the vector \(f_{s}\) is absent and \(\Gamma(f_{0},f_{1},\ldots,f_{m})\) is the Gram determinant._ **Lemma 7.2** ([12]).: _Let \((f_{k})_{k=0}^{m}\) be \(m+1\) real vectors such that \(\sum_{k=0}^{m}C_{k}f_{k}\not\in l_{2}(\mathbb{N})\) for any nontrivial combination \((C_{k})_{k=0}^{m}\). Then for any \(s\), \(0\leq s\leq m\),_ \[\frac{\det\big(I_{m+1}+\gamma(f_{0},\ldots,f_{m})\big)}{\det\big(I_{m}+\gamma(f_{0},\ldots,\hat{f}_{s},\ldots,f_{m})\big)}=\lim_{n\to\infty}\frac{\det\big(I_{m+1}+\gamma(f_{0}^{(n)},\ldots,f_{m}^{(n)})\big)}{\det\big(I_{m}+\gamma(f_{0}^{(n)},\ldots,\widehat{f_{s}^{(n)}},\ldots,f_{m}^{(n)})\big)}=\infty.
\tag{7.5}\] _Here \(I_{m}\!=\!\operatorname{diag}(1,\ldots,1)\in\operatorname{Mat}(m,\mathbb{R},)\) and \(\gamma(f_{0},\ldots,f_{m})\) is the Gram matrix._ Proof.: The proof follows from Lemma 7.1 and (3.2). **Remark 7.1**.: We note that \(\frac{\Gamma(f_{0},f_{1},\ldots,f_{m})}{\Gamma(f_{1},\ldots,f_{m})}\) is the square of the _height_ of the _parallelotope_ generated by the vectors \(f_{0},f_{1},\ldots,f_{m}\in\mathbb{R}^{m+1}\), see Lemma 4.1. ### The Ismagilov conjecture To construct the regular representation for an infinite-dimensional group \(G\), first we should find some larger topological group \(\widetilde{G}\) and a measure \(\mu\) on \(\widetilde{G}\) such that \(G\) is a dense subgroup in \(\widetilde{G}\), and the measure is right or left \(G\)-quasi-invariant, i.e., \(\mu^{R_{t}}\sim\mu\) for all \(t\in G\), (or \(\mu^{L_{s}}\sim\mu\) for all \(s\in G\)), here \(\sim\) means _equivalence_, for details see [9]. We use notation \(\mu^{f}(\Delta)=\mu\bigl{(}f^{-1}(\Delta)\bigr{)}\) for \(f:X\to X\), where \(\Delta\) is some measurable set in \(X\). Consider the right and the left actions \(R_{t},L_{s}\) of the group \(G\) on \(\widetilde{G}\) defined below: \[R_{t}x=xt^{-1},\quad L_{s}x=sx,\quad t,s\in G,\ x\in\widetilde{G}.\] Denote by \(\mu^{R_{t}}\), \(\mu^{L_{s}}\) the images of the measure \(\mu\) under the map \(R_{t},L_{s}:\widetilde{G}\to\widetilde{G}\). The right and left representations \(T^{R,\mu},T^{L,\mu}:G\to U(L^{2}(\widetilde{G},\mu))\) are naturally defined in the Hilbert space \(L^{2}(\widetilde{G},\mu)\) by the following formulas: \[(T^{R,\mu}_{t}f)(x)=(d\mu(xt)/d\mu(x))^{1/2}f(xt), \tag{7.6}\] \[(T^{L,\mu}_{s}f)(x)=(d\mu(s^{-1}x)/d\mu(x))^{1/2}f(s^{-1}x). \tag{7.7}\] The right regular representation of infinite-dimensional groups can be irreducible if no left actions are _admissible_ for the measure \(\mu\), i.e., when \(\mu^{L_{t}}\perp\mu\) for all \(t\in G\backslash\{e\}\). In this case a von Neumann algebra \(\mathfrak{A}^{T^{L,\mu}}\) generated by the left regular representation \(T^{L,\mu}\) is trivial. More precisely: **Conjecture 7.3** (Ismagilov, 1985): _The right regular representation_ \[T^{R,\mu}:G\to U(L^{2}(\widetilde{G},\mu))\] _is irreducible if and only if_ _1) \(\mu^{L_{t}}\perp\mu\quad\text{for all}\quad t\in G\backslash\{e\},\ \ \text{(where $\perp$ stands for singular),}\)_ _2) the measure \(\mu\) is \(G\)-ergodic._ Conditions 1) and 2) are the necessary conditions of the irreducibility. The problem is to prove that they are sufficient ones too. **Remark 7.2**: _This conjecture was expressed by Rais Salmanovich Ismagilov in his referee report of the author's PhD Thesis, 1985. It was verified for a lot of particular cases. In the general case, it is an open problem. In the case of a finite field \(\mathbb{F}_{p}\) we need some additional conditions for the irreducibility [8]._ ### Group \(B^{\mathbb{N}}_{0}\), arbitrary mesure \(\mu\) Let \(B^{\mathbb{N}}_{0}\) be the group of finite real upper-triangular matrices with unities on the principal diagonal and let \(B^{\mathbb{N}}\) be the group of all such matrices (not necessarily finite): \[B^{\mathbb{N}}_{0}=\{I+x=I+\sum_{k<n}x_{kn}E_{kn}\mid x\text{ is finite}\},\] \[B^{\mathbb{N}}=\{I+x=I+\sum_{k<n}x_{kn}E_{kn}\mid x\text{ is arbitrary}\}.\] Let \(\mu\) be an arbitrary probability measure on the group \(B^{\mathbb{N}}\). 
If \(\mu^{R_{t}}\sim\mu\) and \(\mu^{L_{t}}\sim\mu\) for all \(t\in B^{\mathbb{N}}_{0}\), analogues of the right \(T^{R,\mu}\) and the left \(T^{L,\mu}\) regular representations of the group \(B^{\mathbb{N}}_{0}\), i.e., \(T^{R,\mu},\ T^{L,\mu}:B^{\mathbb{N}}_{0}\to U(H_{\mu})\), are defined in the space \(H_{\mu}=L^{2}(B^{\mathbb{N}},\mu)\) by (7.6) and (7.7). For the generators \(A^{R,\mu}_{kn}\) (\(A^{L,\mu}_{kn}\)) of the one-parameter groups \(I+tE_{kn}\), \(t\in\mathbb{R}\), \(k<n\), corresponding to the right \(T^{R,\mu}\) (respectively the left \(T^{L,\mu}\)) regular representation we have the following formulas: \[A^{R,\mu}_{kn}=\frac{d}{dt}T^{R,\mu}_{I+tE_{kn}}|_{t=0}=\sum_{r=1}^{k-1}x_{rk}D_{rn}(\mu)+D_{kn}(\mu), \tag{7.8}\] \[A^{L,\mu}_{kn}=\frac{d}{dt}T^{L,\mu}_{I+tE_{kn}}|_{t=0}=-\Big(D_{kn}(\mu)+\sum_{m=n+1}^{\infty}x_{nm}D_{km}(\mu)\Big), \tag{7.9}\] where \(D_{kn}(\mu)=\frac{\partial}{\partial x_{kn}}+\frac{d}{dt}\Big(\frac{d\mu(x(I+tE_{kn}))}{d\mu(x)}\Big)^{1/2}|_{t=0}.\) For an arbitrary product measure \(\mu=\otimes_{k<n}\mu_{kn}\), we have \[D_{kn}(\mu)=\frac{\partial}{\partial x_{kn}}+\frac{\partial}{\partial x_{kn}}\Big(\ln\mu_{kn}^{1/2}(x_{kn})\Big), \tag{7.10}\] where we write \(d\mu_{kn}(x)=\mu_{kn}(x)dx\), \(x\in\mathbb{R}\). #### 7.3.1 Group \(B^{\mathbb{N}}_{0}\), Gaussian centered measure See details in [9, Ch. 2.1]. Let us define the Gaussian product-measure \(\mu_{b}\) on the group \(B^{\mathbb{N}}\) in the following way: \[d\mu_{b}(x)=\otimes_{k<n}(b_{kn}/\pi)^{1/2}\exp(-b_{kn}x_{kn}^{2})dx_{kn}=\otimes_{k<n}d\mu_{b_{kn}}(x_{kn}), \tag{7.11}\] where \(b=(b_{kn})_{k<n}\) is some set of positive numbers. In this case we have \[A^{R,\mu}_{kn}=\frac{d}{dt}T^{R,\mu}_{I+tE_{kn}}|_{t=0}=\sum_{r=1}^{k-1}x_{rk}D_{rn}+D_{kn},\quad D_{kn}=\frac{\partial}{\partial x_{kn}}-b_{kn}x_{kn}. \tag{7.12}\] It turns out that the measure \(\mu_{b}\) is always \(B^{\mathbb{N}}_{0}\)-right-quasi-invariant. Therefore, we can construct a family of analogues of the right \(T^{R,\mu_{b}}\) and the left \(T^{L,\mu_{b}}\) (if the measure \(\mu_{b}\) is \(B^{\mathbb{N}}_{0}\)-left-quasi-invariant) regular representations of the group \(B^{\mathbb{N}}_{0}\) in the space \(L^{2}(B^{\mathbb{N}},\mu_{b})\). They are defined by (7.6) and (7.7). **Theorem 7.4** ([6, 9]).: _The right regular representation \(T^{R,\mu_{b}}\) of the group \(B_{0}^{\mathbb{N}}\) is irreducible if and only if_ _1) \(\mu_{b}^{L_{t}}\perp\mu_{b}\quad\text{for all}\quad t\in B_{0}^{\mathbb{N}}\backslash\{e\},\)_ _2) the measure \(\mu_{b}\) is \(B_{0}^{\mathbb{N}}\)-ergodic._ **Definition 7.1**.: Let \(\alpha:G\to\operatorname{Aut}(X)\) be a _measurable action_ of a group \(G\) on a measurable space \((X,\mu)\). Recall that the probability measure \(\mu\) on some \(G\)-space \(X\) is called _ergodic_ if any function \(f\in L^{1}(X,\mu)\) with the property \(f(\alpha_{t}(x))=f(x)\) a.e. (almost everywhere) \(\operatorname{mod}\mu\) is constant. **Lemma 7.5** ([9], Lemma 2.1.6).: _We have \(\mu_{b}^{L_{t}}\perp\mu_{b}\quad\text{for all}\quad t\in B_{0}^{\mathbb{N}}\backslash\{e\}\) if and only if_ \[S_{kn}^{L}(\mu_{b})=\sum_{m=n+1}^{\infty}\frac{b_{km}}{b_{nm}}=\infty\quad\text{for all}\quad k<n. \tag{7.13}\]
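For intuition, the dichotomy in (7.13) is easy to see numerically. In the sketch below (Python/NumPy; the two weight families \(b_{km}=m^{k}\) and \(b_{km}=2^{km}\) are our illustrative choices, not taken from the text), the first partial sum grows without bound as the cutoff increases, while the second stays bounded.

```python
import numpy as np

# Partial sums of S^L_12(mu_b) = sum_{m >= 3} b_{1m}/b_{2m} from (7.13),
# for two illustrative (hypothetical) choices of the weights b_{km}.
m = np.arange(3, 10**6)

print(np.sum(1.0 / m))       # b_{km} = m^k  => ratio 1/m:  ~12.9, diverges with the cutoff
print(np.sum(2.0 ** (-m)))   # b_{km} = 2^km => ratio 2^-m: ~0.25, bounded
```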
_Idea of the proof of irreducibility (for details see [6, 9])._ Conditions 1) and 2) are necessary for the irreducibility of the representation \(T^{R,\mu_{b}}\). We show that they are also sufficient. Let \(\mathfrak{A}(B_{0}^{\mathbb{N}})\) be the von Neumann algebra generated by the representation \(T^{R,\mu_{b}}\): \[\mathfrak{A}(B_{0}^{\mathbb{N}})=\Big(T_{t}^{R,\mu_{b}}\mid t\in B_{0}^{\mathbb{N}}\Big)^{\prime\prime}. \tag{7.14}\] To prove the irreducibility, it is sufficient to show that \(U_{kn}(t)\in\mathfrak{A}(B_{0}^{\mathbb{N}})\) for all \(k,n\in\mathbb{N}\), \(k<n\), where \(U_{kn}(t)=e^{itx_{kn}}\). In this case we have \[L^{\infty}(B^{\mathbb{N}},\mu_{b})\subset\mathfrak{A}(B_{0}^{\mathbb{N}})\quad\text{hence}\quad\big(\mathfrak{A}(B_{0}^{\mathbb{N}})\big)^{\prime}\subset\Big(L^{\infty}(B^{\mathbb{N}},\mu_{b})\Big)^{\prime}=L^{\infty}(B^{\mathbb{N}},\mu_{b}), \tag{7.15}\] since the algebra \(L^{\infty}(B^{\mathbb{N}},\mu_{b})\) is _maximal abelian_. Let now some bounded operator \(A\) commute with the representation: \([T_{t}^{R,\mu_{b}},A]=0\) for all \(t\in B_{0}^{\mathbb{N}}\). Then by (7.15), \(A\in L^{\infty}(B^{\mathbb{N}},\mu_{b})\), i.e., \(A\) is the operator of multiplication by some function \(a\in L^{\infty}(B^{\mathbb{N}},\mu_{b})\). The commutation \([T_{t}^{R,\mu_{b}},a]=0\) implies \(a(xt)=a(x)\) a.e. \(\operatorname{mod}\mu_{b}\). By ergodicity of the measure \(\mu_{b}\) on \(B^{\mathbb{N}}\) we conclude that \(a(x)=\mathrm{const}\), hence \(A=CI\), i.e., the representation \(T^{R,\mu_{b}}\) is irreducible. To illustrate the approximation we show here only that \(e^{itx_{12}}\in\mathfrak{A}(B_{0}^{\mathbb{N}})\), or \(x_{12}\ \eta\ \mathfrak{A}(B_{0}^{\mathbb{N}})\), i.e., that the operator \(x_{12}\) is _affiliated_ with the algebra \(\mathfrak{A}(B_{0}^{\mathbb{N}})\). **Definition 7.2**.: Recall that a not necessarily bounded self-adjoint operator \(A\) in a Hilbert space \(H\) is said to be _affiliated_ with a von Neumann algebra \(M\) of operators in this Hilbert space \(H\) if \(e^{itA}\in M\) for all \(t\in\mathbb{R}\). One writes \(A\ \eta\ M\). We show that the operator \(x_{12}\) can be approximated in the _strong resolvent sense_ by linear combinations of the operators \(A_{1k}A_{2k},\ k\geq 3\). By (7.12) we get \[A_{1k}A_{2k}=D_{1k}(x_{12}D_{1k}+D_{2k})=x_{12}D_{1k}^{2}+D_{1k}D_{2k},\quad k\geq 3. \tag{7.16}\] By [9], Lemma 2.1.9, the convergence \(\sum_{k=N_{1}}^{N_{2}}t_{k}A_{1k}A_{2k}\to x_{12}\) holds if and only if \(S_{12}^{L}(\mu_{b})=\sum_{k=3}^{\infty}\frac{b_{1k}}{b_{2k}}=\infty\). And this is precisely the condition of orthogonality \(\mu_{b}^{L_{t}}\perp\mu_{b}\), see Lemma 7.5. We give here a more conceptual proof of this fact. Using the appropriate Fourier transform \(F_{2}\) in the variables \((x_{1k},x_{2k})_{k=3}^{\infty}\), see details in [9, Section 2.1.3, formula (2.15)], we get \(F_{2}(D_{1k})=y_{1k},\ F_{2}(D_{2k})=y_{2k},\ k\geq 3\); therefore, \[F_{2}(A_{1k}A_{2k})=x_{12}y_{1k}^{2}+y_{1k}y_{2k}. \tag{7.17}\] The corresponding measure \(\mu_{1/4b}(y)\) in the variables \((y_{1k},y_{2k})_{k=3}^{\infty}\) is defined by \[d\mu_{1/4b}(y)=\otimes_{k=1}^{2}\otimes_{n=3}^{\infty}\sqrt{\frac{1}{4b_{kn}\pi}}\exp\Big(-\frac{y_{kn}^{2}}{4b_{kn}}\Big)dy_{kn}=\otimes_{k=1}^{2}\otimes_{n=3}^{\infty}d\mu_{1/4b_{kn}}(y_{kn}). \tag{7.18}\] The corresponding canonical measure \(\mu_{1/2}(z)\) is as follows: \[d\mu_{1/2}(z)=\otimes_{k=1}^{2}\otimes_{n=3}^{\infty}\sqrt{\frac{1}{2\pi}}\exp\Big(-\frac{z_{kn}^{2}}{2}\Big)dz_{kn}=\otimes_{k=1}^{2}\otimes_{n=3}^{\infty}d\mu_{1/2}(z_{kn}).
\tag{7.19}\] In the _canonical coordinates_ \(z_{kn}\) the expression \(F_{2}(A_{1k}A_{2k})\) takes the following form: \[2b_{1k}x_{12}z_{1k}^{2}+2\sqrt{b_{1k}b_{2k}}z_{1k}z_{2k}=2b_{1k}\big(x_{12}z_{1k}^{2}+a_{k}z_{1k}z_{2k}\big),\quad a_{k}=\sqrt{b_{2k}/b_{1k}}.\] Let us denote by \(\langle f_{n}\mid n\in\mathbb{N}\rangle\) the _closure of the linear space_ generated by the set of vectors \((f_{n})_{n\in\mathbb{N}}\) in a Hilbert space \(H\). **Lemma 7.6**.: _Set \(f_{0}=x_{12},\ f_{k}=x_{12}z_{1k}^{2}+a_{k}z_{1k}z_{2k}\). We have \(f_{0}\in\langle f_{k}\mid k\geq 3\rangle\) if and only if \(\sum_{k=3}^{\infty}\frac{1}{a_{k}^{2}}=\infty\)._ Proof. Consider the hyperplane \(V_{n}\) generated by the \(n+1\) vectors \(f_{3},\ldots,f_{n+3}\). By Lemma 4.1 we have \[d^{2}(f_{0},V_{n})=\frac{\Gamma(f_{0},f_{3},f_{4},\ldots,f_{n+3})}{\Gamma(f_{3},f_{4},\ldots,f_{n+3})}. \tag{7.20}\] Further, \[\gamma(f_{0},f_{3},f_{4},\ldots,f_{n+3})=\left(\begin{array}{cccc}1&1&...&1\\ 1&1+a_{3}^{2}&...&1\\ &&...&\\ 1&1&...&1+a_{n+3}^{2}\end{array}\right) \tag{7.21}\] and \[\gamma(f_{3},f_{4},\ldots,f_{n+3})=\left(\begin{array}{cccc}1+a_{3}^{2}&1&...&1\\ 1&1+a_{4}^{2}&...&1\\ &&...&\\ 1&1&...&1+a_{n+3}^{2}\end{array}\right). \tag{7.22}\] Finally, by (5.14) we get \[d^{2}(f_{0},V_{n})=\frac{\det\big(\gamma(f_{0},f_{3},f_{4},\ldots,f_{n+3})\big)}{\det\big(\gamma(f_{3},f_{4},\ldots,f_{n+3})\big)}\stackrel{(5.14)}{=}\frac{\Big(\prod_{k=3}^{n+3}a_{k}^{2}\Big)}{\Big(\prod_{k=3}^{n+3}a_{k}^{2}\Big)\Big(1+\sum_{k=3}^{n+3}\frac{1}{a_{k}^{2}}\Big)}=\Big(1+\sum_{k=3}^{n+3}\frac{1}{a_{k}^{2}}\Big)^{-1}.\qed\] ### Koopman's representation Let \(\alpha:G\to\operatorname{Aut}(X)\) be a measurable action of a group \(G\) on a measurable space \((X,\mu)\) with \(G\)-quasi-invariant measure \(\mu\), i.e., \(\mu^{\alpha_{t}}\sim\mu\) for all \(t\in G\). With these data one can associate the representation \(\pi^{\alpha,\mu,X}:G\to U(L^{2}(X,\mu))\) by the following formula: \[(\pi_{t}^{\alpha,\mu,X}f)(x)=(d\mu(\alpha_{t^{-1}}(x))/d\mu(x))^{1/2}f(\alpha_{t^{-1}}(x)),\quad f\in L^{2}(X,\mu). \tag{7.23}\] In the case of an invariant measure this representation is called _Koopman's representation_. We keep the same name for the representation (7.23). The following conjecture is a natural generalization of Ismagilov's conjecture. **Conjecture 7.7**.: _The representation (7.23) is irreducible if and only if_ _1) \(\mu^{g}\perp\mu\quad\text{for all}\quad g\in Z_{\mathrm{Aut}(X)}(\alpha(G))\backslash\{e\},\)_ _2) the measure \(\mu\) is \(G\)-ergodic._ Here \(Z_{G}(H)\) is the _centralizer_ of the subgroup \(H\) in the group \(G\): \(Z_{G}(H)=\{g\in G\ |\ \{g,a\}=e\ \forall a\in H\},\) where \(\{g,a\}=gag^{-1}a^{-1}\). In general, Conjecture 7.7 is false; our aim is to find when it holds, see the following subsection. ### Group \(\mathrm{GL}_{0}(2\infty,\mathbb{R})\) acting on \(m\) infinite rows Let us denote by \(\mathrm{Mat}(2\infty,\mathbb{R})\) the space of all real matrices that are infinite in both directions: \[\mathrm{Mat}(2\infty,\mathbb{R})=\Big\{x=\sum_{k,n\in\mathbb{Z}}x_{kn}E_{kn},\ x_{kn}\in\mathbb{R}\Big\}. \tag{7.24}\] The group \(G=\mathrm{GL}_{0}(2\infty,\mathbb{R})=\varinjlim_{n,i^{s}}\mathrm{GL}(2n+1,\mathbb{R})\) is defined as the inductive limit of the general linear groups \(G_{n}=\mathrm{GL}(2n+1,\mathbb{R})\) with respect to the _symmetric embedding_ \(i^{s}\): \[G_{n}\ni x\mapsto i^{s}_{n+1}(x)=x+E_{-(n+1),-(n+1)}+E_{n+1,n+1}\in G_{n+1}.
\tag{7.25}\] For a fixed natural number \(m\), consider a \(G\)-space \(X_{m}\) as the following subspace of the space \(\mathrm{Mat}(2\infty,\mathbb{R})\): \[X_{m}=\Big{\{}x\in\mathrm{Mat}(2\infty,\mathbb{R})\ |\ x=\sum_{k=1}^{m}\sum_{n \in\mathbb{Z}}x_{kn}E_{kn}\Big{\}}. \tag{7.26}\] The group \(\mathrm{GL}_{0}(2\infty,\mathbb{R})\) acts from the right on the space \(X_{m}.\) Namely, the right action of the group \(\mathrm{GL}_{0}(2\infty,\mathbb{R})\) is correctly defined on the space \(X_{m}\) by the formula \(R_{t}(x)=xt^{-1},\ t\in G,\ x\in X_{m}\). We define a Gaussian non-centered product measure \(\mu:=\mu^{m}:=\mu^{m}_{(b,a)}\) on the space \(X_{m}:\) \[\mu^{m}_{(b,a)}(x)=\otimes_{k=1}^{m}\otimes_{n\in\mathbb{Z}}\mu_{(b_{kn},a_{ kn})}(x_{kn}), \tag{7.27}\] where \[d\mu_{(b_{kn},a_{kn})}(x_{kn})=\sqrt{\frac{b_{kn}}{\pi}}e^{-b_{kn}(x_{kn}-a_{ kn})^{2}}dx_{kn} \tag{7.28}\] and \(b=(b_{kn})_{k,n},\ b_{kn}>0,\)\(a=(a_{kn})_{k,n},\)\(a_{kn}\in\mathbb{R},\)\(1\leq k\leq m,\)\(n\in\mathbb{Z}.\) Define the unitary representation \(T^{R,\mu,m}\) of the group \(\mathrm{GL}_{0}(2\infty,\mathbb{R})\) on the space \(L^{2}(X_{m},\mu^{m}_{(b,a)})\) by the formula: \[(T^{R,\mu,m}_{t}f)(x)=\big{(}d\mu^{m}_{(b,a)}(xt)/d\mu^{m}_{(b,a)}(x)\big{)}^{ 1/2}f(xt),\ f\in L^{2}(X_{m},\mu^{m}_{(b,a)}). \tag{7.29}\] Obviously, the _centralizer_\(Z_{\mathrm{Aut}(X_{m})}(R(G))\subset\mathrm{Aut}(X_{m})\) contains the group \(L(\mathrm{GL}(m,\mathbb{R}))\), i.e., the image of the group \(\mathrm{GL}(m,\mathbb{R})\) with respect to the left action \(L:\mathrm{GL}(m,\mathbb{R})\to\mathrm{Aut}(X_{m})\), \(L_{s}(x)\!=\!sx\), \(s\in\mathrm{GL}(m,\mathbb{R})\), \(x\in X_{m}\). We prove the following theorem. **Theorem 7.8**.: _The representation \(T^{R,\mu,m}\!:\!\mathrm{GL}_{0}(2\infty,\mathbb{R})\!\to\!U\Big{(}L^{2}(X_{m}, \mu^{m}_{(b,a)})\Big{)}\) is irreducible if and only if_ \[(i) (\mu^{m}_{(b,a)})^{L_{s}}\perp\mu^{m}_{(b,a)}\quad\mbox{for all} \quad s\in\mathrm{GL}(m,\mathbb{R})\backslash\{e\};\] \[(ii) \mbox{the measure}\quad\mu^{m}_{(b,a)}\quad\mbox{is $G$-ergodic}.\] In [9, 10] this result was proved for \(m\leq 2\). In [11] it was proved for \(m=3\). Note that conditions (i) and (ii) are necessary conditions for irreducibility. **Remark 7.3**.: Any Gaussian product-measure \(\mu^{m}_{(b,a)}\) on \(X_{m}\) is \(\mathrm{GL}_{0}(2\infty,\mathbb{R})\)-right-ergodic [14, SS3, Corollary 1], see Definition 7.1. For non-product-measures this is not true in general. #### 7.5.1 Case \(m=3\) **Remark 7.4**.: (The idea of the proof of irreducibility, see details in [11]). Let us denote by \(\mathfrak{A}^{m}\) the _von Neumann algebra_ generated by the representation \(T^{R,\mu,m}\), i.e., \(\mathfrak{A}^{m}=(T^{R,\mu,m}_{t}\mid t\in G)^{\prime\prime}\). For \(\alpha\!=\!(\alpha_{k})\!\in\!\{0,1\}^{m}\) define the von Neumann algebra \(L^{\infty}_{\alpha}(X_{m},\mu^{m})\) as follows: \[L^{\infty}_{\alpha}(X_{m},\mu^{m})\!=\!\Big{(}\exp(itB^{\alpha}_{kn})\mid 1\leq k \leq m,\ t\in\mathbb{R},\ n\in\mathbb{Z}\Big{)}^{\prime\prime},\] where \(B^{\alpha}_{kn}\!=\!\left\{\begin{array}{cl}x_{kn},&\mbox{if}\quad\alpha_{k }=0\\ i^{-1}D_{kn},&\mbox{if}\quad\alpha_{k}=1\end{array}\right.\) and \(D_{kn}=\partial/\partial x_{kn}-b_{kn}(x_{kn}-a_{kn})\). 
**The proof of the irreducibility is based on four facts**: 1) we can approximate by the generators \(A_{kn}=A^{R,m}_{kn}=\frac{d}{dt}T^{R,\mu,m}_{I+tE_{kn}}|_{t=0}\) the set of operators \((B^{\alpha}_{kn})^{m}_{k=1}\), \(n\in\mathbb{Z}\), _for some_ \(\alpha\in\{0,1\}^{m}\) depending on the measure \(\mu^{m}\), using the orthogonality condition \((\mu^{m})^{L_{s}}\perp\mu^{m}\) for all \(s\in\mathrm{GL}(m,\mathbb{R})\backslash\{e\}\); 2) it is sufficient to verify the approximation only on the _cyclic vector_ \(\mathbf{1}(x)\equiv 1\), since the representation \(T^{R,\mu,m}\) is _cyclic_; 3) the subalgebra \(L^{\infty}_{\alpha}(X_{m},\mu^{m})\) is a _maximal abelian subalgebra_ in \(\mathfrak{A}^{m}\); 4) the measure \(\mu^{m}\) is \(G\)-ergodic. Here the _generators_ \(A_{kn}\) are given by the formulas: \[A_{kn}=\sum_{r=1}^{m}x_{rk}D_{rn},\quad k,n\in\mathbb{Z},\quad\text{where}\quad D_{kn}=\partial/\partial x_{kn}-b_{kn}(x_{kn}-a_{kn}).\] **Remark 7.5**.: _Scheme of the proof._ We prove the irreducibility as follows: \[\left(\mu^{L_{s}}\perp\mu\;\;\text{for all}\;\;s\in\mathrm{GL}(3,\mathbb{R})\setminus\{e\}\right)\Leftrightarrow\left(\begin{smallmatrix}\text{criteria}\\ \text{of}\\ \text{orthogonality}\end{smallmatrix}\right). \tag{7.30}\] The orthogonality criteria, combined with the approximation lemmas (see Lemma 7.11 below; the intermediate steps (7.31)-(7.36) are carried out in detail in [11]), give the approximation of the operators \(B^{\alpha}_{kn}\) and hence the irreducibility of \(T^{R,\mu,3}\). For \(m=3\), consider three rows as follows: \[\left(\begin{array}{cccccc}...&a_{11}&a_{12}&...&a_{1n}&...\\...&a_{21}&a_{22}&...&a_{2n}&...\\...&a_{31}&a_{32}&...&a_{3n}&...\end{array}\right)\quad\text{and set}\quad\lambda_{k}=\frac{1}{2b_{1k}}+\frac{1}{2b_{2k}}+\frac{1}{2b_{3k}}. \tag{7.37}\] Denote by \(Y_{1},Y_{2}\) and \(Y_{3}\) the three following vectors: \[x_{rk}=a_{rk}/\sqrt{\lambda_{k}},\;\;k\in\mathbb{Z},\quad Y_{r}=(x_{rk})_{k\in\mathbb{Z}}. \tag{7.38}\] **Lemma 7.11**.: _For any \(l\in\mathbb{Z}\) we have_ \[D_{rl}\mathbf{1}\in\langle A_{kl}\mathbf{1}\mid k\in\mathbb{Z}\rangle\quad\Leftrightarrow\quad\Delta(Y_{r},Y_{s},Y_{t})=\infty,\] _where \(\{r,s,t\}\) is a cyclic permutation of \(\{1,2,3\}\)._
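The determinant identity (5.14) underlying the distance computation in the proof of Lemma 7.6 is easy to confirm numerically. The sketch below (Python/NumPy; the values chosen for the \(a_{k}\) are arbitrary and for illustration only) checks the formula \(d^{2}(f_{0},V_{n})=(1+\sum_{k}1/a_{k}^{2})^{-1}\) against the Gram matrices (7.21)-(7.22).

```python
import numpy as np

# Numerical check of the proof of Lemma 7.6:
# d^2(f_0, V_n) = Gamma(f_0, f_3, ..., f_{n+3}) / Gamma(f_3, ..., f_{n+3})
#              = (1 + sum_k 1/a_k^2)^(-1).
rng = np.random.default_rng(1)
a = rng.uniform(0.5, 3.0, 8)                          # illustrative a_3, ..., a_10

G_big = np.ones((9, 9)) + np.diag(np.r_[0.0, a**2])   # gamma(f_0, f_3, ...) of (7.21)
G_small = np.ones((8, 8)) + np.diag(a**2)             # gamma(f_3, ...) of (7.22)
d2 = np.linalg.det(G_big) / np.linalg.det(G_small)
print(np.isclose(d2, 1.0 / (1.0 + np.sum(1.0 / a**2))))   # True, as in the proof
```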
_Acknowledgement._ The author is very grateful to Prof. K.-H. Neeb, Prof. M. Smirnov and Dr P. Moree for their personal efforts to make academic stays possible at their respective institutes. The author visited MPIM from March to April 2022 and from January to April 2023, the University of Augsburg from June to July 2022, and the University of Erlangen-Nuremberg from August to December 2022, all during the Russian invasion of Ukraine. Also, Prof. R. Kashaev kindly invited him to Geneva. Further, he would like to pay his respect to Prof. P. Teichner at MPIM for his immediate efforts to help mathematicians in Ukraine after the Russian invasion. Since the spring of 2023 A. Kosyak is an Arnold Fellow at the London Institute for Mathematical Sciences, and he would like to express his gratitude to Mrs S. Myers Cornaby, to Miss A. Ker Mercer, and to Dr M. Hall, and especially to the Director of LIMS Dr T. Fink and to Prof. Y.-H. He.
2303.17269
uGMRT observations of the hot-Saturn WASP 69b: Radio-Loud Exoplanet-Exomoon Survey II (RLEES II)
Exomoons have so far eluded ongoing searches. Several studies have exploited transit and transit timing variations and high-resolution spectroscopy to identify potential exomoon candidates. One method of detecting and confirming these exomoons is to search for signals of planet-moon interactions. In this work, we present the first radio observations of the exomoon candidate system WASP 69b. Based on the detection of alkali metals in the transmission spectra of WASP-69b, it was deduced that the system might be hosting an exomoon. WASP 69b is also one of the exoplanet systems that will be observed as part of JWST cycle-1 GTO. This makes the system an excellent target to observe and follow up. We observed the system for 32 hrs at 150 MHz and 218 MHz using the upgraded Giant Metrewave Radio Telescope (uGMRT). Though we do not detect radio emission from the systems, we place strong $3\sigma$ upper limits of 3.3 mJy at 150 MHz and 0.9 mJy at 218 MHz. We then use these upper limits to estimate the maximum mass loss from the exomoon candidate.
Mayank Narang, Apurva V. Oza, Kaustubh Hakim, P. Manoj, Himanshu Tyagi, Bihan Banerjee, Arun Surya, Prasanta K. Nayak, Ravinder K. Banyal, Daniel P. Thorngren
2023-03-30T10:16:34Z
http://arxiv.org/abs/2303.17269v1
# uGMRT observations of the hot-Saturn WASP 69b: Radio-Loud Exoplanet-Exomoon Survey II (RLEES II) ###### Abstract Exomoons have so far eluded ongoing searches. Several studies have exploited transit and transit timing variations and high-resolution spectroscopy to identify potential exomoon candidates. One method of detecting and confirming these exomoons is to search for signals of planet-moon interactions. In this work, we present the first radio observations of the exomoon candidate system WASP 69b. Based on the detection of alkali metals in the transmission spectra of WASP-69b, it was deduced that the system might be hosting an exomoon. WASP 69b is also one of the exoplanet systems that will be observed as part of JWST cycle-1 GTO. This makes the system an excellent target to observe and follow up. We observed the system for 32 hrs at 150 MHz and 218 MHz using the upgraded Giant Metrewave Radio Telescope (uGMRT). Though we do not detect radio emission from the system, we place strong 3\(\sigma\) upper limits of 3.3 mJy at 150 MHz and 0.9 mJy at 218 MHz. We then use these upper limits to estimate the maximum mass loss from the exomoon candidate. keywords: radio continuum: planetary systems -- planets and satellites: magnetic fields -- planets and satellites: aurorae -- exoplanets ## 1 Introduction The discovery of an exomoon is the next natural step in the galactic hierarchy of celestial objects. Signatures of alkali metals such as Na and K have been reported in the high-resolution spectra of about 20 transiting giant exoplanets (e.g., Charbonneau et al., 2002; Wyttenbach et al., 2017). By analogy with the Na escape signature from the Jupiter-Io system, Oza et al. (2019) proposed that ionizing alkali clouds fueled by evaporative mass loss from exomoons orbiting these exoplanets can explain the observed alkaline exospheres of transiting exoplanet systems (Gebek & Oza, 2020; Wyttenbach et al., 2017). The finding by Cassidy et al. (2009) that exomoons around close-in gas giant exoplanets have stable orbits over astronomical timescales led to the demonstration that the stellar tide will melt the interiors of exomoons and evaporate their surfaces. In our solar system, due to tidal heating, Io exhibits evaporation in the form of extreme mass loss \(\sim\) 1000 kg/s. While Io's eccentricity and tidal heating are due to Europa and Ganymede locking it into a Laplace resonance (4:2:1) around Jupiter's gravitational well (Peale et al., 1979), exomoons of close-in exoplanets have a large periodic eccentricity due to stellar forcing (Cassidy et al., 2009). The impact of the stellar tide on the tidal heating rate (\(\dot{E}_{a}\)) is a strong inverse function of the planet's orbital period (\(\tau_{p}\)): \(\dot{E}_{a}\propto\tau_{p}^{-5}\) (Cassidy et al., 2009). The evaporation limit for an Io-mass exomoon orbiting a gas giant was found (Oza et al., 2019) to be within a critical orbital period of \(\tau_{c}=1\) day (for hydrodynamic mass loss, Perez-Becker & Chiang, 2013) and \(\tau_{c}=2.6\) days (for tidally driven mass loss, Charnoz et al., 2021; but see Dobos et al., 2021), consistent with the dearth of evaporated metals within the radius resulting from this critical period. Transiting exoplanets are ideal candidates for exo-Ios because they allow for transit follow-ups with JWST. However, an independent detection of these exomoons is necessary before considerable telescope time is devoted to investigating them. One method of detecting exomoons is by observing the radio emissions due to (exo)planet-(exo)moon interaction.
The Io-controlled decametric (Io-DAM) emission (Bigg, 1964) is one such example of planet-moon interaction that leads to a detectable signal. These emissions are generated by the interaction between Io and Jupiter's magnetic field. Io is a highly volcanic moon, and the volcanoes on its surface emit a large amount of ionized gas. Due to ongoing volcanism, the moon possesses an atmosphere of highly ionized SO\({}_{2}\)(Lellouch et al., 2007), which produces an ionosphere around the moon. This ionized gas is then captured by Jupiter's strong magnetic field, forming a plasma torus around the planet (see Figure 1). As Io orbits within this plasma torus, it generates a current that flows between the moon and the planet (Goldreich & Lynden-Bell, 1969; Griessmeier et al., 2007), giving rise to a unipolar inductor. The interaction between Io and the plasma torus also generates magnetic field oscillations known as Alfven waves (Belcher, 1987), which lead to the production of electric fields parallel to the Jovian magnetic field line (Neubauer, 1980; Crary, 1997; Saur, 2004). The electrons then accelerate along magnetic field lines, whose gyration produces radio emission via an electron cyclotron maser instability (ECMI) process (e.g. Wu & Lee, 1979; Treumann, 2006). If a similar mechanism also operates in exomoon-exoplanet systems, then their emission might also be detectable, making these exomoons "Radio-loud", similar to Io. Several attempts have been made to detect radio emissions from exoplanets, but no successful detection has yet been reported (see Griessmeier, 2017; Lazio, 2018; Narang et al., 2021, 2021, 2021). However, radio emission due to star-planet interaction from the star has been reported in a few cases (e.g., Vedantham et al., 2020; Callingham et al., 2021). Recently Narang et al. (2023) carried out the first dedicated search to detect radio emissions from a sample of exoplanets that showed possible signatures of an exomoon in their transmission spectra. Though no detections were made in that survey, it opened up the possibility of using radio observations to search for exomoons. Discovery of exomoons via radio emission hinges on several factors, including observing the emission at the correct frequency (which in turn depends on the magnetic field of the exoplanets, which are largely unknown), the emission being beamed towards us during our observation run, the evaporation from the exomoon being powerful enough to lead to a detectable signature. All these make the detection of radio emissions from planet-moon interactions a challenging but plausible experiment. In this work, we present our observation of the hot-Saturn WASP-69b using the upgraded Giant Metrewave Radio Telescope (uGMRT) to study the planet-moon interaction and search for a volcanic exomoon. The star WASP-69 is a K5 star at a distance of 50 pc (Bailer-Jones et al., 2021). The WASP-69 system is host to WASP-69b, a hot-Saturn with a mass of 0.26 \(\pm\) 0.017 \(M_{J}\) (radius of 1.057 \(\pm\) 0.047 \(R_{J}\)) and an orbital period of 3.86 days (0.045 AU, Anderson et al., 2014). A strong signature of Na was reported by Casasayas-Barris et al. (2017). The atmosphere of WASP-69b has been studied widely with ground-based high-resolution spectroscopy and HST (Nortmann et al., 2018; Estrela et al., 2021). Oza et al. (2019) attributed the Na detection in the atmosphere of WASP-69b to the presence of an exo-Io. 
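The 3\(\sigma\) flux-density limits quoted in the next section follow directly from the per-night sensitivities; a minimal check (Python; the rms values are copied from Table 1):

```python
# 3-sigma flux-density upper limits implied by the best per-night rms values
# listed in Table 1 (a simple check of the numbers quoted in Section 3.1).
rms_150 = [5.5, 1.1, 4.9, 4.3, 2.2]   # mJy/beam, 150 MHz
rms_218 = [0.5, 0.4, 0.3, 0.3, 0.5]   # mJy/beam, 218 MHz

print(3 * min(rms_150))   # 3.3 mJy at 150 MHz
print(3 * min(rms_218))   # 0.9 mJy at 218 MHz
```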
Furthermore, the system WASP-69b will be observed as a cycle-1 JWST GTO target with both NIRCAM (proposal ID 1185 Greene et al., 2017) and MIRI (proposal ID 1177 Greene et al., 2017). These data sets may possess infrared signatures of hot spots of a tidally-heated exomoon (Peters & Turner, 2013), which may be evident in NIRCAM and MIRI observations. This makes WASP-69b not just an excellent candidate for uGMRT observations but also for follow-up ground and space-based observations. WASP-69b was previously observed at 150 MHz with GMRT as part of the TIFR GMRT Sky Survey (TGSS, Intema et al., 2017). The TGSS observations reached an rms of 2.2 mJy (see Figure 2). However, no emission was detected from the source. In Section 2, we describe the details of the observations and the data reduction process. Next, we present our findings and discuss them in Section 3, followed by a summary in Section 4. ## 2 Observation and data reduction The WASP-69 system was observed for 32 hrs with uGMRT (proposal ID 41_068, PI Kaustubh Hakim). We observed the system in the band 2 (120-250 MHz) of uGMRT. The system was observed for five consecutive days. At each pointing, the system was observed for 6-7 hrs. The complete log of the observations is listed in Table 1. For all five pointings, the observation setup was the same. The flux calibrator 3C286 was observed at the beginning of the observation, while the flux calibrator 3C48 was observed at the end of the observation run. We observed the phase calibrator 2047-026 in a loop with the target WASP-69 with 27 mins of WASP-69 and 6 mins on the phase calibrator 2047-026. To reduce the band 2 (120-250 MHz) uGMRT data, we used Source Peeling and Atmospheric Modeling (SPAM) pipeline (Intema et al., 2009; Intema, 2014, 2014). SPAM is a python-based extension to Astronomical Image Processing System (AIPS) Greisen (2003). SPAM was developed to reduce low-frequency radio interferometric observations using telescopes such as GMRT. SPAM has inbuilt routines for flagging RFI and bad data. SPAM also includes direction-dependent ionospheric calibration and image-plane ripple suppression, which can further improve the image quality. However, the SPAM pipeline does not support the processing of large fractional bandwidths (\(\delta f/f>~{}0.2\)). Thus natively, the SPAM is not capable of reducing the wideband data from uGMRT. A workaround for this is to split the bandwidth into smaller chunks (subbands) that can be processed independently. The calibrated output visibilities can then be jointly imaged to produce the final image. The band-2 (120-250 MHz) of uGMRT has a break in the middle (165-185 MHz) and can be divided into two frequency ranges. Thus we decided to split band-2 into two different subbands with a bandwidth of about \(\sim 30\) MHz around regions of relatively low radio frequency interference. We selected channel numbers from 600-1100 (500 channels) corresponding to a band center of \(\sim 218\) MHz and channel numbers from 1350-1750 (400 channels) corresponding to a band center of \(\sim 150\) MHz. These channels were relatively free of RFI. Therefore, we processed the two sub-bands independently and produced the final images. The rms noise of these two sub-bands is very different, so we decided not to combine the two images to produce a wideband image. 
\begin{table} \begin{tabular}{c c c c c} \hline Date of & Start time & Duration & rms & rms \\ observation & – & – & 150 MHz & 218 MHz \\ – & (UTC) & (hr) & (mJy/b) & (mJy/b) \\ \hline 2021 Nov 05\({}^{th}\) & 0830 & 7 & 5.5 & 0.5 \\ 2021 Nov 06\({}^{th}\) & 1030 & 7 & 1.1 & 0.4 \\ 2021 Nov 07\({}^{th}\) & 0830 & 6 & 4.9 & 0.3 \\ 2021 Nov 08\({}^{th}\) & 0830 & 6 & 4.3 & 0.3 \\ 2021 Nov 09\({}^{th}\) & 0830 & 6 & 2.2 & 0.5 \\ \hline \end{tabular} \end{table} Table 1: Summary of observation and the rms sensitivity reached during our observation run ## 3 Results ### Observed upper limits on the radio flux density The WASP 69 system was observed with uGMRT for five days in band 2 (120-250 MHz), totaling 32 hrs. In Figure 3, we show the band-2 150 MHz images of the WASP-69 field, while in Figure 4, we show the band-2 218 MHz images of the WASP-69 field. In Table 1, we have listed the rms reached during these observations. The rms value for the WASP-69 field ranges from 1.1-5.5 mJy/beam at 150 MHz and between 0.3-0.5 mJy/beam at 218 MHz. Using the lowest value of rms and assuming 3\(\times\)_rms_ as an upper limit to the radio flux density \(S_{\nu}\), we get \(S_{\nu}\) = 3.3 mJy at 150 MHz and \(S_{\nu}\) = 0.9 mJy at 218 MHz. Our observations at 218 MHz are some of the deepest observations that have been carried out at these frequencies (e.g., Lecavelier Des Etangs et al., 2009, 2011; Narang et al., 2021b; O'Gorman et al., 2018; Narang, 2022). ### Maximum radio power The maximum radio power \(P_{\nu}\) emitted by the exoplanet-exomoon system can be calculated from the observed upper limits on the radio flux density (e.g., Lazio et al., 2004; Griessmeier et al., 2007): \(P_{\nu}=S_{\nu}\Delta\nu\,\Omega d^{2}\), where \(\Delta\nu\) is the bandwidth of emission such that \(\Delta\nu=\nu_{c}/2\), \(\Omega\) = 0.16 (average value of Io-DAM Zarka et al., 2004) is the angle of the Figure 1: A schematic representation of the ECMI emission process between an exoplanet, exomoon, and exoplasma-torus. The satellite semi-major axis \(a_{s}\) is defined by dynamics and stability criteria (Cassidy et al., 2009), the mass loss by stellar tides and irradiation (Oza et al., 2019), and the scale height defines the volume of the plasma torus responsible for the beamed emission \(S_{{}_{\rm{\nu_{c}}}}\). The cyclotron frequency \(\nu_{c}\) = 2.8 \(B_{p}\) determines the selected observational frequency and can result in non-exomoon emission (gray cone) and radio-loud exomoon-exoplanet emission (black cone) based on the field strength at the satellite orbit \(B_{s}\sim\)\(B_{p}\) (\(R_{p}/a_{s}\))\({}^{3}\). Figure 2: The TGSS 150 MHz GMRT image (magenta contours) of the WASP-69 field at 150 MHz overlaid on the ztf g band image. The green circle marks the position of the WASP-69. The contours plotted are 5, 7, 10, 15, and 25 \(\times\)\(\sigma\). The beam is shown as a red ellipse at the bottom left corner. Figure 3: The uGMRT image (magenta contours) of the WASP-69 field at 150 MHz for each individual observation night overlaid on the _x_tf g band image. The green circle marks the position of the WASP-69. The contours plotted are 5, 7, 10, 15, and 25 \(\times\)\(\sigma\). The beam is shown as a red ellipse at the bottom left corner. Figure 4: The uGMRT image (magenta contours) of the WASP-69 field at 218 MHz for each individual observation night overlaid on the ztf g band image. The green circle marks the position of the WASP-69. The contours plotted are 5,10, 30, and 50 \(\times\)\(\sigma\). 
### Maximum radio power The maximum radio power \(P_{\nu}\) emitted by the exoplanet-exomoon system can be calculated from the observed upper limits on the radio flux density (e.g., Lazio et al., 2004; Griessmeier et al., 2007): \(P_{\nu}=S_{\nu}\,\Delta\nu\,\Omega\,d^{2}\), where \(\Delta\nu\) is the bandwidth of the emission such that \(\Delta\nu=\nu_{c}/2\), \(\Omega\) = 0.16 sr (the average value for Io-DAM; Zarka et al., 2004) is the solid angle of the emission cone, and \(d\) is the distance of the exoplanet from Earth. We find the maximum radio power that could be emitted from the WASP-69 system is 9 \(\times 10^{14}\) W (at 150 MHz) and 4 \(\times 10^{14}\) W (at 218 MHz). Compared to the maximum radio power emitted from the Io Flux Tube (IFT) of \(\sim 10^{8}\)-\(10^{10}\) W (Bhardwaj et al., 2001), our upper limits are \(10^{4.9}\)-\(10^{6.9}\) times higher at 150 MHz and \(10^{4.6}\)-\(10^{6.6}\) times higher at 218 MHz. ### Constraining the mass loss rate from the exomoon The maximum emitted radio power from an exoplanet-exomoon interaction depends on the plasma mass density and the magnetic field (Neubauer, 1980). At low plasma densities, the radio power \(P_{-}\) scales linearly with the magnetic field at the satellite location, \(B_{s}\), and as the square root of the plasma mass density \(\rho_{s}\). At high plasma densities, the radio power \(P_{+}\) scales with the square of the magnetic field and is independent of the plasma mass density (Noyola et al., 2014), \[P_{-} \propto B_{s}\sqrt{\rho_{s}}, \tag{1}\] \[P_{+} \propto B_{s}^{2}. \tag{2}\] Although determining the magnitude of the plasma density is beyond the scope of this paper, these two limits can be used to make qualitative arguments about the mass loss rate from the exomoon \(\dot{M}\), which scales linearly with the plasma density: \(\dot{M}\propto\rho_{s}\). Moreover, for a given planetary magnetic field \(B_{0}\), the magnetic field at the satellite location drops as \(B_{s}\propto B_{0}/a_{s}^{3}\) (see Sect. 4.1 for a discussion of the magnetic field strength). Therefore, at the lower limit of the plasma density, the mass loss rate from the exomoon is proportional to the sixth power of the exomoon semi-major axis, \[\dot{M}\propto\left(\frac{P_{-}}{B_{s}}\right)^{2}=\left(\frac{P_{-}}{B_{0}}\right)^{2}a_{s}^{6}. \tag{3}\] For stable exomoons, the value of \(a_{s}\) lies between the Roche limit and half the Hill radius (Cassidy et al., 2009). For WASP-69b, this range for a satellite with a composition similar to Io is 1.16-2.1 \(R_{J}\) (Oza et al., 2019), much closer than the 5.9 \(R_{J}\) orbital distance of Io around Jupiter. Taking \(a_{s}=1.63\,R_{J}\) (the average of the Roche limit and half the Hill radius), the minimum radio power ratio of WASP-69b to the IFT of \(10^{4.9}\) at 150 MHz, and the magnetic field ratio of \(\sim 13\) (\(B_{0,W69b}=54\) G at 150 MHz, fixed by the search frequency \(\nu=2.8B_{0}\); cf. \(B_{0,Jup}=4.17\) G), we find that the mass loss rate from a hidden exo-Io is roughly 17000 times higher than that of Io (this arithmetic is reproduced in the sketch at the end of Section 4). However, to better constrain the mass loss from the system, further ground-based high-resolution spectroscopy of the companion alkali K doublet, predicted to be roughly 10\(\times\) less abundant than Na (Gebek & Oza, 2020), is required. With JWST NIRCam/MIRI (proposal IDs 1185 and 1177), volcanically vented molecules such as SO\({}_{2}\), CO\({}_{2}\), and CO are expected to be apparent due to tidally heated volcanism, as seen at Io from both ground-based (CRIRES/VLT; Lellouch et al., 2015) and space-based spectrographs (JIRAM/JUNO; Mura et al., 2020). ## 4 Discussion There could be several reasons why no radio emission was detected from this system. In the following subsections, we discuss some of them.
### Cyclotron Frequency and Gas Giant Magnetic Field Strength The cyclotron frequency \(\nu_{c}\) of the emission is given as \(\nu_{c}=2.8\,B_{0}\), where \(B_{0}\) is in Gauss and \(\nu_{c}\) in MHz. The choice of observing the system in band-2 (120-250 MHz) of uGMRT was based on the expectation that hot Saturns have magnetic fields of \(\sim\) 40-100 G (Yadav & Thorngren, 2017). If the magnetic field is not within this range, we would not detect any emission. Furthermore, the probable detection of radio emission from the hot Jupiter \(\tau\) Bootis b by Turner et al. (2021) between 15-30 MHz has challenged the notion that hot Jupiters and hot Saturns have strong magnetic fields. Turner et al. (2021) estimate the magnetic field of \(\tau\) Bootis b to be in the range of 5-11 G. If WASP-69b also possesses such a small magnetic field, then no emission would be detectable in band-2 (120-250 MHz) of uGMRT. To examine this possibility, we computed the magnetic field of WASP-69b. We followed the formalism of Yadav & Thorngren (2017) to estimate the magnetic field, using the evolution models of Thorngren & Fortney (2018) to derive the heat flux from the interior of the planet (see also Christensen et al., 2009). The magnetic field on the dynamo surface is given as (from Reiners & Christensen, 2010) \[B_{rms}^{dyn}\left[\mathrm{G}\right]=4.8\times 10^{3}(M_{P}L_{P}^{2})^{1/6}R_{P}^{-7/6}, \tag{4}\] where \(M_{P}\), \(L_{P}\), and \(R_{P}\) are the mass, luminosity, and radius of the planet (all normalized to solar values). Assuming the scaling law for the dynamo radius from Yadav & Thorngren (2017), the dipole magnetic field strength at the pole is then \[B_{dipole}^{polar}=\frac{B_{rms}^{dyn}}{\sqrt{2}}\left(\frac{R_{dyn}}{R_{P}}\right)^{3}, \tag{5}\] where \(R_{dyn}\) is the dynamo radius. Plugging in the values for the WASP-69 system, we estimate \(B_{dipole}^{polar}=15\) G, which gives \(\nu_{c}=42\) MHz. This is much lower than the frequencies at which we observed the system. ### Time variable emission The decameter emission from Jupiter due to the interaction between Jupiter and Io or Jupiter and the solar wind is modulated on timescales from a few milliseconds to months (Lecacheux et al., 2004; Marques et al., 2017; Zarka et al., 1996; Ryabov et al., 2014). The emission from exomoons can likewise be highly time variable and modulated with the moon's phase around the planet. The emission from these exomoons can be highly beamed (e.g., Queinnec & Zarka, 1998; Zarka et al., 2004; Lamy et al., 2022) and emitted in a narrow cone. In such a case, the emission will only be observable during certain phases of the moon around the planet and of the planet around its host star. During our observation run, we covered about 35% (32 hrs/92.6 hrs) of the orbital phase of the planet; however, if the emission cone were not pointed towards Earth, we would miss it. ### Exomoon Flux Density As stated in Narang et al. (2023), radio emission due to exomoon-exoplanet interaction can be inherently weak. If the mass loss rate from the exomoon is lower than assumed, then we will not be able to detect any emission. Furthermore, WASP-69 is located at 50 pc, and with the sensitivity of current telescopes we will not be able to detect any signal if the strength of the radio emission from the system is at the same level as the Io Flux Tube. The next generation of high-sensitivity telescopes is perhaps required for the detection of radio emission arising from exoplanet-exomoon interaction.
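The order-of-magnitude estimates above are easy to reproduce. The following minimal Python sketch (our own illustration; the unit conversions are the only assumptions) recomputes the maximum radio power of Section 3.2, the exo-Io mass-loss ratio of Section 3.3, and the expected cyclotron frequency of Section 4.1:

```python
# Sketch reproducing the order-of-magnitude estimates of Secs. 3.2, 3.3 and 4.1.
MJY = 1e-29            # 1 mJy in W m^-2 Hz^-1
PC = 3.086e16          # 1 parsec in metres
OMEGA = 0.16           # sr, average Io-DAM beaming solid angle (Zarka et al. 2004)
D = 50 * PC            # distance to WASP-69 (~50 pc)

def radio_power(s_mjy, nu_c_mhz):
    """P_nu = S_nu * dnu * Omega * d^2, with dnu = nu_c / 2 (Sec. 3.2)."""
    return s_mjy * MJY * (0.5 * nu_c_mhz * 1e6) * OMEGA * D**2

print(f"P(150 MHz) = {radio_power(3.3, 150):.0e} W")   # ~9e14 W
print(f"P(218 MHz) = {radio_power(0.9, 218):.0e} W")   # ~4e14 W

# Mass-loss ratio of a hidden exo-Io relative to Io, Mdot ∝ (P/B0)^2 a_s^6 (Eq. 3)
power_ratio = 10**4.9          # minimum power ratio WASP-69b / IFT at 150 MHz
b0_ratio = 54.0 / 4.17         # ~13: B0(WASP-69b, set by 150 MHz) / B0(Jupiter)
a_ratio = 1.63 / 5.9           # satellite semi-major axis ratio (both in R_J)
print(f"Mdot ratio ~ {(power_ratio / b0_ratio)**2 * a_ratio**6:.0f}")  # ~17000

# Expected cyclotron frequency for the estimated 15 G dipole field (Sec. 4.1)
print(f"nu_c = {2.8 * 15:.0f} MHz")  # 42 MHz, below the 120-250 MHz band observed
```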
## 5 Summary This work presents the first radio observations of the exoplanetary system WASP-69 using uGMRT. The WASP-69 system is an exo-Io candidate system based on the presence of strong alkaline metal lines in the transmission spectra of the planet (Oza et al., 2019). The WASP-69 system was observed in band-2 (120-250 MHz) of uGMRT. For this analysis, we divided band-2 into two sub-bands, at 150 MHz and 218 MHz. We observed the WASP-69 field for 32 hrs over five pointings, covering about 35% of the orbital phase of the planet. At 150 MHz, we achieved a 3\(\sigma\) upper limit of 3.3 mJy, while an upper limit of 0.9 mJy was obtained at 218 MHz. No radio emission was detected from the system. This implies that the exomoon may not be radio-loud at the cyclotron frequencies searched, due either to a lower mass loss rate from the exomoon or to a lower magnetic field strength of the parent planet. Moreover, the emission could be highly time variable and beamed. Deeper and more frequent observations, at lower radio frequencies than are currently possible, are therefore required to detect exomoons. The upcoming generation of radio telescopes, including the next-generation VLA (ngVLA; McKinnon et al., 2019) and the Square Kilometre Array (SKA; Dewdney et al., 2009), will surpass current telescopes in sensitivity. This increased sensitivity creates the possibility of detecting faint signals originating from the interaction between exoplanets and exomoons, even at much lower frequencies. The magnetic field strength of WASP-69b is estimated to be 15 G, similar to estimates derived for other exoplanets (e.g., Yadav & Thorngren, 2017; Narang et al., 2023). The emission generated by exoplanet-exomoon interaction for planets with such low magnetic field strengths will fall below 100 MHz. Consequently, the SKA appears to be a well-suited instrument for detecting these elusive exomoons. ## 6 Acknowledgment This work is based on observations made with the Giant Metrewave Radio Telescope, which is operated by the NCRA TIFR and is located at Khodad, Maharashtra, India. KH is supported by the FED-tWIN research program STELLA funded by the Belgian Science Policy Office (BELSPO). ## 7 Data Availability The data presented in this article are available on the GMRT archive at [https://naps.ncra.tifr.res.in/goa/](https://naps.ncra.tifr.res.in/goa/), and can be accessed with proposal id 41_068.
2301.01272
A gallery of diagonal stability conditions with structured matrices (and review papers)
This note presents a summary and review of various conditions and characterizations for matrix stability (in particular diagonal matrix stability) and matrix stabilizability.
Zhiyong Sun
2023-01-01T20:30:34Z
http://arxiv.org/abs/2301.01272v1
# A gallery of diagonal stability conditions with structured matrices (and review papers) ###### Abstract This note presents a summary and review of various conditions and characterizations for matrix stability (in particular diagonal matrix stability) and matrix stabilizability. keywords: Matrix stability, matrix stabilizability, diagonal stability. ## 1 Definitions and notations * A square real matrix is a **Z-matrix** if it has nonpositive off-diagonal elements. * A **Metzler** matrix is a real matrix in which all the off-diagonal components are nonnegative. * A Z-matrix with positive principal minors is an **M-matrix**. * Note: There are numerous equivalent characterizations of M-matrices (Fiedler and Ptak, 1962; Plemmons, 1977). A commonly-used condition is the following: a matrix \(A\in\mathbb{R}^{n\times n}\) is called an M-matrix if its off-diagonal entries are non-positive and its eigenvalues have positive real parts. * A square matrix \(A\) is **(positive) stable** if all its eigenvalues have positive real parts. Equivalently, a square matrix \(A\) is (positive) stable iff there exists a positive definite matrix \(D\) such that \(AD+DA^{T}\) is positive definite. * Note: in control system theory one often defines a stable matrix as a square matrix whose eigenvalues have negative real parts (a.k.a. a **Hurwitz** matrix). The two definitions of stable matrices will be distinguished by context. * A square complex matrix is a **P-matrix** if it has positive principal minors. * A square complex matrix is a \(P_{0}^{+}\)**-matrix** if it has nonnegative principal minors and at least one principal minor of each order is positive. * A real square matrix \(A\) is **multiplicative D-stable** (in short, **D-stable**) if \(DA\) is stable for every positive diagonal matrix \(D\). * A square matrix \(A\) is called **totally stable** if every principal submatrix of \(A\) is D-stable. * A real square matrix \(A\) is said to be **additive D-stable** if \(A+D\) is stable for every nonnegative diagonal matrix \(D\). * A real square matrix \(A\) is said to be **Lyapunov diagonally stable** if there exists a positive diagonal matrix \(D\) such that \(AD+DA^{T}\) is positive definite. * Note: Lyapunov diagonally stable matrices are often referred to simply as **diagonally stable** matrices, or as **Volterra-Lyapunov stable**, or as **Volterra dissipative** in the literature (see e.g., (Logofet, 2005)). * A matrix \(A=\{a_{ij}\}\in\mathbb{R}^{n\times n}\) is **generalized _row_-diagonally dominant** if there exists \(x=(x_{1},x_{2},\cdots,x_{n})\in\mathbb{R}^{n}\) with \(x_{i}>0\), \(\forall i\), such that \[|a_{ii}|x_{i}>\sum_{j=1,j\neq i}^{n}|a_{ij}|x_{j},\quad\forall i=1,2,\cdots,n.\] (1) * A matrix \(A=\{a_{ij}\}\in\mathbb{R}^{n\times n}\) is **generalized _column_-diagonally dominant** if there exists \(x=(x_{1},x_{2},\cdots,x_{n})\in\mathbb{R}^{n}\) with \(x_{i}>0\), \(\forall i\), such that \[|a_{jj}|x_{j}>\sum_{i=1,i\neq j}^{n}|a_{ij}|x_{i},\quad\forall j=1,2,\cdots,n.\] (2) * Note: the set of generalized _column_-diagonally dominant matrices is equivalent to the set of generalized _row_-diagonally dominant matrices (Varga, 1976; Sun et al., 2021). They are also often referred to as **quasi-diagonally dominant** matrices (Kaszkurewicz and Bhaya, 2012).
* For a real matrix \(A=\{a_{ij}\}\in\mathbb{R}^{n\times n}\), we associate the **comparison matrix** \(M_{A}=\{m_{ij}\}\in\mathbb{R}^{n\times n}\), defined by \[m_{ij}=\left\{\begin{array}{ll}|a_{ij}|,&\mbox{if }j=i;\\ -|a_{ij}|,&\mbox{if }j\neq i.\end{array}\right.\] A given matrix \(A\) is called an **H-matrix** if its comparison matrix \(M_{A}\) is an M-matrix. * The set of H-matrices is equivalent to the set of quasi-diagonally dominant matrices (Kaszkurewicz and Bhaya, 2012; Sun et al., 2021). * A square matrix \(A\) is **diagonally stabilizable** if there exists a diagonal matrix \(D\) such that \(DA\) is stable. Note: Many of the definitions above for real matrices also carry over to complex matrices; the distinction between real and complex matrices will be made clear by context. ## 2 Conditions for diagonally stabilizable matrices _A key motivating question: Given a square matrix \(A\), can we find a diagonal matrix \(D\) such that the matrix \(DA\) is stable?_ Fisher and Fuller (Fisher and Fuller, 1958) proved the following result: **Theorem 1**.: _(Fisher and Fuller, 1958) If \(P\) is a real \(n\times n\) matrix fulfilling the condition:_ * _(A):_ \(P\) _has at least one sequence of non-zero principal minors_ \(M_{k}\) _of every order_ \(k=1,2,\cdots,n\)_, such that_ \(M_{k-1}\) _is one of the_ \(k\) _first principal minors of_ \(M_{k}\)_;_ _then there exists a real diagonal matrix \(D\) such that all the characteristic roots of \(DP\) have negative real parts._ The Fisher-Fuller theorem is also formulated in the following alternative version: **Theorem 2**.: _Let \(P\) be an \(n\times n\) real matrix all of whose leading principal minors are positive. Then there is an \(n\times n\) positive diagonal matrix \(D\) such that all the roots of \(DP\) are positive and simple._ Fisher later gave a simple proof of a similar yet stronger result (Fisher, 1972). **Theorem 3**.: _(Fisher, 1972) If \(P\) is an \(n\times n\) real matrix that has at least one nested set of principal minors \(M_{k}\) such that \((-1)^{k}M_{k}>0,\forall k=1,\cdots,n\), then there exists a real diagonal matrix \(D\) with positive diagonal elements such that the characteristic roots of \(DP\) are all real, negative, and distinct._ **Remark 1**.: _Some remarks on the conditions for diagonally stabilizable matrices are in order._ * _The above theorems involve determining the sign of (at least) one nested set of principal minors. In_ _(Johnson et al.,_ 1997)_, sufficient conditions are determined for an_ \(n\)_-by-_\(n\) _zero-nonzero pattern to allow a nested sequence of nonzero principal minors. In particular, a method is given to sign such a pattern so that it allows a nested sequence of_ \(k\)_-by-_\(k\) _principal minors with sign_ \((-1)^{k}\) _for_ \(k=1,\cdots,n\)_._ * _The condition in the Fisher-Fuller theorem is a sufficient condition for matrix diagonal stabilizability. A necessary condition for matrix diagonal stabilizability is: for each order_ \(k=1,\cdots,n\)_, at least one_ \(k\times k\) _principal minor of_ \(P\) _is non-zero. It is unclear what would be_ **the** _necessary and sufficient condition._ Ballantine (Ballantine, 1970) extended the above Fisher-Fuller theorem to the complex matrix case. **Theorem 4**.: _(Ballantine, 1970) Let \(A\) be an \(n\times n\) **complex** matrix all of whose leading principal minors are nonzero.
Then there is an \(n\times n\) **complex** diagonal matrix \(D\) such that all the roots of \(DA\) are positive and simple._ **Remark 2**.: _It is shown in (Hershkowitz, 1992) that the above Ballantine theorem cannot be strengthened by replacing "complex diagonal matrix \(D\)" with "positive diagonal matrix \(D\)". A counterexample is given in (Hershkowitz, 1992) of a \(2\times 2\) complex matrix \(A\) with positive leading principal minors for which there exists no positive diagonal matrix \(D\) such that the eigenvalues of \(DA\) are positive._ A problem related to characterizing diagonally stabilizable matrices is the **Inverse Eigenvalue Problem** (IEP), and Friedland (Friedland, 1977) proved the following theorem. **Theorem 5**.: _(Friedland, 1977) Let \(A\) be a given \(n\times n\) **complex** valued matrix. Assume that all the principal minors of \(A\) are different from zero. Then for any specified set \(\lambda=\{\lambda_{1},\cdots,\lambda_{n}\}\in\mathbb{C}^{n}\) there exists a diagonal **complex** valued matrix \(D\) such that the spectrum of \(AD\) is the set \(\lambda\). The number of such \(D\) is finite and does not exceed \(n!\). Moreover, for almost all \(\lambda\) the number of the diagonal matrices \(D\) is exactly \(n!\)._ **Remark 3**.: _The Friedland theorem for the IEP in the complex matrix case cannot be directly carried over to the real case. Further, it is easy to show with a counterexample of a \(2\times 2\) matrix that eigenvalue placement in the real case cannot always be guaranteed, even with nonzero principal minors._ In (Hershkowitz, 1992) the following two theorems are proved. **Theorem 6**.: _(Hershkowitz, 1992) Let \(A\) be a **complex** square matrix with positive leading principal minors, and let \(\epsilon\) be any positive number. Then there exists a positive diagonal matrix \(D\) such that the eigenvalues of \(DA\) are simple, and the argument of every such eigenvalue is less in absolute value than \(\epsilon\)._ **Theorem 7**.: _(Hershkowitz, 1992) Let \(A\) be a complex square matrix with real principal minors and positive leading principal minors. Then there exists a positive diagonal matrix \(D\) such that \(DA\) has simple positive eigenvalues._ **Remark 4**.: _The above theorems all present certain sufficient conditions characterizing diagonally stabilizable matrices and the IEP, and they are not necessary. A necessary condition for the diagonal matrix \(D\) to exist is that for each order \(i\), at least one \(i\times i\) principal minor of \(A\) is nonzero. However, a full characterization (with a necessary and sufficient condition) of diagonally stabilizable matrices still remains an open problem._ A variation of the diagonal matrix stabilization problem is the following: * Problem (*): Given a real square matrix \(G\), find a real diagonal matrix \(D\) such that the product \(GD\) is Hurwitz together with all its principal submatrices. Surprisingly, a necessary and sufficient condition exists for solving the above problem, as shown in (Locatelli and Schiavoni, 2012). Let \(\mathcal{M}:=\{1,2,\cdots,m\}\) and \(\mathcal{F}:=\{f\,|\,f\subset\mathcal{M}\}\). Further, for any \(m\times m\) matrix \(\Delta\), denote by \(\Delta(f)\) the principal submatrix obtained from it after removing the rows and columns with indexes in \(f\), \(f\in\mathcal{F}\).
The main result of (Locatelli and Schiavoni, 2012) is the following: **Theorem 8**.: _(Locatelli and Schiavoni, 2012) Problem (*) admits a solution if and only if_ \[\text{det}(G(f))\,\text{det}(G_{D}(f))>0,\quad\forall f\in\mathcal{F}, \tag{3}\] _where \(G_{D}=\text{diag}\{g_{ii}\}\). Moreover, if the above condition is satisfied, then there exists \(\bar{\epsilon}>0\) such that, for any given \(\epsilon\in(0,\bar{\epsilon})\), the matrix_ \[D:=G_{D}Z(\epsilon),\quad Z(\epsilon):=-\text{diag}\{\epsilon^{i}\} \tag{4}\] _solves the stabilization problem (*)._ (A numerical sketch of this construction is given at the end of this note.) ## 3 Conditions for diagonally stable matrices We give a short summary of available conditions for diagonally stable matrices (excerpts from (Barker et al., 1978), (Cross, 1978) and (Hershkowitz, 2006)). * (Barker et al., 1978) Lyapunov diagonally stable matrices are P-matrices. * (Barker et al., 1978) A matrix \(A\) is Lyapunov diagonally stable if and only if there exists a positive diagonal matrix \(D\) such that \(x^{T}DAx>0\) for all nonzero vectors \(x\). * (Barker et al., 1978) A \(2\times 2\) real matrix is Lyapunov diagonally stable if and only if it is a P-matrix. * (Cross, 1978) For a given Lyapunov diagonally stable matrix \(P\), all principal submatrices of \(P\) are Lyapunov diagonally stable. * (Barker et al., 1978) A real square matrix \(A\) is Lyapunov diagonally stable if and only if for every nonzero real symmetric positive semidefinite matrix \(H\) the matrix \(HA\) has at least one positive diagonal element. * Note: this result is termed the BBP theorem; a simpler proof is given in (Shorten et al., 2009). * (Cross, 1978) The set of Lyapunov diagonally stable matrices is a strict subset of the multiplicative D-stable matrices. * (Cross, 1978) The set of Lyapunov diagonally stable matrices is a strict subset of the additive D-stable matrices. * Note: multiplicative D-stable and additive D-stable matrices are not necessarily diagonally stable. * A Z-matrix is Lyapunov diagonally stable if and only if it is a P-matrix (that is, an M-matrix). * A non-singular H-matrix with nonnegative diagonal entries is Lyapunov diagonally stable. * A quasi-diagonally dominant matrix with nonnegative diagonal entries is Lyapunov diagonally stable. Note the equivalence of Hurwitz H-matrices and quasi-diagonally dominant matrices (Sun et al., 2021). The following facts are shown in (Cross, 1978) and (Kaszkurewicz and Bhaya, 2012): * For normal matrices and within the set of Z-matrices, D-stability, additive D-stability, and diagonal stability are all equivalent to matrix stability. * If a matrix \(A\) is Hurwitz stable, D-stable, or diagonally stable, then the matrices \(A^{T}\) and \(A^{-1}\) also have the corresponding properties. In (Shorten and Narendra, 2009) Shorten and Narendra showed the following necessary and sufficient condition for matrix diagonal stability (an alternative proof, via the KYP lemma, of a theorem of Redheffer): **Theorem 9**.: _(Shorten and Narendra, 2009) and (Redheffer, 1985) Let \(A\in\mathbb{R}^{n\times n}\) be a Hurwitz matrix with negative diagonal entries. Let \(A_{n-1}\) denote the \((n-1)\times(n-1)\) leading principal submatrix of \(A\), and \(B_{n-1}\) denote the corresponding block of \(A^{-1}\).
Then the matrix \(A\) is diagonally stable if and only if there is a common diagonal Lyapunov function for the LTI systems \(\Sigma_{A_{n-1}}\) and \(\Sigma_{B_{n-1}}\)._ The above theorem involves finding a common diagonal Lyapunov function for a set of LTI systems, which may be restrictive and computationally demanding in practical applications, especially when the dimension of the matrix \(A\) is large. ## 4 Relations of matrix stability and diagonal stability The paper (Berman and Hershkowitz, 1983) characterizes the relations of certain special matrices for matrix diagonal stability. They define * \(\mathscr{A}=\{A:\text{there exists a positive definite diagonal matrix }D\text{ such that }AD+DA^{T}\text{ is positive definite}\}\); i.e., \(\mathscr{A}\) denotes the set of diagonally stable matrices; * \(\mathscr{L}=\{A:\text{there exists a positive definite matrix }D\text{ such that }AD+DA^{T}\text{ is positive definite}\}\); i.e., \(\mathscr{L}\) denotes the set of (positive) stable matrices; * \(\mathscr{P}=\{A:\text{the principal minors of }A\text{ are positive}\}\); i.e., \(\mathscr{P}\) denotes the set of P-matrices; * \(\mathscr{S}=\{A:\text{there exists a positive vector }x\text{ such that }Ax\text{ is positive}\}\); i.e., \(\mathscr{S}\) denotes the set of semipositive matrices. The main result of (Berman and Hershkowitz, 1983) is shown in Fig. 1. In general, these different sets of structured matrices are not equivalent, and the set \(\mathscr{A}\) is a subset of the other sets. However, for Z-matrices, these sets are equivalent. In particular, in the case of Z-matrices, the characterizations of these sets give equivalent conditions for M-matrices (upon a sign change). Note that there are many more conditions characterizing M-matrices; see e.g., (Plemmons, 1977). The review paper (Hershkowitz, 1992) presents the implication relations between matrix stability conditions, and the equivalence of the matrix stability notions for Z-matrices, as cited in Figs. 2 and 3. Again, as shown in Fig. 3, for Z-matrices all the stability types are equivalent. Figure 1: Relations of matrix stability under different matrix types: the main theorem in (Berman and Hershkowitz, 1983). Figure 2: The implication relations between matrix stability conditions, cited from (Hershkowitz, 1992). Figure 3: For Z-matrices, all the stability types are equivalent. Cited from (Hershkowitz, 1992). The survey paper (Logofet, 2005) presents some beautiful flower-shaped characterizations of the relations among matrix stabilities, as cited in Figs. 4 and 5. ## 5 Stability conditions with submatrices and Schur complement Stability conditions for structured matrices often involve stability properties of submatrices, employing block submatrices and their Schur complements to determine stability. In (Narendra and Shorten, 2010), Narendra and Shorten presented necessary and sufficient conditions to characterize whether a given Metzler matrix is Hurwitz, based on the fact that a Hurwitz Metzler matrix is diagonally stable. These conditions are generalized in (Souza et al., 2017). We recall some of the main stability criteria from (Souza et al., 2017). **Lemma 1**.: _Let \(A\in\mathbb{R}^{n\times n}\) be a Metzler matrix partitioned in blocks of compatible dimensions as \(A=[A_{11},A_{12};A_{21},A_{22}]\) with \(A_{11}\) and \(A_{22}\) being square matrices.
Then the following statements are equivalent._ * \(A\) _is Hurwitz stable._ * \(A_{11}\) _and its Schur complement_ \(A/A_{11}:=A_{22}-A_{21}A_{11}^{-1}A_{12}\) _are Hurwitz stable Metzler matrices._ * \(A_{22}\) _and its Schur complement_ \(A/A_{22}:=A_{11}-A_{12}A_{22}^{-1}A_{21}\) _are Hurwitz stable Metzler matrices._ **Remark 5**.: _Some remarks are in order._ * _For a structured matrix, the property that its Schur complements also preserve the same stability and structure properties is termed the_ **Schur complement stability property**_. Other types of structured matrices that have the Schur complement stability property include symmetric matrices, triangular matrices, and Schwarz matrices. See_ _(Souza et al., 2017)_._ * _The result on M-matrices in Lemma_ 1 _can be generalized to H-matrices: let_ \(A\) _be an H-matrix partitioned in blocks of compatible dimensions as_ \(A=[A_{11},A_{12};A_{21},A_{22}]\) _with_ \(A_{11}\) _and_ \(A_{22}\) _being square matrices. If_ \(A\) _is Hurwitz stable, then_ \(A_{11}\) _and its Schur complement_ \(A/A_{11}:=A_{22}-A_{21}A_{11}^{-1}A_{12}\) _are Hurwitz stable H-matrices, or_ \(A_{22}\) _and its Schur complement_ \(A/A_{22}:=A_{11}-A_{12}A_{22}^{-1}A_{21}\) _are Hurwitz stable H-matrices._ * _The Schur complement and its_ **closure property** _for several structured matrices (including diagonal matrices, triangular matrices, symmetric matrices, P-matrices, diagonally dominant matrices, M-matrices, etc.) are discussed in_ _(Zhang, 2006, Chap. 4)_._ Figure 5: Petals of sign-stable matrices within the Flower. Cited from (Logofet, 2005, Fig. 4). ## 6 Application examples of matrix diagonal stability conditions The Fisher-Fuller theorem on diagonal matrix stabilizability (Theorem 1 and its variations) has been rediscovered several times by the control systems community, and has been applied to distributed stabilization and decentralized control problems in practice. This section reviews two application examples. ### Conditions for decentralized stabilization In (Corfmat and Morse, 1973) Corfmat and Morse solved the following problem: * For given and fixed real matrices \(A\) and \(P\), find (if possible) a non-singular diagonal matrix \(D\) such that \(I+ADP\) is Schur stable (i.e., all eigenvalues of \(I+ADP\) are located within the unit circle in the complex plane). To solve the above problem they proved the following: **Theorem 10**.: _If \(A\) is an \(n\times n\) **strongly non-singular** matrix, then there exists a diagonal matrix \(D\) such that \((I+DA)\) is Schur stable._ Note: in (Corfmat and Morse, 1973) a matrix is termed **strongly non-singular** if all of its \(n\) leading principal minors are nonzero. **Theorem 11**.: _If \(A\) is a fixed non-singular matrix, then there exists a permutation matrix \(P\) such that \(PA\) is strongly non-singular._ Solution to decentralized stabilization: the non-singularity of \(A\) is a necessary and sufficient condition for the existence of a permutation matrix \(P\) and a non-singular diagonal matrix \(D\) such that \((I+ADP)\) is Schur stable. ### Distributed stabilization of persistent formations In (Yu et al., 2009), the problem of persistent formation stabilization involves studying the stabilizability of the differential equation \[\dot{z}=\Delta Az,\] where \(\Delta\) is a diagonal or possibly block diagonal matrix, and \(A\) is a rigidity-like matrix on formation shapes.
To solve the formation stabilization problem in (Yu et al., 2009), the following result is employed (Yu et al., 2009, Theorem 3.2): **Theorem 12**.: _Suppose \(A\) is an \(m\times m\) non-singular matrix with every leading principal minor nonzero. Then there exists a diagonal \(D\) such that the real parts of the eigenvalues of \(DA\) are all negative._ We remark that this is a reformulation of the Fisher-Fuller theorem. ## 7 A selection of key review papers and books on matrix stability and diagonal stability conditions * The survey paper (Hershkowitz, 1992) that presents a summary of relevant matrix stability results and their development up until 1992. * The paper (Bhaya et al., 2003) that presents comprehensive discussions and characterizations of various classes of matrix stability conditions. * The paper (Hershkowitz and Keller, 2003) that studies the relations between positivity of principal minors, sign symmetry and stability of matrices. * The review paper (Hershkowitz, 2006) that presents a concise overview of matrix stability and inertia. * The book (Kaszkurewicz and Bhaya, 2012) on matrix diagonal stability in systems and computation. * The summary paper (Logofet, 2005) that presents a review and some beautiful connections/relations between different matrix stabilities. * The very long survey paper (Kushel, 2019) that provides a unifying viewpoint on matrix stability and its historical development. * The recent book (Johnson et al., 2020) on positive matrices, P-matrices and inverse M-matrices.
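To close, the explicit construction in Theorem 8 is easy to test numerically. The following minimal Python sketch (our own illustration, not taken from the cited papers; the example matrix \(G\) is an arbitrary choice satisfying condition (3)) builds \(D=G_{D}Z(\epsilon)\) and brute-forces the Hurwitz property of \(GD\) and all of its principal submatrices:

```python
import itertools
import numpy as np

def is_hurwitz(M):
    """All eigenvalues in the open left half-plane."""
    return np.all(np.linalg.eigvals(M).real < 0)

def theorem8_D(G, eps):
    """Candidate D = G_D * Z(eps), with Z(eps) = -diag(eps^i) (Theorem 8)."""
    n = G.shape[0]
    Z = -np.diag([eps ** (i + 1) for i in range(n)])
    return np.diag(np.diag(G)) @ Z

def all_principal_submatrices_hurwitz(M):
    """Check M(f) for Hurwitz stability over every principal submatrix."""
    n = M.shape[0]
    return all(is_hurwitz(M[np.ix_(idx, idx)])
               for k in range(1, n + 1)
               for idx in itertools.combinations(range(n), k))

def condition_3(G):
    """det(G(f)) * det(G_D(f)) > 0 for every principal submatrix (Eq. 3)."""
    n = G.shape[0]
    GD = np.diag(np.diag(G))
    return all(np.linalg.det(G[np.ix_(idx, idx)])
               * np.linalg.det(GD[np.ix_(idx, idx)]) > 0
               for k in range(1, n + 1)
               for idx in itertools.combinations(range(n), k))

# Example: a 3x3 matrix with positive diagonal and all principal minors positive.
G = np.array([[2.0, 1.0, 0.5],
              [0.2, 1.0, 0.3],
              [0.1, 0.4, 1.5]])
assert condition_3(G)
D = theorem8_D(G, eps=0.1)
print(all_principal_submatrices_hurwitz(G @ D))  # True for small enough eps
```

As Theorem 8 predicts, the check succeeds once \(\epsilon\) is taken small enough; larger values of \(\epsilon\) may fail, since the theorem only guarantees stability for \(\epsilon\in(0,\bar{\epsilon})\).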
2310.18062
Group Actions from Algebraic Flops
This paper constructs derived autoequivalences associated to an algebraic flopping contraction \(X\to X_{\mathrm{con}}\), where \(X\) is quasi-projective with only mild singularities. These functors are constructed naturally using bimodule cones, and we prove these cones are locally two-sided tilting complexes by using local-global properties and a key commutative diagram. The main result is that these autoequivalences combine to give an action of the fundamental group of an associated infinite hyperplane arrangement on the derived category of \(X\). This generalises and simplifies \cite{DW3}, by finally removing reliance on subgroups, and it also lifts many other results from the complete local setting.
Caroline Namanya
2023-10-27T11:25:03Z
http://arxiv.org/abs/2310.18062v1
# Group actions from algebraic flops ###### Abstract. This paper constructs derived autoequivalences associated to an algebraic flopping contraction \(X\to X_{\mathrm{con}}\), where \(X\) is quasi-projective with only mild singularities. These functors are constructed naturally using bimodule cones, and we prove these cones are locally two-sided tilting complexes by using local-global properties and a key commutative diagram. The main result is that these autoequivalences combine to give an action of the fundamental group of an associated infinite hyperplane arrangement on the derived category of \(X\). This generalises and simplifies [4], by finally removing reliance on subgroups, and it also lifts many other results from the complete local setting. ## 1. Introduction Over the years, studying autoequivalences of derived categories of coherent sheaves has been of increasing interest in algebraic geometry. In [4], using a new invariant, _the noncommutative deformation algebra_ (called the contraction algebra) associated to any contractible rational curve in any \(3\)-fold, a particular derived autoequivalence associated to a general flopping curve was described. More generally, consider a global flopping contraction \(X\to X_{\mathrm{con}}\) as in Setup 8.1, where \(X\) has only mild singularities. After passing to the complete local setting by taking the formal fibre (see §2.3), by [4] we can associate two combinatorial objects: a finite hyperplane arrangement \(\mathcal{H}\) and an infinite hyperplane arrangement \(\mathcal{H}^{\mathrm{aff}}\). The question is whether these \(\mathcal{H}\) and \(\mathcal{H}^{\mathrm{aff}}\), which are local objects, still control the derived autoequivalences of the global \(X\). The intuition, and main result of [4], is that there is a subgroup \(K\) of the fundamental group of the complexified complement \(\pi_{1}(\mathbb{C}^{n}\backslash\mathcal{H}_{\mathbb{C}})\) which acts on the derived category \(\mathrm{D}^{\mathrm{b}}(\mathrm{coh}\,X)\). This is problematic, since [N] proves that \(K\neq\pi_{1}\) in general. The main point of this paper is that the subgroup \(K\) is not important, and indeed we prove that it is possible to construct a full action of \(\pi_{1}\) directly. A hint of how to do this appears in [A], where bimodule cone autoequivalences are constructed using contraction algebra technology. It turns out that this vastly generalises. ### Main results Before stating our main results, we note that all the hard work in this paper goes into establishing the Zariski local situation. As such, consider first an algebraic flopping contraction \(U\to\operatorname{Spec}R\), where \(U\) has only mild singularities (see Setup 2.2). It is well known, even in this algebraic setting [12], that \(U\) is derived equivalent to some \(R\)-algebra \(\Lambda\). As explained below, we will use bimodule cones on \(\Lambda\) to construct two-sided tilting complexes. These will then induce derived autoequivalences of \(U\), as follows. **Theorem 1.1** (=6.13, 7.9).: _Consider an algebraic flopping contraction \(U\to\operatorname{Spec}R\) as in Setup 2.2, with associated finite hyperplane arrangement \(\mathcal{H}\) and infinite hyperplane arrangement \(\mathcal{H}^{\mathrm{aff}}\)._ 1. _For every wall crossing in_ \(\mathcal{H}\) _or_ \(\mathcal{H}^{\mathrm{aff}}\)_, there exists an autoequivalence of_ \(\mathrm{D}^{\mathrm{b}}(\mathrm{coh}\,U)\)_._ 2.
_There exist group homomorphisms_ \[\pi_{1}(\mathbb{C}^{n}\setminus\mathcal{H}_{\mathbb{C}})\xrightarrow{g}\operatorname{Auteq}\mathrm{D}^{\mathrm{b}}(\mathrm{coh}\,U)\] In addition to the subgroup \(K\) defined in [4] being problematic, the generalisation to the algebraic setting comes with two costs: there is a lack of Krull-Schmidt (since \(R\) is not complete local), and there are no obvious algebraic objects which correspond to the chambers of \(\mathcal{H}\) and \(\mathcal{H}^{\mathrm{aff}}\), and so no obvious way of producing autoequivalences via wall crossing. With Theorem 1.1 in hand, all results globalise. Now, let \(h\colon X\to X_{\mathrm{con}}\) be a global \(3\)-fold flopping contraction as in Setup 8.1. As explained in §8, in this global setting there is a product of finite arrangements \(\mathfrak{H}\) and a product of infinite arrangements \(\mathfrak{H}^{\mathrm{aff}}\). We remark that the following generalises [13, Theorem 1.2] in two ways: in the finite case \(\mathfrak{H}\) it removes the assumption that the curves are individually floppable, and the infinite case \(\mathfrak{H}^{\mathrm{aff}}\) is new. The following is our main result. **Theorem 1.2** (=8.5).: _Under the global assumptions of Setup 8.1, there exist group homomorphisms_ ### Bimodule Constructions Here we briefly explain the construction of the functors in the image of \(m\) and \(m^{\mathrm{aff}}\) in Theorem 1.2. After reverting to the formal fibre in the sense of §2.3, there is an associated hyperplane arrangement which depends on Dynkin data (see §2.3 and [14]). Given a wall \(i\) in some chamber \(D\), we can find an atom \(\mathfrak{a}\) from a fixed chamber \(C_{+}\) to \(D\), followed by monodromy around wall \(i\). One example is illustrated in the diagram below. We then associate an endomorphism algebra \(\mathrm{B}\) to the chamber \(D\). This algebra inherits \(n+1\) primitive idempotents \(e_{i}\) (for more details see Setup 3.1), and there is a ring homomorphism \(\mathrm{B}\to\mathrm{B}_{i}\), where \(\mathrm{B}_{i}=\mathrm{B}/(1-e_{i})\). For \(f\colon U\to\mathrm{Spec}\,R\) an algebraic flopping contraction as in Setup 2.2, to construct an autoequivalence of \(U\) associated to the above monodromy, we consider an endomorphism algebra \(\Lambda\) that is derived equivalent to \(U\), as in §2.1. For any choice of \((\mathfrak{a},i)\) as above (see also Setup 3.1), we first define a functor diagram, and then construct a natural transformation \(\Psi_{\mathfrak{a},i}\circ\Psi_{\mathfrak{a},i}^{\mathrm{RA}}\to\mathrm{Id}_{\Lambda}\) by exhibiting a certain bimodule map \[\Lambda\to{}_{\Lambda}\hat{\Lambda}\otimes(\mathbb{M}\otimes\uptau_{\mathfrak{a}}\otimes Z_{\mathfrak{a},i})\otimes\hat{\Lambda}_{\Lambda}\] in the derived category of \(\Lambda\)-\(\Lambda\) bimodules. Taking the cone in the derived category of \(\Lambda\)-\(\Lambda\) bimodules then gives a triangle \[C_{\mathfrak{a},i}\to{}_{\Lambda}\Lambda_{\Lambda}\to{}_{\Lambda}\hat{\Lambda}\otimes_{\hat{\Lambda}}^{\mathbf{L}}\left(\mathbb{M}\otimes_{\mathrm{A}}^{\mathbf{L}}\uptau_{\mathfrak{a}}\otimes_{\mathrm{B}}^{\mathbf{L}}Z_{\mathfrak{a},i}\right)\otimes_{\hat{\Lambda}}^{\mathbf{L}}\hat{\Lambda}_{\Lambda}\to C_{\mathfrak{a},i}[1].\] Define \(\mathsf{Twist}_{\mathfrak{a},i}:=\mathbf{R}\mathrm{Hom}({}_{\Lambda}C_{\mathfrak{a},i\,\Lambda},-)\) and \(\mathsf{Twist}_{\mathfrak{a},i}^{*}:=-\otimes_{\Lambda}^{\mathbf{L}}C_{\mathfrak{a},i\,\Lambda}\). The following is the main technical result.
**Theorem 1.3** (=5.3).: \({}_{\Lambda}C_{\Lambda}={}_{\Lambda}C_{\mathfrak{a},i\,\Lambda}\) _is a two-sided tilting complex, giving rise to the autoequivalence \(\mathsf{Twist}_{\mathfrak{a},i}\) of \(\mathrm{D}^{\mathrm{b}}(\mathrm{mod}\,\Lambda)\). Furthermore, this fits in a commutative diagram_ _where the top functor is a composition of mutation and Morita equivalences._ The commutative diagram above intertwines the complete local monodromy (the top functor) with the Zariski local autoequivalence \(\mathsf{Twist}_{\mathfrak{a},i}\). In the case of the finite arrangement \(\mathcal{H}\), these twist functors also behave well with respect to the contraction algebra equivalences of August [A]. As notation, let \(\Lambda_{\mathrm{con}}\) be the contraction algebra of [DW1]. **Corollary 1.4** (=5.5).: _For any choice of \((\mathfrak{a},i)\) in the finite arrangement \(\mathcal{H}\) the following diagram commutes_ _where \(\mathrm{J}_{\mathfrak{a},i}\) are the compositions of the standard equivalences of [A], recalled in §5._ The key point is that Theorem 1.3 is much more general: contraction algebras only exist for the finite \(\mathcal{H}\), whereas the autoequivalences for the infinite \(\mathcal{H}^{\mathrm{aff}}\) in Theorem 1.3 have no such restriction. **Conventions.** \(R\) is a normal isolated cDV singularity. We will drop both super- and subscripts from the tensors as much as possible, whilst still maintaining their natural meanings and contexts in the background. When tensoring by bimodules, we will suppress the obvious module structure, so for a bimodule \({}_{\Lambda}X_{\Gamma}\) we write \(\otimes X\) in place of \(\otimes_{\Lambda}X_{\Gamma}\). **Acknowledgements.** This work forms part of the author's PhD, and was funded by an IMU Breakout Graduate Fellowship. The author would like to recognize support from the ERC Consolidator Grant 101001227 (MMiMMa) for a one-year visit as a PhD student in 2021-22 at the University of Glasgow, where part of this work was done. The author is immensely grateful to her PhD supervisors Michael Wemyss and David Ssevviriri for their helpful guidance, and thanks Wahei Hara and Jenny August for many helpful discussions. ## 2. Preliminaries The following section gives the definitions, terminology and notation that will be used. ### Geometric setup Recall that a projective birational morphism \(f\colon X\to Y\) is small if the exceptional locus has codimension at least two. When the dimension is three, this translates into the condition that \(f\) does not contract a divisor. **Definition 2.1**.: _A flop is a commutative diagram_ _where \(f^{\pm}\) are small projective birational morphisms, and the canonical bundles \(\omega_{X^{\pm}}\) are trivial over \(Y\)._ We will refer to \(f^{-}\) and \(f^{+}\) as flopping contractions. For threefolds, a flop is a process of cutting out rational curves \(C_{i}\) and replacing them with a union of other rational curves, without contracting any divisors. **Setup 2.2**.: [4, Setup 2.3] Suppose that \(f\colon U\to\operatorname{Spec}R\) is a flopping contraction which is an isomorphism away from _precisely one point_ \(\mathfrak{m}\in\operatorname{Max}R\). We assume that \(U\) has only Gorenstein terminal singularities.
As notation, above \(\mathfrak{m}\) is a connected chain \(C\) of \(n\) curves, with reduced scheme structure \(C^{\operatorname{red}}=\bigcup_{j=1}^{n}C_{j}\) such that each \(C_{j}\cong\mathbb{P}^{1}\). Under Setup 2.2, by [23, Theorem A] there is a tilting bundle \(\mathcal{V}=\mathcal{O}_{U}\oplus\mathcal{N}\) on \(U\) inducing a derived equivalence \[\operatorname{D}^{\operatorname{b}}(\operatorname{coh}U)\xrightarrow{\mathbf{R}\mathrm{Hom}(\mathcal{V},-)}\operatorname{D}^{\operatorname{b}}(\operatorname{mod}\Lambda),\] where \(\Lambda:=\operatorname{End}_{U}(\mathcal{V})\cong\operatorname{End}_{R}(f_{*}\mathcal{V})=\operatorname{End}_{R}(R\oplus f_{*}\mathcal{N})\), by [23, 4.2.1]. ### General hyperplane arrangements A real hyperplane arrangement, written \(\mathcal{H}\), is a finite set of hyperplanes in \(\mathbb{R}^{n}\). Such an arrangement is called Coxeter if it arises as the set of reflection hyperplanes of a finite real reflection group. \(\mathcal{H}\) is simplicial if \(\cap_{H\in\mathcal{H}}H=0\) and all chambers in \(\mathbb{R}^{n}\backslash\mathcal{H}\) are open simplicial cones. All Coxeter arrangements are simplicial, but the converse is false. **Definition 2.3**.: [23, Definition 2.6] _Let \(\Gamma_{\mathcal{H}}\) be the oriented graph associated to the hyperplane arrangement \(\mathcal{H}\), defined as follows. The vertices of \(\Gamma_{\mathcal{H}}\) are the chambers of \(\mathcal{H}\), i.e. the connected components of \(\mathbb{R}^{n}\setminus\mathcal{H}\). There is a unique arrow \(a\colon v_{1}\to v_{2}\) from chamber \(v_{1}\) to chamber \(v_{2}\) if the chambers are adjacent, otherwise there is no arrow._ By definition, if there is an arrow \(a\colon v_{1}\to v_{2}\), then there is a unique arrow \(b\colon v_{2}\to v_{1}\) in the opposite direction. For an arrow \(a\colon v_{1}\to v_{2}\), call \(s(a):=v_{1}\) the source of \(a\), and \(t(a):=v_{2}\) the target of \(a\). A positive path of length \(n\) in \(\Gamma_{\mathcal{H}}\) is a formal symbol \[p=a_{n}\circ\ldots\circ a_{2}\circ a_{1}\] whenever there exists a sequence of vertices \(v_{0},\ldots,v_{n}\) of \(\Gamma_{\mathcal{H}}\) and arrows \(a_{i}\colon v_{i-1}\to v_{i}\) in \(\Gamma_{\mathcal{H}}\). Set \(s(p):=v_{0}\), \(t(p):=v_{n}\), and call \(l(p):=n\) the length of \(p\). If \(q=b_{m}\circ\ldots\circ b_{2}\circ b_{1}\) is another positive path with \(t(p)=s(q)\), we consider the formal symbol \[q\circ p:=b_{m}\circ\ldots\circ b_{2}\circ b_{1}\circ a_{n}\circ\ldots\circ a_{2}\circ a_{1}\] and call it the composition of \(p\) and \(q\). **Definition 2.4**.: [23, Definition 2.6] _A positive path is called minimal if there is no positive path in \(\Gamma_{\mathcal{H}}\) of smaller length and with the same endpoints. The positive minimal paths are called atoms. A positive path is called reduced if it does not cross any hyperplane twice._ ### Specific hyperplane arrangements Returning to the setting of Setup 2.2, set \(\mathcal{R}=\hat{R}\) to be the completion of \(R\) at the unique singular point \(\mathfrak{m}\). The natural morphism \(R\to\hat{R}=\mathcal{R}\) induces the following diagram (2.A) where \(\varphi\) is called the formal fibre.
The morphism \(\mathcal{U}\to\operatorname{Spec}\mathcal{R}\) is a formal flopping contraction, and for a generic element \(g\in\mathcal{R}\), slicing induces the following diagram By Reid's general elephant [13], \(\mathcal{R}/g\) is an ADE Kleinian singularity, \(\psi\) is a partial resolution morphism, and \(\mathcal{X}\) is the minimal resolution. By the McKay correspondence, the exceptional curves \(C_{i}\) of \(\mathcal{X}\) are indexed by vertices \(i\) of a Dynkin diagram \(\Delta\). We will write \(\mathcal{J}\subseteq\Delta\) for the curves that are contracted, and so \(\mathcal{J}^{c}=\Delta\setminus\mathcal{J}\) are those that survive. The Dynkin data \((\Delta,\mathcal{J})\) gives rise to two hyperplane arrangements: one, \(\mathcal{H}_{\mathcal{J}}\), is finite, and the other, \(\mathcal{H}_{\mathcal{J}}^{\text{aff}}\), is infinite. Both live inside \(\mathbb{R}^{|\Delta|-|\mathcal{J}|}\). The hyperplane arrangement \(\mathcal{H}_{\mathcal{J}}\) is calculated by restricting all the positive roots of \(\Delta\) to the subset \(\mathcal{J}^{c}\). To calculate \(\mathcal{H}_{\mathcal{J}}^{\text{aff}}\), given a restricted root \(a=(a_{i})_{i\in\mathcal{J}^{c}}\), the hyperplane \(\sum_{i\in\mathcal{J}^{c}}a_{i}x_{i}=0\) appearing in \(\mathcal{H}_{\mathcal{J}}\) gets translated over the set of integers, to give an infinite family \(\sum_{i\in\mathcal{J}^{c}}a_{i}x_{i}\in\mathbb{Z}\). This is repeated for every restricted root to give \(\mathcal{H}_{\mathcal{J}}^{\text{aff}}\); for full details see e.g. [13, 14]. (For instance, for \(\Delta=A_{2}\) with \(\mathcal{J}^{c}=\{1\}\), the positive roots \(\alpha_{1},\alpha_{2},\alpha_{1}+\alpha_{2}\) restrict to \(1,0,1\), so \(\mathcal{H}_{\mathcal{J}}\) is the single hyperplane \(x=0\) in \(\mathbb{R}^{1}\), and \(\mathcal{H}_{\mathcal{J}}^{\text{aff}}\) consists of the integer points \(x\in\mathbb{Z}\).) Note that when \(\mathcal{J}=\emptyset\), so that \(\mathcal{J}^{c}=\Delta\), \(\mathcal{H}\) is the finite ADE root arrangement and \(\mathcal{H}^{\text{aff}}\) is the extended affine root arrangement. **Example 2.5**.: Consider Dynkin data \((\Delta,\mathcal{J})\) where, by convention, \(\mathcal{J}\) are the unshaded nodes of the diagram. In the figure below, the full figure is the affine hyperplane arrangement \(\mathcal{H}_{\mathcal{J}}^{\text{aff}}\); zooming in to only the pink lines gives the finite hyperplane arrangement \(\mathcal{H}_{\mathcal{J}}\). When \(\mathcal{H}\) is simplicial, we write \[\mathcal{H}_{\mathbb{C}}:=\bigcup_{H\in\mathcal{H}}H_{\mathbb{C}},\] where \(H_{\mathbb{C}}\) is the complexification of a hyperplane \(H\). We then denote the fundamental group of the complexified complement by \(\pi_{1}(\mathbb{C}^{n}\setminus\mathcal{H}_{\mathbb{C}})\) or \(\pi_{1}(\mathbb{C}^{n}\setminus\mathcal{H}_{\mathbb{C}}^{\text{aff}})\), for \(\mathcal{H}\) or \(\mathcal{H}^{\text{aff}}\) respectively. ### Mutation As motivation for finding derived equivalences of a ring, the idea of mutation is to find new tilting modules from old ones, by removing one indecomposable summand of a module and replacing it with another. Let \(\mathcal{R}\) be a complete local cDV singularity. A module \(N\in\operatorname{mod}\mathcal{R}\) is said to be maximal Cohen-Macaulay (= CM) if \[\operatorname{depth}_{\mathcal{R}}N:=\inf\{i\geq 0\mid\operatorname{Ext}_{\mathcal{R}}^{i}(\mathcal{R}/\mathfrak{m},N)\neq 0\}=\dim\mathcal{R},\] and we write \(\operatorname{CM}\mathcal{R}\) for the category of CM \(\mathcal{R}\)-modules.
Further, \(N\in\operatorname{mod}\mathcal{R}\) is said to be reflexive if the natural morphism \(N\to N^{**}\) is an isomorphism, where \((-)^{*}:=\operatorname{Hom}_{\mathcal{R}}(-,\mathcal{R})\), and we write \(\operatorname{ref}\mathcal{R}\) for the category of reflexive \(\mathcal{R}\)-modules. **Definition 2.6**.: _A module \(N\in\operatorname{ref}\mathcal{R}\) is called a modifying module if \(\operatorname{End}_{\mathcal{R}}(N)\in\operatorname{CM}\mathcal{R}\), and \(N\in\operatorname{ref}\mathcal{R}\) is called a maximal modifying module (= MM) if it is modifying and_ \[\operatorname{add}N=\{A\in\operatorname{ref}\mathcal{R}\mid\operatorname{End}_{\mathcal{R}}(N\oplus A)\in\operatorname{CM}\mathcal{R}\}.\] _If \(N\) is MM, then \(\operatorname{End}_{\mathcal{R}}(N)\) is called a maximal modification algebra (= MMA)._ We next summarize mutation, following [14]. For \(C,D\in\operatorname{mod}\mathcal{R}\), a morphism \(g\colon D_{0}\to C\) is called a right \((\operatorname{add}D)\)-approximation if \(D_{0}\in\operatorname{add}D\) and \[\operatorname{Hom}_{\mathcal{R}}(D,D_{0})\xrightarrow{g\circ-}\operatorname{Hom}_{\mathcal{R}}(D,C)\] is surjective. Left \((\operatorname{add}D)\)-approximations are defined dually. Given a modifying \(\mathcal{R}\)-module \(N\), with an indecomposable summand \(N_{i}\), consider 1. a right \(\left(\operatorname{add}\frac{N}{N_{i}}\right)\)-approximation of \(N_{i}\), namely \(V_{i}\stackrel{{a_{i}}}{{\to}}N_{i}\), 2. a right \(\left(\operatorname{add}\frac{N}{N_{i}}\right)^{*}\)-approximation of \(N_{i}^{*}\), namely \(V_{i}^{*}\stackrel{{b_{i}}}{{\to}}N_{i}^{*}\), which give exchange sequences \[0\to\ker a_{i}\to V_{i}\stackrel{{a_{i}}}{{\to}}N_{i}\quad\text{and}\quad 0\to\ker b_{i}\to V_{i}^{*}\stackrel{{b_{i}}}{{\to}}N_{i}^{*}. \tag{2.B}\] From this, 1. the right mutation of \(N\) at \(N_{i}\) is defined by \[\mathsf{v}_{i}(N):=\frac{N}{N_{i}}\oplus\ker a_{i},\] i.e. remove the indecomposable summand \(N_{i}\) and replace it with \(\ker a_{i}\); 2. the left mutation of \(N\) at \(N_{i}\) is defined by \[\mathsf{\mu}_{i}(N):=\frac{N}{N_{i}}\oplus(\ker b_{i})^{*}.\] As in [10, Appendix A], write \[\Phi_{i}\colon\operatorname{D}^{\mathrm{b}}(\operatorname{End}_{\mathcal{R}}(N))\to\operatorname{D}^{\mathrm{b}}(\operatorname{End}_{\mathcal{R}}(\mathsf{v}_{i}N))\] for the associated derived equivalence \(\mathbf{R}\mathrm{Hom}_{\operatorname{End}_{\mathcal{R}}(N)}(\operatorname{Hom}_{\mathcal{R}}(N,\mathsf{v}_{i}N),-)\). ### Affine Auslander-McKay bijection We consider the general situation of Setup 2.2, where we input a flopping contraction \(f\colon U\to\operatorname{Spec}R\) with terminal Gorenstein singularities, with formal fibre \(\mathcal{U}\to\operatorname{Spec}\mathcal{R}\). In this case set \(N:=f_{*}\left(\mathcal{O}_{\mathcal{U}}\oplus\mathcal{N}\right)\), where \(\mathcal{O}_{\mathcal{U}}\oplus\mathcal{N}\) is the [VdB] tilting bundle on \(\mathcal{U}\). Complete locally we have the following diagram. Using the same notation as in [10], we will write \(\operatorname{MM}^{N}\mathcal{R}\) for those modifying reflexive \(\mathcal{R}\)-modules that have a two-term approximation by \(\operatorname{add}N\) and have the same number of indecomposable summands as \(N\). Further, write \(\operatorname{MMG}^{N}\mathcal{R}\) for those in \(\operatorname{MM}^{N}\mathcal{R}\) which have \(\mathcal{R}\) as a summand.
By the general version of the affine Auslander-McKay correspondence (see e.g. [10, 0.18]), \(\operatorname{MM}^{N}\mathcal{R}\) coincides with the mutation class of \(N\), \(\operatorname{MMG}^{N}\mathcal{R}\) coincides with the Cohen-Macaulay mutation class of \(N\), and there are the following bijections, under which wall crossing corresponds to mutation. As notation, write \(N_{D}\) for the element of \(\operatorname{MM}^{N}\mathcal{R}\) corresponding to a chamber \(D\). **Example 2.7**.: Continuing Example 2.5, the green chamber (or alcove) below corresponds to \(N\) and has \(3\) summands (corresponding to the three walls). We can obtain all elements of \(\operatorname{MM}^{N}\mathcal{R}\) by repeatedly mutating, beginning in the green chamber. Restricting the above figure to the pink box illustrates the chambers in the finite arrangement \(\mathcal{H}\). We can obtain all elements of \(\operatorname{MMG}^{N}\mathcal{R}\) (which correspond to the chambers within the pink box) by repeatedly mutating all summands except \(\mathcal{R}\). ## 3. Preliminary Lemmas Returning to Setup 2.2, consider the flopping contraction \(U\to\operatorname{Spec}R\) with formal fibre \(\mathcal{U}\to\operatorname{Spec}\mathcal{R}\). As explained in §2.1, \(\Lambda=\operatorname{End}_{R}(R\oplus f_{*}\mathcal{N})\), thus after completion \[\mathcal{R}\oplus\widehat{f_{*}\mathcal{N}}=\mathcal{R}^{\oplus a_{0}}\oplus N_{1}^{\oplus a_{1}}\oplus\ldots\oplus N_{n}^{\oplus a_{n}},\] for some \(a_{i}\in\mathbb{N}\), and so \[\hat{\Lambda}=\operatorname{End}_{\mathcal{R}}(\mathcal{R}^{\oplus a_{0}}\oplus N_{1}^{\oplus a_{1}}\oplus\ldots\oplus N_{n}^{\oplus a_{n}}).\] Set \(\operatorname{A}=\operatorname{End}_{\mathcal{R}}(N)\) with \(N=\mathcal{R}\oplus N_{1}\oplus\ldots\oplus N_{n}\), where as in §2.3, \(\mathcal{R}=\hat{R}_{\mathfrak{m}}\). **Setup 3.1**.: Given a choice of an atom \(\mathfrak{a}\colon C_{+}\to D\) in \(\mathcal{H}\) or \(\mathcal{H}^{\mathsf{aff}}\), and any wall \(i\) of \(D\), we next associate a functor to \(\operatorname{D}^{\mathrm{b}}(\Lambda)\). Note that \(\operatorname{A}\) is the basic algebra Morita equivalent to \(\hat{\Lambda}\), by e.g. [2, 2.6]. Consider \(\operatorname{B}:=\operatorname{End}_{\mathcal{R}}(N_{D})\), where \(N_{D}\) is described in §2.5.
Now, \(\operatorname{B}\) inherits \(n+1\) primitive idempotents \(e_{i}\) corresponding to the walls of \(D\). For purposes of notation, write \(\Phi_{\mathfrak{a}}\) for the composition of mutation functors corresponding to the path \(\mathfrak{a}\), and set \(\operatorname{B}_{i}:=\operatorname{B}/(1-e_{i})\). Consider the composition \[\Psi_{\mathfrak{a},i}:=\operatorname{D}(\operatorname{B}_{i})\to\operatorname{D}(\operatorname{B})\xrightarrow{\Phi_{\mathfrak{a}}}\operatorname{D}(\operatorname{A})\xrightarrow{\sim}\operatorname{D}(\hat{\Lambda})\to\operatorname{D}(\Lambda) \tag{3.A}\] where the first functor is induced by the ring homomorphism \(\operatorname{B}\to\operatorname{B}_{i}\), the second is the composition of mutation equivalences, the third is the Morita equivalence as in [2, 5.4] and above, and the fourth is induced by the ring homomorphism \(\Lambda\to\hat{\Lambda}\).
**Lemma 3.3**.: \(\rho:=\left(\mathrm{B}_{i}\otimes_{\mathrm{B}}^{\mathbf{L}}\uptau_{\mathfrak{ a}}^{*}\otimes_{\hat{\Lambda}}^{\mathbf{L}}\mathbb{M}^{*}\otimes_{\hat{ \Lambda}}^{\mathbf{L}}\hat{\Lambda}\right)\cong\mathbb{F}\Phi_{\mathfrak{a}} \left(\mathrm{B}_{i}\right)\) _as right \(\Lambda\)-modules._ Proof.: Observe that \(\mathrm{B}_{i}\mapsto\mathrm{B}_{i}\mapsto\Phi_{\mathfrak{a}}(\mathrm{B}_{i} )\mapsto\mathbb{F}\Phi_{\mathfrak{a}}(\mathrm{B}_{i})\mapsto\mathbb{F}\Phi_{ \mathfrak{a}}(\mathrm{B}_{i})\) under the composition defining \(\Psi_{\mathfrak{a},i}\). Thus \[\mathbb{F}\Phi_{\mathfrak{a}}(\mathrm{B}_{i})\cong\Psi_{\mathfrak{a},i}( \mathrm{B}_{i})\stackrel{{\ref{eq:B_i}}}{{\cong}}\mathrm{B}_{i} \otimes_{\mathrm{B}_{i}}^{\mathbf{L}}\rho\cong\rho\] as right \(\Lambda\)-modules. **Claim 3.4**.: \(\mathbb{F}\Phi_{\mathfrak{a}}(S_{i})\) _is a finite length module supported only at the maximal ideal \(\mathfrak{m}\)._ Proof.: Since \(\mathfrak{a}\) is an atom, by torsion pairs (see e.g [12, SS5]), \(\Phi_{\mathfrak{a}}(S_{i})\) is a module or a shift of a module, in particular \(\Phi_{\mathfrak{a}}(S_{i})\) is in a single homological degree. Thus, since \(\mathbb{F}\) is a Morita equivalence, \(\mathbb{F}\Phi_{\mathfrak{a}}(S_{i})\) is also in a single homological degree. Since \(\mathrm{B}_{i}\) is a finite dimensional algebra filtered by the simple \(S_{i}\), then \(\mathbb{F}\Phi_{\mathfrak{a}}\left(\mathrm{B}_{i}\right)\) is filtered by \(\mathbb{F}\Phi_{\mathfrak{a}}(S_{i}).\) It is known that \(S_{i}\) is supported only at the maximal ideal \(\mathfrak{m}\), thus for \(\mathfrak{p}\neq\mathfrak{m}\) \[\Phi_{\mathfrak{a}}(S_{i})_{\mathfrak{p}}=\mathbf{R}\mathrm{Hom}_{\mathrm{B}}( \uptau_{\mathfrak{a}},S_{i})_{\mathfrak{p}}=\mathbf{R}\mathrm{Hom}_{\mathrm{ B}_{\mathfrak{p}}}(\uptau_{\mathfrak{a}\mathfrak{p}},S_{i\mathfrak{p}})=0.\] So, \(\mathbb{F}\Phi_{\mathfrak{a}}(S_{i})=\Phi_{\mathfrak{a}}(S_{i})\otimes_{\hat{ \Lambda}}\mathbb{M}^{*}\) is also supported at only the maximal ideal \(\mathfrak{m}.\) Thus, \(\mathbb{F}\Phi_{\mathfrak{a}}(S_{i})_{\hat{\Lambda}}\in\mathfrak{f}\hat{\Lambda}\). Since by Iyama-Reiten [13]\(\mathfrak{f}\hat{\Lambda}\hookrightarrow\mathfrak{f}\hat{\Lambda}\), then \(\mathbb{F}\Phi_{\mathfrak{a}}(S_{i})\) is a finite length \(\Lambda\)-module supported only at the maximal ideal \(\mathfrak{m}\). Recall from SS2.3 the associated hyperplane arrangements \(\mathcal{H}\) and \(\mathcal{H}^{\text{aff}}\). **Lemma 3.5**.: _If \(\mathfrak{a}\) is an atom in \(\mathcal{H}\) or \(\mathcal{H}^{\text{aff}}\), then \(\mathrm{H}^{\mathrm{t}}(\Phi_{\mathfrak{a}}^{-1}(\mathrm{B}_{i}))=0\) for all but one \(t\)._ Proof.: \(\mathrm{B}_{i}\) is filtered by the simple \(S_{i}\). By the torsion pairs described in [13], since \(\mathfrak{a}\) is an atom we know that \(\Phi_{\mathfrak{a}}^{-1}(S_{i})\) is only in one homological degree. Thus \(\Phi_{\mathfrak{a}}^{-1}(\mathrm{B}_{i})\) is in one homological degree only. The following is also well known. **Lemma 3.6**.: _With notation as above, the following statements hold._ 1. _If_ \(Z\) _is a_ \(\mathfrak{f}\hat{\Lambda}\)_-module, then the natural map_ \(Z\otimes_{\hat{\Lambda}}\hat{\Lambda}\otimes_{\hat{\Lambda}}\hat{\Lambda}\to Z\) _is an isomorphism of_ \(\hat{\Lambda}\)_-modules._ 2. 
_If_ \(a\in\mathrm{D}_{\mathfrak{f}}^{\mathrm{b}}(\mathrm{mod}\,\hat{\Lambda}),\) _then_ \(a\otimes_{\hat{\Lambda}}\hat{\Lambda}\otimes_{\hat{\Lambda}}\hat{\Lambda}\cong a\) _in_ \(\mathrm{D}_{\mathfrak{f}}^{\mathrm{b}}(\mathrm{mod}\,\hat{\Lambda}).\)__ Proof.: (1) Consider the following functors between categories where the bottom left functor is fully faithful by [IR]. By [DW1, 2.15], [IR, p1100], the rightmost functors are an equivalence of categories, thus composing we have The claimed morphism is the counit, and this is an isomorphism since the right adjoint functor is fully faithful. (2) By [IR, 2.5] there is an equivalence of categories, so with each a finite length -module. Since by Lemma 3.6(1) via the counit, it follows that as complexes, as required. ## 4. Construction of Bimodule Cones In the local Zariski setting of 2.2 where is derived equivalent to, this section uses bimodule cones to construct endofunctors of. **Proposition 4.1**.: _For any choice of as in Setup 3.1, there exists in the derived category of - bimodules, such that, given the functorial diagram_ (4.A) _setting, there are functorial triangles_ 1., 2., Proof.: (1) We expand the diagram in (4.A) to where We will construct by exhibiting a certain bimodule map. First, the ring homomorphism is a map of B-B bimodules and thus induces a natural transformation. But this is equal to We next construct a natural transformation. Tensoring on the left by and on the right by gives rise to a bimodule map which induces the claimed. Now, applying the same trick for, setting, then which induces. Lastly, tensoring (4.B) on both sides by \(\hat{\Lambda}\) gives a \(\hat{\Lambda}\)-\(\hat{\Lambda}\) bimodule map (4.C) Composing this with the ring homomorphism \(\Lambda\to\hat{\Lambda}\) thus gives a bimodule map \[\Lambda\to{\rm\Lambda}\hat{\Lambda}\otimes(\mathbb{M}\otimes\uptau_{\alpha} \otimes Z_{\alpha,i})\otimes\hat{\Lambda}_{\Lambda}.\] Taking the cone in the derived category of \(\Lambda\)-\(\Lambda\) bimodules gives a triangle \[C\to{}_{\Lambda}\Lambda_{\Lambda}\to{}_{\Lambda}\hat{\Lambda}\otimes_{\hat{ \Lambda}}^{\mathbf{L}}\left(\mathbb{M}\otimes_{\Lambda}^{\mathbf{L}}\uptau_{ \alpha}\otimes_{\mathrm{B}}^{\mathbf{L}}Z_{\alpha,i}\right)\otimes_{\hat{ \Lambda}}^{\mathbf{L}}\hat{\Lambda}_{\Lambda}\to C[1]. \tag{4.D}\] This induces a functorial triangle \(\Psi_{\alpha,i}\circ\Psi_{\alpha,i}^{\mathrm{R}\Lambda}\to\mathsf{Id}_{ \Lambda}\to\mathbf{R}\mathrm{Hom}({}_{\Lambda}C_{\Lambda},-)\to.\) Defining \(\mathsf{Twist}_{\alpha,i}:=\mathbf{R}\mathrm{Hom}({}_{\Lambda}C_{\Lambda},-)\), yields the result. (2) (4.D) also induces a functorial triangle \(-\otimes_{\Lambda}^{\mathbf{L}}C_{\Lambda}\to\mathsf{Id}\to\Psi_{\alpha,i} \circ\Psi_{\alpha,i}^{\mathrm{L}\Lambda}\to\), thus defining \(\mathsf{Twist}_{\alpha,i}^{\ast}:=-\otimes_{\Lambda}^{\mathbf{L}}C_{\Lambda}\) gives the claim. **Remark 4.2**.: In the proof of Proposition 4.1, \(Z_{\alpha,i}:=\mathrm{B}_{i}\otimes_{\mathrm{B}}^{\mathbf{L}}\uptau_{\alpha} ^{\ast}\otimes_{\mathrm{A}}^{\mathbf{L}}\mathbb{M}^{\ast}\). We will use this notation below. **Remark 4.3**.: \(\mathsf{Twist}_{\alpha,i}^{\ast}\) is the left adjoint of \(\mathsf{Twist}_{\alpha,i}\). Later, in Remark 6.14 we will prove that it is the inverse. ## 5. 
The Key Commutative Diagram Under the Zariski local Setup 2.2 and given any choice of \((\upalpha,i)\) as defined in Setup 3.1, this section proves that the functors \(\mathsf{Twist}_{\upalpha,i}\) and \(\mathsf{Twist}_{\upalpha,i}^{\ast}\) constructed in SS4 intertwine with the known equivalences in the complete local setting, through a key commutative diagram. Given a choice of \((\upalpha,i)\), in particular with associated homomorphism \(\mathrm{B}\to\mathrm{B}_{i}\), set \(I_{i}\) to be the kernel of the homomorphism, which is a two-sided ideal of \(\mathrm{B}\). **Claim 5.1**.: _Given an atom \(\upalpha\colon C_{+}\to D\) and a wall \(i\) of \(D,\) set \(M=N_{D}\) so that \(\mathrm{B}:=\mathrm{End}_{\mathcal{R}}(M)\) and \(\upnu_{i}\mathrm{B}=\mathrm{End}_{\mathcal{R}}(\upnu_{i}M).\) Then_ \[I_{i}\cong\mathrm{Hom}_{\mathcal{R}}(M,\upnu_{i}M)\otimes_{\upnu_{i}\mathrm{ B}}^{\mathbf{L}}\mathrm{Hom}_{\mathcal{R}}(\upnu_{i}M,M)\] _as \(\mathrm{B}\)-bimodules._ Proof.: The proof of [15, 5.10(1)] shows \[I_{i}\cong\mathrm{Hom}_{\mathcal{R}}(M,\upnu_{i}M)\otimes_{\upnu_{i}\mathrm{ B}}^{\mathbf{L}}\mathrm{Hom}_{\mathcal{R}}(\upnu_{i}M,M)\] as right modules by showing that \(\mathrm{Hom}_{\mathcal{R}}(M,\upnu_{i}M)\otimes_{\upnu_{i}\mathrm{B}}^{ \mathbf{L}}\mathrm{Hom}_{\mathcal{R}}(\upnu_{i}M,M)\) is concentrated in degree zero. Hence truncating in the category of bimodules there is an isomorphism of \(\mathrm{B}\)-bimodules \[\mathrm{Hom}_{\mathcal{R}}(M,\upnu_{i}M)\otimes_{\upnu_{i}\mathrm{B}}^{ \mathbf{L}}\mathrm{Hom}_{\mathcal{R}}(\upnu_{i}M,M)\cong\mathrm{Hom}_{ \mathcal{R}}(M,\upnu_{i}M)\otimes_{\upnu_{i}\mathrm{B}}\mathrm{Hom}_{\mathcal{R }}(\upnu_{i}M,M),\] so it suffices to show that \[\mathrm{Hom}_{\mathcal{R}}(M,\upnu_{i}M)\otimes_{\upnu_{i}\mathrm{B}}\mathrm{ Hom}_{\mathcal{R}}(\upnu_{i}M,M)\cong\mathrm{B}\,e_{i}\,\mathrm{B}\] as \(B\)-bimodules. Adding \(\frac{M}{M_{i}}\xrightarrow{\mathrm{Id}}\frac{M}{M_{i}}\) approximately to the exchange sequence (2.B), there exists an exact sequence \[0\to\upnu_{i}M\to V\xrightarrow{b}M, \tag{5.A}\] where \(b\) is an \(\left(\mathrm{add}\frac{M}{M_{i}}\right)\)-approximation of \(M\). We now claim that applying \(\mathrm{Hom}_{\mathcal{R}}(\upnu_{i}M,-)\) to (5.A) gives an exact sequence \[0\to\mathrm{Hom}_{\mathcal{R}}(\upnu_{i}M,\upnu_{i}M)\to\mathrm{Hom}_{\mathcal{ R}}(\upnu_{i}M,V)\to\mathrm{Hom}_{\mathcal{R}}(\upnu_{i}M,M)\to 0. \tag{5.B}\] To see this, set \(\Gamma=\mathrm{End}_{\mathcal{R}}\left(\frac{M}{M_{i}}\right)\) and \(\mathbb{G}=\mathrm{Hom}_{\mathcal{R}}\left(\frac{M}{M_{i}},-\right)\), then since \(b\) is an \(\left(\mathrm{add}\frac{M}{M_{i}}\right)\)-approximation of \(M\), \[0\to\mathbb{G}(\upnu_{i}M)\to\mathbb{G}V\to\mathbb{G}M\to 0\] is exact. Applying \(\operatorname{Hom}_{\mathcal{T}}(\mathbb{G}(\mathsf{v}_{i}M),-)\) and dropping \(\operatorname{Hom}\)'s gives a commutative diagram in which the top line is exact. By e.g [19, 6.10(2)], \(\operatorname{End}_{\Gamma}(\mathbb{G}(\mathsf{v}_{i}M))\cong\operatorname{ End}_{\mathcal{R}}(\mathsf{v}_{i}M)\in\operatorname{CMR}\), so \[\operatorname{\mathsf{f}}_{\mathcal{R}}\operatorname{Ext}_{\Gamma}^{1}( \mathbb{G}(\mathsf{v}_{i}M),\mathbb{G}(\mathsf{v}_{i}M))=0,\] by [19, 2.7]. Since \(\mathcal{R}\) is isolated, \(\operatorname{Ext}^{1}=0.\) Thus the bottom line is also exact, as claimed. 
Since (5.B) is exact, applying \(\operatorname{Hom}_{\mathcal{R}}(M,\mathsf{v}_{i}M)\otimes_{\mathsf{v}\mathrm{ B}}-\) gives an exact sequence in the top line of the following commutative diagram whilst applying \(\operatorname{Hom}_{\mathcal{R}}(M,-)\) to (5.A) gives the bottom line (see [19, A.7(3)]). The leftmost vertical maps are isomorphisms since \(\operatorname{Hom}_{\mathcal{R}}(\mathsf{v}_{i}M,\mathsf{v}_{i}M)\) and \(\operatorname{Hom}_{\mathcal{R}}(\mathsf{v}_{i}M,V)\) are projective \(\mathsf{v}_{i}\mathrm{B}\)-modules. Thus \[\operatorname{Hom}_{\mathcal{R}}(M,\mathsf{v}_{i}M)\otimes_{\mathsf{v}\mathrm{ B}}\operatorname{Hom}_{\mathcal{R}}(\mathsf{v}_{i}M,M)\cong\operatorname{ker}( \operatorname{B}\to\tfrac{\mathrm{B}}{\operatorname{\mathsf{B}}\operatorname{ \mathsf{E}}_{i}\mathrm{B}})=\operatorname{B}e_{i}\mathrm{B}=I_{i}\] via the map \[\operatorname{Hom}_{\mathcal{R}}(M,\mathsf{v}_{i}M)\otimes_{\mathsf{v}\mathrm{ B}}\operatorname{Hom}_{\mathcal{R}}(\mathsf{v}_{i}M,M)\to I_{i}\] sending \(a\otimes b\mapsto b\circ a.\) Since this clearly a bimodule map, thus bimodule isomorphism, the statement follows. **Lemma 5.2**.: _There is a functorial isomorphism_ \[\Phi_{\alpha}\circ\Phi_{i}^{2}\circ\Phi_{\alpha}^{-1}\cong\mathbf{R} \operatorname{Hom}_{\mathrm{A}}(\mathsf{\tau}_{\alpha}\otimes_{\mathrm{B}}^{ \mathsf{L}}I_{i}\otimes_{\mathrm{B}}^{\mathsf{L}}\mathsf{\tau}_{\alpha}^{*},-).\] Proof.: For the B-bimodule \(I_{i}\) defined by the natural exact sequence \(I_{i}\to\mathrm{B}\to\mathrm{B}_{i}\), by Claim 5.1\(\Phi_{i}\circ\Phi_{i}\cong\mathbf{R}\operatorname{Hom}_{\mathrm{B}}(I_{i},-).\) So, \[\Phi_{\alpha}\circ\Phi_{i}^{2}\circ\Phi_{\alpha}^{-1} =\mathbf{R}\operatorname{Hom}_{\mathrm{A}}(\mathsf{\tau}_{\alpha },-)\circ\mathbf{R}\operatorname{Hom}_{\mathrm{B}}(I_{i},-)\circ\mathbf{R} \operatorname{Hom}_{\mathrm{A}}(\mathsf{\tau}_{\alpha}^{*},-)\] \[\cong\mathbf{R}\operatorname{Hom}_{\mathrm{A}}(\mathsf{\tau}_{ \alpha}\otimes_{\mathrm{B}}^{\mathsf{L}}I_{i}\otimes_{\mathrm{B}}^{\mathsf{L }}\mathsf{\tau}_{\alpha}^{*},-),\] as required. The point of the following result is that we can view the functor \(\mathsf{Twist}_{\alpha,i}\) as a lift of the complete local equivalence \(\Phi_{\alpha}\circ\Phi_{i}^{2}\circ\Phi_{\alpha}^{-1}\). **Proposition 5.3**.: _Under the Zariski local setup (2.2), the following hold._ 1. _There exists a triangle in the derived category of_ \(\hat{\Lambda}\)_-_\(\hat{\Lambda}\) _bimodules_ \[\mathbb{M}\otimes_{\mathrm{A}}^{\mathsf{L}}\mathsf{\tau}_{\alpha}\otimes_{ \mathrm{B}}^{\mathsf{L}}I_{i}\otimes_{\mathrm{B}}^{\mathsf{L}}\mathsf{\tau}_{ \alpha}^{*}\otimes_{\mathrm{A}}^{\mathsf{L}}\mathsf{M}^{*}\to_{\hat{\Lambda}} \hat{\Lambda}_{\hat{\Lambda}}\overset{h}{\to}\mathbb{M}\otimes_{\mathrm{A}}^{ \mathsf{L}}\mathsf{\tau}_{\alpha}\otimes_{\mathrm{B}}^{\mathsf{L}}Z_{\alpha,i}.\] (5.C) 2. 
_There is a commutative diagram_ \[\begin{CD}\operatorname{D}(\hat{\Lambda})@>{\mathbb{F}^{-1}}>{}>\operatorname{D }(\mathrm{A})@>{\Phi_{i}^{-1}}>{}>\operatorname{D}(\mathrm{B})@>{\Phi_{i}^{2}}>{}> \operatorname{D}(\mathrm{B})@>{\Phi_{\alpha}}>{}>\operatorname{D}(\mathrm{A}) @>{\mathbb{F}}>{}>\operatorname{D}(\hat{\Lambda})\\ @V{}V{}V@V{}V{\operatorname{\mathsf{Twist}}_{\alpha,i}}>{}>\operatorname{D}( \Lambda)\end{CD}\] Proof.: (1) The result is implied by tensoring the exact sequence \(I_{i}\to\mathrm{B}\to\mathrm{B}_{i}\), using the isomorphisms \(\mathsf{\tau}_{\alpha}\otimes\mathsf{\tau}_{\alpha}^{*}\cong{}_{\Lambda} \mathrm{A}\mathrm{A}_{\mathrm{A}}\) and \(\mathbb{M}\otimes\mathbb{M}^{*}\cong{}_{\hat{\Lambda}}\hat{\Lambda}_{\hat{ \Lambda}}\). (2) Observe that \[\mathsf{Twist}_{\alpha,i}\circ\mathrm{F} =\mathbf{R}\operatorname{Hom}_{\hat{\Lambda}}({}_{\hat{\Lambda}} \circ\mathbb{C}_{\hat{\Lambda}}^{\mathsf{L}}\hat{\Lambda},-),\] \[\mathrm{F}\circ(\mathbb{F}\circ\Phi_{\alpha}\circ\Phi_{i}^{2}\circ \Phi_{\alpha}^{-1}\circ\mathbb{F}^{-1}) =\mathbf{R}\operatorname{Hom}_{\hat{\Lambda}}({}_{\hat{\Lambda}} \otimes_{\mathrm{A}}^{\mathsf{L}}\mathsf{M}\otimes_{\mathrm{A}}^{\mathsf{L}} \mathsf{\tau}_{\alpha}\otimes_{\mathrm{B}}^{\mathsf{L}}I_{i}\otimes_{\mathrm{B}}^{ \mathsf{L}}\mathsf{\tau}_{\alpha}^{*}\otimes_{\mathrm{A}}^{\mathsf{L}}\mathsf{ M}^{*},-),\] thus it suffices to prove that \(C\otimes\hat{\Lambda}\cong\hat{\Lambda}\otimes\mathbb{M}\otimes\tau_{\alpha} \otimes I_{i}\otimes\tau_{\alpha}^{*}\otimes\mathbb{M}^{*}\) in the derived category of \(\Lambda\)-\(\hat{\Lambda}\) bimodules, where to ease notation, we have written \(\otimes\) instead of \(\otimes\). Tensoring (4.D) by \(\otimes_{\Lambda}^{\mathbf{L}}\hat{\Lambda}_{\hat{\Lambda}}\) gives \[{}_{\Lambda}C\otimes_{\Lambda}^{\mathbf{L}}\hat{\Lambda}_{\hat{\Lambda}} \rightarrow{}_{\Lambda}\Lambda\otimes_{\Lambda}^{\mathbf{L}}\hat{\Lambda}_{ \hat{\Lambda}}\xrightarrow{f\otimes\mathsf{Id}}{}_{\Lambda}\hat{\Lambda} \otimes_{\Lambda}^{\mathbf{L}}\mathbb{M}\otimes_{\Lambda}^{\mathbf{L}}\tau_{ \alpha}\otimes_{\mathrm{B}}^{\mathbf{L}}Z_{\alpha,i}\otimes_{\mathrm{A}}^{ \mathbf{L}}\hat{\Lambda}\otimes_{\Lambda}^{\mathbf{L}}\hat{\Lambda}_{\hat{ \Lambda}}\rightarrow \tag{5.D}\] Since \(\alpha\) is an atom, by Lemma 3.5\(\Phi_{\alpha}^{-1}(\mathrm{B}_{i})\) is in one degree only, and by the proof it has finite length. Since \(\mathbb{F}\) is induced from a Morita equivalence, \(\Phi^{-1}\mathbb{F}^{-1}(\mathrm{B}_{i})\) is also one degree only, and has finite length. Now, \(\Phi_{\alpha}^{-1}\mathbb{F}^{-1}(\mathrm{B}_{i})=\mathrm{B}_{i}\otimes_{ \mathrm{B}}^{\mathbf{L}}\tau_{\alpha}^{*}\otimes_{\Lambda}^{\mathbf{L}}\mathbb{ M}^{*}\) as one sided modules, which implies that \(\mathrm{B}_{i}\otimes_{\mathrm{B}}^{\mathbf{L}}\tau_{\alpha}^{*}\otimes_{ \Lambda}^{\mathbf{L}}\mathbb{M}^{*}\) is one degree only and in that degree the cohomology is finite dimensional. Now, the natural map \(\psi:Z_{\alpha,i}\otimes_{\hat{\Lambda}}(\hat{\Lambda}\otimes_{\hat{\Lambda} }\hat{\Lambda})\rightarrow{}_{\mathrm{B}}Z_{\alpha,i\hat{\Lambda}}\) sending \(z\otimes r\otimes s\mapsto zrs\) is a bijection by Lemma 3.6(2), so since this is clearly a bimodule homomorphism, it follows that \(Z_{\alpha,i}\otimes\hat{\Lambda}\otimes\hat{\Lambda}\cong Z_{\alpha,i}\) as \(\mathrm{B}\)-\(\hat{\Lambda}\) bimodules. Now, the following diagram commutes (5.E) since \[\begin{CD}@>{}>{}>{}>\end{CD}\] and \(1\otimes h(1)r=1\otimes h(r)=1\otimes rh(1)=r\otimes h(1)\). 
Completing the top and bottom lines in (5.E) to triangles, using (5.C) Since the rightmost vertical maps are isomorphisms, it follows that so is the leftmost, and thus \({}_{\Lambda}C\otimes\hat{\Lambda}_{\hat{\Lambda}}\cong{}_{\Lambda}\hat{\Lambda }\otimes\mathbb{M}\otimes\tau_{\alpha}\otimes I_{i}\otimes\tau_{\alpha}^{*} \otimes\mathbb{M}^{*}_{\hat{\Lambda}}\), as required. **Corollary 5.4**.: _The following diagram commutes_ Proof.: Take LA of 5.3(2), using Remark 4.3 to see that \(\mathsf{Twist}_{\alpha,i}^{*}\) is left adjoint to \(\mathsf{Twist}_{\alpha,i}\). To set notation, given \((\alpha,i)\) in the finite arrangement \(\mathcal{H}\), say \(\alpha\colon C_{+}\to D\). Write \(\hat{\Lambda}_{\mathrm{con}}\) for the contraction algebra in chamber \(C_{+}\) and \(\hat{\Gamma}_{\mathrm{con}}\) for the contraction algebra in chamber \(D\). Mimicking the introduction and Proposition 5.3, consider the composition \[{}_{\mathrm{A},i}:=\mathrm{D}^{\mathrm{b}}(\hat{\Lambda}_{\mathrm{con}}) \xrightarrow{F_{\alpha}}\mathrm{D}^{\mathrm{b}}(\hat{\Gamma}_{\mathrm{con}}) \xrightarrow{F_{\alpha}^{2}}\mathrm{D}^{\mathrm{b}}(\hat{\Gamma}_{\mathrm{con }})\xrightarrow{F_{\alpha}^{-1}}\mathrm{D}^{\mathrm{b}}(\hat{\Gamma}_{\mathrm{ con}})\xrightarrow{F_{\alpha}^{-1}}\mathrm{D}^{\mathrm{b}}(\hat{\Lambda}_{ \mathrm{con}}),\] where \(F_{i}\) and \(F_{\alpha}\) are the standard equivalences in [A]. The following lifts results from [A] to the Zariski local setting. **Corollary 5.5**.: _For any choice of \((\mathfrak{a},i)\) in the finite arrangement \(\mathcal{H},\) the following diagram commutes_ Proof.: The result follows from the following The top diagram commutes by [A] and the bottom diagram is Proposition 5.3(2). ## 6. One and Two sided tilting In this section, we will prove that for any \((\mathfrak{a},i),\) the functor \(\mathsf{Twist}_{\mathfrak{a},i}\colon\operatorname{D}^{\mathrm{b}}(\Lambda) \to\operatorname{D}^{\mathrm{b}}(\Lambda)\) is an equivalence. ### Tilting Generalities Throughout this subsection, \(S\) is a commutative Noetherian ring, and \(\Gamma\) is a module-finite \(S\)-algebra. **Lemma 6.1**.: _If \(M\in\operatorname{Mod}S\), then \(M=0\) if and only if \(M_{\mathfrak{p}}\otimes_{S_{\mathfrak{p}}}\hat{S}_{\mathfrak{p}}=0\) for all \(\mathfrak{p}\in\operatorname{Spec}S.\)_ Proof.: By [AM, Proposition 3.8 ]\(M=0\) iff \(M_{\mathfrak{p}}=0\) for all \(\mathfrak{p}\in\operatorname{Spec}S.\) By faithful flatness [M, Theorem 7.2], this is to equivalent to \(M_{\mathfrak{p}}\otimes_{S_{\mathfrak{p}}}\hat{S}_{\mathfrak{p}}=0\) for all \(\mathfrak{p}\in\operatorname{Spec}S.\) **Lemma 6.2**.: _If \(T\in\operatorname{K}^{\mathrm{b}}(\operatorname{proj}\Gamma),\) then there are isomorphisms_ \[\operatorname{Hom}_{\operatorname{D}(\Gamma)}(T_{\Gamma},T_{ \Gamma})\otimes_{S}S_{\mathfrak{p}} \cong\operatorname{Hom}_{\operatorname{D}(\Gamma_{\mathfrak{p}})}(T_{ \mathfrak{p}},T_{\mathfrak{p}})\] \[\operatorname{Hom}_{\operatorname{D}(\Gamma_{\mathfrak{p}})}(T_ {\mathfrak{p}},T_{\mathfrak{p}})\otimes_{S_{\mathfrak{p}}}\hat{S} \cong\operatorname{Hom}_{\operatorname{D}(\Gamma)}(T_{\mathfrak{p}}\otimes_{S _{\mathfrak{p}}}\hat{S},T_{\mathfrak{p}}\otimes_{S_{\mathfrak{p}}}\hat{S})\] Proof.: This is Iyama-Reiten, proof of (3) and (4) in [IR, Theorem 3.1]. **Definition 6.3**.: _A one-sided tilting complex for \(\Gamma\) is an object \(T\in\operatorname{K}^{\mathrm{b}}(\operatorname{proj}\Gamma)\) such that_ 1. \(\operatorname{Hom}(T,T[t])=0\) _for all_ \(t\neq 0\)__ 2. 
_If_ \(x\in\operatorname{D}(\operatorname{Mod}\Gamma)\) _with_ \(\mathbf{R}\mathrm{Hom}_{\Gamma}(T,x)=0\)_, then_ \(x\cong 0.\)__ By [K, p.80] Definition 6.3 is equivalent to the other definitions in the literature. **Lemma 6.4**.: _Given \(T\in\operatorname{K}^{\mathrm{b}}(\operatorname{proj}\Gamma),\) the following are equivalent._ 1. \(T\) _is one-sided tilting complex_ 2. \(T\otimes_{S}S_{\mathfrak{p}}\) _is one-sided tilting_ \(\Gamma_{\mathfrak{p}}\)_-complex for all_ \(\mathfrak{p}\in\operatorname{Spec}S\)__ 3. \(T\otimes_{S}S_{\mathfrak{p}}\otimes\hat{S}\) _is one-sided tilting_ \(\hat{\Gamma}_{\mathfrak{p}}\)_-complex for all_ \(\mathfrak{p}\in\operatorname{Spec}S\)__ Proof.: (1)\(\Rightarrow\)(2). This holds by e.g [IR, Lemma 2.7]. (2)\(\Rightarrow\)(3). We show \(T_{\mathfrak{p}}\) is tilting implies \(A=T_{\mathfrak{p}}\otimes_{S_{\mathfrak{p}}}\hat{S}_{\mathfrak{p}}\) is tilting. It is clear, since \(\otimes_{S_{\mathfrak{p}}}\hat{S}_{\mathfrak{p}}\) is exact, taking projectives to projectives, that \(A\in\operatorname{K}^{\mathrm{b}}(\operatorname{proj}\hat{\Gamma}_{ \mathfrak{p}}).\) We know that \(\operatorname{Hom}_{\operatorname{D}(\Gamma_{\mathfrak{p}})}(T_{\mathfrak{p}},T_{\mathfrak{p}}[t])=0\) for all \(t\neq 0\) by assumption. Thus by Lemma 6.2, for all \(t\neq 0\) we have \[0=\operatorname{Hom}_{\operatorname{D}(\Gamma_{\mathfrak{p}})}(T_{\mathfrak{p} },T_{\mathfrak{p}}[t])\otimes_{S_{\mathfrak{p}}}\hat{S}\cong\operatorname{ Hom}_{\operatorname{D}(\hat{\Gamma})}(T_{\mathfrak{p}}\otimes_{S_{\mathfrak{p}}}\hat{S},T_{ \mathfrak{p}}[t]\otimes_{S_{\mathfrak{p}}}\hat{S})=\operatorname{Hom}_{ \operatorname{D}(\hat{\Gamma})}(A,A[t]).\] Now let \(x\in\mathrm{D}(\mathrm{Mod}\,\hat{\Gamma}_{\mathfrak{p}})\) such that \(\mathbf{R}\mathrm{Hom}_{\hat{\Gamma}_{\mathfrak{p}}}(A,x)=0.\) Then for all \(t\in\mathbb{Z}\) \[0=\mathrm{H}^{t}(\mathbf{R}\mathrm{Hom}_{\hat{\Gamma}_{\mathfrak{p}}}(A,x)) =\mathrm{Hom}_{\mathrm{D}(\hat{\Gamma}_{\mathfrak{p}})}(A,x[t])\] \[=\mathrm{Hom}_{\mathrm{D}(\hat{\Gamma}_{\mathfrak{p}})}(T_{ \mathfrak{p}}\otimes_{S_{\mathfrak{p}}}\hat{S}_{\mathfrak{p}},x[t])\] \[=\mathrm{Hom}_{\mathrm{D}(\hat{\Gamma}_{\mathfrak{p}})}(T_{ \mathfrak{p}},\mathrm{res}(x)[t]).\] Thus \(\mathbf{R}\mathrm{Hom}_{\hat{\Gamma}_{\mathfrak{p}}}(T_{\mathfrak{p}}, \mathrm{res}(x))=0,\) so since \(T_{\mathfrak{p}}\) is tilting, \(\mathrm{res}(x)\cong 0.\) Since restriction of scalars is exact, \(x\cong 0\) (3)\(\Rightarrow\)(1). By assumption \[0=\mathrm{Hom}_{\mathrm{D}(\hat{\Gamma})}(T_{\mathfrak{p}}\otimes_{S_{ \mathfrak{p}}}\hat{S},T_{\mathfrak{p}}[t]\otimes_{S_{\mathfrak{p}}}\hat{S}) \cong\mathrm{Hom}_{\mathrm{D}(\hat{\Gamma})}(T,T[t])\otimes_{S_{\mathfrak{p}} }\hat{S}.\] Thus \(\mathrm{Hom}(T,T[t])=0\) for all \(t\neq 0,\) since \(-\otimes_{S_{\mathfrak{p}}}\hat{S}\) is faithfully flat [M, Theorem 7.2]. By assumption \(T\in\mathrm{K}^{\mathrm{b}}(\mathrm{proj}\,\Gamma)\). 
Let \(x\in\mathrm{D}(\mathrm{Mod}\,\Gamma)\) such that \(\mathbf{R}\mathrm{Hom}(T,x)=0.\) Then for all \(t\in\mathbb{Z},\mathrm{Hom}_{\mathrm{D}(\Gamma)}(T,x[t])=0.\) Thus, tensoring by \(\otimes S_{\mathfrak{p}}\), using Lemma 6.2 \[\mathrm{Hom}_{\mathrm{D}(\Gamma)}(T,x[t])\otimes_{S}S_{\mathfrak{p}}\otimes_{S _{\mathfrak{p}}}\hat{S}_{\mathfrak{p}}\cong\mathrm{Hom}_{\mathrm{D}(\hat{ \Gamma}_{\mathfrak{p}})}(\hat{T}_{\mathfrak{p}},\hat{x}_{\mathfrak{p}}[t]).\] Since \(\hat{T}_{\mathfrak{p}}\) is tilting, \(\hat{x}_{\mathfrak{p}}\cong 0.\) Thus, since \(-\otimes S_{\mathfrak{p}}\) and \(-\otimes\hat{S}_{\mathfrak{p}}\) are exact, for all \(t\in\mathbb{Z}\) we have \[\mathrm{H}^{t}(x)\otimes_{S_{\mathfrak{p}}}S_{\mathfrak{p}}\otimes_{S_{ \mathfrak{p}}}\hat{S}_{\mathfrak{p}}\cong\mathrm{H}^{t}(x\otimes_{S_{ \mathfrak{p}}}S_{\mathfrak{p}}\otimes_{S_{\mathfrak{p}}}\hat{S}_{\mathfrak{p}} )=0.\] But \(\otimes_{S_{\mathfrak{p}}}\hat{S}_{\mathfrak{p}}\) is faithfully flat, so by Lemma 6.1\(\mathrm{H}^{t}(x)\cong 0\) for all \(t\in\mathbb{Z}\), so \(x\cong 0\) as claimed. **Corollary 6.5**.: _Suppose \(T\in\mathrm{K}^{\mathrm{b}}(\mathrm{proj}\,\Gamma)\) and choose \(\mathfrak{m}\in\mathrm{Max}\,S\). If \(T_{\mathfrak{p}}\) is tilting for all \(\mathfrak{p}\neq\mathfrak{m}\) and \(T_{\mathfrak{m}}\otimes\hat{S}\) is tilting, then \(T\) is tilting._ Proof.: The proof follows from Lemma 6.4 by considering the two cases, one when the prime ideal \(\mathfrak{p}\) is not equal the maximal ideal \(\mathfrak{m}\) and another when \(\mathfrak{p}=\mathfrak{m}\). ### One sided tilting on \(\Lambda\) We now revert to setting 2.2, where \(\Lambda\) is derived equivalent to \(U\) where \(f\colon U\to\mathrm{Spec}\,R\) is a flopping contraction and \(R\) is an isolated cDV. This subsection proves that \(C_{\Lambda}\in\mathrm{D}^{\mathrm{b}}(\mathrm{mod}\,\Lambda)\) constructed in Proposition 4.1 is a one-sided tilting complex. Recall that the setup 3.1 defines \(\mathrm{B}_{i}\) and \(\mathrm{B}\). **Lemma 6.6**.: _If \(y\in\mathrm{D}^{\mathrm{b}}(\mathrm{mod}\,\mathrm{B})\) is perfect, then \(y\otimes_{\mathrm{B}}^{\mathbf{L}}\mathrm{B}_{i_{\mathrm{B}}}\) is perfect._ Proof.: The exact sequence \(0\to I_{i}\to\mathrm{B}\to\mathrm{B}_{i}\to 0\) gives a triangle, \(y\otimes_{\mathrm{B}}^{\mathbf{L}}I_{i}\to y\to y\otimes_{\mathrm{B}}^{ \mathbf{L}}\mathrm{B}_{i}.\) Now, \(-\otimes_{\mathrm{B}}^{\mathbf{L}}I_{i}\) is an equivalence by 5.1, so \(y\otimes_{\mathrm{B}}^{\mathbf{L}}I_{i}\) is perfect. By the 2 out of 3 property \(y\otimes_{\mathrm{B}}^{\mathbf{L}}\mathrm{B}_{i_{\mathrm{B}}}\) is perfect. **Definition 6.7**.: _Let \(\mathcal{T}\) be a triangulated category. Then we say an object \(A\in\mathcal{T}\) is homologically finite if for any object \(B\in\mathcal{T}\), all \(\mathrm{Hom}_{\mathcal{T}}(A,B[i])\) are trivial except for a finite number of \(i\in\mathbb{Z}\)._ By [14, Proposition 2.18], for module-finite algebras, being a perfect complex is equivalent to being a homologically finite complex. 
**Proposition 6.8**.: \(C_{\Lambda}\in\mathrm{K}^{\mathrm{b}}(\mathrm{proj}\,\Lambda)\)_._ Proof.: The exact sequence \(0\to I_{i}\to\mathrm{B}\to\mathrm{B}_{i}\to 0\) induces a triangle \[\begin{CD}\left(\hat{\Lambda}\otimes\mathbb{M}\otimes\uptau_{\alpha}\otimes I_{i} \right)_{\mathrm{B}}\xrightarrow{\ \ \ \ }\left(\hat{\Lambda}\otimes\mathbb{M}\otimes\uptau_{\alpha}\right)_{\mathrm{B}} \xrightarrow{\ \ \ }\left(\hat{\Lambda}\otimes\mathbb{M}\otimes\uptau_{\alpha}\otimes\mathrm{B}_{i} \right)_{\mathrm{B}}\xrightarrow{\ \ \ \ \ }\\ so (4.D) becomes \(C\to{}_{\Lambda}\Lambda_{\Lambda}\to{}_{\Lambda}M\otimes^{\mathbf{L}}_{\hat{ \Lambda}}\hat{\Lambda}_{\Lambda}\to C[1].\) Now observe that \[M_{\hat{\Lambda}}\cong\mathbb{F}\Phi_{\alpha}\left(\Phi_{\alpha}^{-1}\mathbb{F }^{-1}(\hat{\Lambda})\otimes^{\mathbf{L}}_{\mathrm{B}}B_{i\mathrm{B}}\right).\] Since \(\Phi_{\alpha}^{-1}\mathbb{F}^{-1}(\hat{\Lambda})\) is perfect by above, \(\Phi_{\alpha}^{-1}\mathbb{F}^{-1}(\hat{\Lambda})\otimes^{\mathbf{L}}_{\mathrm{B }}B_{i\mathrm{B}}\) is perfect by Lemma 6.6. Since \(\Phi_{\alpha},\mathbb{F}\) are equivalences, \(M\in\mathrm{K}^{\mathrm{b}}(\operatorname{proj}\hat{\Lambda})\). To show that \(M\otimes\hat{\Lambda}_{\Lambda}\in\mathrm{K}^{\mathrm{b}}(\operatorname{proj} \Lambda)\), note that the complex \(M\in\mathrm{D}^{\mathrm{b}}_{\mathrm{fl}}(\operatorname{mod}\hat{\Lambda})\) since \(\mathrm{B}_{i}\in\operatorname{fl}\mathrm{B}\). Thus \[0 =\operatorname{Hom}_{\mathrm{D}^{\mathrm{b}}(\Lambda)}(M\otimes \hat{\Lambda}_{\Lambda},x[t])\] \[\xrightarrow{\reflectbox{eq:M}}\operatorname{Hom}_{\mathrm{D}^{ \mathrm{b}}(\hat{\Lambda})}(M\otimes\hat{\Lambda}\otimes_{\Lambda}\hat{ \Lambda},\hat{x}_{\mathfrak{m}}[t])=0 \text{(since Supp}\,N\otimes\hat{\Lambda}=\{\mathfrak{m}\})\] \[\xrightarrow{\reflectbox{eq:M}}\operatorname{Hom}_{\mathrm{D}^{ \mathrm{b}}(\hat{\Lambda})}(M,\hat{x}_{\mathfrak{m}}[t])=0\] Since \(M\in\mathrm{K}^{\mathrm{b}}(\operatorname{proj}\hat{\Lambda})\) by above, it is homologically finite. Thus, by above, \(M\otimes\hat{\Lambda}\) is also homologically finite, which in turn implies it is perfect. Considering (4.D), by the \(2\) out of \(3\) property, \(C_{\Lambda}\in\mathrm{K}^{\mathrm{b}}(\operatorname{proj}\Lambda)\), since both \(\Lambda\) and \(M\otimes\hat{\Lambda}\) are. 
**Lemma 6.9**.: _As a right \(\Lambda\)-module, \(\operatorname{Ext}^{\mathrm{t}}_{\Lambda}(C_{\Lambda},C_{\Lambda})=0\) for all \(t\neq 0.\) Thus C is one sided tilting complex._ Proof.: As a triangle of right \(\Lambda\)-modules, (4.D) can be rewritten as \[C_{\Lambda}\to\Lambda_{\Lambda}\to\hat{\Lambda}\otimes^{\mathbf{L}}_{\hat{ \Lambda}}\left(\mathbb{M}\otimes^{\mathbf{L}}_{\Lambda}\tau_{\alpha}\otimes^{ \mathbf{L}}_{\mathrm{B}}Z_{\alpha,i}\right)\otimes^{\mathbf{L}}_{\hat{ \Lambda}}\hat{\Lambda}_{\Lambda}\to,\] which when localized at a prime ideal \(\mathfrak{p}\) becomes \[C_{\Lambda}\otimes_{R}R_{\mathfrak{p}}\to\Lambda_{\Lambda}\otimes_{R}R_{ \mathfrak{p}}\to\left(\hat{\Lambda}\otimes^{\mathbf{L}}_{\hat{\Lambda}} \mathbb{M}\otimes^{\mathbf{L}}_{\Lambda}\tau_{\alpha}\otimes^{\mathbf{L}}_{ \mathrm{B}}Z_{\alpha,i}\otimes^{\mathbf{L}}_{\hat{\Lambda}}\hat{\Lambda}_{ \Lambda}\right)\otimes_{R}R_{\mathfrak{p}}\to \tag{6.A}\] When \(\mathfrak{p}\neq\mathfrak{m}\), since \(\hat{\Lambda}\otimes\mathbb{M}\otimes\tau_{\alpha}\otimes Z_{\alpha,i}\otimes \hat{\Lambda}\) is supported at only the maximal ideal \(\mathfrak{m}\), \((\hat{\Lambda}\otimes\mathbb{M}\otimes\tau_{\alpha}\otimes Z_{\alpha,i}\otimes \hat{\Lambda})_{\mathfrak{p}}=0\). Thus \(C_{\mathfrak{p}}\cong\Lambda_{\mathfrak{p}}\), which is tilting in \(\mathrm{D}^{\mathrm{b}}(\Lambda_{\mathfrak{p}})\). When \(\mathfrak{p}=\mathfrak{m}\), by the last line in the proof of Proposition 5.3(2), there is an isomorphism of \(\Lambda\)-\(\hat{\Lambda}\) bimodules \[C\otimes_{\Lambda}\hat{\Lambda}\cong\hat{\Lambda}\otimes\mathbb{M}\otimes \boldsymbol{\tau}_{\alpha}\otimes I_{i}\otimes\boldsymbol{\tau}_{\alpha}^{ \star}\otimes\mathbb{M}^{\star}.\] Considering these as simply right \(\hat{\Lambda}\)-modules, it follows that \[C\otimes_{R}R_{\mathfrak{m}}\otimes\hat{R}\cong\mathbb{F}\Phi_{\alpha} \Phi_{i}^{2}\Phi_{\alpha}^{-1}\mathbb{F}^{-1}(\hat{\Lambda}).\] Since \(\hat{\Lambda}\) is tilting in \(\mathrm{D}^{\mathrm{b}}(\hat{\Lambda})\) and \(\mathbb{F},\Phi_{\alpha},\Phi_{i}\) are all equivalences, it follows that \(C\otimes_{R}R_{\mathfrak{m}}\otimes\hat{R}\) is also tilting. Since \(C_{\Lambda}\in\mathrm{K}^{\mathrm{b}}(\operatorname{proj}\Lambda)\) by Proposition 6.8, using Corollary 6.5 it follows that \(C\) is a one-sided tilting complex. ### Two sided tilting on \(\Lambda\) **Definition 6.10**.: _A bimodule \({}_{\Gamma}X_{\Lambda}\), is called a two-sided tilting complex if_ 1. \(\Gamma\to\operatorname{End}_{\mathrm{D}(\Lambda)}(X,X)\) _induced by_ \(-\otimes^{\mathbf{L}}_{\Gamma}X\) _is an isomorphism._ 2. \(X_{\Lambda}\) _is a one sided tilting complex in the sense of Definition_ 6.3_._ **Theorem 6.11**.: \({}_{\Lambda}C_{\Lambda}\) _is a two-sided tilting complex._ Proof.: We know \(C_{\Lambda}\) is one sided tilting by Lemma 6.9. Thus it suffices to prove that \[f\colon\Lambda\xrightarrow{-\otimes^{\mathbf{L}}_{\Lambda}C}\operatorname{ Hom}_{\mathrm{D}(\Lambda)}(C,C)\] is an isomorphism. Isomorphisms of \(R\)-modules can be checked locally. So, it suffices to show that \[\Lambda_{\mathfrak{p}}\xrightarrow{f_{\mathfrak{p}}=-\otimes^{\mathbf{L}}_{ \mathfrak{p}_{\mathfrak{p}}}C_{\mathfrak{p}}}\operatorname{Hom}_{\mathrm{D}( \Lambda_{\mathfrak{p}})}(C_{\mathfrak{p}},C_{\mathfrak{p}}), \tag{6.B}\] is an isomorphism, for all \(\mathfrak{p}\in\operatorname{Spec}R\). 
When \(\mathfrak{p}\neq\mathfrak{m}\), considering the triangle (4.D), \({}_{\Lambda_{\mathfrak{p}}}C_{\mathfrak{p}_{\mathfrak{a}_{\mathfrak{a}_{ \mathfrak{a}_{\mathfrak{a}_{\mathfrak{a}_{\mathfrak{a}_{\mathfrak{a}_{\mathfrak{a}_{a}}}}}}}}}}\), thus \(f_{\mathfrak{p}}\) is an isomorphism. When \(\mathfrak{p}=\mathfrak{m}\), now \(f_{\mathfrak{m}}\) is an isomorphism if and only if \(\hat{f}_{\mathfrak{m}}=\hat{f}\) is an isomorphism. By the universal property of completion [M, p.57], \(\hat{f}\) is a continuous ring homomorphism such that commutes. By Lemma 6.2, since \(C_{\Lambda}\) is perfect commutes. But by Corollary 5.4 and setting the diagram commutes, and thus by uniqueness, \(\hat{f}=s\circ\operatorname{G}_{\mathfrak{\alpha},i}^{-1}\circ r^{-1}.\) Since \(r,s\) are bijections, and so is \(\operatorname{G}_{\mathfrak{\alpha},i}^{-1}\), it follows that the composition \(\hat{f}\) is an isomorphism as required. The following is part of the main result of [R2], specifically [R3, Theorem 1.1]. **Theorem 6.12**.: _Let \(\Gamma\) and \(\Lambda\) be module finite \(R\)-algebras, and \(X\) a complex of \(\Gamma\)-\(\Lambda\) bimodules. The following are equivalent._ 1. \(-\otimes_{\Gamma}^{\mathrm{L}}X\colon\operatorname{D}(\operatorname{Mod} \Gamma)\to\operatorname{D}(\operatorname{Mod}\Lambda)\) _is an equivalence._ 2. \(-\otimes_{\Gamma}^{\mathrm{L}}X\) _induces an equivalence_ \(\operatorname{K}^{\mathrm{b}}(\operatorname{proj}\Gamma)\to\operatorname{K}^ {\mathrm{b}}(\operatorname{proj}\Lambda)\)_._ 3. \(-\otimes_{\Gamma}^{\mathrm{L}}X\) _induces an equivalence_ \(\operatorname{D}^{\mathrm{b}}(\operatorname{mod}\Gamma)\to\operatorname{D}^ {\mathrm{b}}(\operatorname{mod}\Lambda)\)_._ 4. \(X\) _is a two sided tilting complex._ **Corollary 6.13**.: _For any choice \((\mathfrak{\alpha},i)\) as in Setup 3.1, \({}_{\Lambda}C_{\Lambda}\) in SS4 is a two-sided tilting complex, giving rise to the autoequivalence \(\operatorname{\mathsf{Twist}}_{\mathfrak{\alpha},i}\) of \(\operatorname{D}^{\mathrm{b}}(\operatorname{mod}\Lambda)\)._ Proof.: \({}_{\Lambda}C_{\Lambda}\) as constructed in Proposition 4.1 is a two sided tilting complex on \(\Lambda\) by Theorem 6.11. Thus \({}_{\Lambda}C_{\Lambda}\) induces a derived autoequivalence on the triangulated category \(\operatorname{D}^{\mathrm{b}}(\operatorname{mod}\Lambda)\), by applying Theorem 6.12. ### Corollaries The following are immediate. **Remark 6.14**.: \(\operatorname{\mathsf{Twist}}_{\mathfrak{\alpha},i}^{-1}=\operatorname{ \mathsf{Twist}}_{\mathfrak{\alpha},i}^{*}\), since the adjoint to an equivalence is necessarily the inverse. **Notation 6.15**.: As in the proof of Theorem 6.11, set \[\operatorname{G}_{\mathfrak{\alpha},i}:=\operatorname{D}(\hat{\Lambda}) \xrightarrow{r^{-1}}\operatorname{D}(\Lambda)\xrightarrow{\phi_{-\mathfrak{ \alpha}}^{-1}}\operatorname{D}(\hat{\Lambda})\xrightarrow{\phi_{-\mathfrak{ \alpha}}^{2}}\operatorname{D}(\hat{\Lambda})\xrightarrow{\phi_{-\mathfrak{ \alpha}}^{2}}\operatorname{D}(\hat{\Lambda})\xrightarrow{\pi}\operatorname{D}( \hat{\Lambda}).\] **Lemma 6.16**.: _There are commutative diagrams_ Proof.: The first and third diagrams are Proposition 5.3(2) and Corollary 5.4 respectively. The second follows from the first since \(\operatorname{G}_{\boldsymbol{\alpha},i}\) and \(\operatorname{Twist}_{\boldsymbol{\alpha},i}\) are invertible. 
Indeed, by Proposition 5.3(2) \[\operatorname{Twist}_{\boldsymbol{\alpha},i}\circ\operatorname{F}\circ \operatorname{G}_{\boldsymbol{\alpha},i}^{-1}=\operatorname{F}\circ \operatorname{G}_{\boldsymbol{\alpha},i}\circ\operatorname{G}_{\boldsymbol{ \alpha},i}^{-1}=\operatorname{F} \tag{6.C}\] thus pre-composing (6.C) with \(\operatorname{Twist}_{\boldsymbol{\alpha},i}^{-1}\), gives the desired result \(\operatorname{F}\circ\operatorname{G}_{\boldsymbol{\alpha},i}^{-1}= \operatorname{Twist}_{\boldsymbol{\alpha},i}^{-1}\circ\operatorname{F}\). The fourth follows from the third similarly. ## 7. Group actions on \(\operatorname{D}^{\mathrm{b}}(\operatorname{coh}U)\) In this section we study the twist functor defined in SS4 from a geometric viewpoint. We use this to produce a group action on the derived category of coherent sheaves of \(U\), where the irreducible rational curves may or may not be individually floppable. ### Geometric Twist Let \(f\colon U\to\operatorname{Spec}R\) be an algebraic flopping contraction as in 2.2. Now, fixing notation as in SS2.1, there is a tilting bundle \(\mathcal{V}=\mathcal{O}_{U}\oplus\mathcal{N}\) on \(U\) inducing a derived equivalence (7.A) **Definition 7.1**.: 1. _The geometric twist functor_ \(\operatorname{\mathsf{GeoTwist}}_{\boldsymbol{\alpha},i}\) _is defined to be the composition_ \[\begin{CD}\operatorname{D}^{\mathrm{b}}(\operatorname{coh}U)@>{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\cdot **Definition 7.3**.: _Consider geometric twist kernel_ \[\mathcal{W}:=\mathbf{R}\mathrm{Hom}_{\Lambda}(C,\Lambda)\otimes_{\Lambda\otimes_{ \mathbb{C}\Lambda^{\mathrm{op}}}}^{\mathbf{L}}\mathcal{V}\boxtimes\mathcal{V}^{*},\] _and the inverse geometric twist kernel_ \[\mathcal{W}^{*}:=C\otimes_{\Lambda\otimes_{\mathbb{C}\Lambda^{\mathrm{op}}}}^{ \mathbf{L}}\mathcal{V}\boxtimes\mathcal{V}^{*}.\] The names are justified by following Lemma. **Lemma 7.4**.: \(\mathsf{GeoTwist}_{\mathfrak{x},i}\cong\mathrm{FM}(\mathcal{W})\) _and \(\mathsf{GeoTwist}_{\mathfrak{x},i}^{*}\cong\mathrm{FM}(\mathcal{W}^{*}).\)_ Proof.: The proof is word for word [13, Lemma 6.16]. Now, suppose that the flopping locus is a chain of curves \(C,\) consider the open set \(W=U\setminus C\) and write \(j\colon W\to U\) for the inclusion. The contravariant functor \(j^{*}\colon U\to W\) is called the restriction on \(U.\) **Lemma 7.5**.: _For any \((\mathfrak{x},i),\) then \(j^{*}\circ\mathsf{GeoTwist}_{\mathfrak{x},i}\cong j^{*}.\)_ Proof.: Consider \(\mathbb{F}\Phi_{\mathfrak{x}}(\mathrm{B}_{i}),\) which is a \(\operatorname{fl}\Lambda\)-module and has finite projective dimension, and set \(\mathcal{E}_{\mathfrak{x},i}:=\mathbb{F}\Phi_{\mathfrak{x}}(\mathrm{B}_{i}) \otimes_{\Lambda}^{\mathbf{L}}\mathcal{V}\) as its image across the derived equivalence. By [13, Proposition 7.14] there exists an exact triangle \[\mathbf{R}\mathrm{Hom}_{U}(\mathcal{E}_{\mathfrak{x},i},-)\otimes_{\mathrm{B} _{i}}^{\mathbf{L}}\mathcal{E}_{\mathfrak{x},i}\to\mathsf{Id}\to\mathsf{GeoTwist} _{\mathfrak{x},i}\to, \tag{7.B}\] and applying \(j^{*}\) to (7.B) yields \[\mathbf{R}\mathrm{Hom}_{U}(j^{*}\mathcal{E}_{\mathfrak{x},i},-)\otimes_{ \mathrm{B}_{i}}^{\mathbf{L}}j^{*}\mathcal{E}_{\mathfrak{x},i}\to j^{*}\to j^{*} \mathsf{GeoTwist}_{\mathfrak{x},i}\to. 
\tag{7.C}\] We know that \(\mathcal{E}_{\mathfrak{x},i}\) is supported on the curve \(C,\) indeed by Claim 3.4, \(\mathbb{F}\Phi_{\mathfrak{x}}(\mathrm{B}_{i})\) is supported at only the maximal ideal \(\mathfrak{m}.\) Thus \(j^{*}\mathcal{E}_{\mathfrak{x},i}=0,\) so the result holds by properties of triangles in (7.C). **Corollary 7.6**.: _For any \((\mathfrak{x},i),\) then \(j^{*}\circ\mathsf{GeoTwist}_{\mathfrak{x},i}^{-1}\cong j^{*}.\)_ Proof.: Since \(j^{*}\circ\mathsf{GeoTwist}_{\mathfrak{x},i}^{-1}\circ\mathsf{GeoTwist}_{ \mathfrak{x},i}\cong j^{*}\overset{7.5}{\cong}j^{*}\circ\mathsf{GeoTwist}_{ \mathfrak{x},i},\) the result follows by right composing with \(\mathsf{GeoTwist}_{\mathfrak{x},i}^{-1}\). **Proposition 7.7**.: _Let \(H=\mathsf{GeoTwist}_{\mathfrak{x},i_{1}}^{\pm}\dots\mathsf{GeoTwist}_{\mathfrak{ x},i_{\ell}}^{\pm 1}\). If \(H\) is isomorphic to \(-\otimes\mathcal{L}\) where \(\mathcal{L}\) is a line bundle, then \(\mathcal{L}\cong\mathcal{O}.\) In particular, \(H\cong\mathsf{Id}.\)_ Proof.: First, by Lemma 7.5 and Corollary 7.6, \(j^{*}H(\mathcal{O}_{U})=j^{*}\mathcal{O}_{U}=\mathcal{O}_{W}.\) But \(H(\mathcal{O}_{U})\cong\mathcal{O}\otimes\mathcal{L}\) by assumption. The line bundle \(\mathcal{L}\) is reflexive and \[\mathcal{L} =j_{*}j^{*}\mathcal{L}\] ( by [ 5, 2.11] since \[C\subseteq U\] has codimension two) \[\cong j_{*}\mathcal{O}_{W}\] ( \[j^{*}\mathcal{L}\cong\mathcal{O}_{W}\] ) \[\cong j_{*}j^{*}\mathcal{O}_{U} (j^{*}\mathcal{O}_{U}\cong\mathcal{O}_{W})\] \[\cong\mathcal{O}_{W} (\text{again by [\lx@sectionsign 2.11]})\] So \(\mathcal{L}\cong\mathcal{O}_{W},\) as required. Write \(\mathcal{O}_{x}\) for the skyscraper sheaf of a closed point \(x.\) **Lemma 7.8**.: _For any \((\mathfrak{x},i)\) and any \(x\notin C,\mathsf{GeoTwist}_{\mathfrak{x},i}^{\pm 1}(\mathcal{O}_{x})\cong \mathcal{O}_{x}.\)_ Proof.: Applying (7.B) to \(\mathcal{O}_{x}\) yields \[\mathbf{R}\mathrm{Hom}_{U}(\mathcal{E}_{\mathfrak{x},i},\mathcal{O}_{x}) \otimes_{\mathrm{B}_{i}}^{\mathbf{L}}\mathcal{E}_{\mathfrak{x},i}\to\mathcal{O}_ {x}\to\mathsf{GeoTwist}_{\mathfrak{x},i}(\mathcal{O}_{x})\to, \tag{7.D}\] \(\mathbf{R}\mathrm{Hom}_{U}(\mathcal{E}_{\mathfrak{x},i},\mathcal{O}_{x})=0,\) since \(x\notin C,\) thus by properties of triangles, \(\mathsf{GeoTwist}_{\mathfrak{x},i}(\mathcal{O}_{x})\cong\mathcal{O}_{x}.\) Applying \(\mathsf{GeoTwist}_{\mathfrak{x},i}^{-1},\) gives \(\mathcal{O}_{x}\cong\mathsf{GeoTwist}_{\mathfrak{x},i}^{-1}(\mathcal{O}_{x}).\) **Proposition 7.9**.: _Suppose that \(f\colon U\to\mathrm{Spec}\,R\) is a flopping contraction as in Setup 2.2, then there exists group homomorphisms_ \[\pi_{1}(\mathbb{C}^{n}\setminus\mathcal{H})\xrightarrow{g^{\#}}\mathrm{Auteq} \,\mathrm{D}^{\mathrm{b}}(\mathrm{coh}\,U)\] Proof.: We prove the \(g^{\mathsf{aff}}\) version, with the finite version being easier. It is known that \(\pi_{1}(\mathbb{C}^{n}\setminus\mathcal{Y}^{\mathsf{aff}})\) is generated by \(\ell_{\alpha,i}\), where these correspond to a monodromy around a hyperplane \(i\) through an atom \(\alpha\) in the hyperplane arrangement \(\mathcal{Y}^{\mathsf{aff}}\). The map from the free group generated by the \(\ell_{\alpha,i}\) to \(\operatorname{\mathsf{Auteq}}\operatorname{\mathsf{D}}^{\mathrm{b}}(\operatorname {\operatorname{coh}}U)\) sending \[\ell_{\alpha,j}\mapsto\mathsf{GeoTwist}_{\alpha,i}\] is a well defined group homomorphism. 
It suffices to show that for any arbitrary relation \(\ell_{\alpha,i_{1}}^{\pm 1},i_{\alpha}^{\pm 1}\ell_{\alpha,i_{2}}^{\pm 1} \ldots\ell_{\alpha,i_{t}}^{\pm 1}=1\) in \(\pi_{1}(\mathbb{C}^{n}\setminus\mathcal{Y}^{\mathsf{aff}})\), then \(g^{\mathsf{aff}}(\ell_{\alpha_{1},i_{1}}^{\pm 1}\ell_{\alpha_{2},i_{2}}^{\pm 1} \ldots\ell_{\alpha_{t},i_{t}}^{\pm 1})=\operatorname{\mathsf{Id}}.\) We recall that the skyscraper sheaf \(\mathcal{O}_{x}\) is a sheaf supported at a single point, and we consider two cases, namely \(x\in C\) and \(x\notin C\). When \(x\in C\), recalling the notation in (2.A), consider the diagram The top most rectangle commutes by Lemma 6.16, and the bottom by definition. The left and right commute by [25]. Further since [25] proves the existence of the affine action in the complete local setting, \(\operatorname{G}_{\alpha_{1},i_{1}}^{\pm 1}\operatorname{G}_{\alpha_{2},i_{2}}^{ \pm 1}\cdots\operatorname{G}_{\alpha_{t},i_{t}}^{\pm 1}\cong\operatorname{\mathsf{Id}}.\) Thus if \(x\in C\), then \(x\in\mathcal{U}\), so \[\mathsf{GeoTwist}_{\alpha_{1},i_{1}}^{\pm 1}\operatorname{\mathsf{GeoTwist}}_{ \alpha_{2},i_{2}}^{\pm 1}\cdots\mathsf{GeoTwist}_{\alpha_{t},i_{t}}^{\pm 1}( \mathcal{O}_{x})=\mathcal{O}_{x}\] follows. If \(x\notin C\), then by repeated use of Lemma 7.8, \[\mathsf{GeoTwist}_{\alpha_{1},i_{1}}^{\pm 1}\operatorname{\mathsf{GeoTwist}}_{ \alpha_{2},i_{2}}^{\pm 1}\cdots\mathsf{GeoTwist}_{\alpha_{t},i_{t}}^{\pm 1}( \mathcal{O}_{x})=\mathcal{O}_{x}.\] Thus in all cases \(\mathcal{O}_{x}\) is fixed under \(\mathsf{GeoTwist}_{\alpha_{1},i_{1}}^{\pm 1}\operatorname{\mathsf{GeoTwist}}_{ \alpha_{2},i_{2}}^{\pm 1}\cdots\mathsf{GeoTwist}_{\alpha_{t},i_{t}}^{\pm 1}\). As in [25, Proposition 7.18], also see [25, 26], it follows that \[\mathsf{GeoTwist}_{\alpha_{1},i_{1}}^{\pm 1}\operatorname{\mathsf{GeoTwist}}_{ \alpha_{2},i_{2}}^{\pm 1}\cdots\mathsf{GeoTwist}_{\alpha_{t},i_{t}}^{\pm 1}\cong- \otimes\mathcal{L},\] for some line bundle. By Proposition 7.7 the result follows. ## 8. The Quasi-Projective Case ### Autoequivalences on \(X\) In this subsection we globalise the algebraic flopping contraction \(f\colon U\to\operatorname{Spec}R\) to obtain autoequivalences on quasi-projective varieties. We now consider the following setup. **Setup 8.1**.: Let \(h\colon X\to X_{\mathrm{con}}\) be a \(3\)-fold flopping contraction, where X is quasi projective and has only Gorenstein terminal singularities. To describe an autoequivalence on \(X\), we consider the following, where the preimage of each point \(p_{k}\) is a finite chain of curves. Associated to each singular point \(p_{k}\) is a corresponding finite hyperplane arrangement \(\mathcal{H}_{k}\) and a corresponding infinite hyperplane arrangement \(\mathcal{H}_{k}^{\text{aff}}\). Most of what follows is adapted from the techniques in [10], and to make this paper self contained we summarise it below. Choose an affine open subset \(\operatorname{Spec}R\) of \(X_{\operatorname{con}}\) containing only one \(p_{k}\), and set \(U\) to be its preimage. (8.A) From this, consider the following diagram Below, it will be convenient to change the structure sheaf and consider general ringed spaces \((X,\mathcal{A})\). By basic theory, a coherent sheaf on \(X\) is defined with reference to the sheaf of rings that contains its geometric information. 
When \(\mathcal{A}\) is fixed, we define an abelian category of coherent sheaves \(\operatorname{coh}(X,\mathcal{A})\), where if \(\mathcal{A}=\mathcal{O}_{X}\) then \(\operatorname{coh}(X,\mathcal{A})\cong\operatorname{coh}X\). Now, reinterpreting (7.A) implies the equivalence \[\operatorname{D}^{\text{b}}(\operatorname{coh}U)\xrightarrow{\sim} \operatorname{D}^{\text{b}}(\operatorname{coh}(\operatorname{Spec}R, \operatorname{End}_{R}(f_{*}\mathcal{V}))),\] where \(\operatorname{End}_{U}(\mathcal{V})\cong\operatorname{End}_{R}(f_{*} \mathcal{V})\cong\Lambda\) as in SS2.1. In order to lift the derived equivalence from \(\hat{\Lambda}\) to \(\Lambda\), the main trick in SS3 is to consider the restriction and extension of scalars between module categories (8.B) then replace the bimodule morphisms defined in Proposition 4.1 as follows \[\left(\hat{\Lambda}\to{}_{\Lambda}\hat{\Lambda}\otimes(\mathbb{M}\otimes \tau_{\mathfrak{a}}\otimes Z_{\mathfrak{a},i})\otimes\hat{\Lambda}_{\Lambda} \right)\leadsto\left(\Lambda-\hat{\Lambda}\to{}_{\Lambda}\hat{\Lambda}\otimes( \mathbb{M}\otimes\tau_{\mathfrak{a}}\otimes Z_{\mathfrak{a},i})\otimes\hat{ \Lambda}_{\Lambda}\right).\] We now repeat this trick using sheaves of algebras. Reverting to Setup 8.1, let \(\mathcal{P}=\mathcal{O}_{X}\oplus\mathcal{P}_{0}\) be the local progenerator on \(X\) of [11], 3.31. As in [10, Proposition 2.5(2)] the fibre of \(h\) has dimension of at most one, so by [10, Assumption 2.3], there is an equivalence \[\mathbf{R}h_{*}\mathbf{R}\mathcal{H}om_{X}(\mathcal{P},-)\colon\operatorname{D }^{\text{b}}(\operatorname{coh}X)\to\operatorname{D}^{\text{b}}(\operatorname{ coh}(X_{\operatorname{con}},\mathcal{E}nd_{X_{\operatorname{con}}}(h_{*} \mathcal{P}))).\] Set \(\mathcal{A}:=\mathcal{E}nd_{X_{\operatorname{con}}}(h_{*}\mathcal{P}^{*})\), so that \(\mathcal{A}\cong(\mathcal{E}nd_{X_{\operatorname{con}}}(h_{*}\mathcal{P}))^{ \text{op}}\). There are now functors \[\operatorname{Qcoh}(\operatorname{Spec}R,\operatorname{End}(f_{*}\mathcal{P}^ {*}))\xrightarrow{i^{-1}}\operatorname{Qcoh}(X_{\operatorname{con}},\mathcal{ A})\] by e.g [12, 18.3.2], where the inverse image functor \(i^{-1}\) is left adjoint to the push forward \(i_{*}\). Similarly, there is an adjointion \[\operatorname{Qcoh}\left(\operatorname{Spec}R,i^{-1}\mathcal{A}\otimes_{ \mathbb{C}}i^{-1}\mathcal{A}^{\text{op}}\right)\xrightarrow{i^{-1}}\operatorname {Qcoh}(X_{\operatorname{con}},\mathcal{A}\otimes_{\mathbb{C}}\mathcal{A}^{ \text{op}})\ \,\] where \(i^{-1}\mathcal{A}=\operatorname{End}(f_{*}\mathcal{P}^{*})\) and \(i^{-1}\mathcal{A}^{\text{op}}=\operatorname{End}(f_{*}\mathcal{P}^{*})^{ \text{op}}\). Now \(i^{-1}({}_{A}\mathcal{A}_{\mathcal{A}}):={}_{\Lambda}\Lambda_{\Lambda}\), and for any choice of \((\mathfrak{a},i)\) in Setup 3.1, there is a bimodule map \(\Lambda\to Q\) by SS4 where \(\Lambda\hat{\Lambda}\otimes(\mathbb{M}\otimes\uptau_{\mathfrak{a}}\otimes Z_{ \mathfrak{a},i})\otimes\hat{\Lambda}_{\Lambda}.\) Re-interpreting this as a bimodule map \(i^{-1}\mathcal{A}\to Q,\) we push this forward to give \(i_{*}i^{-1}\mathcal{A}\to i_{*}Q.\) We now play the same trick as in (8.B) above, namely we replace \[\left(i_{*}i^{-1}\mathcal{A}\to i_{*}Q\right)\rightsquigarrow\left(\mathcal{A} \to i_{*}i^{-1}\mathcal{A}\to i_{*}Q\right).\] Thus there is a bimodule map \[\mathcal{A}\to{}_{\mathcal{A}}(i_{*}Q)_{\mathcal{A}}. 
\tag{8.C}\] Taking the cone in the derived category of \(\mathcal{A}\) bimodules gives a triangle \[\mathcal{C}\to\mathcal{A}\to{}_{\mathcal{A}}(i_{*}Q)_{\mathcal{A}}\to\mathcal{ C}[1]. \tag{8.D}\] **Definition 8.2**.: _Under the the global Setup 8.1, for any singular point \(p_{k}\) consider the associated \(\mathcal{H}_{k}\) and \(\mathcal{H}_{k}^{\mathsf{aff}}.\) For any \((\mathfrak{a},j)\) consider \(\mathcal{C}:=\mathcal{C}_{\mathfrak{a},j}\) constructed above, and define_ \[\mathsf{Twist}_{X,\mathfrak{a},j},\mathsf{Twist}_{X,\mathfrak{a},j}^{*}\colon \operatorname{D}(\operatorname{coh}X)\to\operatorname{D}(\operatorname{coh}X),\] _by \(\mathsf{Twist}_{X,\mathfrak{a},j}=\mathbf{R}\mathcal{H}om_{\mathcal{A}}( \mathcal{C},-)\) and \(\mathsf{Twist}_{X,\mathfrak{a},j}^{*}=-\otimes_{\mathcal{A}}^{\mathsf{L}} \mathcal{C}.\)_ **Theorem 8.3**.: _Under the the global Setup 8.1, for any singular point \(p_{k}\) consider the associated \(\mathcal{H}_{k}\) and \(\mathcal{H}_{k}^{\mathsf{aff}}.\) Then for any \((\mathfrak{a},j),\)\(\mathsf{Twist}_{X,\mathfrak{a},j}\) and \(\mathsf{Twist}_{X,\mathfrak{a},j}^{*}\) are equivalences._ Proof.: Applying \(i^{-1}\) to (8.D) yields the following triangle \[i^{-1}\mathcal{C}\to i^{-1}\mathcal{A}\to i^{-1}\left({}_{\mathcal{A}}Q_{ \mathcal{A}}\right)\to.\] Under the Zariski local Setup 2.2, we have that \(i^{-1}\mathcal{A}={}_{\Lambda}\Lambda_{\Lambda},i^{-1}\left({}_{\mathcal{A}}Q _{\mathcal{A}}\right)={}_{\Lambda}Q_{\Lambda}\) and thus consequently \(i^{-1}\mathcal{C}=C\) from the Zariski local setup. Choose an affine open cover of \(X_{\operatorname{con}}\) containing \(\operatorname{Spec}R\) in (8.A), where no other open set contains \(p_{k}\). To ease notation, write \(V=\operatorname{Spec}R,\) and consider \[\mathbf{R}\mathcal{H}om_{A|_{V}}(\mathcal{C}|_{V},-)\colon\operatorname{D}( \operatorname{Mod}\mathcal{A}|_{V})\to\operatorname{D}(\operatorname{Mod} \mathcal{A}|_{V}).\] Since \(V\) is affine, by [13, Setup 4.1], \(\mathcal{A}|_{V}\) corresponds to \(\Lambda\) and \(\mathcal{C}|_{V}\) to \({}_{\Lambda}C_{\Lambda}.\) Hence, the functor \(\mathbf{R}\mathcal{H}om_{A|_{V}}(\mathcal{C}|_{A},-)\) becomes \[\mathbf{R}\mathrm{Hom}_{\Lambda}(C,-)\colon\operatorname{D}(\operatorname{Mod }\Lambda)\to\operatorname{D}(\operatorname{Mod}\Lambda)\] which is simply the equivalence \(\mathsf{Twist}_{\mathfrak{a},j}\) on \(\Lambda\). On the other opens \(W\) \[\mathbf{R}\mathcal{H}om_{A|_{W}}(\mathcal{C}|_{W},-)=\mathbf{R}\mathcal{H}om_{A| _{W}}(\mathcal{A}|_{W},-)\] since \(Q|_{W}=0.\) Thus on all opens in the covering of \(X_{\operatorname{con}},\mathbf{R}\mathcal{H}om_{A}(\mathcal{C},-)\) restricts to an equivalence. As in [13, SS5.2], this implies that \(\mathbf{R}\mathcal{H}om_{\mathcal{A}}(\mathcal{C},-)\) is an equivalence. Its adjoint \(\mathsf{Twist}_{X,\mathfrak{a},j}^{*}\) must be the inverse, and so is also an equivalence. ### Group actions on \(\operatorname{D}^{\mathrm{b}}(\operatorname{coh}X)\) The following is a technical result leading to our main result. **Theorem 8.4**.: _For each singular point \(p_{k},\) there exists group homomorphisms_ Proof.: We prove the infinite version, with the finite version being similar. By Proposition 7.9, there exists a group homomorphism \[\pi_{1}(\mathbb{C}^{n_{k}}\setminus(\mathcal{H}_{k}^{\mathsf{aff}})_{\mathbb{C }})\to\operatorname{Auteq}\operatorname{D}^{\mathrm{b}}(\operatorname{coh}U_{ k}). 
\tag{8.E}\] For any choice \((\mathfrak{a},j)\) in Setup 3.1 associated to \(\mathcal{H}_{k}^{\mathsf{aff}},\) temporarily write \[\operatorname{Geo}_{k}\mathsf{Twist}_{\mathfrak{a},j}\text{ or }\operatorname{Geo}_{k} \mathsf{Twist}_{\mathfrak{a},j}^{-1}\] for the corresponding geometric or geometric inverse twist on \(U_{k}.\) By Theorem 8.3, \[\mathsf{Twist}_{X}\,|_{U_{k}}\cong\operatorname{Geo}_{k}\mathsf{Twist}_{ \mathfrak{a},j}\text{ and }\mathsf{Twist}_{X}^{-1}\,|_{U_{k}}\cong\operatorname{Geo}_{k} \mathsf{Twist}_{\mathfrak{a},j}^{-1}.\] We next define \[m_{k}^{\mathsf{aff}}\colon\pi_{1}(\mathbb{C}^{n_{k}}\setminus(\mathcal{H}_{k}^{ \mathsf{aff}})_{\mathbb{C}})\to\operatorname{Auteq}\operatorname{D}^{\mathrm{b}} (\operatorname{coh}X),\] by mapping the generators \(\ell_{\mathfrak{a},j}\) to \(\mathsf{Twist}_{X,\mathfrak{a},j}\) in Definition 8.2. We now prove this is a homomorphism. Suppose that \(\ell_{\mathfrak{a}_{1},j_{1}}^{\pm 1}\,\ell_{\mathfrak{a}_{2},j_{2}}^{\pm 1}\cdots \ell_{\mathfrak{a}_{t},j_{t}}^{\pm 1}\) is a relation in \(\pi_{1}(\mathbb{C}^{n_{k}}\setminus(\mathcal{G}_{k}^{\mathsf{aff}})_{\mathbb{ C}}).\) We prove the corresponding relation holds in \(\mathrm{Auteq}\mathrm{D}^{\mathrm{b}}(\mathrm{coh}\,X).\) For this, by Proposition 7.9, for a skyscraper sheaf \(\mathcal{O}_{x}\) where \(x\in U_{k},\) consider the commutative diagram where \(\mathcal{O}_{x}\mapsto\mathcal{O}_{x}\) on the top line since (8.E) is a group homomorphism. Thus \(\mathcal{O}_{x}\mapsto\mathcal{O}_{x}\) for all \(x\in U_{k}.\) Also, \(\mathcal{O}_{x}\mapsto\mathcal{O}_{x}\) for all \(x\in X\setminus U_{k},\) for the same reason as in Lemma 7.8. We conclude \(\mathcal{O}_{x}\mapsto\mathcal{O}_{x}\) for all \(x\in X.\) Thus, by [1, 1], \[\mathsf{Twist}_{X,\mathfrak{a}_{1},j_{1}}^{\pm 1}\,\mathsf{Twist}_{X,\mathfrak{a} _{2},j_{2}}^{\pm 1}\cdots\mathsf{Twist}_{X,\mathfrak{a}_{t},j_{t}}^{\pm 1}\cong- \otimes\mathcal{L}.\] As in Proposition 7.7 applied to \(X,\) this implies \[\mathsf{Twist}_{X,\mathfrak{a}_{1},j_{1}}^{\pm 1}\,\mathsf{Twist}_{X,\mathfrak{a} _{2},j_{2}}^{\pm 1}\cdots\mathsf{Twist}_{X,\mathfrak{a}_{t},j_{t}}^{\pm 1}\cong \mathsf{Id},\] as required. Recall that if \(\mathcal{H}_{1}^{\mathsf{aff}}\) and \(\mathcal{H}_{2}^{\mathsf{aff}}\) are hyperplane arrangements in \(\mathbb{R}^{a_{1}}\) and \(\mathbb{R}^{a_{2}}\) respectively, Then the product hyperplane arrangement \(\mathcal{H}_{1}^{\mathsf{aff}}\times\mathcal{H}_{2}^{\mathsf{aff}}\) is in \(\mathbb{R}^{a_{1}+a_{2}}.\) The following is our main result. **Corollary 8.5**.: _Suppose that \(X\to X_{\mathrm{con}}\) is a flopping contraction between quasi-projective 3-folds, where \(X\) has Gorenstein terminal singularities. 
For any singular point \(p_{k},\) associate \(\mathcal{H}_{k}\) and \(\mathcal{H}_{k}^{\mathsf{aff}}\) and, set_ \[\mathfrak{H}:=\mathcal{H}_{1}\times\mathcal{H}_{2}\times\ldots\times\mathcal{ H}_{t}\text{ and }\mathfrak{H}^{\mathsf{aff}}:=\mathcal{H}_{1}^{\mathsf{aff}}\times\mathcal{H}_{2}^{ \mathsf{aff}}\times\ldots\times\mathcal{H}_{t}^{\mathsf{aff}}.\] _Then there exists group homomorphisms_ Proof.: Recall that \[\pi_{1}(\oplus\mathbb{C}^{n_{k}}\setminus\mathfrak{H}_{\mathbb{C}})=\pi_{1}( \mathbb{C}^{n_{1}}\setminus(\mathcal{H}_{1})_{\mathbb{C}})\times\pi_{1}( \mathbb{C}^{n_{2}}\setminus(\mathcal{H}_{2})_{\mathbb{C}}))\times\ldots \times\pi_{1}(\mathbb{C}^{n_{k}}\setminus(\mathcal{H}_{t})_{\mathbb{C}})\] and \[\pi_{1}(\oplus\mathbb{C}^{n_{k}}\setminus\mathfrak{H}_{\mathbb{C}}^{\mathsf{ aff}})=\pi_{1}(\mathbb{C}^{n_{1}}\setminus(\mathcal{H}_{1}^{\mathsf{aff}})_{ \mathbb{C}})\times\pi_{1}(\mathbb{C}^{n_{2}}\setminus(\mathcal{H}_{2}^{ \mathsf{aff}})_{\mathbb{C}}))\times\ldots\times\pi_{1}(\mathbb{C}^{n_{k}} \setminus(\mathcal{H}_{t}^{\mathsf{aff}})_{\mathbb{C}}).\] Set \(m=(m_{1},m_{2},\ldots,m_{t})\) and \(m^{\mathsf{aff}}=(m_{1}^{\mathsf{aff}},m_{1}^{\mathsf{aff}},\ldots,m_{t}^{ \mathsf{aff}}).\) By Theorem 8.4, the only thing left to prove is that the functors from different summands of \(\pi_{1}\) commute, but this is [1, Remark 5.6].
2308.02270
Redundancy Aware Multi-Reference Based Gainwise Evaluation of Extractive Summarization
The ROUGE metric is commonly used to evaluate extractive summarization task, but it has been criticized for its lack of semantic awareness and its ignorance about the ranking quality of the extractive summarizer. Previous research has introduced a gain-based automated metric called Sem-nCG that addresses these issues, as it is both rank and semantic aware. However, it does not consider the amount of redundancy present in a model summary and currently does not support evaluation with multiple reference summaries. It is essential to have a model summary that balances importance and diversity, but finding a metric that captures both of these aspects is challenging. In this paper, we propose a redundancy-aware Sem-nCG metric and demonstrate how the revised Sem-nCG metric can be used to evaluate model summaries against multiple references as well which was missing in previous research. Experimental results demonstrate that the revised Sem-nCG metric has a stronger correlation with human judgments compared to the previous Sem-nCG metric and traditional ROUGE and BERTScore metric for both single and multiple reference scenarios.
Mousumi Akter, Santu Karmaker
2023-08-04T11:47:19Z
http://arxiv.org/abs/2308.02270v2
# Redundancy Aware Multi-Reference Based Gainwise Evaluation of Extractive Summarization ###### Abstract While very popular for evaluating the extractive summarization task, the ROUGE metric has long been criticized for its lack of semantic awareness and its ignorance about the ranking quality of the summarizer. Previous research has addressed these issues by proposing a gain-based automated metric called _Sem-nCG_, which is both rank and semantic aware. However, _Sem-nCG_ does not consider the amount of redundancy present in a model-generated summary and currently does not support evaluation with multiple reference summaries. Unfortunately, addressing both these limitations simultaneously is not trivial. Therefore, in this paper, we propose a redundancy-aware _Sem-nCG_ metric and demonstrate how this new metric can be used to evaluate model summaries against multiple references. We also explore different ways of incorporating redundancy into the original metric through extensive experiments. Experimental results demonstrate that the new redundancy-aware metric exhibits a higher correlation with human judgments than the original _Sem-nCG_ metric for both single and multiple reference scenarios. ## 1 Introduction For the past two decades, ROUGE Lin (2004) has been the most used metric for evaluating extractive summarization tasks. Nonetheless, ROUGE has long been criticized for its lack of semantic awareness Graham (2015); Ng and Abrecht (2015); Ganesan (2018); Yang et al. (2018) and its ignorance about the ranking quality of the extractive summarizer Akter et al. (2022). To address these issues, previous work has proposed a gain-based metric called _Sem-nCG_ Akter et al. (2022) to evaluate extractive summaries by incorporating rank and semantic awareness. Redundancy, a crucial factor in evaluating extractive summaries, was not, however, included in the _Sem-nCG_ metric. Additionally, their proposed _Sem-nCG_ metric does not support the evaluation of model summaries against multiple references. However, it is well recognized that a set of documents can have multiple, very different, and equally valid summaries; as such, obtaining multiple reference summaries can improve the stability of the evaluation Nenkova (2005); Lin (2004). Unfortunately, addressing both these limitations simultaneously is not trivial, and a systematic study of how to incorporate redundancy and multiple references into the existing _Sem-nCG_ metric is duly warranted. In this paper, we first incorporate redundancy into the previously proposed _Sem-nCG_ metric. In other words, we propose a redundancy-aware _Sem-nCG_ metric by exploring different ways of incorporating redundancy into the original metric. Through extensive experiments, we demonstrate that the redundancy-aware _Sem-nCG_ exhibits a notably stronger correlation with humans than the original _Sem-nCG_ metric. Next, we demonstrate how this redundancy-aware metric could be applied to evaluate model summaries against multiple references. This is a non-trivial task because _Sem-nCG_ evaluates a model-generated summary by considering it as a ranked list of sentences and then comparing it against an automatically inferred _ground-truth_ ranked list of sentences within a source document based on a single human-written summary Akter et al. (2022). However, in the case of multiple references, the _ground-truth_ ranked list of source sentences must be inferred based on all available human-written reference summaries, not just one.
When multiple reference summaries are available, the traditional way of computing ROUGE/BERTScore is to compute the corresponding metric score for each reference and then average those scores. While this is certainly possible for _Sem-nCG_ too, it is problematic for the following two reasons: 1) multiple ground-truth rankings would need to be created, one for each available reference summary, which is computationally very expensive, and 2) human-written summaries differ not only in writing style but also in focus, and including multiple reference summaries with many terminology variations and paraphrases makes the automated evaluation metric less stable Cohan and Goharian (2016). Therefore, we opted to infer a single/unique ground-truth ranking based on multiple reference summaries in this work. The problem of inferring a unique ground-truth ranking based on multiple reference summaries can be framed in many ways; e.g., one way to solve this problem is to infer ranks based on each reference and then aggregate them; another option is to merge multiple references into a single reference (a non-trivial task) and then infer the ranks of the source sentences. In this work, we have explored multiple ways of inferring ground-truth ranks to facilitate the evaluation using multiple references. Our findings suggest that, compared to the conventional ROUGE and BERTScore metrics, the redundancy-aware _Sem-nCG_ exhibits a stronger correlation with human judgments for evaluating model summaries when multiple references are available. Therefore, we encourage the community to use redundancy-aware _Sem-nCG_ to evaluate extractive summarization tasks. ## 2 Related Work The most common method for evaluating model summaries has been to compare them against human-written reference summaries. ROUGE Lin (2004) considers direct lexical overlap, and afterwards different versions of ROUGE Graham (2015) have also been proposed to mitigate the limitations of the original ROUGE, including _ROUGE_ with word embeddings Ng and Abrecht (2015) and synonyms Ganesan (2018), graph-based lexical measurement ShafieiBavani et al. (2018), Vanilla _ROUGE_ Yang et al. (2018) and highlight-based _ROUGE_ Hardy et al. (2019). Metrics based on the semantic similarity between reference and model summaries have also been proposed to capture the semantics, including S+WMS Clark et al. (2019), MoverScore Zhao et al. (2019), and BERTScore Zhang et al. (2020). Reference-free evaluation has also been a recent trend to avoid dependency on human references Bohm et al. (2019); Peyrard (2019); Sun and Nenkova (2019); Gao et al. (2020); Wu et al. (2020). Despite the fact that the _extractive_ summarization task is typically framed as a sentence ranking problem, none of the metrics mentioned above evaluate the quality of the ranker. To address this issue, Akter et al. (2022) recently proposed a rank-aware and gain-based evaluation metric for extractive summarization called _Sem-nCG_, but it does not incorporate redundancy and also lacks evaluation with multiple references, which are two significant limitations that need to be addressed and are hence the focus of this work. Redundancy in extracted sentences is a prominent issue in extractive summarization systems. Maximal Marginal Relevance (MMR) Carbonell and Goldstein (1998) is a classic algorithm to penalize redundancy in a model summary.
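As a point of reference for how such a penalty operates at selection time, here is a minimal sketch of greedy MMR selection; the `relevance` scores and `similarity` function are assumed inputs, not components of any specific summarizer discussed here.

```python
def mmr_select(candidates, relevance, similarity, k=3, lam=0.7):
    """Greedy Maximal Marginal Relevance (Carbonell & Goldstein, 1998):
    trade off a sentence's relevance against its maximum similarity
    to the sentences already selected."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def mmr_score(s):
            redundancy = max((similarity(s, t) for t in selected), default=0.0)
            return lam * relevance[s] - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected
```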
There are several approaches that explicitly model redundancy and use algorithms to avoid selecting sentences that are too similar to those that have already been extracted Ren et al. (2016). Trigram blocking Paulus et al. (2018) is another popular approach to reduce redundancy in a model summary. Chen et al. (2021) has shown how to compute a self-referenced redundancy score while evaluating the model summary. In this work, we explore various ways to incorporate redundancy into the original _Sem-nCG_ metric. In the context of multi-reference summary evaluation, our work is additionally distinctive since we do not follow the conventional procedure of computing the evaluation metric for each reference separately and then estimating their average/max. Instead, we use a variety of human-written reference summaries to infer a single, unified ground-truth ranked list of source sentences, after which the _Sem-nCG_ score is computed only once. When multiple reference summaries are available, researchers have also suggested Pyramid-based approaches Nenkova and Passonneau (2004) for summary evaluation. However, since the pyramid must be manually constructed and requires more manual labor, this method received little attention. Although the method has undergone numerous improvements Passonneau et al. (2013); Yang et al. (2016); Shapira et al. (2019); Mao et al. (2020), it still needs a substantial amount of manual effort, making it unsuitable for large-scale evaluation. Recently, unified frameworks for NLG evaluation Deng et al. (2021); Zhong et al. (2022) have been proposed to predict different aspects of the generated text. Even though these metrics can be applied to text summarization, they are still data-driven approaches whose pseudo-data generation is error-prone, and it is unclear why the model produces such scores. ## 3 Methodology **_Sem-nCG_ Score:** Normalized Cumulative Gain (_nCG_) is a popular evaluation metric in information retrieval to evaluate the quality of a ranker. nCG compares the model ranking with an _ideal_ ranking and assigns a certain score to the model based on some pre-defined gain. Akter et al. (2022) utilized the idea of _nCG_ in the evaluation of extractive summarization. The basic concept of Sem-nCG is to compute the gain (_CG@k_) obtained by the top \(k\) extracted sentences and divide that by the maximum/ideal possible gain (_ICG@k_), where the gains are inferred by comparing the input document against a human-written summary. Mathematically: \[\textit{Sem-nCG@k}=\frac{\textit{CG@k}}{\textit{ICG@k}} \tag{1}\] **Redundancy Score:** We followed Chen et al. (2021) to compute the self-referenced redundancy score. The summary, \(X\), itself is used as the reference to determine the degree of semantic similarity between each summary token/sentence and the other tokens/sentences. The average of the maximum semantic similarities is used to determine the redundancy score. For a given summary, \(X=\{x_{1},x_{2},...,x_{n}\}\), the calculation is as follows: \[\textit{Score}_{\text{red}}=\frac{\sum_{i}\max_{j:i\neq j}\textit{Sim}(x_{j},x_{i})}{|X|} \tag{2}\] where \(j:i\neq j\) denotes that the similarity between \(x_{i}\) and itself has not been considered. Note that \(\textit{Score}_{\text{red}}\in[0,1]\) in our case and lower is better.
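To make Equations 1 and 2 concrete, here is a minimal sketch of both computations; the variable names are illustrative, and using cosine similarity over sentence embeddings for \(Sim\) is only one of the instantiations explored later.

```python
import numpy as np

def sem_ncg_at_k(gains, model_ranking, k=3):
    """Equation 1: gain accumulated by the model's top-k sentences (CG@k),
    normalized by the ideal cumulative gain at k (ICG@k)."""
    cg = sum(gains[i] for i in model_ranking[:k])
    icg = sum(sorted(gains, reverse=True)[:k])
    return cg / icg

def redundancy_score(embeddings):
    """Equation 2: for each summary sentence, take its maximum similarity
    to any *other* sentence, then average over sentences (lower is better)."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = X @ X.T                    # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)   # exclude self-similarity (j != i)
    return float(sim.max(axis=1).mean())
```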
**Final Score:** We used the following formula to calculate the final score after obtaining the scores of _Sem-nCG_ and \(\textit{Score}_{\text{red}}\): \[\textit{Score}=\lambda*\textit{Sem-nCG}+(1-\lambda)*(1-\textit{Score}_{\text{red}}) \tag{3}\] Here, \(\lambda\in[0,1]\) is a hyper-parameter to scale the weight between \(\textit{Score}_{\text{red}}\) and _Sem-nCG_. \(\textit{Score}\in[0,1]\), where a higher score means a better summary. ## 4 Experimental Setup **Dataset:** Human correlation is an essential attribute to consider while assessing the quality of a metric. To compute the human correlation of the new redundancy-aware _Sem-nCG_ metric, we utilized the SummEval dataset from Fabbri et al. (2021)1. The annotations include summaries generated by 16 models (abstractive and extractive) from 100 news articles (1600 examples in total) on the CNN/DailyMail Dataset. Each source news article includes the original CNN/DailyMail reference summary as well as 10 additional crowd-sourced reference summaries. Each summary was annotated by 5 independent crowd-sourced workers and 3 independent experts (8 annotations in total) along the four dimensions: _Consistency_, _Relevance_, _Coherence_ and _Fluency_ Fabbri et al. (2021)2. As this work focuses on the evaluation of extractive summarization, we considered the output generated by extractive models and filtered out samples comprising less than \(3\) sentences (as we report _Sem-nCG@3_), which resulted in \(252\) samples eventually. Additionally, we considered the expert annotations for the meta-evaluation, as non-expert annotations can be risky Gillick and Liu (2010). Footnote 1: [https://github.com/Yale-LILY/SummEval](https://github.com/Yale-LILY/SummEval) As was done in Akter et al. (2022), for each sample, from the 11 available reference summaries, we considered 3 settings: Less Overlapping Reference/LOR (highly abstractive references with less lexical overlap with the original document), Medium Overlapping Reference/MOR (medium lexical overlap with the original document) and Highly Overlapping Reference/HOR (highly extractive references with high lexical overlap with the original document). **Embedding for Groundtruth Ranking:** The core of the _Sem-nCG_ metric is to automatically create the groundtruth/ideal ranking against which the model ranking is compared. To create the groundtruth ranking, Akter et al. (2022) used various sentence embeddings. Similarly, we utilized various sentence embeddings as well, since our goal is to compare the new redundancy-aware _Sem-nCG_ metric to the original _Sem-nCG_ metric. Specifically, we considered InferSent (v2) Conneau et al. (2017), the Semantic Textual Similarity benchmark models (STSb - bert/roberta/distilbert) Reimers and Gurevych (2019), Elmo Peters et al. (2018) and the Google Universal Sentence Encoder (USE) Cer et al. (2018) with enc-2 Iyyer et al. (2015), based on the deep averaging network, to infer the groundtruth/ideal ranking of the sentences within the input document with guidance from the human-written summaries. **\(\textit{Score}_{\text{red}}\) Computation:** To compute the self-referenced redundancy score, we used the top-\(3\) sentences from the model-generated summary (as we report _Sem-nCG@3_). We calculated each sentence's maximum similarity to the other sentences and then averaged these to get the desired \(Score_{red}\).
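Putting the pieces of this section together, Equation 3 reduces to a one-line convex combination; a minimal sketch, with \(\lambda=0.5\) as the default recommended later in Section 5.2:

```python
def final_score(sem_ncg, score_red, lam=0.5):
    # Equation 3: blend ranking/semantic quality with a redundancy bonus;
    # both inputs are assumed to lie in [0, 1], and lower score_red is better.
    return lam * sem_ncg + (1 - lam) * (1 - score_red)
```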
We experimented with four distinct variations to compare the sentences: cosine similarity (by converting sentences to STSb-distilbert Reimers and Gurevych (2019) embeddings), ROUGE Lin (2004), MoverScore Zhao et al. (2019) and BERTScore Zhang et al. (2020). ## 5 Results ### Redundancy-aware _Sem-nCG_ We first considered how redundancy-aware Sem-nCG performs in extractive summarization with a single reference. As shown in Table 1, we computed the Kendall's tau (\(\tau\)) correlation between the expert-given score for the model summary and the Sem-nCG score with/without redundancy along the four meta-evaluation criteria: _Consistency_, _Relevance_, _Coherence_, and _Fluency_, for different embedding variations (to create the groundtruth ranking) and different approaches to compute \(Score_{red}\). We utilized Equation 3 to compute the redundancy-aware _Sem-nCG_ score, where lambda (\(\lambda\)) is a hyper-parameter choice and is set to \(\lambda=0.5\) empirically. In Table 1, w/o redundancy refers to Equation 1. Table 1 shows that the redundancy-aware _Sem-nCG_ metric outperforms the original _Sem-nCG_ metric in terms of _Consistency_, _Relevance_, and _Coherence_, with a \(5\%\) improvement in _Relevance_ and a \(14\%\) improvement in _Coherence_ for less overlapping references (LOR). We also observe improvements in the _Relevance_ (\(9\%\)) and _Coherence_ (\(20\%\)) dimensions for medium overlapping references (MOR). For high overlapping references (HOR), the improvement is \(8\%\) and \(22\%\) for _Relevance_ and _Coherence_, respectively. We also observe that the STSb-distilbert embedding is a better choice in the _Consistency_ dimension, whereas USE with enc-2 is a better choice in the _Relevance_ and _Coherence_ dimensions to construct the groundtruth ranking. Therefore, we recommend STSb-distilbert to create the groundtruth ranking if _Consistency_ is a top priority; otherwise, we recommend using USE with enc-2. A groundtruth ranking was also created by combining STSb-distilbert and USE into an ensemble, which showed balanced performance across all four dimensions. It also appears that ROUGE and BERTScore provide comparable performances while computing \(Score_{red}\). However, using the ROUGE score for the self-referenced redundancy is a better choice, as evident from Section 5.3. In Table 2, the Kendall's tau correlations of ROUGE and BERTScore are shown to give an idea of the advantage of redundancy-aware _Sem-nCG_, and it is clearly evident that redundancy-aware Sem-nCG also exhibits a stronger correlation than these metrics. ### Hyperparameter Choice In Figure 1, we varied \(\lambda\in[0,1]\) for the 3 scenarios (LOR, MOR and HOR) and computed the human correlation along four dimensions (_Consistency_, _Relevance_, _Coherence_ and _Fluency_) when different embeddings are used to create the groundtruth ranking and the ROUGE score is used to compute \(Score_{red}\). Human correlations with BERTScore-based redundancy are presented in the Appendix. For both redundancy penalties, it shows that a higher lambda (\(\lambda\geq 0.6\)) achieves better correlation for the _Consistency_ dimension, which makes sense because a higher lambda means giving more weight to _Sem-nCG_. For the _Relevance_ and _Coherence_ dimensions, a lower lambda (\(\lambda\)) value between \([0.3-0.5]\) is a better choice, as a lower \(\lambda\) means more penalty to redundancy. It appears that for _Fluency_ all metric variations struggle.
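The meta-evaluation behind this sweep can be sketched as follows, assuming per-summary arrays of _Sem-nCG_ scores, redundancy scores, and expert judgments (all names illustrative):

```python
import numpy as np
from scipy.stats import kendalltau

def lambda_sweep(sem_ncg, score_red, human, lambdas=np.linspace(0.0, 1.0, 11)):
    """Kendall's tau between the blended metric (Equation 3) and expert
    scores, for each candidate lambda."""
    sem_ncg, score_red = np.asarray(sem_ncg), np.asarray(score_red)
    taus = {}
    for lam in lambdas:
        blended = lam * sem_ncg + (1 - lam) * (1 - score_red)
        tau, _pvalue = kendalltau(blended, human)
        taus[round(float(lam), 1)] = tau
    return taus
```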
It is evident that \(\lambda=0.5\) gives comparable performance in all four quality dimensions (consistency, relevance, coherence and fluency), and thus we recommend using \(\lambda=0.5\) while adopting Equation 3 to compute redundancy-aware _Sem-nCG_. Table 3 shows a qualitative example for the evaluation of a model-extracted summary. ### Redundancy-aware _Sem-nCG_ for Evaluation with Multiple References The SummEval dataset Fabbri et al. (2021) contains 11 reference summaries. For summary evaluation with multiple references, we considered the lexical overlap of the reference summaries with the original document to demonstrate the terminology variations. Then we considered 3 less overlapping references as Multi-Ref LORs, 3 medium overlapping references as Multi-Ref MORs and 3 high overlapping references as Multi-Ref HORs. We have also mixed 1 LOR, 1 MOR and 1 HOR and considered this set as Multi-Ref LOR, MOR, HOR to see how the evaluation metric correlates under different terminology variations. Table 4 confirms that ROUGE shows a very poor correlation in all the dimensions (consistency, relevance, coherence, and fluency) in all the scenarios, and shows a slightly better correlation in Multi-Ref HORs (which is somewhat expected, as ROUGE considers direct lexical overlap). Interestingly, BERTScore also shows poor correlation in all the settings, supporting the observation that traditional evaluation metrics become less stable for multiple reference summaries with lots of terminology variations Cohan and Goharian (2016). In the original _Sem-nCG_ metric, a groundtruth ranking is prepared by considering the cosine similarity between each sentence of the document and the reference summary, but evaluation with multiple references was left as future work. As a starting point for incorporating multiple reference summaries into the original _Sem-nCG_ metric, we designed ways to create the groundtruth ranking by considering multiple references. Here, we took the naive approach of first computing the cosine similarity of each sentence of the document with each reference among the multiple references, and then averaging them, which we call \(\text{Ensemble}_{\text{sim}}\). For \(\text{Ensemble}_{\text{rel}}\), for each groundtruth ranking prepared for each reference among the multiple reference summaries, we took the average of the relevance (as it was computed in the previously proposed _Sem-nCG_ metric Akter et al. (2022)) and, based on that, merged the groundtruth rankings into one groundtruth ranking. Then we use this groundtruth ranking to compute _Sem-nCG_ for the model-extracted summary.
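A minimal sketch of the \(\text{Ensemble}_{\text{sim}}\) construction is given below; the `embed` function stands in for any of the sentence encoders of Section 4 and is assumed to return unit-normalized vectors, and treating each reference summary as a single embedded text is one plausible reading of the averaging step.

```python
import numpy as np

def ensemble_sim_ranking(doc_sentences, references, embed):
    """Average each document sentence's cosine similarity over all
    reference summaries, then rank document sentences by that average."""
    doc_vecs = embed(doc_sentences)              # shape: (n_sentences, dim)
    scores = np.zeros(len(doc_sentences))
    for ref in references:                       # each reference is one text
        ref_vec = embed([ref])[0]
        scores += doc_vecs @ ref_vec             # cosine, given unit vectors
    scores /= len(references)
    return list(np.argsort(-scores))             # groundtruth ranking, best first
```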
With the original Sem-nCG metric, we have also incorporated redundancy into the _Sem-nCG_ metric utilizing Equation 3. We have only considered ROUGE and BERTScore as redundancy penalties in both Table 5 and Table 6, with \(\lambda=0.5\) (as evident from Section 5.2, this setting gives better performance). We have also considered different embedding variations to create the groundtruth ranking. From Table 5, we can see that redundancy-aware _Sem-nCG_ shows better correlations for all the scenarios (multi-ref LORs, multi-ref MORs, multi-ref HORs and the mixture of LOR, MOR & HOR). Both ROUGE and BERTScore provide comparable results as self-referenced redundancy penalties, with ROUGE score-based redundancy providing a marginally superior result. Interestingly, redundancy-aware _Sem-nCG_ shows robust performance in all the scenarios, with a 25% improvement in the coherence and a 10% improvement in the relevance dimension. The same patterns are observed when \(\text{Ensemble}_{\text{rel}}\) is used for the evaluation with multiple references (see Table 6). From our empirical evaluation, we would recommend the USE embedding to create \(\text{Ensemble}_{\text{sim}}\) (merging sentence-wise similarities across different references) with the ROUGE redundancy penalty to evaluate extractive summaries with multiple references.

Table 1: Kendall Tau (\(\tau\)) correlation coefficients of _Sem-nCG_ without redundancy and with the four redundancy penalties (Cosine Similarity, ROUGE, MoverScore, BERTScore), for different embeddings, along the Consistency, Relevance, Coherence and Fluency dimensions for LOR, MOR and HOR references.

Figure 1: Kendall Tau (\(\tau\)) correlation coefficient when lambda (\(\lambda\)) \(\in[0,1]\), from (a)-(c) for Consistency, (d)-(f) for Relevance, (g)-(i) for Coherence and (j)-(l) for the Fluency dimension, when the ROUGE score is used as the redundancy penalty, for less overlapping references (LOR), medium overlapping references (MOR) and high overlapping references (HOR).

## 6 Conclusion

Previous work has proposed the _Sem-nCG_ metric exclusively for evaluating the extractive summarization task, considering both rank awareness and semantics. However, the _Sem-nCG_ metric ignores redundancy in a model summary and does not support evaluation with multiple reference summaries, which are two significant limitations. In this paper, we have suggested a redundancy-aware, multi-reference based _Sem-nCG_ metric by exploring different embeddings and similarity functions, which is superior to the previously proposed _Sem-nCG_ metric along the _Consistency_, _Relevance_ and _Coherence_ dimensions. Additionally, for summary evaluation using multiple references, we created a unique ground-truth ranking by incorporating multiple references rather than the trivial max/average score computation over multiple references. Our empirical evaluation shows that the traditional metrics become unstable when multiple references are available, and the new redundancy-aware _Sem-nCG_ shows a notably higher correlation with human judgments than the ROUGE and BERTScore metrics for both single and multiple references. Thus we encourage the community to evaluate extractive summaries using the new redundancy-aware _Sem-nCG_ metric.
\begin{table} \begin{tabular}{c|c c c c|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Metric} & \multicolumn{4}{c|}{**Multi-Ref LOR, MOR, HOR**} & \multicolumn{4}{c|}{**Multi-Ref LORs**} & \multicolumn{4}{c|}{**Multi-Ref MORs**} & \multicolumn{4}{c}{**Multi-Ref HORs**} \\ \cline{2-17} & Con & Rel & Coh & Flu & Con & Rel & Coh & Flu & Con & Rel & Coh & Flu & Con & Rel & Coh & Flu \\ \hline ROUGE-1 & 0.00 & -0.01 & -0.09 & -0.01 & -0.05 & 0.05 & 0.00 & 0.01 & -0.05 & 0.09 & 0.04 & -0.01 & -0.02 & 0.21 & 0.13 & 0.10 \\ ROUGE-L & 0.00 & -0.01 & -0.09 & -0.01 & 0.00 & 0.04 & -0.01 & 0.01 & -0.06 & 0.07 & 0.04 & 0.00 & -0.01 & 0.15 & 0.09 & -0.04 \\ BERTScore & 0.09 & 0.19 & 0.14 & 0.03 & 0.01 & 0.07 & -0.01 & 0.04 & -0.04 & 0.05 & 0.03 & 0.05 & 0.04 & 0.20 & 0.12 & 0.06 \\ \hline \hline \end{tabular} \end{table} Table 4: Kendall Tau (\(\tau\)) correlation coefficients for ROUGE and BERTScore along the consistency (con), relevance (rel), coherence (coh) and fluency (flu) dimensions, for evaluating extractive model summaries with multiple references. \begin{table} \begin{tabular}{l} \hline \hline **Article:** Last week she was barely showing – but Demelza Poldark is now the proud mother to the show’s latest addition. Within ten minutes of tomorrow night’s episode, fans will see Aidan Turner’s dashing Ross Poldark gaze lovingly at his new baby daughter. As Sunday night’s latest heartthrob, women across the country had been able to perform the following. Last week she was barely showing – but Demelza Poldark is now the proud mother to the show’s latest addition. (clearly redundant extractive summary) \\ \hline **Score\({}_{\text{red}}\) for model summary**: 0.40 \\ \hline **Less Overlapping Reference (LOR)**: A celebrity recently welcomed a baby into the world and the wife discusses her experiences with her pregnancy. She has wanted to settle down for a while and is glad her pregnancy wasn’t noticeable on television. \\ \hline **Medium Overlapping/CNN Reference (MOR)**: SPOILER ALERT: Maid gives birth to baby on Sunday’s episode. Only announced she was pregnant with Poldark’s baby last week. \\ \hline **High Overlapping Reference (HOR)**: In the latest episode, Demelza Poldark talks about being 8 months pregnant. Ross Poldark, who is off the market and in love with Demelza, will be shown gazing lovingly at his new baby daughter tomorrow night. \\ \hline **Sem-nCG Score** only according to Equation 1 for \\ LOR: 0.67 \\ \hline **Revised Sem-nCG Score** along with Score\({}_{\text{red}}\) according to Equation 3 for \\ LOR: 0.532 \\ \hline **Human Evaluation** (annotated by experts; scores range between 0-1) \\ Coherence: 0.47 \\ \hline \hline \end{tabular} \end{table} Table 3: An example of the model summary evaluation using the redundancy-aware Sem-nCG metric.
Table 5: Kendall Tau (\(\tau\)) correlation coefficients for \(\text{Ensemble}_{\text{sim}}\) when \(\lambda=0.5\), along the consistency, relevance, coherence and fluency dimensions, without redundancy and with ROUGE and BERTScore used as redundancy penalties, for different terminology variations of multiple references (highly abstractive (LORs), medium overlapping (MORs) and highly extractive (HORs) references). The best value in each dimension is shown in bold.

Table 6: Kendall Tau (\(\tau\)) correlation coefficients for \(\text{Ensemble}_{\text{rel}}\) when \(\lambda=0.5\), in the same setting as Table 5.

## 7 Limitations

One limitation of the work is that the dataset for human evaluation is not big (252 samples). We utilized the dataset from (Fabbri et al., 2021), which is the only available benchmark human-annotated dataset for extractive evaluation. Even though 252 at first glance can appear small, note that each of the sample summaries was manually annotated along the four quality aspects of consistency, relevance, coherence and fluency by five separate crowd-sourced workers and three independent experts. Additionally, each sample has 11 reference summaries that were written by annotators. Annotators must carefully read, digest, and then summarize the entire document for each sample, which places a significant cognitive burden on them. Another perceived limitation of the work may be that the improvement in correlation for redundancy-aware _Sem-nCG_ is not large. We argue that we have considered multiple dimensions for assessing the metric, and that obtaining a metric that performs well in all the dimensions is a difficult task, one where the most popular metrics fail as well (Fabbri et al., 2021). The redundancy-aware _Sem-nCG_, however, clearly demonstrates improvement in the coherence and relevance dimensions for more abstractive references (about 20%), as well as in multi-reference evaluation. Additionally, it is yet to be determined how this metric can be applied in abstractive settings.

## 8 Ethics Statement

For the experiments, we used a publicly accessible dataset and anonymous human annotations. As a result, to the best of our knowledge, there are no ethical violations. Additionally, the evaluation of extractive summarization is a major aspect of this work. Hence, we consider it a low-risk research study.
2306.02910
Action-Evolution Petri Nets: a Framework for Modeling and Solving Dynamic Task Assignment Problems
Dynamic task assignment involves assigning arriving tasks to a limited number of resources in order to minimize the overall cost of the assignments. To achieve optimal task assignment, it is necessary to model the assignment problem first. While there exist separate formalisms, specifically Markov Decision Processes and (Colored) Petri Nets, to model, execute, and solve different aspects of the problem, there is no integrated modeling technique. To address this gap, this paper proposes Action-Evolution Petri Nets (A-E PN) as a framework for modeling and solving dynamic task assignment problems. A-E PN provides a unified modeling technique that can represent all elements of dynamic task assignment problems. Moreover, A-E PN models are executable, which means they can be used to learn close-to-optimal assignment policies through Reinforcement Learning (RL) without additional modeling effort. To evaluate the framework, we define a taxonomy of archetypical assignment problems. We show for three cases that A-E PN can be used to learn close-to-optimal assignment policies. Our results suggest that A-E PN can be used to model and solve a broad range of dynamic task assignment problems.
Riccardo Lo Bianco, Remco Dijkman, Wim Nuijten, Willem van Jaarsveld
2023-06-05T14:14:48Z
http://arxiv.org/abs/2306.02910v3
# Action-Evolution Petri Nets: a Framework for Modeling and Solving Dynamic Task Assignment Problems ###### Abstract Dynamic task assignment involves assigning arriving tasks to a limited number of resources in order to minimize the overall cost of the assignments. To achieve optimal task assignment, it is necessary to model the assignment problem first. While there exist separate formalisms, specifically Markov Decision Processes and (Colored) Petri Nets, to model, execute, and solve different aspects of the problem, there is no integrated modeling technique. To address this gap, this paper proposes Action-Evolution Petri Nets (A-E PN) as a framework for modeling and solving dynamic task assignment problems. A-E PN provides a unified modeling technique that can represent all elements of dynamic task assignment problems. Moreover, A-E PN models are executable, which means they can be used to learn close-to-optimal assignment policies through Reinforcement Learning (RL) without additional modeling effort. To evaluate the framework, we define a taxonomy of archetypical assignment problems. We show for three cases that A-E PN can be used to learn close-to-optimal assignment policies. Our results suggest that A-E PN can be used to model and solve a broad range of dynamic task assignment problems. Keywords: Petri Nets, Dynamic Assignment Problem, Business Process Optimization, Markov Decision Processes, Reinforcement Learning ## 1 Introduction During the execution of a business process, tasks become executable and resources become available to execute these tasks. As resources are assigned to tasks, they become unavailable to execute other tasks. Consequently, continuously assigning the right task to the right resource is essential to run a process efficiently. This problem is known as dynamic task assignment. The dynamic task assignment problem can be seen as a particular case of the _dynamic assignment problem_, which, according to [1], is the problem of assigning a fixed number of individuals to a sequence of tasks, so as to minimize the total cost of the allocations, which may include setup costs, travel costs, or other time-varying costs. This problem has been extensively studied in business process optimization [2] as well as related areas, such as manufacturing [3]. For the sake of brevity, we will employ the term "assignment problem" to indicate the general dynamic (task) assignment problem. To solve an assignment problem, it must first be modeled mathematically. Markov Decision Processes (MDPs) are a common technique for modeling assignment problems [4], and they are the standard interface for Reinforcement Learning (RL) algorithms [5]. The basic definition of an MDP involves a single agent interacting with an environment to maximize a cumulative reward, which is a global signal of the goodness of the actions chosen by the agent during a (possibly infinite) sequence of system states. In the context of business process optimization, the environment is the business process that must be executed, and the agent decides which task to assign to which resource. The reward is calculated based on what we want to optimize in the process, such as the total time resources spend working, the total cost of employing the resources, or the time customers spend waiting.
While MDPs provide a good formalism for modeling the agent's behavior, they consider the environment, in our case the business process, as a black box that provides rewards for the decisions taken by the agent without exposing its internal behavior. Moreover, they do not have an agreed-upon syntax and lack any type of graphical representation. On the other hand, (Colored) Petri Nets [6] are a well-known formalism for modeling a business process but have no inherent mechanisms for modeling and calculating the best decision in a given situation. Also, frameworks exist for many mathematical optimization techniques, such as linear programming and constraint programming, where problems can be modeled and solved without additional effort. However, no such framework exists for dynamic task assignment problems. To fill this gap, this paper presents a unified and executable framework for modeling assignment problems. We use the term "unified" to refer to the capability of expressing both the agent and the environment of the assignment problem in a single standardized notation, thus simplifying the modeling of new problems. We use the term "executable" to refer to the possibility of using the models to train and test decision-making algorithms (specifically RL algorithms) without additional effort. To this end, we propose a new artifact in the form of a modeling language with a solid mathematical foundation, namely the A-E Petri Net (A-E PN), which draws from the well-known Petri Net (PN) formalism to model assignment problems in a readable and executable manner. This paper pays particular attention to embedding the A-E PN formalism in the RL cycle, such that RL algorithms can be trained and used to solve assignment problems without additional effort. The proposed artifact is evaluated by modeling and solving a set of archetypical assignment problems. A taxonomy of assignment problem variants is proposed, and an example for each of the three main variants is modeled through A-E PN. An RL algorithm is trained on each instance, achieving close-to-optimal results. Apart from modeling each assignment problem as an A-E PN, no additional effort is required to achieve these results, empirically demonstrating that A-E PN constitutes a unified and executable framework for modeling and solving assignment problems. Against this background, the remainder of this paper is structured as follows. Section 2 is dedicated to a review of relevant literature. Section 3 introduces Timed-Arc Colored Petri Nets (T-A CPN). Section 4 is devoted to the formal definition of Action-Evolution Petri Nets and the description of the integration of A-E PN in the classic RL loop. In Section 5, an essential taxonomy of assignment problem variants is presented. A problem instance for each variant is modeled through A-E PN, and an RL algorithm is trained on each instance, obtaining close-to-optimal results. Section 6 discusses the proposed method's benefits and limitations and delineates the next research steps. ## 2 Related work To the best of our knowledge, this paper presents the first attempt at defining a unified and executable framework for assignment problems. In contrast, the relation between (generalized stochastic) Petri Nets and Markov Chains is well studied [7], but Markov Chains cannot be used to model and optimize (task assignment) decisions. Since Markov Decision Processes can be seen as an extension to Markov Chains, the idea of extending Petri Nets to model Markov Decision Processes follows naturally.
Several attempts at this exist in the literature, but none focus on the assignment problem. An overview of existing frameworks for modeling and solving dynamic optimization problems is presented in Table 1, listing, for each framework, the Petri Net variant employed, the scope of applicability, and whether the framework is unified and executable. The current work is presented in the last line. In [8], the authors define a CPN variant: Factored Petri Net (FPN). In FPNs, the transition probabilities are defined explicitly, and a reward is attached to each network state. A limitation of [8] is that actions must be input marks from a single source transition (a transition without input arcs), while our framework allows actions to be defined anywhere in the Petri net, thus allowing for more modeling flexibility. \begin{table} \begin{tabular}{l l l l l} \hline \hline Reference & PN & Scope & Unified & Executable \\ \hline [8] & FPN & Problems expressible as finite MDPs & Yes & Yes* \\ [9] & DPN & Problems expressible as finite MDPs & No & Yes \\ [10] & GSPN & A single power management problem & Yes & No \\ [11] & TCPN & A single manufacturing scheduling problem & Yes & No \\ [12] & TCPN & Manufacturing scheduling problems & Yes & No \\ This paper & A-E PN & Assignment problems & Yes & Yes \\ \hline \hline \end{tabular} * No executable example is provided. \end{table} Table 1: Comparison of existing frameworks for dynamic optimization.
## 3 Preliminaries This section provides the formal definition of Colored Petri Net (CPN) and Timed-Arc Colored Petri Net (T-A CPN), which will be used to define the new formalism. Colored Petri Net (CPN) [6] is an extension of Petri Nets (PN) in which tokens have different characteristics called colors. In the remainder of this section, we rely on the CPN definition provided in [13]. Definition 1 (Colored Petri Net): A CPN is defined as a tuple \(CPN=(\mathcal{E},P,T,F,C,G,E,I)\), such that: * \(\mathcal{E}\) is a finite set of types called color sets. Each color set must be finite and non-empty. * \(P\) is a finite set of places. * \(T\) is a finite set of transitions, such that \(P\cap T=\emptyset\) * \(F\subseteq P\times T\cup T\times P\) is a finite set of arcs. * \(C:P\rightarrow\mathcal{E}\) _is a color function that maps each place_ \(p\) _into a set of possible token colors. Each token on_ \(p\) _must have a color that belongs to the type_ \(C(p)\)_, which is called the place's color set._ * \(G\) _is a guard function. It is defined from_ \(T\) _into expressions such that for each_ \(t\in T\)_,_ \(G(t)\) _is a Boolean expression and_ \(Type(Var(G(t)))\subseteq\mathcal{E}\)_, where_ \(Type(x)\) _denotes the type of_ \(x\) _and_ \(Var(f)\) _denotes the set of free variables in the function_ \(f\)_._ * \(E\) _is an arc expression function. It is defined from_ \(F\) _into expressions such that for each_ \(f\in F\)_,_ \(Type(E(f))=C(P(f))_{MS}\) _and_ \(Type(Var(E(f)))\subseteq\mathcal{E}\) _where_ \(P(f)\) _is the place of_ \(f\)_. This means that each evaluation of the arc expression must yield a multi-set (indicated by the_ \(MS\) _subscript) over the color set attached to the corresponding place._ * \(I\) _is an initialization function. It is defined from_ \(P\) _into expressions such that_ \(\forall p\in P:Type(I(p))=C(p)_{MS}\)_. The initialization function determines the network's initial marking._ Definition 2 (Marking): A marking of a CPN is a function \(M\), such that for each place \(p\in P\), it defines a multi-set of colors \(C(p)\rightarrow\mathbb{N}\), which maps each possible color of the place to the number of times it occurs. For a place \(p\) with colors \(C(p)=\{c_{1},c_{2}\}\), we also write \(M(p)=c_{1}^{n}c_{2}^{m}\) to denote that \(p\) has \(n)\) token with color \(c_{1}\) and \(m\) tokens with color \(c_{2}\). Since a marking is a multi-set, multi-set operations, such as \(\geq\), \(+\), and \(-\), are available on markings. Definition 3 (Binding): For a transition \(t\), the variables \(Var(t)=Var(G(t))\cup\{Var(E(f))|f\in F,T(f)=t\}\) represent the set of variables from the guard function and the expressions on its arcs, where \(T(f)\) is the transition of arc \(f\). A binding of a transition \(t\in T\) is a function \(Y\) that maps each \(v\in Var(t)\) to a color, such that \(\forall v\in Var(v):Y(v)\in Type(v)\) and \(G(t)\langle Y\rangle\) evaluates to true, where \(f\langle Y\rangle\) denotes the evaluation of a function \(f\) with its free variables bound as \(Y\). For a transition \(t\) with variables \(Var(t)=\{v_{1},v_{2}\}\), we also write \(Y(t)=\langle v_{1}=c_{1},v_{2}=c_{2}\rangle\) to denote that the binding \(Y\) assigns color \(c_{1}\) to variable \(v_{1}\) and color \(c_{2}\) to variable \(v_{2}\). We now define the behavior of a CPN through its firing rules. Definition 4 (CPN Firing Rules): 1. A transition \(t\) is enabled in marking \(M\) for binding \(Y\) if and only if \(\forall(p,t)\in F:M(p)\geq E((p,t))\langle Y\rangle\). 2. 
An enabled transition can fire, changing the marking \(M\) into a marking \(M^{\prime}\), such that \(\forall p\in P:M^{\prime}(p)=M(p)-E((p,t))\langle Y\rangle+E((t,p))\langle Y\rangle\). The standard CPN definition assumes that the effect of a firing is always instantaneous. To account for time, we will refer to a modified version of the Timed-Arc Petri Net (T-A PN) formulation [14]. Our version defines a global clock, updated according to a next-event time progression. This is also the time management paradigm implemented in CPN Tools [15], a widely adopted software for Petri Nets modeling. Definition 5 (Timed-Arc Colored Petri Net): A T-A CPN is defined by a tuple \(TACPN=(\mathcal{E},P,T,F,C,G,E,I)\), where \(P,T,F,C,G,I\) are as in Definition 1, and \(\mathcal{E}\) and \(E\) are adapted as follows: * \(\mathcal{E}\) is a finite set of timed types called timed color sets. A color of a timed color set has both a value \(v\) and a time \(\tau\); we also denote this as \(v@\tau\). * \(E\) is an arc expression function. It is defined from \(F\) into tuples of two elements. For a given \(f\in F\), \(E(f)_{0}\) is defined the same as \(E\) in Definition 1, and \(E(f)_{1}\) is a scalar increment, thus \(\forall f\in F:Type(E(f)_{1})=\mathbb{N}\), that indicates the generated tokens' time with reference to the global clock. The second tuple element is ignored for arcs outgoing from places and incoming to transitions, since the scalar increment is only used when producing new tokens. Note that each color now has a time, and consequently, each color in a marking and in a binding has a time. For example, we can refer to the marking of a place \(p\) with \(M(p)=c_{1}@2^{1}c_{1}@3^{5}\) as the marking that has one token with color \(c_{1}\) at time \(2\) and five tokens with color \(c_{1}\) at time \(3\). With some abuse of notation, we will allow arc expression functions \(E(f)_{0}\) to ignore the time element of colors and leave it unaffected, and we will denote with \(c@e\) that an expression \(e\) only changes the time element of a timed color. We also extend the concept of marking to account for the presence of a global clock, which we need further on in the paper to define the transition rules for A-E PN. Definition 6 (Timed Marking): A timed marking is defined as the tuple \(TM=(M,\tau)\), where \(M\) is a marking and \(\tau\) is the current value of the global clock. The T-A CPN firing rule can then be expressed as follows: Definition 7 (T-A CPN Firing Rules): 1. Let \(t\) be a transition that is enabled in marking \(M\) for binding \(Y=\langle v_{1}=c_{1}@\tau_{1},v_{2}=c_{2}@\tau_{2},\ldots,v_{n}=c_{n}@\tau_{n}\rangle\) as in Definition 4 (using only \(E_{0}\) for \(E\)). The enabling time of the transition, denoted \(\tau_{E}\), is \(max(\tau_{1},\tau_{2},\ldots,\tau_{n})\). 2. An enabled transition \(t\) is time-enabled in timed marking \((M,\tau)\), if its enabling time \(\tau_{E}\) is less than or equal to \(\tau\), and there exists no transition \(t^{\prime}\) that is enabled in marking \(M\) for some binding \(Y^{\prime}\) with enabling time \(\tau^{\prime}_{E}<\tau_{E}\). 3. A transition \(t\) that is time-enabled in timed marking \((M,\tau)\) for binding \(Y\) with enabling time \(\tau_{E}\) can fire, changing the timed marking to \((M^{\prime},\tau_{E})\), where \(M^{\prime}\) is constructed, such that \(\forall p\in P:M^{\prime}(p)=M(p)-E((p,t))_{0}\langle Y\rangle+E((t,p))_{0}\langle Y\rangle@\tau_{E}+E((t,p))_{1}\). 4.
_When there exists no_ \(t\) _in timed marking_ \((M,\tau)\)_, for which there is a binding_ \(Y\)_, such that_ \(t\) _is time-enabled, the global clock_ \(\tau\) _is increased until there is._ In practice, point 4 can be performed by evaluating bindings that are enabled but not time-enabled. The binding that leads to the lowest enabling time reveals the minimal increase of the global clock, making it possible to update the global clock using a next-event time progression. ## 4 Action-Evolution Petri Nets This section extends the definition of T-A CPN to provide a model that can automatically learn close-to-optimal task assignment policies. This extension is called Action-Evolution Petri Nets (A-E PN). The new elements are first described informally, then a formal definition is provided. Finally, the definition is incorporated into the RL cycle, allowing for automated learning of close-to-optimal task assignment policies. ### Tags and Rewards The overall objective of A-E PN is to mimic the behavior of an agent that observes changes in the environment and acts upon those changes when possible. We will thus extend the CPN definition provided in the background section to distinguish two separate types of transitions: * **Actions**: transitions that represent actions taken by the agent. In the context of assignment problems, the firing of an action transition represents a single assignment. * **Evolutions**: transitions that represent events happening in the system independently of the actions taken by the agent. The firing of an evolution transition represents a single event in the environment, for example, the arrival of a new order. This distinction is expressed by associating every transition with a _transition tag_, which can be either \(A\) (action) or \(E\) (evolution), through a _transition tag function_ \(L\). We also extend the concept of marking to embed a _network tag_ \(l\), which can assume a single value in \(\{A,E\}\): only transitions associated with a tag of the same type as the network tag are allowed to fire. The network tag \(l\) must be updated every time no transitions with the same tag are available for firing. The _tag update function_ \(S\) performs the update by changing the network's tag from \(A\) to \(E\) or vice versa: \(S(l)=A\), if \(l=E\); \(S(l)=E\), if \(l=A\). We use the term _tag time frame_ to refer to the period between changes in the network tag. The objective of the RL cycle is the maximization of a cumulative reward over a (possibly infinite) horizon. To track rewards in A-E PN, we introduce a _transition reward function_ \(\mathcal{R}\) that associates a reward to the firing of any transition, and we embed the total reward accumulated by firing transitions, which we call the _network reward_ \(\rho\), in the network's marking. In general, a reward can be produced by any change in the environment, regardless of whether an action or an evolution produced such a change. For this reason, a reward is produced due to the firing of any transition, regardless of whether the transition is tagged as an action or an evolution. To comply with the classic RL cycle, rewards associated with evolutions are accumulated and awarded to the last action taken, possibly after a normalization operation (see subsection 4.3). To further clarify the basic mechanisms of A-E PN, the example in Fig. 1 provides an overview of a sequence of firings.
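Before walking through the figure, a minimal sketch of the bookkeeping this adds on top of a T-A CPN, using illustrative Python types (the names are not part of the formal definition):

```python
from dataclasses import dataclass

@dataclass
class TaggedMarking:
    marking: dict        # place -> multiset of (color, time) tokens
    tag: str = "E"       # network tag l: "A" (action) or "E" (evolution)
    clock: int = 0       # global clock tau
    reward: float = 0.0  # accumulated network reward rho

def update_tag(tag: str) -> str:
    """Tag update function S: flip the network between the action
    and evolution phases."""
    return "A" if tag == "E" else "E"
```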
The network shows the evolution of a system with two types of tasks, \(a\) and \(b\), and two employees, one that can undertake only task \(a\) and one that can undertake only task \(b\). A task of each type arrives at every clock tick, and an employee is assigned to a task of the same type. Assignments take one clock tick to complete, and a reward of 1 is produced every time an assignment is completed. The parentheses in the top right corner contain the components of the tagged marking that are not directly represented as network elements. Guard functions and reward functions are associated with single transitions. Timed tokens and arcs follow the notation introduced in Definition 5. The initial marking is presented in the dotted square \(a\), in which only \(E\) transitions are enabled. After two firings of transition _Arrive_, consuming both tokens in the _Arrival_ place (in any order), no evolution transitions are available, so the tag is updated, and the system transitions to state \(b\). Notice that the tag update from \(E\) to \(A\) does not produce a clock update, since actions are available to be taken at time 0. In \(b\), transition _Start_ is enabled. Here, the RL agent has two available actions: pairing task \(a\) with resource \(a\), or pairing task \(b\) with resource \(b\). In this example, both actions will be taken sequentially, in any order, leading to tagged marking \(c\); in the general case, a decision algorithm would have to choose which assignments to make. In \(c\), the network tag is again \(E\), and two transitions are associated with time-enabled steps: _Arrive_ and _Complete_. The firing of _Arrive_ produces two new tokens at time 1 in the _Waiting_ place, while the firing of _Complete_ places two tokens back in the _Resources_ place at time 1 and generates a network reward increment of 2 units in state \(d\).

Figure 1: A sequence of firings in a simple task assignment problem.

### Formal Definition of Action-Evolution Petri Net

To provide a formal definition of A-E PN, we must adapt three definitions from T-A CPN: the net itself, the marking, and the firing rules.

Definition 8 (Action-Evolution Petri Net): Let \(\mathcal{T}=\{A,E\}\) be a finite set of tags representing actions and evolutions, and \(S:\mathcal{T}\rightarrow\mathcal{T}\) a network tag update function. An Action-Evolution Petri Net (A-E PN) is a tuple \(AEPN=(\mathcal{E},P,T,F,C,G,E,I,L,l_{0},\mathcal{R},\rho_{0})\), where \(\mathcal{E},P,T,F,C,G,E,I\) follow Definition 5, and:

* \(L:T\rightarrow\mathcal{T}\) is a transition tag function that maps each transition \(t\) to a single tag. Only transitions associated with the same tag as the network can fire.
* \(l_{0}\in\mathcal{T}\) is the network's initial tag, usually equal to \(E\).
* \(\mathcal{R}:T\rightarrow(f:\mathbb{R})\) associates every transition with a reward function. The function can take timing properties or numbers of tokens (representing completed cases) as parameters, thus allowing for flexibility in modeling rewards.
* \(\rho_{0}\in\mathbb{R}\) is the initial network reward, usually equal to \(0\).

Definition 9 (Tagged Marking): A tagged marking is a tuple \(TM=(M,l,\tau,\rho)\), where the tuple \((M,\tau)\) is a timed marking, as in Definition 6, \(l\in\mathcal{T}\) is the network tag at the current time \(\tau\), and \(\rho\in\mathbb{R}\) is the total reward accumulated until the current time \(\tau\).

Definition 10 (A-E PN Firing Rule):
1. A transition \(t\) is tag-enabled in a tagged marking \((M,l,\tau,\rho)\) for binding \(Y\) if and only if \(t\) is enabled in \(M\) according to Definition 4, and \(L(t)=l\).
2. Let \(t\) be a transition that is tag-enabled in tagged marking \((M,l,\tau,\rho)\) for binding \(Y=\langle v_{1}=c_{1}@\tau_{1},v_{2}=c_{2}@\tau_{2},\ldots,v_{n}=c_{n}@\tau_{n}\rangle\). The enabling time of the transition, denoted \(\tau_{E}\), is \(max(\tau_{1},\tau_{2},\ldots,\tau_{n})\).
3. A tag-enabled transition \(t\) is tag-time-enabled in tagged marking \(TTM=(M,l,\tau,\rho)\) if its enabling time \(\tau_{E}\) is less than or equal to \(\tau\), and there exists no transition \(t^{\prime}\) that is tag-enabled in tagged marking \(TTM\) for some binding \(Y^{\prime}\) with enabling time \(\tau^{\prime}_{E}<\tau_{E}\).
4. A transition \(t\) that is tag-time-enabled in tagged marking \((M,l,\tau,\rho)\) for binding \(Y\) with enabling time \(\tau_{E}\) can fire, changing the tagged marking to \((M^{\prime},l,\tau_{E},\rho^{\prime})\), where \(M^{\prime}\) is constructed such that \(\forall p\in P:M^{\prime}(p)=M(p)-E((p,t))_{0}\langle Y\rangle+E((t,p))_{0}\langle Y\rangle@(\tau_{E}+E((t,p))_{1})\) and \(\rho^{\prime}=\rho+\mathcal{R}(t)\).
5. When there exists no \(t\) in tagged marking \(TTM=(M,l,\tau,\rho)\) for which there is a binding \(Y\) such that \(t\) is tag-time-enabled, the set of all transitions is partitioned into two disjoint sets: \(T_{current}=\{t\in T|L(t)=l\}\) and \(T_{next}=\{t\in T|L(t)\neq l\}\). Let \(\tau_{current}\) be the minimum value for which a transition in \(T_{current}\) is time-enabled (according to Definition 7), and let \(\tau_{next}\) be the minimum value for which a transition in \(T_{next}\) is time-enabled. Note that \(\tau_{current}\) and \(\tau_{next}\) can be undefined.
   * If \(\tau_{current}\) is defined, and \(\tau_{current}\leq\tau_{next}\) or \(\tau_{next}\) is undefined, only the global clock is updated, leading to a new tagged marking \(TTM^{\prime}=(M,l,\tau_{current},\rho)\).
   * If \(\tau_{next}\) is defined, and \(\tau_{current}>\tau_{next}\) or \(\tau_{current}\) is undefined, both the global clock and the network tag are updated, leading to a new tagged marking \(TTM^{\prime}=(M,S(l),\tau_{next},\rho)\).

### Extending the Reinforcement Learning Loop

Having completely defined the A-E PN formalism, we can clarify how it can be used to learn optimal task assignment policies (i.e., mappings from observations to assignments) by applying it in a Reinforcement Learning (RL) cycle. Figure 2 shows the RL cycle. In every step of the cycle, the agent receives an observation (a representation of the environment's state) and produces the single action that it considers best for this observation. The action leads to a change in the environment's state. The environment is responsible for providing a reward for the chosen action along with a new observation. Then the cycle repeats, and a new decision step takes place. The Markov Decision Process (MDP) formulation is the standard framework for training an agent to take actions that lead to the highest cumulative reward. In recent years, the embedding of neural networks in RL algorithms gave birth to the field of Deep Reinforcement Learning (DRL), achieving breakthroughs in settings such as playing board games [16] and robotic manipulation [17], as well as successful applications in domains like industrial process control [18] and healthcare [19].
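To make the integration concrete, the following is a minimal Gymnasium-style sketch of this cycle wrapped around an A-E PN, combining the clock/tag update of Definition 10 (point 5) with the observation manager, action manager, and reward normalization described later in this subsection. The `net` object and all of its helpers (`reset`, `tag`, `clock`, `bindings`, `fire`, `advance`, `token_counts`) are hypothetical stand-ins for an A-E PN implementation, not the API of the accompanying package.

```python
import gymnasium as gym
import numpy as np


class AEPNEnv(gym.Env):
    """Sketch: an A-E PN exposed through the standard Gymnasium interface."""

    def __init__(self, net, horizon=100):
        self.net = net          # the A-E PN acting as simulator
        self.horizon = horizon  # episode length in clock ticks

    def _to_next_decision(self):
        """Fire E transitions and advance the clock/tag (Definition 10, point 5)
        until an action binding is available; accumulate evolution rewards."""
        reward = 0.0
        while not (self.net.tag == "A" and self.net.bindings()):
            if self.net.tag == "E" and self.net.bindings():
                reward += self.net.fire(self.net.bindings()[0])
            else:
                self.net.advance()  # clock update, possibly flipping the tag
        return reward

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.net.reset()
        self._to_next_decision()  # roll forward to the first A tag frame
        return self._observation(), {}

    def step(self, action):
        tau = self.net.clock
        # Action manager: map the agent's action to a binding and fire it.
        reward = self.net.fire(self.net.bindings()[action])
        # Evolution rewards are awarded to the last action taken.
        reward += self._to_next_decision()
        # Normalize for elapsed clock ticks between decision steps.
        reward /= 1 + (self.net.clock - tau)
        terminated = self.net.clock >= self.horizon
        return self._observation(), reward, terminated, False, {}

    def _observation(self):
        # Observation manager: token counts per (place, color).
        return np.asarray(self.net.token_counts(), dtype=np.float32)
```

Any RL library that speaks the Gymnasium interface can then be trained against such an environment without further modeling effort.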
With the proliferation of robust DRL algorithms, the main hurdle in modeling new problems is the definition of the environment, which is usually represented as a black box, as in Fig. 2, thus leaving the implementation of the system's dynamics entirely to the modeler. The lack of a standardized interface makes the creation of new environments time-consuming and dependent on the modeler's coding skills. Moreover, even introducing small changes potentially requires substantial effort once the environment has been modeled. These observations motivate the effort to provide a unified and executable framework.

Figure 2: A common representation of the RL training cycle [5].

In Fig. 3, the classic RL cycle is extended to account for the presence of A-E PN. The main element is the A-E PN, which acts as a simulator for the whole process. The A-E PN communicates with the agent through two sub-components: the _observation manager_ and the _action manager_. The observation manager is invoked every time the tagged marking changes, regardless of whether the change is due to a firing or not. The new reward is stored, and the network tag is evaluated: if the tag is \(E\), no action is required, and control is given back to the A-E PN, which can fire a new \(E\) transition. If the tag is \(A\), the accumulated rewards are added up, and the result is divided by \(1+(\tau_{t+1}-\tau_{t})\). The resulting value is returned to the agent as \(r_{t+1}\). The reward value takes into account the possible misalignment between clock ticks (\(\tau\)) and RL steps (\(t\)), given by the fact that multiple actions can happen at the same \(\tau\). The observation manager also returns to the agent the new observation \(o_{t+1}\). For the set of experiments presented in the next section, the observation is built as a vector containing, for each place, the number of tokens of each color in the place's color set. The action manager is invoked every time the agent chooses an action \(a_{t}\), which it transforms into the corresponding binding \(B_{t}\) (associated with an action transition) to be fired.

Figure 3: The reinforcement learning cycle with A-E PN.

## 5 Evaluation

This section aims to show that A-E PN constitutes a unified and executable framework for expressing dynamic task assignment problems with different characteristics: in fact, all the examples were modeled using a single notation (except for color-specific functions on arcs, guards, and rewards), and an RL algorithm was trained on each problem without any additional development effort. We provide a (non-exhaustive) taxonomy of assignment problem variants based on [20]. We distinguish three archetypes of assignment problems.

* **Assignment Problem with Compatibilities**: resources are assigned to tasks according to a measure of compatibility. Two problem subclasses can be formulated:
  * **Assignment Problem with Hard Compatibilities**: resources can only be assigned to tasks if they are compatible. The dynamic task assignment problem in subsection 5.1 falls into this subclass.
  * **Assignment Problem with Soft Compatibilities**: resources can always be assigned to tasks, but different assignments result in different system behaviors. An example of such a problem is if multiple resources can perform a task, but some will be faster at it than others.
* **Assignment Problem with Multiple Assignments**: the same resource can be assigned to multiple tasks, or the same task can be assigned to multiple resources. Two problem subclasses can be formulated:
  * **Assignment Problem with Resource Capacity**: resources have a maximum capacity of tasks that they can undertake before being considered full. In the simplest case, each resource can only be busy with a single task at a time. The dynamic bin packing problem in subsection 5.2 provides a more elaborate example.
  * **Assignment Problem with Task Capacity**: tasks have a minimum capacity of resources that must be assigned to them before processing. In the simplest case, each task needs exactly one resource.
* **Assignment Problem with Dynamic Resources' Behavior**: resources have dynamic behavior. Two problem subclasses can be formulated:
  * **Assignment Problem with Action-Dependent Dynamic Resources' Behavior**: resources change their attribute values as a consequence of the actions taken. The dynamic order-picking problem in subsection 5.3 falls into this category.
  * **Assignment Problem with Action-Independent Dynamic Resources' Behavior**: resources change their attribute values as a consequence of evolutions in the environment. For example, resources may take breaks or go on holidays.

In the following sections, one example is detailed for each archetype. An example for each subclass is implemented in the provided Python package.

### Dynamic Task Assignment Problem with Hard Compatibilities

Let us consider a system that solves a task assignment problem, similar to the one presented in Fig. 1. At every clock tick, two tasks arrive: one has type \(r1\) and the other \(r2\). Two resources are available for the assignment: one can only undertake tasks of type \(r1\), while the other can undertake tasks of type \(r1\) or \(r2\). Once a task is assigned to a resource, completion always takes one clock tick, after which the resource becomes available for a new assignment. A resource cannot work on multiple tasks at the same time. A network reward of 1 is returned every time a task is assigned to a resource and every time an assignment completes, leading to a theoretical maximum reward of 200 over 100 clock ticks. The problem can be fully expressed in terms of A-E PN, as reported in Fig. 4.

### Dynamic Bin Packing Problem

In this scenario, we model a dynamic version of the bin packing problem where items (the problem tasks, characterized by their _weight_) arrive sequentially and must be allocated to two bins (the problem resources, characterized by the total weight of objects in the bin, _curr_, and the bin's total capacity, _tot_) that are emptied at every clock tick (except for the first tick, which is used to generate the objects to be put in the bins). The fullness of the bins before being emptied gives the measure of goodness of the objects' allocation, quantified as the weight of objects in the bin divided by the total bin capacity. This problem showcases how tokens' colors can be used to model non-trivial reward functions. In the example reported, three objects arrive in the system at every clock tick, one of weight 1 and two of weight 2. Two initially empty bins are available, one with capacity 2 and one with capacity 3. The optimal allocation would give a reward of 2 per tick, leading to a theoretical maximum reward of 200 over a 100-clock-tick horizon. The A-E PN formalization of the problem is reported in Fig. 5.

### Dynamic Order-Picking Problem

In this section, we present an example of action-dependent resource behavior (i.e., the resource's attribute values change as a consequence of the actions taken by the agent).
The example is a simple order-picking problem in which a single agent (the resource) moves on a square grid of size 2, trying to pick orders (the tasks). The agent's and the orders' colors are characterized by two parameters representing the coordinates on the grid (infinite capacity is assumed). The agent starts in position \((0,0)\) and can move left, right, up, or down, but not over a diagonal. If an order is in the same position as the agent, the latter can use an action to pick the order. A single order arrives at every clock tick, always in position \((1,1)\), and the order stays on the grid for exactly one clock tick, according to a time-to-live (TTL) parameter. The agent's objective is to pick as many orders as possible, so it gets a reward of 1 every time an order is picked, leading to a theoretical maximum reward of 98 over a 100-clock-tick horizon (at least two orders will be lost while the agent moves to position \((1,1)\)). The problem is formulated in terms of A-E PN in Fig. 6.

Figure 4: A-E PN initial marking for the dynamic task assignment problem.

Figure 5: A-E PN initial marking for the dynamic bin packing problem.

Figure 6: A-E PN initial marking for the dynamic order-picking problem.

### Experimental Results

All experiments were implemented in a proof-of-concept package1, relying on the Python programming language and the widely adopted RL library Gymnasium [21]. Proximal Policy Optimization (PPO) [22] with masking was used as the training algorithm. Specifically, the PPO implementation of the _Stable Baselines_ package [23] is used. Note, however, that the mapping from each A-E PN to PPO was automated and requires no further effort from the modeler. The PPO algorithm was trained on each example for \(10^{6}\) steps with 100 clock ticks per episode (completed in less than 2300 seconds on a mid-range laptop, without GPUs), always using the default hyperparameters. The experimental results were computed on (network) rewards obtained by the trained agent and by a random policy over 1000 trajectories, each of duration 100 clock ticks. In Table 2, the average and standard deviation of the rewards obtained by the trained PPO agent are compared to those of a random policy on each of the three presented problem instances, with reference to the maximum attainable reward. In all cases, PPO is able to learn a close-to-optimal assignment policy.

Footnote 1: The code is publicly available in [https://github.com/bpogroup/aepn-project](https://github.com/bpogroup/aepn-project).

## 6 Conclusions and Future Work

This paper presented a framework for modeling and solving dynamic task assignment problems. To this end, it introduced a new variant of Petri Nets, namely Action-Evolution Petri Nets (A-E PN), to provide a mathematically sound modeling tool. This formalism was integrated with the Reinforcement Learning (RL) cycle and consequently with existing algorithms that can solve RL problems. To evaluate the general applicability of the framework for modeling and solving task assignment problems, a taxonomy of archetypical problems was introduced, and working examples were provided. A DRL algorithm was trained on each implementation, obtaining close-to-optimal policies for each example. This result shows the suitability of A-E PN as a unified and executable framework for modeling and solving assignment problems. While the applicability of the framework was shown, its possibilities and limitations are yet to be fully explored.
This will be done in future research by expanding the provided taxonomy of assignment problems and considering different problem classes.

\begin{table} \begin{tabular}{l l l l} \hline \hline Instance & Random & PPO & Optimal \\ \hline Task Assignment & \(186.894\pm 2.084\) & \(199.852\pm 0.398\) & 200 \\ Bin Packing & \(186.746\pm 1.941\) & \(199.963\pm 0.186\) & 200 \\ Order Picking & \(6.046\pm 2.585\) & \(96.776\pm 2.019\) & 98 \\ \hline \hline \end{tabular} \end{table} Table 2: The results for the three presented problem instances.

## Acknowledgement

The research that led to this publication was partly funded by the European Supply Chain Forum (ESCF) and the Eindhoven Artificial Intelligence Systems Institute (EAISI) under the AI Planners of the Future program.
2307.04090
DebateKG: Automatic Policy Debate Case Creation with Semantic Knowledge Graphs
Recent work within the Argument Mining community has shown the applicability of Natural Language Processing systems for solving problems found within competitive debate. One of the most important tasks within competitive debate is for debaters to create high quality debate cases. We show that effective debate cases can be constructed using constrained shortest path traversals on Argumentative Semantic Knowledge Graphs. We study this potential in the context of a type of American Competitive Debate, called Policy Debate, which already has a large scale dataset targeting it called DebateSum. We significantly improve upon DebateSum by introducing 53180 new examples, as well as further useful metadata for every example, to the dataset. We leverage the txtai semantic search and knowledge graph toolchain to produce and contribute 9 semantic knowledge graphs built on this dataset. We create a unique method for evaluating which knowledge graphs are better in the context of producing policy debate cases. A demo which automatically generates debate cases, along with all other code and the Knowledge Graphs, are open-sourced and made available to the public here: https://huggingface.co/spaces/Hellisotherpeople/DebateKG
Allen Roush, David Mezzetti
2023-07-09T04:19:19Z
http://arxiv.org/abs/2307.04090v2
# DebateKG - Automatic Policy Debate Case Creation with Semantic Knowledge Graphs

###### Abstract

Recent work within the Argument Mining community has shown the applicability of Natural Language Processing systems for solving problems found within competitive debate. One of the most important tasks within competitive debate is for debaters to create high quality debate cases. We show that effective debate cases can be constructed using constrained shortest path traversals on Argumentative Semantic Knowledge Graphs. We study this potential in the context of a type of American Competitive Debate, called "Policy Debate", which already has a large scale dataset targeting it called "DebateSum". We significantly improve upon DebateSum by introducing 53180 new examples, as well as further useful metadata for every example, to the dataset. We leverage the txtai semantic search and knowledge graph toolchain to produce and contribute 9 semantic knowledge graphs built on this dataset. We create a unique method for evaluating which knowledge graphs are better in the context of producing policy debate cases. A demo which automatically generates debate cases, along with all other code and the Knowledge Graphs, is open-sourced and made available to the public here: [https://huggingface.co/spaces/Hellisotherpeople/DebateKG](https://huggingface.co/spaces/Hellisotherpeople/DebateKG)

## 1 Introduction

### Policy Debate

Persuasion has been of interest to humans since we first began communicating with each other. The formal process of using argumentation and rhetoric to convince others to see things one's own way is known as "debate". With varying levels of formality and intensity, these debates happen all around us every day. More formalized, competitive forms of debate are both highly educational and integral to the formation of a lawful and just society. There is a long and time-honored tradition of academic institutions and news organizations facilitating competitive debate. Many organizations and associations organize debate tournaments according to their differing traditions and rule sets. Some types of debate are more suited to be assisted with Natural Language Processing systems than others. A popular form of competitive debate done predominantly within United States high schools and universities is called "Policy Debate". Policy Debate maintains one extremely broad and open-ended topic over a whole year, and challenges teams to be ready to either affirm any plan which implements the topic, or to be ready to explain why the opposing team's plan is a bad idea. Policy Debate is a highly technical form of debate, which puts relatively little emphasis on the aesthetic quality of the speech act, and correspondingly strong emphasis on the quality of the delivered evidence and the delivered argumentation around it. For this reason, Policy Debate rewards teams who can present the maximum amount of evidence possible during their limited speaking time. This leads to a peculiar phenomenon known as "speed reading" or "spreading", which is normalized among most serious competitors. While Policy Debate's idiosyncrasies may end up making it less approachable for the general public to watch than other forms of debate, those very same traits make it a uniquely good source of data for NLP systems which generate high quality debate cases.

### Policy Debate Cases

Luckily, a large-scale dataset of Policy Debate evidence called DebateSum (Roush and Balaji, 2020) exists.
DebateSum includes all publicly available Policy Debate evidence gathered from 2013-2019, which totals over 180,000 pieces of evidence with corresponding abstractive and extractive summaries alongside rich metadata such as the citation author and word counts. Beyond its original targeted task of queryable word-level extractive summarization, DebateSum is an excellent dataset for the task of constructing Policy Debate cases. This is because most Policy Debate cases are highly standardized. In almost every Policy Debate round, each debater carefully reads a set of around 3-12 pieces of evidence, starting first with slowly reading the abstractive summary of the evidence (the "argument"), then formulaically reading the evidence citation, and then finally speed reading the extractive summary of the evidence that supports the argument. Moving from each piece of evidence to the next can sometimes be so imperceptible that debaters are instructed to add a slow verbal "next" to their speeches in between each piece of evidence. Each piece of evidence is likely to be highly related to the previous piece, as they are being chained together to advance the larger narrative of the debate case. This extractive format for debate case construction can be naturally performed by NLP systems which leverage ideas from the Information Retrieval, Graph Analysis, and Distributional Semantics communities.

### Semantic Knowledge Graphs

Knowledge Graphs are systems which store information about entities and relate them to each other using (often weighted) edges which express the relationships between the entities. We denote Knowledge Graphs in which each entity is a document or sentence, and in which weighted edges are constructed between entities based on their semantic similarity, as "Semantic Knowledge Graphs".

### txtai

Computing the semantic similarity between each entity and every other entity is an ideal place to leverage a large-scale language model. Approximate Nearest Neighbor (ANN) systems unlock viable semantic search over these entities, and storing and querying them is a natural place to leverage a database. We are fortunate in that software which does all of these things already exists, and it is called "txtai". Txtai is a Python software package for building AI-powered semantic search applications. Txtai features support for a wide variety of backends to power its aforementioned components. Txtai is a natural choice for building Semantic Knowledge Graphs.

## 2 Innovations Introduced

In this work, we introduce several innovations related to automatic Policy Debate case generation.

### DebateSum

We significantly improve the existing DebateSum dataset by adding the most recent three years of evidence (2020-2022), using the same preprocessing tools as discussed in Roush and Balaji (2020). This adds 53,180 documents, bringing the total number of documents within DebateSum to 240,566. We also add further metadata columns, indicating the source debate camp, the broad type of argument, and the topic-year, for all documents within DebateSum. The type of the argument is designated as the "tag". This metadata was extracted from the "openCaselist" project. Figure 1 shows how this metadata was represented on openCaselist. The additional metadata is particularly useful for more fine-grained information retrieval (e.g.
"Give me all evidence about the environment from Gonzaga debate camp in 2013") as well as for leveraging information about the type of debate argument (e.g. "Give me an argument about why individual states should do the plan from the arguments labeled as counterplans"). ### Contributed Semantic Graphs We use txtai to build 9 Semantic Knowledge Graphs, which differ based on which column of DebateSum was indexed semantically, and on the language model underlying language model used for similarity calculations. We leave all settings at their defaults during graph construction, which means that networkx is used for the graph backend, huggingface for the language models, faiss for the ANN index, and sqlite for the database. A table of these contributed models is presented in Appendix 1. Txtai automatically does topic modeling on each graph using the Louvain Blondel et al (2008) community detection algorithm. This data is stored as further information within the graph and unlocks a powerful way to constrain the topics of the generated arguments. ### DebateKG The system that we demonstrate is called "DebateKG". DebateKG is a huggingface "space" webapp which leverages the contributed Semantic Knowledge Graphs to build Policy Debate cases. Users can specify a starting, an ending, and any number of middle arguments. They can also specify any additional constraints, like on the topic, or on the contents of each piece of evidence. DebateKG extracts the evidence closest to the given arguments which meets the given constraints, and then connects these evidence examples together by calculating the constrained weighted shortest path between each evidence example. The portions of each extracted piece of evidence which match the previous portions are highlighted, which functions as a kind of interpretability. Since there are usually many paths which connect the given pieces of evidence together, there are also many viable debate cases which can be generated. We allow users to generate all possible connected paths (all debate cases), and we enable users to manually display any possible debate case and to interpret the connections between the evidence within them. Besides the automatic case construction functionality, users can also individually query for evidence using txtai's built in semantic SQL language, which helps in the construction of input arguments. Figure 2 shows a sample generated debate case from DebateKG. ## 3 Prior Work Many others have looked at the relationships between Graph Methods and Argumentation. The closest prior work to our own comes from IBM Project Debater Slonim et al. (2021). They Figure 1: The added metadata to DebateSum was parsed from tables on openCaselist, which associates each debate document with its camp, its tag (argument types), and its year. Figure 2: A Policy Debate Case created with DebateKG. Arguments are shown. The citation, read-aloud extracts, and evidence are omitted for brevity. The first and final argument are the inputs supplied by the user. The highlighted portions show the tokens with the highest similarity to the previous argument, and functions as interpretability. created a full debating system which they prominently pitted against champion parliamentary debaters. They defined a custom tailored, "simplified version" of the Parliamentary Debate style. Parliamentary Debate has dramatic differences compared to Policy Debate, namely that the topics are only known to each side 15 minutes ahead of time. 
As a result, Parliamentary Debate relies far less on evidence, usually only including small snippets as part of a larger speech. In Policy Debate, the vast majority of most of the opening speeches is recitation of extractive summaries of evidence for or against a position. This dramatically simplifies the required system for Policy Debate case generation. Project Debater utilizes many closed-source models, a massive but generalized corpus, and requires significantly more compute resources than DebateKG to run. Finally, Policy Debate is considered to be a more "rigorous" style of debate at its highest level than Parliamentary Debate, and requires dramatically more effort to participate in. An example of this can be found in the 2014-2015 National Parliamentary Tournament of Excellence (NPTE), the largest American college-level parliamentary debate tournament, where the winning team had no prior Parliamentary Debate experience and was otherwise a good but not champion Policy Debate team 2. Their defeated opponents had been undefeated for the prior 3 years that they competed in the national tournament.

Footnote 2: A recording of that final debate round and results can be found here: [https://www.youtube.com/watch?v=l9HJ6Iq6Vas](https://www.youtube.com/watch?v=l9HJ6Iq6Vas)

Further work from IBM exists about Knowledge Graphs being directly used for Argument Generation (Khatib et al., 2021). Their work explores how to utilize KG-encoded knowledge to fine-tune GPT-2 to generate arguments. Our system is extractive in nature, as it creates debate cases by chaining together evidence from DebateSum utilizing graph traversals. Extractive systems are far more appropriate for Policy Debate. There is fascinating work that applies the idea of Graph Neural Networks to predicting the way that each member of a legislative branch will vote on an input motion (Sawhney et al., 2020). Our work does not try to predict how judges will vote based on any inputs, but instead generates debate cases given input arguments. Their work is in the context of elected officials, whereas ours is in the context of high school and collegiate competitive debate. There is also work related to trying to understand the arguments made within these legislative Parliamentary Debates (Tamper et al., 2022). Knowledge Graphs have also been utilized for fact-checked arguments. ClaimsKG (Tchechmedjiev et al., 2020) is an example, which indexes a wide variety of fact checking websites and annotates them. DebateSum and its contributed KGs do not have fact checking information directly, since it is considered the debater's job to convince the judge of the truth of each presented piece of evidence. DebateSum and DebateKG are also significantly larger in size than ClaimsKG and its training corpus. Work related to automatically evaluating the quality of arguments using Knowledge Graphs exists (Dolz et al., 2022). In their work, they leverage a dataset of debate, the VivesDebate corpus, to identify if an argument is likely to "win". They also recognized the potential for graph traversals to form arguments, or whole debate cases (see figures 2 and 3 from their work). VivesDebate is significantly smaller and less encompassing than DebateSum, and DebateSum does not have information about how successful the arguments within it are. Other work, which recognizes the potential for paths within knowledge graphs to form arguments, exists (Das et al., 2017).
The idea of using "debate dynamics" to present evidence for graph classification has been extensively explored (Hildebrandt et al., 2020). They imagine triple classification and link prediction in graphs as a figurative "debate game" between two reinforcement learning agents who extract "arguments" (paths) which support or oppose a hypothesis. A final binary classifier "judge" votes based on the presented "arguments". They show parallels within Graph Analysis algorithm development to the ideas that we present, but they evaluate this algorithm on non-argumentative datasets. To our knowledge, we are the first work to explore "arguments" (constrained paths) within Knowledge Graphs on an argumentative dataset.

## 4 Details

The DebateKG demo is hosted on huggingface3. In this section, we describe the details of DebateKG and its underlying Semantic Knowledge Graphs.

Footnote 3: The link to that demo is here: [https://huggingface.co/spaces/Hellisotherpeople/DebateKG](https://huggingface.co/spaces/Hellisotherpeople/DebateKG)

Footnote 4: An analysis of the pretrained models can be found here:

### Underlying Language Models

Txtai supports several language modeling backends, the most modern of which is sentence transformers (Reimers and Gurevych, 2019). Besides the many pre-trained language models which are designed for Semantic Textual Similarity or for Sentence Modeling, any Transformer model can be transformed into a "sentence transformer" model with nothing more than a pooling layer added. We choose three language models for building the Knowledge Graphs. The first is the recommended model from the sentence transformers documentation 4, "all-mpnet-base-v2". We are also curious about the potential usefulness of language models which are fine-tuned in a domain similar to DebateSum, such as the legal domain. We choose "legal-bert-base-uncased" (Chalkidis et al., 2020) for this reason, as it is trained on a diverse legal corpus. Finally, we are curious about language models which can model long sequences. We choose "allenai/longformer-base-4096" (Beltagy et al., 2020) due to its potential to model sequences up to 4096 tokens long directly.

### Importance of Granularity

For each piece of evidence in DebateSum, there is an associated abstractive summary and biased extractive summary. Since, at the time of writing, txtai and DebateKG can only semantically index one text column at a time, the choice of which column to index and at what granularity is highly important. There are merits and drawbacks to each approach. For this reason, we construct Graphs which index two of these columns (denoted "DebateKG-ext" and "DebateKG-abs"). We also construct graphs which index each individual sentence of the full document (denoted as "DebateKG-sent"). These graphs are significantly larger, but are potentially far more potent, since the recommended sentence transformers models are designed for the sentence granularity, and because the other two models are average-pooled, so long sequences dilute their embeddings.

### Importance of Settings

DebateKG computes the semantic similarity between each pair of entities, and connects the entities whose similarity is greater than a user-defined threshold. We use the default threshold of 0.10, and each entity has a limit of no more than 100 edges.
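As a concrete illustration of this construction, the following minimal txtai sketch builds a small semantic graph with the settings above and extracts a two-argument chain. The indexed texts and queries are invented placeholders, the column choice is arbitrary, and the exact configuration keys and id handling of the `showpath` helper may differ between txtai versions; treat this as a sketch of the toolchain rather than the exact DebateKG code.

```python
from txtai.embeddings import Embeddings

# Placeholder stand-ins for one DebateSum column (e.g., the abstractive
# summaries); the real graphs index hundreds of thousands of rows.
arguments = [
    "federal renewable energy investment spurs economic growth",
    "economic decline heightens the risk of great-power war",
    # ...
]

embeddings = Embeddings({
    "path": "sentence-transformers/all-mpnet-base-v2",
    "content": True,
    # Mirrors the defaults above: similarity threshold 0.10, at most
    # 100 edges per entity, Louvain topic modeling enabled.
    "graph": {"minscore": 0.10, "limit": 100, "topics": {}},
})
embeddings.index((uid, text, None) for uid, text in enumerate(arguments))

# Semantic SQL locates the evidence closest to the start and end arguments.
start = embeddings.search(
    "select id, text from txtai where similar('states should act')", 1)[0]["id"]
end = embeddings.search(
    "select id, text from txtai where similar('war causes extinction')", 1)[0]["id"]

# A weighted shortest path over the semantic graph is a candidate debate case.
case = embeddings.graph.showpath(start, end)
```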
Changes in these settings, such as lowering the threshold and increasing the entity limit, will result in more highly connected and correspondingly larger graphs.

### Policy Debate Case Construction

The shortest paths, which minimize the semantic distance between the input arguments, are also Policy Debate Arguments5. One or more of these Arguments can be concatenated to form Policy Debate Cases. The ideal Policy Debate Argument uses the minimum number of spoken words. This enables competitors to make more arguments, and to make broader and stronger cases.

Footnote 5: And in fact, any path on this graph can be an Argument.

Beyond a naive shortest path calculation on the whole graph, we can control how Debate Cases are constructed by choosing to run these calculations on subgraphs. These subgraphs include only entities which fulfil a particular constraint - enabling things like arguments where all of the evidence stays on a particular topic, or always includes a keyword, or even where no piece of evidence is longer than a certain number of words. Related to the idea of minimizing the number of words spoken out loud within each debate case, we can also modify the scoring function used within the shortest path calculations to account for, and try to minimize, the length of the evidence extracts. This has the advantage over selecting subgraphs of allowing for the inclusion of long documents within the argument if they are actually the most appropriate.

### Value of Knowledge Graphs

While an exhaustive analysis of these Knowledge Graphs is beyond the scope of this paper, it is important to recognize that techniques and algorithms from the Graph Analysis literature can be particularly illuminating. Centrality algorithms, like Pagerank (Page et al., 1999), will find evidence which is highly applicable to many arguments. Community detection, also known as clustering, finds evidence which is highly related to other evidence. A treasure trove of insights into DebateSum is unlocked for those willing to explore the Semantic Knowledge Graphs.

## 5 Evaluation

DebateSum does not include any data indicating if an argument is "strong", or if it is likely to win or not. It also does not have similarity labels between each example or even between pairs of samples. This means that it is challenging to compare the argumentation quality of each graph. Fortunately, it is simple to look at the lengths of the spoken-aloud extracts. Since Policy Debaters are trying to minimize the time spent on each argument, they will prefer Graphs that extract evidence chains with shorter extracts. Thus, we evaluate each graph based on how long the created Debate Cases' extracts are. We choose 10 input argument pairs (a table of which is included within the github repo) and rank each graph based on the average length of the read-aloud extracts from the generated debate cases across all 10 of these argument pairs. Table 1 shows the results of this experiment. Due to the unique and small-scale nature of our evaluation, we hope that future work can find more effective ways to evaluate Semantic Knowledge Graphs in an argumentative context.

## 6 Conclusion

In this paper, we significantly expanded and improved an existing large-scale argument mining dataset called "DebateSum". We created 9 Semantic Knowledge Graphs using the "txtai" Semantic AI toolkit. We showed how constrained shortest path traversals on these graphs can be used to create Policy Debate Cases.
We created a System Demonstration of this called DebateKG, which is a "space" webapp hosted on huggingface. We discussed the implementation details of this system. We proposed a way for Policy Debaters to decide which graph is better for their needs, and evaluated our systems using this technique. We open-source all data, code, and graphs.

### Limitations

The largest of the contributed Semantic Graphs, denoted "DebateKG-sent", can require as much as 100 GB of free disk space when uncompressed (which is required to leverage them). All training and creation of these graphs was performed on a personal computer with an RTX 3080ti GPU, an i7 8700K CPU, and 32 GB of RAM.

\begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Model** & **Average Words in Case** \\ \hline Mpnet-DebateKG-abs & 406 \\ \hline Mpnet-DebateKG-ext & 305 \\ \hline Mpnet-DebateKG-sent & 760 \\ \hline legalbert-DebateKG-abs & 502 \\ \hline legalbert-DebateKG-ext & **230** \\ \hline legalbert-DebateKG-sent & 709 \\ \hline longformer-DebateKG-abs & 500 \\ \hline longformer-DebateKG-ext & 457 \\ \hline longformer-DebateKG-sent & 301 \\ \hline \end{tabular} \end{table} Table 1: Results of the experiment on the 10 sample argument pairs.

American Policy Debate is almost always performed in English, and it is unlikely that suitable training data targeting it outside of English will be created in the near future. DebateSum is crowd-sourced from high school and college Policy Debate camp attendees. The evidence found within DebateSum, as well as the additions included within this paper, may have some annotation and/or parsing errors. This is because, while the general layout of evidence is agreed upon by all, there is much variance in the formatting.

## Ethics Statement

Philosophy, Law, Politics, Economics, and other Social Sciences are particularly well represented within DebateSum due to its nature as an argumentative dataset. The Policy Debate community has strong norms and supervision related to the included content, which make the risk of hurtful or harmful content being included low. Still, the possibility of problematic content being included cannot be fully eliminated. DebateKG is an extractive system. While extractive systems have far lower abuse potential compared to generative systems, the risk of abuse is also not totally eliminated. A "dialectic", according to the ancient philosopher Plato, is a dialogue held between two or more people for the purposes of finding truth. By contrast, a "debate", as far as competitors are concerned, is nothing more than a game of rhetorical persuasion played with real-life evidence and situations. While most evidence within DebateSum is fully cited and is generally high quality, the way that the evidence is summarized is biased towards the targeted argument that the competitor was trying to craft. We also point out that DebateSum is not necessarily factual or "truthful". While the evidence within it should have almost no direct "lies", "fabrications", or "fake-news", the evidence can still be misleading or lacking important context.
2310.19368
Color Equivariant Convolutional Networks
Color is a crucial visual cue readily exploited by Convolutional Neural Networks (CNNs) for object recognition. However, CNNs struggle if there is data imbalance between color variations introduced by accidental recording conditions. Color invariance addresses this issue but does so at the cost of removing all color information, which sacrifices discriminative power. In this paper, we propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum while retaining important color information. We extend the notion of equivariance from geometric to photometric transformations by incorporating parameter sharing over hue-shifts in a neural network. We demonstrate the benefits of CEConvs in terms of downstream performance to various tasks and improved robustness to color changes, including train-test distribution shifts. Our approach can be seamlessly integrated into existing architectures, such as ResNets, and offers a promising solution for addressing color-based domain shifts in CNNs.
Attila Lengyel, Ombretta Strafforello, Robert-Jan Bruintjes, Alexander Gielisse, Jan van Gemert
2023-10-30T09:18:49Z
http://arxiv.org/abs/2310.19368v1
# Color Equivariant Convolutional Networks

###### Abstract

Color is a crucial visual cue readily exploited by Convolutional Neural Networks (CNNs) for object recognition. However, CNNs struggle if there is data imbalance between color variations introduced by accidental recording conditions. Color invariance addresses this issue but does so at the cost of removing all color information, which sacrifices discriminative power. In this paper, we propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum while retaining important color information. We extend the notion of equivariance from geometric to photometric transformations by incorporating parameter sharing over hue-shifts in a neural network. We demonstrate the benefits of CEConvs in terms of downstream performance to various tasks and improved robustness to color changes, including train-test distribution shifts. Our approach can be seamlessly integrated into existing architectures, such as ResNets, and offers a promising solution for addressing color-based domain shifts in CNNs.

## 1 Introduction

Color is a powerful cue for visual object recognition. Trichromatic color vision in primates may have developed to aid the detection of ripe fruits against a background of green foliage [38; 45]. The benefit of color vision here is two-fold: not only does color information improve foreground-background segmentation by rendering foreground objects more salient, color also allows diagnostics, e.g. identifying the type (orange) and ripeness (green), where color is an intrinsic property facilitating recognition [3], as illustrated in Fig. 1a. Convolutional neural networks (CNNs) too exploit color information by learning color selective features that respond differently based on the presence or absence of a particular color in the input [42]. Unwanted color variations, however, can be introduced by accidental scene recording conditions such as illumination changes [29; 48], or by low color-diagnostic objects occurring in a variety of colors, making color no longer a discriminative feature but rather an undesired source of variation in the data. Given a sufficiently large training set that encompasses all possible color variations, a CNN learns to become robust by learning color invariant and equivariant features from the available data [36; 37]. However, due to the long tail of the real world it is almost impossible to collect balanced training data for all scenarios. This naturally leads to color distribution shifts between training and test time, and an imbalance in the training data where less frequently occurring colors are underrepresented. As CNNs often fail to generalize to out-of-distribution test samples, this can have a significant impact on many real-world applications, e.g. a model trained mostly on red cars may struggle to recognize the exact same car in blue. _Color invariance_ addresses this issue through features that are by design invariant to color changes and therefore generalize better under appearance variations [14; 17]. However, color invariance comes at the loss of discriminative power as valuable color information is removed from the model's internal feature representation [18]. We therefore propose to equip models with the less restrictive _color equivariance_ property, where features are explicitly shared across different colors through a hue transformation on the learned filters.
This allows the model to generalize across different colors, while at the same time also retaining important color information in the feature representation. An RGB pixel can be decomposed into an orthogonal representation by the well-known hue-saturation-value (HSV) model, where hue represents the chromaticity of a color. In this work we extend the notion of equivariance from geometric to photometric transformations by hard-wiring parameter sharing over hue-shifts in a neural network. More specifically, we build upon the seminal work of Group Equivariant Convolutions [7] (GConvs), which implements equivariance to translations, flips and rotations of multiples of 90 degrees, and formulates equivariance using the mathematical framework of symmetry groups. We introduce Color Equivariant Convolutions (CEConvs) as a novel deep learning building block, which implements equivariance to the \(H_{n}\) symmetry group of discrete hue rotations. CEConvs share parameters across hue-transformed filters in the input layer and store color information in hue-equivariant feature maps. CEConv feature maps contain an additional dimension compared to regular CNNs, and as a result, require larger filters and thus more parameters for the same number of channels. To evaluate equivariant architectures, it is common practice to reduce the width of the network to match the parameter count of the baseline model. However, this approach introduces a trade-off between equivariance and model capacity, where particularly in deeper layers the quadratic increase in parameter count of CEConv layers makes equivariance computationally expensive. We therefore investigate hybrid architectures, where early color invariance is introduced by pooling over the color dimension of the feature maps. Note that early color invariance is maintained throughout the rest of the network, despite the use of regular convolutional layers after the pooling operation. Limiting color equivariant filters to the early layers is in line with the findings that early layers tend to benefit the most from equivariance [5] and learn more color selective filters [37; 42]. We rigorously validate the properties of CEConvs empirically through precisely controlled synthetic experiments, and evaluate the performance of color invariant and equivariant ResNets on various more realistic classification benchmarks. Moreover, we investigate the combined effects of color equivariance and color augmentations. Our experiments show that CEConvs perform on par with or better than regular convolutions, while at the same time significantly improving the robustness to test-time color shifts, and are complementary to color augmentations.

Figure 1: Color plays a significant role in object recognition. (a) The absence of color makes flowers less distinct from their background and thus harder to classify. The characteristic purple-blue color of the Monkshood (Class A) enables a clear distinction from the Snapdragon (Class B) [35]. On the other hand, relying too much on colors might negatively impact recognition under color variations within the same flower class. (b) Image classification performance on the Flower-102 dataset [35] under a gradual variation of the image hue. Test-time hue shifts degrade the performance of CNNs (ResNet-18) drastically. Grayscale images and color augmentations result in invariance to hue variations, but fail to capture all the characteristic color features of flowers. Our color equivariant network (CE-ResNet-18-1) enables feature sharing across the color spectrum, which helps generalise to underrepresented colors in the dataset, while preserving discriminative color information, improving classification for unbalanced color variations.

The main contributions of this paper can be summarized as follows:

* We show that convolutional neural networks benefit from using color information, and at the same time are not robust to color-based domain shifts.
* We introduce Color Equivariant Convolutions (CEConvs), a novel deep learning building block that allows feature sharing between colors and can be readily integrated into existing architectures such as ResNets.
* We demonstrate that CEConvs improve robustness to train-test color shifts in the input.

All code and experiments are made publicly available on [https://github.com/Attila94/CEConv](https://github.com/Attila94/CEConv).

## 2 Related work

**Equivariant architectures.** Translation equivariance is a key property of convolutional neural networks (CNNs) [23; 28]: shifting the input to a convolution layer results in an equally shifted output feature map. This allows CNNs to share filter parameters over spatial locations, which improves both parameter and data efficiency as the model can generalize to new locations not covered by the training set. A variety of methods have extended equivariance in CNNs to other geometric transformations [44], including the seminal Group Equivariant Convolutions [7] for rotations and flips, and other works concerning rotations [2; 30; 52], scaling [50; 53] and arbitrary Lie groups [32]. Yet to date, equivariance to photometric transformations has remained largely unexplored. Offset equivariant networks [9] constrain the trainable parameters such that an additive bias to the RGB input channels results in an equal bias in the output logits. By applying a log transformation to the input the network becomes equivariant to global illumination changes according to the Von Kries model [13]. In this work we explore an alternative approach to photometric equivariance inspired by the seminal Group Equivariant Convolution [7] framework.

**Color in CNNs.** Recent research has investigated the internal representation of color in Convolutional Neural Networks (CNNs), challenging the traditional view of CNNs as black boxes. For example, [41; 42] introduces the Neuron Feature visualization technique and characterizes neurons in trained CNNs based on their color selectivity, assessing whether a neuron activates in response to the presence of color in the input. The findings indicate that networks learn highly color-selective neurons across all layers, emphasizing the significance of color as a crucial visual cue. Additionally, [43] classifies neurons based on their class selectivity and observes that early layers contain more class-agnostic neurons, while later layers exhibited high class selectivity. A similar study has been performed in [12], further supporting these findings. [36; 37] investigate learned symmetries in an InceptionV1 model trained on ImageNet [10] and discover filters that demonstrated equivariance to rotations, scale, hue shifts, and combinations thereof. These results motivate color equivariance as a prior for CNNs, especially in the first layers.
Moreover, in this study, we will employ the metrics introduced by [42] to provide an explanation for several of our own findings.

**Color priors in deep learning.** Color is an important visual discriminator [15; 19; 51]. In classical computer vision, color invariants are used to extract features from an RGB image that are more consistent under illumination changes [14; 17; 18]. Recent studies have explored using color invariants as a preprocessing step to deep neural networks [1; 33] or incorporating them directly into the architecture itself [29], leading to improved robustness against time-of-day domain shifts and other illumination-based variations in the input. Capsule networks [22; 47], which use groups of neurons to represent object properties such as pose and appearance, have shown encouraging results in image colorization tasks [39]. Quaternion networks [16; 54] represent RGB color values using quaternion notation, and employ quaternion convolutional layers, resulting in moderate improvements in image classification and inpainting tasks. Building upon these advancements, we contribute to the ongoing research on integrating color priors within deep neural architectures.

## 3 Color equivariant convolutions

### Group Equivariant Convolutions

A CNN layer \(\Phi\) is equivariant to a symmetry group \(G\) if for all transformations \(g\in G\) on the input \(x\) the resulting feature mapping \(\Phi(x)\) transforms similarly, i.e., first doing a transformation and then the mapping is similar to first doing the mapping and then the transformation. Formally, equivariance is defined as \[\Phi(T_{g}x)=T_{g}^{\prime}\Phi(x),\quad\forall g\in G, \tag{1}\] where \(T_{g}\) and \(T_{g}^{\prime}\) are the transformation operators of group action \(g\) on the input and feature space, respectively. Note that \(T_{g}\) and \(T_{g}^{\prime}\) can be identical, as is the case for translation equivariance where shifting the input results in an equally shifted feature map, but do not necessarily need to be. A special case of equivariance is invariance, where \(T_{g}^{\prime}\) is the identity mapping and the input transformation leaves the feature map unchanged: \[\Phi(T_{g}x)=\Phi(x),\quad\forall g\in G. \tag{2}\] We use the definition from [7] to denote the \(i\)-th output channel of a standard convolutional layer \(l\) in terms of the correlation operation \((\star)\) between a set of feature maps \(f\) and \(C^{l+1}\) filters \(\psi\): \[[f\star\psi^{i}](x)=\sum_{y\in\mathbb{Z}^{2}}\sum_{c=1}^{C^{l}}f_{c}(y)\psi_{c}^{i}(y-x). \tag{3}\] Here \(f:\mathbb{Z}^{2}\rightarrow\mathbb{R}^{C^{l}}\) and \(\psi^{i}:\mathbb{Z}^{2}\rightarrow\mathbb{R}^{C^{l}}\) are functions that map pixel locations \(x\) to a \(C^{l}\)-dimensional vector. This definition can be extended to groups by replacing the translation \(x\) by a group action \(g\): \[[f\star\psi^{i}](g)=\sum_{y\in\mathbb{Z}^{2}}\sum_{c}^{C^{l}}f_{c}(y)\psi_{c}^{i}(g^{-1}y) \tag{4}\] As the resulting feature map \(f\star\psi^{i}\) is a function on \(G\) rather than \(\mathbb{Z}^{2}\), the inputs and filters of all hidden layers should also be defined on \(G\): \[[f\star\psi^{i}](g)=\sum_{h\in G}\sum_{c}^{C^{l}}f_{c}(h)\psi_{c}^{i}(g^{-1}h) \tag{5}\] Invariance to a subgroup can be achieved by applying a pooling operation over the corresponding cosets. For a more detailed introduction to group equivariant convolutions, please refer to [4; 7].

### Color Equivariance

We define color equivariance as equivariance to hue shifts.
The HSV color space encodes hue by an angular scalar value, and a hue shift is performed as a simple additive offset followed by a modulo operator. When projecting the HSV representation into three-dimensional RGB space, the same hue shift becomes a rotation along the \([1,1,1]\) diagonal vector. We formulate hue equivariance in the framework of group theory by defining the group \(H_{n}\) of multiples of \(360/n\)-degree rotations about the \([1,1,1]\) diagonal vector in \(\mathbb{R}^{3}\) space. \(H_{n}\) is a subgroup of the \(SO(3)\) group of all rotations about the origin of three-dimensional Euclidean space. We can parameterize \(H_{n}\) in terms of integers \(k,n\) as

\[H_{n}(k)=\begin{bmatrix}\cos(\frac{2k\pi}{n})+a&a-b&a+b\\ a+b&\cos(\frac{2k\pi}{n})+a&a-b\\ a-b&a+b&\cos(\frac{2k\pi}{n})+a\end{bmatrix} \tag{6}\]

with \(n\) the total number of discrete rotations in the group, \(k\) the rotation, \(a=\frac{1}{3}-\frac{1}{3}\cos(\frac{2k\pi}{n})\) and \(b=\sqrt{\frac{1}{3}}\sin(\frac{2k\pi}{n})\). The group operation is matrix multiplication, which acts on the continuous \(\mathbb{R}^{3}\) space of RGB pixel values. The derivation of \(H_{n}\) is provided in Appendix A.

**Color Equivariant Convolution (CEConv)** Let us define the group \(G=\mathbb{Z}^{2}\times H_{n}\), which is a direct product of the \(\mathbb{Z}^{2}\) group of discrete 2D translations and the \(H_{n}\) group of discrete hue shifts. We can then define the Color Equivariant Convolution (CEConv) in the input layer as:

\[[f\star\psi^{i}](x,k)=\sum_{y\in\mathbb{Z}^{2}}\sum_{c=1}^{C^{l}}f_{c}(y)\cdot H_{n}(k)\psi^{i}_{c}(y-x). \tag{7}\]

We furthermore introduce the operator \(\mathcal{L}_{g}=\mathcal{L}_{(t,m)}\), including translation \(t\) and hue shift \(m\), acting on input \(f\) defined on the plane \(\mathbb{Z}^{2}\):

\[[\mathcal{L}_{g}f](x)=[\mathcal{L}_{(t,m)}f](x)=H_{n}(m)f(x-t). \tag{8}\]

Since \(H_{n}\) is an orthogonal matrix, the dot product between a hue shifted input \(H_{n}f\) and a filter \(\psi\) is equal to the dot product between the original input \(f\) and the inverse hue shifted filter \(H_{n}^{-1}\psi\):

\[H_{n}f\cdot\psi=(H_{n}f)^{T}\psi=f^{T}H_{n}^{T}\psi=f\cdot H_{n}^{T}\psi=f\cdot H_{n}^{-1}\psi. \tag{9}\]

Then the equivariance of the CEConv layer can be derived as follows (using \(C^{l}=1\) for brevity):

\[\begin{split}[[\mathcal{L}_{(t,m)}f]\star\psi^{i}](x,k)&=\sum_{y\in\mathbb{Z}^{2}}H_{n}(m)f(y-t)\cdot H_{n}(k)\psi^{i}(y-x)\\ &=\sum_{y\in\mathbb{Z}^{2}}f(y)\cdot H_{n}(m)^{-1}H_{n}(k)\psi^{i}(y-(x-t))\\ &=\sum_{y\in\mathbb{Z}^{2}}f(y)\cdot H_{n}(k-m)\psi^{i}(y-(x-t))\\ &=[f\star\psi^{i}](x-t,k-m)\\ &=[\mathcal{L^{\prime}}_{(t,m)}[f\star\psi^{i}]](x,k)\end{split} \tag{10}\]

Since input \(f\) and feature map \([f\star\psi]\) are functions on \(\mathbb{Z}^{2}\) and \(G\), respectively, \(\mathcal{L}_{(t,m)}\) and \(\mathcal{L^{\prime}}_{(t,m)}\) represent two equivalent operators acting on their respective groups. For all subsequent hidden layers the input \(f\) and filters \(\psi^{i}\) are functions on \(G\) parameterized by \(x,k\), and the hidden layer for CEConv is defined as:

\[[f\star\psi^{i}](x,k)=\sum_{y\in\mathbb{Z}^{2}}\sum_{r=1}^{n}\sum_{c=1}^{C^{l}}f_{c}(y,r)\cdot\psi^{i}_{c}(y-x,(r-k)\%n), \tag{11}\]

where \(n\) is the number of discrete rotations in the group and \(\%\) is the modulo operator. In practice, applying a rotation to RGB pixels may cause some pixel values to fall outside of the RGB cube, which then have to be reprojected into the cube. Due to this discrepancy, Eq. (9) only holds approximately, though in practice this has only limited consequences, as we empirically show in Appendix D.
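For concreteness, Eq. (6) and the clipping effect just described can be reproduced in a few lines of NumPy. The sketch below is our own illustration (the pixel value and the choice \(n=6\) are arbitrary examples): it checks the orthogonality underlying Eq. (9) and shows a rotated pixel leaving the RGB cube.

```python
import numpy as np

def hue_rotation(k: int, n: int) -> np.ndarray:
    """H_n(k) from Eq. (6): rotation by 2*pi*k/n about the [1,1,1] diagonal."""
    theta = 2 * np.pi * k / n
    a = (1 - np.cos(theta)) / 3
    b = np.sqrt(1 / 3) * np.sin(theta)
    c = np.cos(theta)
    return np.array([[c + a, a - b, a + b],
                     [a + b, c + a, a - b],
                     [a - b, a + b, c + a]])

n = 6
H = hue_rotation(1, n)
# Orthogonality used in Eq. (9), and closure of the cyclic group:
assert np.allclose(H @ H.T, np.eye(3))
assert np.allclose(np.linalg.matrix_power(H, n), np.eye(3))

rgb = np.array([0.9, 0.8, 0.1])       # a saturated yellow pixel
shifted = H @ rgb                     # approx. [0.4, 1.1, 0.3]: leaves the RGB cube
clipped = np.clip(shifted, 0.0, 1.0)  # reprojection (clipping) into the cube
```

Note that for \(n=3\), \(H_{3}(k)\) reduces to an exact cyclic permutation of the RGB channels, so no clipping occurs; the discrepancy only arises for finer hue discretizations.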
### Implementation

**Tensor operations** We implement CEConv similarly to GConv [7]. GConv represents the pose associated with the added spatial rotation group by extending the feature map tensor \(X\) with an extra dimension \(G^{l}\) to size \([C^{l},G^{l},H,W]\), denoting the number of channels, transformations that leave the origin invariant, and height and width of the feature map at layer \(l\), respectively (batch dimension omitted). Similarly, a GConv filter \(\tilde{F}\) with spatial extent \(k\) is of size \([C^{l+1},G^{l+1},C^{l},G^{l},k,k]\). The GConv is then defined in terms of tensor multiplication operations as:

\[X^{l+1}_{c^{\prime},g^{\prime},:,:}=\sum_{c}^{C^{l}}\sum_{g}^{G^{l}}\tilde{F}^{l}_{c^{\prime},g^{\prime},c,g,:,:}\star X^{l}_{c,g,:,:}, \tag{12}\]

where \((:)\) denotes tensor slices. Note that in the implementation, a GConv filter \(F\) only contains \([C^{l+1},C^{l},G^{l},k,k]\) unique parameters; the extra \(G^{l+1}\) dimension is made up of transformed copies of \(F\). As the RGB input to the network is defined on \(\mathbb{Z}^{2}\), we have \(G^{1}=1\) and \(\tilde{F}\) has size \([C^{l+1},G^{l+1},3,1,k,k]\). The transformed copies in \(G^{l+1}\) are computed by applying the rotation matrix from Eq. (6):

\[\tilde{F}^{1}_{c^{\prime},g^{\prime},:,1,u,v}=H_{n}(g^{\prime})F^{1}_{c^{\prime},:,1,u,v}. \tag{13}\]

In the hidden layers \(\tilde{F}\) contains cyclically permuted copies of \(F\):

\[\tilde{F}^{l}_{c^{\prime},g^{\prime},c,g,u,v}=F^{l}_{c^{\prime},c,(g+g^{\prime})\bmod n,u,v}. \tag{14}\]

Furthermore, to explicitly share the channel-wise spatial kernel over \(G^{l}\) [30], filter \(F\) is decomposed into a spatial component \(S\) and a pointwise component \(P\) as follows:

\[F^{l}_{c^{\prime},g^{\prime},c,g,u,v}=S_{c^{\prime},c,1,u,v}\cdot P_{c^{\prime},g^{\prime},c,g,1,1}. \tag{15}\]

\(F\) is precomputed in each forward step prior to the convolution operation in Eq. (12).

**Input normalization** is performed using a single value for the mean and standard deviation rather than per channel, as is commonly done for standard CNNs. Channel-wise means and standard deviations break the equivariance property of CECNN, as a hue shift could no longer be defined as a rotation around the \([1,1,1]\) diagonal. Experiments have shown that using a single value for all channels instead of channel-wise normalization has no effect on the performance.

**Compute efficiency** CEConvs create a factor \(|H_{n}|\) more feature maps in each layer. Due to the decomposition in Eq. (15), the number of multiply-accumulate (MAC) operations increases by only a factor \(\frac{|H_{n}|^{2}}{k^{2}}+|H_{n}|\), and the number of parameters by a factor \(\frac{|H_{n}|}{k^{2}}+1\). See Appendix C.3 for an overview of parameter counts and MAC operations.
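As a rough illustration of Eqs. (13) and (14), the filter expansion can be sketched in PyTorch as follows. This is our own simplified rendering, not the released implementation: it omits the decomposition of Eq. (15), and the sign convention of the cyclic shift depends on how the group axis is laid out.

```python
import torch

def expand_input_filter(F: torch.Tensor, Hs: list) -> torch.Tensor:
    """Lifting of an input-layer filter, cf. Eq. (13).
    F: [C_out, 3, k, k] base filter; Hs: list of n 3x3 hue-rotation tensors.
    Returns the expanded filter of shape [C_out, n, 3, k, k]."""
    # apply each 3x3 rotation to the RGB axis of every spatial tap
    return torch.stack([torch.einsum('ij,ojhw->oihw', H, F) for H in Hs], dim=1)

def expand_hidden_filter(F: torch.Tensor) -> torch.Tensor:
    """Cyclic permutation of the group axis, cf. Eq. (14).
    F: [C_out, C_in, n, k, k]; returns [C_out, n, C_in, n, k, k].
    The negative shift realizes the (g + g') mod n indexing."""
    n = F.shape[2]
    return torch.stack([torch.roll(F, shifts=-g, dims=2) for g in range(n)], dim=1)

# example: for n = 3 the hue rotations of Eq. (6) are exact channel permutations
Hs = [torch.roll(torch.eye(3), shifts=g, dims=0) for g in range(3)]
F_tilde = expand_input_filter(torch.randn(16, 3, 5, 5), Hs)  # [16, 3, 3, 5, 5]
```

The expanded filters can then be reshaped and passed to a standard 2D convolution, realizing Eq. (12).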
## 4 Experiments

### When is color equivariance useful?

Color equivariant convolutions share shape information across different colors while preserving color information in the group dimension. To demonstrate when this property is useful we perform two controlled toy experiments on variations of the MNIST [11] dataset. We use the Z2CNN architecture from [7], and create a color equivariant version of the network, called CECNN, by replacing all convolutional layers by CEConvs with three rotations of 120\({}^{\circ}\). The number of channels in CECNN is scaled so as to keep the number of parameters approximately equal to that of the Z2CNN. We also create a color invariant CECNN by applying coset max-pooling after the final CEConv layer, and a color invariant Z2CNN by converting the inputs to grayscale. All experiments are performed using the Adam [24] optimizer with a learning rate of 0.001 and the OneCycle learning rate scheduler. No data augmentations are used. We report the average performance over ten runs with different random initializations.

**Color imbalance** is simulated by _long-tailed ColorMNIST_, a 30-class classification problem where digits occur in three colors on a gray background, and need to be classified by both number (0-9) and color (red, green, blue). The number of samples per class is drawn from a power law distribution, resulting in a long-tailed class imbalance. Sharing shape information across colors is beneficial, as a certain digit may occur more frequently in one color than in another. The train set contains a total of 1,514 training samples and the test set is uniformly distributed with 250 samples per class. The training set is visualized in Appendix B.1. We train all four architectures on the dataset for 1000 epochs using the standard cross-entropy loss. The train set distribution and per-class test accuracies for all models are shown in Fig. 2a. With an average accuracy of \(91.35\pm 0.40\%\) the CECNN performs significantly better than the CNN with \(71.59\pm 0.61\%\). The performance increase is most significant for the classes with a low sample size, indicating that CEConvs are indeed more efficient in sharing shape information across different colors. The color invariant Z2CNN and CECNN networks, with an average accuracy of \(24.19\pm 0.53\%\) and \(29.43\pm 0.46\%\), respectively, are unable to discriminate between colors. CECNN with coset pooling is better able to discriminate between foreground and background and therefore performs slightly better. We repeated the experiment with a weighted loss and observed no significantly different results. We have also experimented with adding color jitter augmentations, which makes the classification problem unsolvable, as color information is required. See Appendix B.2 for detailed results on both experiments.

**Color variations** are simulated by _biased ColorMNIST_, a 10-class classification problem where each class \(c\) has its own characteristic hue \(\theta_{c}\), defined in degrees and distributed uniformly on the hue circle. The exact color of each digit \(x\) is sampled according to \(\theta_{x}\sim\mathcal{N}(\theta_{c},\sigma)\). We generate multiple datasets by varying \(\sigma\) between 0 and \(10^{6}\), where \(\sigma=0\) results in a completely deterministic color for each class and \(\sigma=10^{6}\) in an approximately uniform distribution for \(\theta_{x}\). For small \(\sigma\), color is thus highly informative of the class, whereas for large \(\sigma\) the classification needs to be performed based on shape. The dataset is visualized in Appendix B.1. We train all models on the train set of 1,000 samples for 1500 epochs and evaluate on the test set of 10,000 samples. The test accuracies for different \(\sigma\) are shown in Fig. 2b. CECNN outperforms Z2CNN across all standard deviations, indicating that CEConvs allow for a more efficient internal color representation. The color invariant CECNN network outperforms the equivariant CECNN model from \(\sigma\geq 48\).
Above this value color is no longer informative for the classification task and merely acts as noise unnecessarily consuming model capacity, which is effectively filtered out by the color invariant networks. The results of the grayscale Z2CNN are omitted as they are significantly worse, ranging between \(89.89\%\) (\(\sigma=0\)) and \(79.94\%\) (\(\sigma=10^{6}\)). Interestingly, CECNN with coset pooling outperforms the grayscale Z2CNN. This is due to the fact that a CECNN with coset pooling is still able to distinguish between small color changes and can therefore partially exploit color information. Networks trained with color jitter are unable to exploit color information for low \(\sigma\); see Appendix B.2 for detailed results.

### Image classification

**Setup** We evaluate our method for robustness to color variations on several natural image classification datasets, including CIFAR-10 and CIFAR-100 [27], Flowers-102 [35], STL-10 [6], Oxford-IIIT Pet [40], Caltech-101 [31], Stanford Cars [26] and ImageNet [10]. We train a baseline and a color equivariant (CE-)ResNet [20] with 3 rotations and evaluate on a range of test sets where we gradually apply a hue shift between -180\({}^{\circ}\) and 180\({}^{\circ}\). For high-resolution datasets (all except CIFAR) we train a ResNet-18 architecture and use default ImageNet data augmentations: we scale to 256 pixels, random crop to 224 pixels and apply random horizontal flips. For the CIFAR datasets we use the ResNet-44 architecture and augmentations from [7], including random horizontal flips and translations of up to 4 pixels. We train models both with and without color jitter augmentation to separately evaluate the effect of equivariance and augmentation. The CE-ResNets are downscaled in width to match the parameter count of the baseline ResNets. We have also included AugMix [21] and CIConv [29] as baselines for comparison. Training is performed for 200 epochs using the Adam [25] optimizer with a learning rate of 0.001 and the OneCycle learning rate scheduler. All our experiments use PyTorch and run on a single NVIDIA A40 GPU.

**Hybrid networks** In our toy experiments we enforce color equivariance throughout the network. For real world datasets, however, we anticipate that the later layers of a CNN may not benefit from enforcing parameter sharing between colors, if the classes of the dataset are determined by color specific features. We therefore evaluate hybrid versions of our color equivariant networks, denoted by an integer suffix for the number of ResNet stages, out of a possible four, that use CEConvs.

Figure 2: _Color equivariant convolutions efficiently share shape information across different colors. CECNN outperforms a vanilla network in both a long-tailed class imbalance setting (a), where MNIST digits are to be classified based on both shape and color, and a color biased setting (b), where the color of each class \(c\) is sampled according to \(\theta_{x}\sim\mathcal{N}(\theta_{c},\sigma)\)._

**Results** We report both the performance on the original test set and the average accuracy over all hue shifts in Table 1. For brevity we only show the fully equivariant and hybrid-2 networks; a complete overview of the performances of all hybrid network configurations and error standard deviations can be found in Appendix C.1. Among the fully color equivariant and hybrid versions of our CE-ResNets, at least one variant outperforms the vanilla ResNet on most datasets on the original test set.
On most datasets the one- or two-stage hybrid versions are the optimal CE-ResNets, providing a good trade-off between color equivariance and leaving the network free to learn color specific features in later layers. CE-ResNets are also significantly more robust to test-time hue shifts, especially when trained without color jitter augmentation. Training the CE-ResNets with color jitter further improves robustness, indicating that train-time augmentations complement the hard-coded inductive biases already in the network. We show the detailed performance on Flowers-102 for all test-time hue shifts in Fig. 1b. The accuracy of the vanilla CNN quickly drops as a hue shift is applied, whereas the CE-CNN performance peaks at -120\({}^{\circ}\), 0\({}^{\circ}\), and 120\({}^{\circ}\). Applying train-time color jitter improves the CNN's robustness to the level of a CNN with grayscale inputs. The CE-CNN with color jitter outperforms all models for all hue shifts. Plots for other datasets are provided in Appendix C.2.

**Color selectivity** To explore what affects the success of color equivariance, we investigate the _color selectivity_ of a subset of the studied datasets. We use the color selectivity measure from [42] and average across all neurons in the baseline model trained on each dataset. Fig. 3 shows that color selective datasets benefit from using color equivariance up to late stages, whereas less color selective datasets do not.

**Feature representations of color equivariant CNNs** We use the Neuron Feature [42] (NF) visualization method to investigate the internal feature representation of the CE-ResNet. NF computes a weighted average of the \(N\) highest activation input patches for each filter at a certain layer, as such representing the input patch that a specific neuron fires on. Fig. 4 shows the NF (\(N=50\)) and top-3 input patches for filters at the final layers of stages 1-4 of a CE-ResNet18 trained on Flowers-102.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline _Original test set_ & **Caltech** & **C-10** & **C-100** & **Flowers** & **Ox-Pet** & **Cars** & **STL10** & **ImageNet** \\ \hline Baseline & 71.61 & 93.69 & 71.28 & 66.79 & 69.87 & 76.54 & 83.80 & 69.71 \\ CIConv-W & **72.85** & 75.26 & 38.81 & **68.71** & 61.53 & **79.52** & 80.71 & 65.81 \\ CEConv & 70.16 & 93.71 & 71.37 & 68.18 & 70.24 & 76.22 & 84.24 & 66.85 \\ CEConv-2 & 71.50 & **93.94** & **72.20** & 68.38 & **70.34** & 77.06 & **84.50** & **70.02** \\ \hline Baseline + jitter & 73.93 & 93.03 & 69.23 & 68.75 & 72.71 & 80.59 & 83.91 & 69.37 \\ CIConv-W + jitter & **74.38** & 77.49 & 42.27 & **75.05** & 64.23 & **81.56** & 81.88 & 65.95 \\ CEConv + jitter & 73.58 & 93.51 & 71.12 & 74.17 & **73.29** & 79.79 & 84.16 & 65.57 \\ CEConv-2 + jitter & 72.61 & **93.86** & **71.35** & 71.72 & 72.80 & 80.32 & **84.46** & **69.42** \\ \hline Baseline + AugMix & **71.92** & 94.13 & **72.64** & 75.49 & **76.02** & **82.32** & 84.99 & - \\ CEConv + AugMix & 70.74 & **94.22** & 72.48 & **78.10** & 75.90 & 80.81 & **85.46** & - \\ \hline _Hue-shifted test set_ & & & & & & & & \\ \hline Baseline & 51.14 & 85.26 & 47.01 & 13.41 & 37.56 & 55.59 & 67.60 & 54.72 \\ CIConv-W & **71.92** & 74.88 & 37.09 & **59.03** & **60.54** & **78.71** & **79.92** & **64.62** \\ CEConv & 62.17 & 90.90 & 59.04 & 33.33 & 54.02 & 67.16 & 78.25 & 56.90 \\ CEConv-2 & 64.51 & **91.43** & **62.11** & 33.32 & 51.14 & 68.17 & 77.80 & 62.26 \\ \hline Baseline + jitter & 73.61 & 92.91 & 69.12 & 68.44 & 72.30 & 80.65 & 83.71 & 67.10 \\ CIConv-W + jitter & **74.40** & 77.28 & 42.30 & **75.66** & 63.93 & **81.44** & 81.54 & 65.03 \\ CEConv + jitter & 73.57 & 93.39 & 71.06 & 73.86 & **72.94** & 79.79 & 84.02 & 64.52 \\ CEConv-2 + jitter & 73.03 & **93.80** & **71.33** & 71.44 & 72.58 & 80.28 & **84.31** & **68.74** \\ \hline Baseline + AugMix & 51.82 & 88.03 & 51.39 & 15.99 & 48.04 & 68.69 & 72.19 & - \\ CEConv + AugMix & **62.29** & **91.68** & **60.75** & **41.43** & **62.27** & **73.59** & **80.17** & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification accuracy in % of vanilla vs. color equivariant (CE-)ResNets, evaluated both on the original and hue-shifted test sets. Color equivariant CNNs perform on par with vanilla CNNs on the original test sets, but are significantly more robust to test-time hue shifts.

Different rows represent different rotations of the same filter. As expected, each row of a NF activates on the same shape in a different color, demonstrating the color sharing capabilities of CEConvs. More detailed NF visualizations are provided in Appendix C.4.

**Ablation studies** We perform ablations to investigate the effect of the number of rotations, the use of group coset pooling, and the strength of train-time color jitter augmentations. In short, we find that a) increasing the number of hue rotations increases robustness to test-time hue shifts at the cost of a slight reduction in network capacity, b) removing group coset pooling breaks hue invariance, and c) hue equivariant networks require lower intensity color jitter augmentations to achieve the same test-time hue shift robustness and accuracy. The full results can be found in Appendix D.

## 5 Conclusion

In this work, we propose Color Equivariant Convolutions (CEConvs), which enable feature sharing across colors in the data while retaining discriminative power. Our toy experiments demonstrate benefits for datasets where the color distribution is long-tailed or biased.
Our proposed fully equivariant CECNNs improve performance on datasets where features are color selective, while hybrid versions that selectively apply CEConvs only in the early stages of a CNN benefit various classification tasks.

**Limitations** CEConvs are computationally more expensive than regular convolutions. For a fair comparison, we have equalized the parameter cost of all compared models, at the cost of reducing the number of channels of CECNNs. In cases where color equivariance is not a useful prior, the reduced capacity hurts model performance, as reflected in our experimental results. Pixel values near the borders of the RGB cube can fall outside the cube after rotation, and subsequently need to be reprojected. Due to this clipping effect the hue equivariance in Eq. (9) only holds approximately. As demonstrated empirically, this has only limited practical consequences, yet future work should investigate how this shortcoming could be mitigated.

Figure 4: _Neuron Feature [42] (NF) visualization with top-3 patches at different stages of a CE-ResNet18 trained on Flowers-102. Rows represent different rotations of the same filter. As expected, each row of a NF activates on the same shape in a different color._

Figure 3: _Color selective datasets benefit from using color equivariance up to late stages, whereas less color selective datasets do not. We compute the average color selectivity [42] of all neurons in the baseline CNN trained on each dataset, and plot the accuracy improvement of using color equivariance in hybrid and full models, coloring each graphed dataset for color selectivity._

**Local vs. global equivariance** The proposed CEConv implements local hue equivariance, i.e., it allows modeling local color changes in different regions of an image separately. In contrast, global equivariance, e.g., performing hue shifts on the full input image, processing all inputs with the same CNN and combining representations at the final layer to get a hue-equivariant representation, encodes equivariance to the entire image. While we have also considered such a setup, initial experiments did not yield promising results. The theoretical benefit of local over global hue equivariance is that multiple objects in one image can be recognized equivariantly in any combination of hues; empirically this indeed proves to be a useful property.

**Future work** The group of hue shifts is but one of many possible transformation groups on images. CNNs naturally learn features that vary in both photometric and geometric transformations [5; 37]. Future work could combine hue shifts with geometric transformations such as roto-translation [7] and scaling [49]. Also, other photometric properties could be explored in an equivariance setting, such as saturation and brightness. Our proposed method rotates the hue of the inputs by a predetermined angle as encoded in a rotation matrix. Making this rotation matrix learnable could yield an inexact but more flexible type of color equivariance, in line with recent works on learnable equivariance [34; 46]. An additional line of interesting future work is to incorporate more fine-grained equivariance to continuous hue shifts, which is currently intractable within the GConv-inspired framework, as the number of multiply-accumulate operations grows quadratically with the number of hue rotations.

**Broader impact** Improving performance on tasks where color is a discriminative feature could affect humans that are the target of discrimination based on the color of their skin.
CEConvs ideally benefit datasets with long-tailed color distributions by increasing robustness to color changes, in theory reducing a CNN's reliance on skin tone as a discriminating factor. However, careful and rigorous evaluation is needed before such properties can be attributed to CECNNs with certainty.

## Acknowledgements

This project is supported in part by NWO (project VI.Vidi.192.100).
2303.06370
Distributed Solution of the Inverse Rig Problem in Blendshape Facial Animation
The problem of rig inversion is central in facial animation as it allows for a realistic and appealing performance of avatars. With the increasing complexity of modern blendshape models, execution times increase beyond practically feasible solutions. A possible approach towards a faster solution is clustering, which exploits the spatial nature of the face, leading to a distributed method. In this paper, we go a step further, involving cluster coupling to get more confident estimates of the overlapping components. Our algorithm applies the Alternating Direction Method of Multipliers, sharing the overlapping weights between the subproblems. The results obtained with this technique show a clear advantage over the naive clustered approach, as measured in different metrics of success and visual inspection. The method applies to an arbitrary clustering of the face. We also introduce a novel method for choosing the number of clusters in a data-free manner. The method tends to find a clustering such that the resulting clustering graph is sparse but without losing essential information. Finally, we give a new variant of a data-free clustering algorithm that produces good scores with respect to the mentioned strategy for choosing the optimal clustering.
Stevo Racković, Cláudia Soares, Dušan Jakovetić
2023-03-11T10:34:07Z
http://arxiv.org/abs/2303.06370v2
# Distributed Solution of the Inverse Rig Problem in Blendshape Facial Animation

###### Abstract

The problem of rig inversion is central in facial animation as it allows for a realistic and appealing performance of avatars. With the increasing complexity of modern blendshape models, execution times increase beyond practically feasible solutions. A possible approach towards a faster solution is clustering, which exploits the spatial nature of the face, leading to a distributed method. In this paper, we go a step further, involving cluster coupling to get more confident estimates of the overlapping components. Our algorithm applies the Alternating Direction Method of Multipliers, sharing the overlapping weights between the subproblems. The results obtained with this technique show a clear advantage over the naive clustered approach, as measured in different metrics of success and visual inspection. The method applies to an arbitrary clustering of the face. We also introduce a novel method for choosing the number of clusters in a data-free manner. The method tends to find a clustering such that the resulting clustering graph is sparse but without losing essential information. Finally, we give a new variant of a data-free clustering algorithm that produces good scores with respect to the mentioned strategy for choosing the optimal clustering.

## 1 Introduction

Blendshape animation is a method in computer graphics, specifically popular for modeling a human face, that animates a 3D mesh \(\textbf{b}_{0}\in\mathbb{R}^{3n}\) by linearly interpolating between a set of predefined morph targets (blendshapes) \(\textbf{b}_{1},...,\textbf{b}_{m}\in\mathbb{R}^{3n}\), where \(n\) is the number of mesh vertices (Pighin et al., 1998; Lewis et al., 2014). These morph targets represent different shapes the mesh can take on, and by blending them, a wide range of shapes can be generated. This can be represented as a weighted sum of the morph targets, where the weights \(\textbf{w}=[w_{1},...,w_{m}]\) define the amount of influence each morph target has on the final shape:

\[f_{L}(\textbf{w};\textbf{B})=\textbf{b}_{0}+\sum_{i=1}^{m}w_{i}\textbf{b}_{i}=\textbf{b}_{0}+\textbf{B}\textbf{w}. \tag{1}\]

Here, \(\textbf{B}\in\mathbb{R}^{3n\times m}\) is a blendshape matrix created by stacking the blendshape vectors as its columns. The weights are then animated over time to produce the desired shape transitions. In modern facial animation, with large \(n\) and \(m\), a linear model is not sufficient to produce the desired realism, primarily due to conflicting deformations: some pairs of blendshapes \(\textbf{b}_{i}\) and \(\textbf{b}_{j}\) might produce artifacts in the mesh when activated together; hence, a corrective blendshape \(\mathbf{b}^{\{ij\}}\in\mathbb{R}^{3n}\) needs to be sculpted and included with the product of their weights \(w_{i}w_{j}\) whenever the two are activated simultaneously. The same holds for combinations of three or more blendshapes, invoking corrective terms of higher levels, as explained in Rackovic et al. (2023).
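To fix ideas, the rig evaluation can be sketched in a few lines of NumPy. The data below are synthetic stand-ins of our own, and only the first level of corrections is shown; the full corrective model is formalized next.

```python
import numpy as np

def rig(w, b0, B, pairs, pair_corrs):
    """Rig function: the linear model of Eq. (1) plus the first level of
    corrective terms described above (higher levels follow the same
    pattern of weight products)."""
    mesh = b0 + B @ w
    for (i, j), corr in zip(pairs, pair_corrs):
        mesh += w[i] * w[j] * corr  # fires only when both weights are active
    return mesh

# toy sizes; production models have n ~ 10^4 vertices and m ~ 10^2 blendshapes
rng = np.random.default_rng(0)
n, m = 4, 3
b0 = rng.normal(size=3 * n)            # neutral face mesh
B = rng.normal(size=(3 * n, m))        # blendshape offsets stacked as columns
pairs = [(0, 2)]                       # a conflicting blendshape pair
pair_corrs = [rng.normal(size=3 * n)]  # its sculpted corrective shape
mesh = rig(np.array([0.5, 0.0, 0.8]), b0, B, pairs, pair_corrs)
```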
A model with three levels of corrections is defined as

\[\begin{split} f_{Q}(\mathbf{w})=\mathbf{b}_{0}+\sum_{i=1}^{m}w_{i}\mathbf{b}_{i}+\sum_{\{i,j\}\in\mathcal{P}}w_{i}w_{j}\mathbf{b}^{\{ij\}}+\\ \sum_{\{i,j,k\}\in\mathcal{T}}w_{i}w_{j}w_{k}\mathbf{b}^{\{ijk\}}+\sum_{\{i,j,k,l\}\in\mathcal{Q}}w_{i}w_{j}w_{k}w_{l}\mathbf{b}^{\{ijkl\}},\end{split} \tag{2}\]

where \(\mathcal{P},\mathcal{T}\) and \(\mathcal{Q}\) stand for tuples of indices (of sizes 2, 3, and 4, respectively) of the blendshapes that invoke corrective terms. In the remainder of this paper, we drop the subscript \(Q\) and assume that a rig function \(f(\cdot)\) always incorporates all the available corrective terms.

A common problem, of primary interest in this paper, is the inversion of the rig: given a target mesh \(\widehat{\mathbf{b}}\in\mathbb{R}^{3n}\) (obtained as a 3D scan of an actor or a set of markers), find a configuration of the weight vector \(\mathbf{w}\) that closely approximates the target. While the data fidelity term \(f(\mathbf{w})\approx\widehat{\mathbf{b}}\) plays a central role, the solution also needs to satisfy given constraints, specifically \(\mathbf{0}\leq\mathbf{w}\leq\mathbf{1}\), where the inequalities are assumed component-wise. Preferably, the solution should also have as few non-zero weights as possible, as this makes it easier for artists to work with the obtained animation later, and less likely to produce artifacts like anatomically incorrect expressions (Seol et al., 2011).

Possible approaches to solving the inverse rig problem can be divided into data-based and model-based. Data-based methods neglect the structure of the underlying rig function and rely on large amounts of animated material used to train regression models (Holden et al., 2015, 2016; Seonghyeon et al., 2021). While this can yield good performance, producing enough training data often poses a problem, as it demands additional time and effort. On the other side, model-based solutions exploit the structure of the rig function and rely on optimization techniques rather than data (Joshi et al., 2006; Cetinaslan, 2016; not for review, 2022). A state-of-the-art model-based solution is given in Rackovic et al. (2023), and it solves the problem

\[\operatorname*{minimize}_{\mathbf{0}\leq\mathbf{w}\leq\mathbf{1}}\frac{1}{2}\|f(\mathbf{w})-\widehat{\mathbf{b}}\|_{2}^{2}+\alpha\mathbf{1}^{T}\mathbf{w}, \tag{3}\]

using coordinate descent, where \(\alpha>0\) is a regularization parameter included to enhance the sparsity of the solution. As pointed out in Rackovic et al. (2021), the human face (and hence the blendshape model) has a local nature, so most of the vertices are irrelevant for estimating the weights of the majority of the blendshapes. This calls for a segmented model, where objective (3) is split into subproblems, with only the relevant weights estimated over each mesh segment (see Fig. 1). Early works suggested splitting the face manually, by inspection, into the upper and lower regions (Choe and Ko, 2006). Yet this is not convenient for modern models with hundreds of blendshapes in the basis, and more sophisticated and automated methods are needed. While different papers propose segmenting the mesh based on the vertex behavior over animated sequences (Joshi et al., 2006; Tena et al., 2011), this makes the clusters susceptible to the quality of training data, and unsuitable for model-based approaches to solving the inverse rig. In Seol et al. (2011), the mesh regions are painted manually, and then blendshapes are assigned to the corresponding segments.
In Romeo and Schwartzman (2020) and Rackovic et al. (2021), mesh clusters are estimated from a given blendshape matrix, and Rackovic et al. (2021) further automatically assigns blendshapes to the relevant segments. While these clustering approaches help reduce the size of problem (3), and might even have the effect of an additional regularization of the solution, they raise the question of what to do with the weights that are shared between mesh clusters. In Rackovic et al. (2021), this is solved by simply averaging the values; yet if the coupling between the clusters is included in the optimization process, the shared estimates could be improved.

Figure 1: _Solving the inverse problem in a clustered setting. A blendshape offset matrix \(\mathbf{D}\) is clustered/segmented in both mesh (rows) and controller (columns) space, so the whole face model is divided into several submodels. The inverse rig problem is solved for each local cluster and the final results are aggregated into the prediction \(\hat{\mathbf{w}}\). Face model: ©unrealengine.com/en-US/eula/mhc._

**Contributions**

1. We formulate a metric that can be used to evaluate the goodness of the blendshape clusters, based on the overall sparsity and the quality of reconstruction of a given clustering, a priori to the fitting phase. This is useful for choosing the optimal number of clusters \(K\) in a data-free manner.
2. We propose an adjustment to the blendshape assignment within the clustering technique of Rackovic et al. (2021), which, in general, results in a denser graph but a higher reconstruction quality.
3. We propose a model-based solution to the inverse rig problem in a clustered setting. The proposed method applies the alternating direction method of multipliers (ADMM) (Boyd et al., 2011), in combination with coordinate descent similar to Rackovic et al. (2023), allowing coordination between the clusters and adjusted estimates of the shared weights.

This paper follows the pipeline consisting of the above contributions in the following way. Several instantiations of the clusterings are performed and evaluated based on the proposed metric, estimating the trade-off between the reconstruction error and the density of the produced segmented blendshape matrix, in order to choose the best representative clustering. It is important to note that, while we propose a new clustering method, this pipeline can work with an arbitrary clustering method, as shown in the results section of the paper. Finally, the clusters are used to solve the inverse rig in a distributed manner, where introducing ADMM allows the coupling of the overlapping components, as opposed to a naive clustered solution that observes each cluster independently. The results show that the pipeline produces solutions closely matching those of the holistic approach in terms of sparsity and accuracy, while significantly reducing the execution time (a \(50\%\) reduction). A naive clustered solution demands slightly less time than the proposed method, but it does not compare with our solution in either the accuracy or the sparsity metric, and the supplementary video materials show a clear superiority of our results. The code will be made available upon paper acceptance.

## 2 Clustering of the face

The clustering methods of Seol et al. (2011) (here termed _SSKLN_, from the initials of the authors) and of Rackovic et al. (2021) (here termed _RSJD_) transform the blendshape matrix \(\textbf{B}\in\mathbb{R}^{3n\times m}\) into a matrix of offset values \(\textbf{D}\in\mathbb{R}^{n\times m}\).
The columns \(\textbf{d}_{i}\) of this matrix are obtained as offsets for each controller \(i\):

\[d_{i}^{l}=\big\|[b_{i}^{3l-2},b_{i}^{3l-1},b_{i}^{3l}]\big\|_{2}^{2},\ \ \text{for}\ l=1,...,n. \tag{4}\]

Here \(b_{i}^{3l-2}\) represents the entry of a blendshape \(\textbf{b}_{i}\) that corresponds to the \(x\) coordinate of the vertex \(\textbf{v}_{l}\). Similarly, superscripts \(3l-1\) and \(3l\) correspond to the \(y\) and \(z\) coordinates of \(\textbf{v}_{l}\). The method proposed in Romeo and Schwartzman (2020) (here termed _RS_, from the initials of the authors) rearranges the blendshape matrix \(\textbf{B}\in\mathbb{R}^{3n\times m}\) into a matrix \(\Delta\in\mathbb{R}^{n\times 3m}\), whose elements are \(\Delta_{l,3i}=b_{i}^{3l}\) for \(l=1,...,n\). That is, each blendshape \(\textbf{b}_{i}\) is decomposed into three vectors, containing the \(x,y\), and \(z\) coordinates, respectively, and these vectors are stacked as columns of the matrix \(\Delta\).

_RSJD_ and _RS_ perform K-means clustering (Hartigan and Wong, 1979) over the rows of **D** and \(\Delta\), respectively, to obtain mesh clusters \(\mathcal{M}^{(k)}\), for \(k=1,...,K\). The _SSKLN_ assumes that an artist manually selects the four mesh segments. Further, _SSKLN_ assigns each blendshape to a relevant mesh segment using the following procedure: for each blendshape \(i\), a magnitude of deformation over the segment \(\mathcal{M}^{(k)}\), denoted \(s_{i}^{(k)}\), is computed as the sum of the elements of \(\textbf{d}_{i}\) within the segment. The overall deformation of the blendshape \(i\) is the sum of the entire vector, \(s_{i}=\textbf{1}^{T}\textbf{d}_{i}=\sum_{k=1}^{4}s_{i}^{(k)}\). The controller \(i\) is assigned to each mesh cluster \(k\) where \(s_{i}^{(k)}>\frac{s_{i}}{2}\), producing in this way \(K=4\) controller clusters \(\mathcal{C}^{(k)}\), as illustrated in Fig. 1.

In _RSJD_, each column vector \(\textbf{d}_{i}\) of the matrix **D** is compressed into \(\textbf{h}_{i}\in\mathbb{R}^{K}\) such that \(h_{i}^{k}=\frac{\sum_{l\in\mathcal{M}^{(k)}}d_{i}^{l}}{|\mathcal{M}^{(k)}|}\ \ \text{for}\ k=1,...,K\). Then K-means is performed over \(\textbf{h}_{i}\) to split it into two subvectors, one with high entries and the other with low entries. The controller \(i\) is assigned to all the mesh clusters corresponding to the high-valued labels. The number of clusters \(K\) is left as a user-defined parameter and is chosen based on the performance over training data. In this section, we propose a data-free method for selecting a good value of \(K\): we want the resulting clustering to produce a relatively sparse reconstruction of the blendshape matrix without losing important information. Recall that, while the mesh segmentations of the _RSJD_ and _RS_ are similar in spirit, _RS_ does not provide a method for blendshape assignment to the mesh clusters. Hence we augment it in this paper, for the purpose of solving the inverse rig, by applying the same assignment method as in the _RSJD_.

**Proposed Clustering Method.** The _RSJD_ method clusters the face model in both mesh and blendshape space, in a data-free manner, relying purely on the blendshape matrix. While the presented results show great performance, there is an issue in the blendshape assignment that we want to address.
Blendshapes are assigned to the mesh segments where their effect is significantly larger than in the others, yet this does not imply that their effect within the corresponding cluster will be significant compared to other blendshapes. In particular, it might happen that, within a specific mesh segment, there are blendshapes not assigned to it whose overall magnitude of deformation is significantly larger than that of some of the assigned ones (Fig. 2). In this paper, we propose a simple adjustment: the lowest magnitude value among all the blendshapes initially assigned to an observed cluster is taken as a threshold, \(p^{(k)}=\min_{j\in\mathcal{C}^{(k)}}\sum_{i\in\mathcal{M}^{(k)}}(b_{j}^{i})^{2}\). Consequently, all the other blendshapes whose deformation magnitude is larger than the threshold, i.e., such that \(\sum_{i\in\mathcal{M}^{(k)}}(b_{l}^{i})^{2}>p^{(k)}\) for \(l\not\in\mathcal{C}^{(k)}\), are assigned to the cluster as well. This method will be termed _RSJD\({}_{\text{A}}\)_ ("_A_" standing for "_adjusted_") throughout the paper.

**Choosing the Number of Clusters \(K\).** Let us consider a blendshape matrix \(\textbf{B}\in\mathbb{R}^{3n\times m}\), segmented into submatrices \(\textbf{B}^{(k)}\in\mathbb{R}^{3n_{k}\times m_{k}}\), for \(k=1,...,K\), as illustrated in Fig. 3 (right). The _Density_ of the clustering represents the fraction of the elements of the blendshape matrix kept after the clustering, and it can be computed as \(E_{D}=\sum_{k=1}^{K}\frac{n_{k}m_{k}}{nm}\), where \(n_{k}=|\mathcal{M}^{(k)}|<n\) and \(m_{k}=|\mathcal{C}^{(k)}|<m\) are the number of vertices and the number of blendshapes assigned to cluster \(k\), respectively. We can also understand this as the (normalized) number of edges \(E\) in a bipartite graph \(G=(U,V,E)\), where \(U\) represents all the vertices of the mesh and \(V\) the controllers; an edge \((i,j)\in E\) is drawn for every \(i\in\mathcal{M}^{(k)}\) and \(j\in\mathcal{C}^{(k)}\), for \(k=1,...,K\) (see Fig. 4).

While \(E_{D}\) shows the overall density of the model, we are also interested in the size of the clusters' overlap. We call this _Inter-Density_, \(E_{ID}\). It represents the number of edges shared between multiple clusters in the bipartite graph \(G\), that is, edges \((i,j)\in E\) such that \(i\in\mathcal{M}^{(k_{1})}\cup\mathcal{M}^{(k_{2})}\) and \(j\in\mathcal{C}^{(k_{1})}\cap\mathcal{C}^{(k_{2})}\) for some \(k_{1},k_{2}=1,...,K\), with \(k_{1}\neq k_{2}\). This metric indicates how much coupling should be added between the clusters in the fitting phase.

As a heuristic for measuring the _Reconstruction Error_, we focus on the ratio between the dismissed and kept elements of the blendshape matrix. Let us observe the submatrices \(\mathbf{\bar{B}}^{(k)}\in\mathbb{R}^{3n_{k}\times(m-m_{k})}\), for \(k=1,...,K\), which represent the rejected elements of **B**. We compute the sum of the squared entries of all these matrices, \(E_{R1}=\sum_{k=1}^{K}\sum_{i=1}^{3n_{k}}\sum_{j=1}^{m-m_{k}}(\bar{B}_{ij}^{(k)})^{2}\), and the sum over the kept elements, \(E_{R2}=\sum_{k=1}^{K}\sum_{i=1}^{3n_{k}}\sum_{j=1}^{m_{k}}(B_{ij}^{(k)})^{2}\). The reconstruction error is then computed as \(E_{R}=E_{R1}/E_{R2}\).

The two metrics, \(E_{D}\) and \(E_{R}\), are, in general, inversely related, and an optimal clustering would exhibit relatively small values of each. This trade-off is illustrated in Fig. 3 (left). The holistic case (i.e., the blendshape matrix without clustering) will always have \(E_{R}=0\) and \(E_{D}=1\), as illustrated with a gray star.
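These scores are straightforward to compute from a clustering. The sketch below is our own helper, assuming the mesh clusters are given as integer index arrays and the controller clusters as index lists; \(E_{ID}\) is obtained analogously by counting the edges whose controller appears in more than one cluster.

```python
import numpy as np

def clustering_scores(B, mesh_clusters, ctrl_clusters):
    """Compute the density E_D and reconstruction error E_R defined above.
    B is the 3n x m blendshape matrix; mesh_clusters[k] holds the vertex
    indices and ctrl_clusters[k] the blendshape indices of cluster k."""
    n, m = B.shape[0] // 3, B.shape[1]
    E_D, kept, rejected = 0.0, 0.0, 0.0
    for verts, ctrls in zip(mesh_clusters, ctrl_clusters):
        verts = np.asarray(verts)
        E_D += len(verts) * len(ctrls) / (n * m)
        # rows of B holding the x, y, z coordinates of the cluster's vertices
        rows = np.concatenate([3 * verts, 3 * verts + 1, 3 * verts + 2])
        sq = B[rows] ** 2
        mask = np.zeros(m, dtype=bool)
        mask[list(ctrls)] = True
        kept += sq[:, mask].sum()       # entries of the kept blocks B^(k)
        rejected += sq[:, ~mask].sum()  # entries of the discarded blocks
    return E_D, rejected / kept         # (E_D, E_R)
```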
The figure also presents the results of the four clustering methods _SSKLN_, _RSJD_, _RS_, and _RSJD\({}_{\text{A}}\)_. The _SSKLN_ is represented by a single scatter point (blue star), as it is deterministic in the sense that the four clusters are selected as suggested in Seol et al. (2011). The other three can vary in terms of the number of clusters \(K\) and over different repetitions for the same \(K\); hence, we repeat the clustering 1000 times, with \(K\) taking values between \(4\) and \(m=102\). For the sake of completeness, we also consider the extremely sparse case, where each mesh vertex is assigned to exactly one blendshape controller, always choosing the one producing the largest offset. This is termed _Sparse_, and represented by the purple star in Fig. 3. The instances of the _RSJD_ are closer to the lower left corner (of the left subfigure) than _SSKLN_ or _RSJD\({}_{\text{A}}\)_; however, we need to zoom in to get a better idea of the relationships. The _RSJD\({}_{\text{A}}\)_ in general leads to quite low \(E_{R}\), but \(E_{D}\) can get relatively large, while the _RSJD_ is of lower density but higher \(E_{R}\). The _SSKLN_ is suboptimal in this plot. However, the relationship between \(E_{R}\) and \(E_{ID}\) does not need to follow the same shape (Fig. 3, middle). In this case, the distinction between the _RSJD_ and _RSJD\({}_{\text{A}}\)_ is even cleaner; however, notice that the _SSKLN_ has \(E_{ID}=0\), as its clusters have no overlap. In both plots, the _RS_ closely follows the behavior of the _RSJD_, hence we eliminate it from further consideration.

Figure 2: _Average magnitude of deformation produced by each blendshape in a chosen cluster, obtained by the method of RSJD._

Figure 4: _Clustering outputs of the four approaches; for the RSJD and RSJD\({}_{A}\), the number of clusters \(K\) is indicated. Besides the mesh clusters, the figure shows bipartite graphs consisting of the vertices (left-hand side) and controllers (right-hand side). Each color indicates a single cluster, with edges representing the cluster correspondences. The avatar Jesse is acquired from the MetaHuman Creator (©unrealengine.com/en-US/eula/mhc)._

Figure 3: _Left: Trade-off between the density (\(E_{D}\)) and the reconstruction error (\(E_{R}\)) of the clustered blendshape matrix, for different clusterings. Middle: Trade-off between the inter-density (\(E_{ID}\)) and the reconstruction error (\(E_{R}\)) of the clustered blendshape matrix, for different clusterings. Each dot represents a single clustering output, with \(K\) taking values from \(4\) to \(m=102\) for the RSJD, RS and RSJD\({}_{A}\) (and fixed to \(K=4\) for the SSKLN, and \(K=m=102\) for Sparse). Right: Blendshape matrix clustered using the SSKLN. Dark entries correspond to clusters, and light entries are discarded vertex-blendshape pairs._

An optimal choice of the clustering (and hence \(K\)) should be based on these plots, choosing the point near the elbow of the trade-off curve for each of the approaches. For the _RSJD\({}_{\text{A}}\)_ approach, this would be one of the clusterings with the lowest \(E_{D}\), and for the _RSJD_, one with low \(E_{R}\). We proceed to work with several different choices of \(K\) for the two approaches. We will show later in the results section that a standard procedure of cross-validation, as used in prior works, leads to the same conclusions on the choice of \(K\), validating that the considered \(K\) selection works. In Fig. 4 (accompanied by Table 1) we show mesh clusters and bipartite graphs between the mesh vertices (left-hand side) and the blendshapes (right-hand side), colored according to the cluster assignment.
One can see that, in general, a lower number of clusters leads to a denser graph.

## 3 Distributed Solution to the Rig Inversion Problem

The objective function for the inverse rig problem, as formulated in the state-of-the-art method of Rackovic et al. (2023), is given in (3). In the clustered setting, one can simply split this problem into subproblems

\[\underset{\mathbf{0}\leq\mathbf{w}^{(k)}\leq\mathbf{1}}{\operatorname{minimize}}\frac{1}{2}\|f^{(k)}(\mathbf{w}^{(k)})-\mathbf{\widehat{b}}^{(k)}\|_{2}^{2}+\alpha^{(k)}\mathbf{1}^{T}\mathbf{w}^{(k)}, \tag{5}\]

for \(k=1,...,K\), where \(\mathbf{w}^{(k)}\in\mathbb{R}^{m_{k}}\) is a vector containing only the \(m_{k}\) weights assigned to the cluster \(k\); \(\mathbf{\widehat{b}}^{(k)}\in\mathbb{R}^{3n_{k}}\) is a subvector of the target mesh \(\mathbf{\widehat{b}}\), consisting of the \(n_{k}\) vertices from the corresponding cluster; and, similarly, \(f^{(k)}(\cdot)\) is the blendshape function restricted only to the vertices and controllers within the cluster \(k\) (see Fig. 1).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & _SSKLN_ & \multicolumn{5}{c|}{_RSJD_} & \multicolumn{5}{c|}{_RSJD\({}_{\text{A}}\)_} & _Sparse_ \\ \hline \(K\) & 4 & 4 & 10 & 12 & 17 & 22 & 4 & 10 & 20 & 50 & 102 & 102 \\ \(E_{R}\) & 0.057 & 0.214 & 0.159 & 0.138 & 0.089 & 0.035 & 0.002 & 0.001 & 0.004 & 0.003 & 0.005 & 8.434 \\ \(E_{D}\) & 0.308 & 0.262 & 0.098 & 0.110 & 0.086 & 0.148 & 0.544 & 0.421 & 0.253 & 0.237 & 0.208 & 0.009 \\ \(E_{ID}\) & 0.0 & 0.068 & 0.056 & 0.071 & 0.072 & 0.136 & 0.383 & 0.330 & 0.252 & 0.237 & 0.208 & 0.0 \\ \hline \end{tabular}
\end{table}
Table 1: _Values of the clustering metrics for each of the four methods._

Figure 5: _The first two columns show trade-off curves between the mesh error (RMSE) and cardinality (number of non-zero weights) for different methods (and different choices of \(K\) for the RSJD and RSJD\({}_{\text{A}}\)) as functions of the regularization parameter \(\alpha>0\). The dotted lines represent a naive clustered solution, while the solid lines of the same color are the corresponding ADMM solutions. The black dashed line shows the holistic approach. The gray horizontal line shows the cardinality of the ground-truth data, with a shaded region marking its standard deviation. A red vertical line in the plot of the Sparse approach represents a baseline where at each frame the weight vector is set to \(\mathbf{w}=\mathbf{0}\). For the RSJD and RSJD\({}_{\text{A}}\), we also present the average execution time for each choice of \(K\) (the third column), as well as the trade-off between \(E_{R}\) and \(E_{D}\) (the fourth column) and between \(E_{R}\) and \(E_{ID}\) (the last column)._

If these subproblems are solved independently, they yield a set of local weight vectors \(\hat{\mathbf{w}}^{(k)}\), which must be merged into a single global prediction vector \(\hat{\mathbf{w}}\). For the controllers that are shared among multiple clusters, the final value is taken as the average of all the estimates.
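In code, this naive merging step amounts to a multiplicity-weighted average. The following NumPy sketch is our own, assuming the per-cluster subproblems (5) have already been solved by some local solver; the ADMM scheme introduced below replaces this one-shot average with an iterated consensus on the shared components.

```python
import numpy as np

def merge_local_estimates(w_locals, ctrl_clusters, m):
    """Naive aggregation of independently solved subproblems (5):
    w_hat = S^{-1} * sum_k v^(k), i.e., controllers shared between
    clusters are simply averaged. w_locals[k] holds the m_k local
    weights; ctrl_clusters[k] their global controller indices."""
    acc = np.zeros(m)   # sum of local estimates lifted to global indexing
    mult = np.zeros(m)  # diagonal of S: cluster multiplicity per controller
    for w_k, ctrls in zip(w_locals, ctrl_clusters):
        acc[np.asarray(ctrls)] += w_k
        mult[np.asarray(ctrls)] += 1
    return acc / np.maximum(mult, 1)  # guard against unassigned controllers
```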
More formally, we introduce the mapping from local variable indices into a global variable index as \(j=\mathcal{G}(k,i)\), which means that for some local variable \(\mathbf{v}^{(k)}\) and a global variable \(\mathbf{v}\), the local variable component \((\mathbf{v}^{(k)})_{i}\) corresponds to the global variable component \(\mathbf{v}_{j}\). We also introduce a diagonal matrix \(\mathbf{S}\in\mathbb{R}^{m\times m}\) whose entries correspond to the multiplicity of each controller over the clusters, i.e., \(S_{jj}=\sum_{k=1}^{K}\sum_{i:\,\mathcal{G}(k,i)=j}1\). Now, the global weight estimate is obtained as \(\hat{\mathbf{w}}=\mathbf{S}^{-1}\sum_{k=1}^{K}\mathbf{v}^{(k)}\), where the entries of \(\mathbf{v}^{(k)}\in\mathbb{R}^{m}\) are the values of \(\hat{\mathbf{w}}^{(k)}\) obtained for the corresponding cluster, i.e., \((\mathbf{v}^{(k)})_{\mathcal{G}(k,i)}=(\hat{\mathbf{w}}^{(k)})_{i}\).

**Solution via ADMM.** We now formulate a solution that includes coupling between the clusters, instead of solving each subproblem independently. In this way, we can produce a better estimate of the shared weights. For this, we apply the ADMM. In the workflow of ADMM, the objective function should be transformed to have the form

\[\operatorname*{minimize}\ \Phi(\mathbf{x})+\Psi(\mathbf{z})\quad\text{s.t.}\quad\mathbf{G}\mathbf{x}+\mathbf{F}\mathbf{z}=\mathbf{c}, \tag{6}\]

by choosing the functions \(\Phi(\cdot)\) and \(\Psi(\cdot)\) and the constraints. Similar to Neumann et al. (2013), we dualize on the regularization term, i.e., we set \(\Psi(\mathbf{z})=\alpha\mathbf{1}^{T}\mathbf{z}\), \(\Phi(\mathbf{x})=\|f(\mathbf{x})-\hat{\mathbf{b}}\|_{2}^{2}\), \(\mathbf{G}=\mathbf{I}\), \(\mathbf{F}=-\mathbf{I}\), and \(\mathbf{c}=\mathbf{0}\). This corresponds to the general form consensus problem with regularization (Boyd et al., 2011), with the following ADMM updates at each iteration \(t+1\):

\[\mathbf{x}_{t+1}^{(k)}\in\operatorname*{argmin}_{\mathbf{0}\leq\mathbf{x}\leq\mathbf{1}}\left(\|f^{(k)}(\mathbf{x})-\mathbf{\hat{b}}^{(k)}\|_{2}^{2}+\rho\|\mathbf{x}-\tilde{\mathbf{z}}_{t}^{(k)}+\mathbf{u}_{t}^{(k)}\|_{2}^{2}\right) \tag{7}\]

\[\mathbf{z}_{t+1}=\mathbf{S}^{-1}\left(\sum_{k=1}^{K}\mathbf{q}^{(k)}-\frac{\alpha}{\rho}\mathbf{1}\right)\]

\[\mathbf{u}_{t+1}^{(k)}=\mathbf{u}_{t}^{(k)}+\mathbf{x}_{t+1}^{(k)}-\tilde{\mathbf{z}}_{t+1}^{(k)}.\]

A vector \(\tilde{\mathbf{z}}^{(k)}\in\mathbb{R}^{m_{k}}\) is a local copy of the global variable \(\mathbf{z}\in\mathbb{R}^{m}\), i.e., we have \((\tilde{\mathbf{z}}^{(k)})_{i}=\mathbf{z}_{\mathcal{G}(k,i)}\). The entries of \(\mathbf{q}^{(k)}\in\mathbb{R}^{m}\) are the values of \(\mathbf{x}_{t+1}^{(k)}+\mathbf{u}_{t}^{(k)}\) obtained for the corresponding cluster, i.e., \((\mathbf{q}^{(k)})_{\mathcal{G}(k,i)}=(\mathbf{x}_{t+1}^{(k)})_{i}+(\mathbf{u}_{t}^{(k)})_{i}\). Further, we solve the \(x\)-update step via coordinate descent, following the approach of Rackovic et al. (2023). The idea of ADMM is illustrated in Fig. 6.

## 4 Results

For each of the four introduced clustering strategies (_SSKLN_, _RSJD_, _RSJD\({}_{\text{A}}\)_, and _Sparse_), we experiment with two possible approaches:

1. a naive clustered solution, where the subproblems (5) are solved independently for each cluster and, in the end, the weights that are shared among multiple clusters are averaged;
2. the proposed ADMM approach (7), where the clusters can communicate the values of the shared weights, i.e., these components are constrained to be similar through the coupling between the local and global variables, to get more confident estimates.

Figure 6: _A scheme of the ADMM approach proposed in Sec. 3. The global vector \(\mathbf{z}_{t}\) is split into local copies \(\tilde{\mathbf{z}}_{t}^{(k)}\) used to constrain the estimates \(\mathbf{x}_{t+1}^{(k)}\) (solving (7)). These estimates are again merged into a global variable \(\mathbf{z}_{t+1}\) and the procedure is repeated. Red is used to indicate the controllers shared among clusters, and yellow for the others._

Additionally, we include a holistic case, i.e., the method of Rackovic et al. (2023), where problem (3) is solved without segmentation. The two main metrics of interest are mesh error and cardinality. Mesh error is computed as the root mean squared error (RMSE) between the target mesh \(\widehat{\mathbf{b}}\) and the estimated mesh \(f(\hat{\mathbf{w}})\), where \(\hat{\mathbf{w}}\) is the estimated weight vector: \(\text{RMSE}(\hat{\mathbf{w}},\widehat{\mathbf{b}})=\sqrt{\frac{\|f(\hat{\mathbf{w}})-\widehat{\mathbf{b}}\|_{2}^{2}}{n}}\). Cardinality is the number of non-zero weights in \(\hat{\mathbf{w}}\). An ideal solution should have low values for both.

The realistic, real-size human head character, _Jesse_, used in our experiments, is publicly available within the MetaHumans platform (©unrealengine.com/en-US/eula/mhc). The avatar is manually animated by a human expert, to give a wide and realistic range of motion. Further, a small amount of Gaussian noise (\(\sigma^{2}=0.03\) cm, as compared to the head width of \(18\) cm) is added to the mesh vertices, to mimic the realistic 3D scans used in production. The model consists of \(m=102\) blendshapes in the basis and \(n=10000\) face vertices.

Initially, we need to choose a good value of the regularization parameter \(\alpha\) for each approach, as well as the optimal \(K\) for the _RSJD_ and _RSJD\({}_{\text{A}}\)_. For this purpose, we run experiments on \(300\) training frames with various values \(\alpha>0\) and \(4\leq K\leq m=102\). The clusters for each of the approaches and choices of \(K\) are shown in Fig. 4. The results of the training are presented in Fig. 5. The left side of the figure gives trade-off curves between the mesh error and cardinality as functions of \(\alpha\). Trade-off curves for a naive clustered solution are shown as dotted lines, while the same-color solid curves represent the corresponding ADMM solutions. For the sake of visualization, the results of the four approaches are presented in separate subfigures. The gray horizontal line represents the average cardinality of the ground truth data, and the shaded region shows one standard deviation. We will mostly focus on the shaded area, as it indicates a reasonable range of cardinality values. Further, we choose the optimal value of the regularization \(\alpha\) as the one at which each curve crosses the gray line, as in this sense we have a fair comparison of the different methods. Notice also that the results of the _Sparse_ approach are extremely poor, in most cases worse than a baseline which always predicts a zero weight vector (red vertical line). Hence, we dismiss this approach from further consideration. Notice that in all the other cases, the results obtained using ADMM (solid curves) outperform those obtained via a naive clustered solution (dotted curves).

For the _RSJD_ and _RSJD\({}_{\text{A}}\)_, we should additionally pick an optimal choice of \(K\). For this, we primarily look at the trade-off curves, but should also consider the execution time presented in the middle column of Fig. 5. For both methods, \(K=4\) leads to the best ADMM trade-off curve; however, the execution time is considerably longer for \(K=4\) than for other choices. For the _RSJD_ we choose \(K=22\), as it gives only a slightly worse curve in the case of ADMM, while the execution time is almost half of that with four clusters, and the results of the simple clustered method are actually the best performing for this choice. For _RSJD\({}_{\text{A}}\)_, the relationship between the curves and the choice of \(K\) follows the same pattern for both ADMM and the simple clustered approach, with an increase in \(K\) leading to a deterioration of the overall trade-off.

Figure 7: _The first two subfigures show trade-off curves between the cardinality and the mean and max RMSE, respectively, for different methods, as functions of the regularization parameter \(\alpha>0\). The dotted lines represent a naive clustered solution, while the solid lines of the same color are the corresponding ADMM solutions. The black dashed line shows the holistic approach. The gray horizontal line shows the cardinality of the ground-truth data, with a shaded region marking its standard deviation. The middle subfigure shows the average execution time per frame. The last two subfigures show trade-off curves between \(E_{R}\) and \(E_{D}\) and between \(E_{R}\) and \(E_{ID}\), respectively. Large dots indicate a chosen clustering, while the smaller ones of the same color represent the discarded cases._
For both methods, \(K=4\) leads to the best ADMM trade-off curve, however, the execution time is considerably longer for \(K=4\) than for other choices. For the _RSJD_ we will choose \(K=22\), as it gives only a slightly worse curve in the case of ADMM, while the execution time is almost half of the case with four clusters, and the results of a simple clustered method are actually the best performing for this choice. For _RSJD_A, the relationship between the curves and the choice of \(K\) follows the same pattern for both ADMM and a simple clustered approach, Figure 7: _The first two subfigures show trade-off curves between the cardinality and mean and max RMSE, respectively, for different methods, as functions of the regularization parameter \(\alpha>0\). The dotted lines represent a naive clustered solution, while the solid lines of the same color are the corresponding ADMM solution. The black dashed line shows a holistic approach. The gray horizontal line shows the cardinality of the ground-truth data, with a shaded region marking its standard deviation. The middle subfigure shows the average execution time per frame. The last two subfigures show trade-off curves between \(E_{R}\) and \(E_{D}\) and between \(E_{R}\) and \(E_{ID}\), respectively. Large dots indicate a chosen clustering, while the smaller ones of the same color are representing the discarded cases._ with an increase in \(K\) leading to a decrease in the overall trade-off. We also notice that with \(K=20\) the execution time is as low as it gets, hence we chose it as an optimal \(K\). We take the selected set of results and present them together in Fig. 7. Notice that in all three methods, using ADMM significantly improves the results compared to the naive clustered approach. The trade-off curve of _SSKLN_ (using ADMM) is the only one reaching the accuracy of a holistic model, yet its execution time is the largest of the three distributed methods. An important observation is on the trade-off of \(E_{R}\) versus \(E_{D}\) and \(E_{ID}\) (the last two subfigures). Here, the selected clusterings for each of the three methods, are presented with the annotated dots, while, for the _RSJD_ and _RSJD\({}_{\text{A}}\)_, we additionally show the other five clusterings, as the same-color smaller dots. This confirms our assumptions from Sec. 2, that the cross-validation would lead to choosing _RSJD_ clustering with smaller \(E_{R}\), and _RSJD\({}_{\text{A}}\)_ clustering with smaller \(E_{D}\). Also, we can see a direct relationship between the execution time and \(E_{D}\), as the clusterings with higher density lead to a longer execution. Now we can observe the results of the test set in more detail. RMSE is presented in Fig. 8 (upper left), where the solid-color boxes correspond to ADMM and dotted ones to a naive clustered solution. Like in the training set, a clear distinction between the two is also visible here -- in all three cases, the upper quartile of the ADMM solution is lower than the lower quartile of clustered solution. ADMM under the _SSKLN_ is comparable to the holistic in terms of median and quartiles, while ADMM under the _RSJD_ is just slightly worse. On the other side, the execution time of the clustered solution is lower than that of ADMM, as expected due to lack of cluster coupling, yet the difference is not as large as between the holistic case to others (Fig. 7, bottom left). As expected, cardinalities within the test set are similar across all the cases, showing only a slight advantage of ADMM (Fig. 
8, bottom right). Finally, since the test set consists of an animation sequence (see supplementary video), we are also interested in the temporal smoothness of the produced animation. This can be computed using the second-order differences to get the roughness penalty \(\text{Roughness}(\hat{w}_{i})=\sum_{t=2}^{T-1}\left(\hat{w}_{i}^{(t-1)}-2\hat{w} _{i}^{(t)}+\hat{w}_{i}^{(t+1)}\right)^{2},\) for a blendshape weight \(\hat{w}_{i}\) over \(T\) animated frames. Lower values of _Roughness_ correspond to smoother curves. The metric values are shown in Fig. 8 (lower left). The values for ADMM are significantly lower than the corresponding values for a naive clustered approach and also compared to the holistic case. This can be noticed in the supplementary video as well, as the produced animation (especially for the _RSJD_ and _RSJD\({}_{\text{A}}\)_ clusters) is very smooth. Figure 8: _Results of the four methods over the test set. Dotted-face bars (and boxes) represent naive clustered solutions, while the same-color solid bars (and boxes) are the corresponding ADMM solutions. A gray horizontal line shows the metric value of the ground-truth data, and the shaded area gives one standard deviation._ Cardinality stays in the shaded region of a ground-truth standard deviation, for all the approaches, as expected. Yet, it is slightly lower for the ADMM approaches than the other. We might conclude that the application of ADMM on the clustered face leads to significant outperformance compared to the previous approaches that solved each cluster independently and averaged the shared components. ADMM produces visibly lower values of each considered metric, with the exception of the execution time. However, the execution time under ADMM is still less than half of the holistic approach. While all of the three selected methods seem to perform well in our use case, one could argue that the _SSKLN_ gives slightly preferable results. However, it is also important to recall that the _SSKLN_ demands the mesh clusters to be defined manually, hence it might be a less favorable choice. The _RSJD_ is slightly less accurate than the _SSKLN_, but has better smoothness and the lowest execution time, as the clustered matrix is very sparse. ## 5 Conclusion In this paper, we proposed a method for solving the inverse rig problem in a distributed manner. It is performed over the segmented animated face, applying the ADMM in order to include coupling between the clusters. Previously, the approaches with clustering would assume segments to be independent while fitting and averaging the shared components afterward. ADMM allows us to estimate the shared blendshapes jointly and hence get better estimates of the corresponding weights. Our method is general in the sense that it can work with different clustering approaches, as illustrated in this paper. We point out that, as the method is model-based, it makes sense to be applied with the data-free clustering methods, like the _RS_ Romeo and Schvartzman (2020) or _RSJD_ Rackovic et al. (2021), or the one proposed here; although other choices are also feasible. Irrespective of the clustering strategy used, applying the ADMM leads to improvements in all the metrics compared to a naive clustering scheme, except in the execution time. The differences are also visible in the supplementary video material, strongly favoring the ADMM solution. 
We also propose an adjustment to the model-based clustering method of _RSJD_, which applies a different strategy in assigning the blendshapes to mesh segments. The proposed method leads to increased density compared to the _RSJD_, yet small in comparison to a holistic case, and often smaller than that of the _SSKLN_ Seol et al. (2011). Added complexity leads to an increased execution time (still almost half of the holistic approach) but to smoother and sparser results. Finally, we also propose a heuristic for choosing a good number of clusters \(K\) in a data-free fashion. It is based on the trade-off between the density of the clustered blendshape matrix, and the reconstruction error. While these two are, in general, inversely proportional, producing a large number of clusterings with different \(K\)'s or initializations will point out the tendency of the results. #### Video Materials [https://youtu.be/fQaFA8CH2S4](https://youtu.be/fQaFA8CH2S4) #### Funding This work has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 812912, from FCT IP strategic project NOVA LINCS (FCT UIDB/04516/2020) and project DSAIPA/AI/0087/2018. The work has also been supported in part by the Ministry of Education, Science and Technological Development of the Republic of Serbia (Grant No. 451-03-9/2021-14/200125).
2301.08524
Clustering Human Mobility with Multiple Spaces
Human mobility clustering is an important problem for understanding human mobility behaviors (e.g., work and school commutes). Existing methods typically contain two steps: choosing or learning a mobility representation and applying a clustering algorithm to the representation. However, these methods rely on strict visiting orders in trajectories and cannot take advantage of multiple types of mobility representations. This paper proposes a novel mobility clustering method for mobility behavior detection. First, the proposed method contains a permutation-equivalent operation to handle sub-trajectories that might have different visiting orders but similar impacts on mobility behaviors. Second, the proposed method utilizes a variational autoencoder architecture to simultaneously perform clustering in both latent and original spaces. Also, in order to handle the bias of a single latent space, our clustering assignment prediction considers multiple learned latent spaces at different epochs. This way, the proposed method produces accurate results and can provide reliability estimates of each trajectory's cluster assignment. The experiment shows that the proposed method outperformed state-of-the-art methods in mobility behavior detection from trajectories with better accuracy and more interpretability.
Haoji Hu, Haowen Lin, Yao-Yi Chiang
2023-01-20T12:02:30Z
http://arxiv.org/abs/2301.08524v1
# Clustering Human Mobility with Multiple Spaces ###### Abstract Human mobility clustering is an important problem for understanding human mobility behaviors (e.g., work and school commutes). Existing methods typically contain two steps: choosing/learning a mobility representation and applying a clustering algorithm to the representation. However, these methods rely on strict visiting orders in trajectories and cannot take advantage of multiple types of mobility representations. This paper proposes a novel mobility clustering method for mobility behavior detection. First, the proposed method contains a permutation-equivalent operation to handle sub-trajectories that might have different visiting orders but similar impacts on mobility behaviors. Second, the proposed method utilizes a variational autoencoder architecture to simultaneously perform clustering in both latent and original spaces. Also, in order to handle the bias of a single latent space, our clustering assignment prediction considers multiple learned latent spaces at different epochs. This way, the proposed method produces accurate results and can provide reliability estimates of each trajectory's cluster assignment. The experiment shows that the proposed method outperformed state-of-the-art methods in mobility behavior detection from trajectories with better accuracy and more interpretability. ## I Introduction Understanding individual human mobility patterns has been an actively-studied topic in the past decade [7, 8, 14, 24]. Clustering human mobility is an important task to group "similar" mobility behaviors and provides insight towards detected mobility behaviors patterns. As individual trajectories (e.g. raw individual GPS data) are usually used as the proximity for mobility, clustering individual trajectories to detect the mobility behaviors is widely studied [8, 19, 20, 21]. This direction is beneficial to many domains like policymaking, urban design, economics, and geo-spatial intelligence. Existing trajectory clustering methods follow a paradigm that first chooses a way to represent the trajectory and then applies a clustering algorithm on the trajectory representation to group similar trajectories [8, 14, 19, 20, 21, 22, 24]. Although many advances have been achieved, there are still major challenges. First, even though there are different ways to represent a trajectory, they fail to properly handle the diversity of multi-scale trajectories within the same mobility type. The core idea of the existing methods is either directly representing a trajectory as a temporal sequence of which the order between consecutive elements (e.g., recorded geocoordinates or stay points) represents the transition between corresponding consecutive time steps in the trajectory [2, 9] or learning a vector representation encoding all the elements and transitions from the temporal sequence [20, 21, 22, 24]. The definition of an element varies across methods but the transition direction always represents the temporal order. The assumption of existing methods is that _the same mobility behavior type usually have similar element transitions_. As a transition involves two consecutive elements and the direction between them, the similarity is also based on both the sequence elements and the transition direction between them. This idea has been widely-used, but it ignores that some sub-trajectories of the same mobility behavior type may have the same consecutive elements but with an opposite transition direction. 
For example, assuming people in a school are asked to get vaccines in the same hospital. A teacher starts the trip from the office (which is close to the bus station). Then, the teacher takes a long walk to the parking ramps and drives to the hospital. And a student starts the trip from the classroom (which is close to the parking ramps). Then, the student walks to the bus station and takes a bus to the hospital. Even though the teacher and the student have the same mobility goal (i.e., go to the hospital), their beginning sub-trajectories have the exact opposite element of transition. Second, it is unclear what are the best similarity measurement and space to calculate the similarity for mobility behavior clustering. Even though the goal is to group "similar" mobility behaviors, it is difficult to define what is the desired similarity measurement in an unsupervised setting. Existing methods simply use a predefined measurement (e.g., the Euclidean distance) as the proxy similarity and calculate the similarity in a single representation space (either the original data space or reduced/latent space) [17]. If the original space is suitable for clustering objective, the clustering algorithm can be directly applied. If the original space is not good enough (e.g., the original space suffers from the curse of dimensionality or the data in the original space distributes in a complex and clustering-unfriendly way), the data is first mapped into a reduced/latent space (e.g., using principal component analysis (PCA) to linearly reduce the feature dimensions [17] or neural networks to nonlinearly transform [18] the data from the original space to a latent space [16]). Then, clustering with the predefined similarity is applied in the reduced/latent space. Usually, both options are tested empirically if it is not obvious to observe the limitation in the original space (e.g., the original space has moderately large dimensions). Yet, it still is difficult to tell which representation space could provide better clustering results when we do not have the ground truth labels (i.e., unsupervised clustering). Also, there is no guarantee that choosing one representation could be enough for us to ignore the other representation without losing any
2305.03186
The Nevo--Santos--Wilson spheres are shellable
Nevo, Santos, and Wilson constructed $2^{\Omega(N^d)}$ combinatorially distinct simplicial $(2d-1)$-spheres with $N$ vertices. We prove that all spheres produced by one of their methods are shellable. Combining this with prior results of Kalai, Lee, and Benedetti and Ziegler, we conclude that for all $D \ge 3$, there are $2^{\Theta(N^{\lceil D/2 \rceil})}$ shellable simplicial $D$-spheres with $N$ vertices.
Yirong Yang
2023-05-04T22:13:06Z
http://arxiv.org/abs/2305.03186v2
# On the constructibility of the Nevo-Santos-Wilson spheres ###### Abstract Nevo, Santos, and Wilson constructed \(2^{\Omega(N^{2})}\) combinatorially distinct simplicial \((2d-1)\)-spheres with \(N\) vertices. We prove that all spheres produced by one of their methods are constructible. Combining this with prior results of Kalai, Lee, and Benedetti-Ziegler, we conclude that for all \(D\geq 3\), there are \(2^{\Theta(N^{\lceil D/2\rceil})}\) constructible simplicial \(D\)-spheres with \(N\) vertices. When \(D=3\) or \(D\) is even, this asymptotics also holds for the number of shellable spheres. ## 1 Introduction The goal of this paper is to establish the asymptotics of the number of constructible \(D\)-spheres with \(N\) vertices, as \(N\) grows to infinity. To achieve this, we show that the spheres produced in [7, Construction 3] by Nevo, Santos, and Wilson are constructible. When the dimension is \(3\), these spheres are even shellable. It follows easily from Steinitz's theorem (see [11, Chapter 4]) that all simplicial \(2\)-spheres can be realized as the boundary complexes of \(3\)-polytopes. However, in higher dimensions, there are many more simplicial spheres than the boundaries of polytopes. Let \(s(D,N)\) denote the number of combinatorially distinct \(D\)-spheres with \(N\) vertices. For \(D\geq 4\), Kalai [5] proved that \(s(D,N)\geq 2^{\Omega(N^{\lfloor D/2\rfloor})}\). Pfeffe and Ziegler [9] then complemented Kalai's result by showing \(s(3,N)\geq 2^{\Omega(N^{5/4})}\). Later, Nevo, Santos, and Wilson [7] improved the lower bound of \(s(D,N)\) for odd \(D\geq 3\) to \(2^{\Omega(N^{\lceil D/2\rceil})}\). In constrast to these bounds, we know from works by Goodman and Pollack [4] as well as Alon [1] that there are only \(2^{\Theta(N\log N)}\) combinatorially distinct \(D\)-polytopes with \(N\) vertices for \(D\geq 4\). See also a recent preprint by Padrol, Philippe and Santos [8] for the current best lower bound. An important and related result by Bruggesser and Mani [3] is that the boundary complexes of simplicial polytopes are always shellable. This naturally leads to the study of spheres with nice decomposibility properties. How many shellable spheres are there? How does this number compare to the number of polytopes? More generally, how many constructible spheres are there? These questions were partially answered by Lee's proof in [6] that Kalai's spheres are all shellable. Let \(s_{\mathrm{shell}}(D,N)\) (and respectively, \(s_{\mathrm{constr}}(D,N)\)) denote the number of shellable (constructible) \(D\)-spheres with \(N\) vertices. Lee's result implies that \[s_{\mathrm{constr}}(D,N)\geq s_{\mathrm{shell}}(D,N)\geq 2^{\Omega(N^{\lfloor D /2\rfloor})}. \tag{1}\] What about the Nevo-Santos-Wilson spheres? The main result of this paper is: **Theorem 1.1**.: 1. _The Nevo-Santos-Wilson spheres in_ _[_7_, Construction 3]_ _are all constructible._ 2. _The_ \(3\)_-spheres in_ _[_7_, Construction 1]_ _are all shellable._ On the other hand, Benedetti and Ziegler [2] proved that for \(D\geq 2\), the number of combinatorially distinct locally constructible (LC) \(D\)-spheres with \(M\) facets grows not faster than \(2^{D^{2}M}\). In addition, they proved that constructible spheres are LC. We make the following observation by combining Theorem 1.1 with Benedetti and Ziegler's results, the bounds in (1), and the Upper Bound Theorem for simplicial spheres by Stanley [10]. 
**Corollary 1.2**.: \[s_{\mathrm{constr}}(D,N)=2^{\Theta(N^{\lceil D/2\rceil})}\text{ for all }D\geq 3,\] _and_ \[s_{\mathrm{shell}}(D,N)=2^{\Theta(N^{\lceil D/2\rceil})}\text{ for all even }D\geq 4\text{ and }D=3.\] The structure of this paper is as follows. Several key definitions and facts related to Nevo, Santos, and Wilson's construction, as well as an outline of the main proof are provided in Section 2. Section 3 contains a detailed proof of the constructibility of the Nevo-Santos-Wilson spheres. Section 4 addresses the special case of \(3\)-spheres, which are shellable. Detailed computations leading to Corollary 1.2 can be found at the end of Section 3. ## 2 Preliminaries ### Basic definitions We start with several essential definitions and notations in preparation for the rest of the paper. A _simplicial complex_\(\Delta\) on a finite vertex set \(V\) is a collection of subsets of \(V\) such that if \(F\in\Delta\) and \(G\subseteq F\), then \(G\in\Delta\). The elements of \(\Delta\) are called _faces_. The _dimension_ of each face \(F\) is \(\dim F=|F|-1\). Conventionally we call the \(0\)-dimensional faces _vertices_, and the \(1\)-faces _edges_. We say \(\Delta\) is _pure_ if all of its maximal faces with respect to inclusion have the same dimension. In that case, these maximal faces are called _facets_ and the dimension of \(\Delta\) is defined to be that of its facets. To specify a simplicial complex \(\Delta\), it suffices to list all of its facets. The _simplex_ on \(V\), denoted \(\overline{V}\), is the collection of all subsets of \(V\). For a face \(F\in\Delta\), \(\overline{F}\) is the collection of all subsets of \(F\). Starting from the next section, we blur the difference between a face \(F\in\Delta\) and the simplex \(\overline{F}\subseteq\Delta\) and denote both as \(F\) by abuse of notation. For two positive integers \(n_{1},n_{2}\) such that \(n_{1}<n_{2}\), define \([n_{1}]:=\{1,\ldots,n_{1}\}\) and \([n_{1},n_{2}]:=\{n_{1},\ldots,n_{2}\}\). A _path_ of length \(n-1\) is a \(1\)-dimensional pure simplicial complex on the vertex set \(\{a_{1},\ldots,a_{n}\}\) whose facets are \(\{a_{i},a_{i+1}\}\) for \(i\in[n-1]\). We denote this path as \(P(a_{1},\ldots,a_{n})\). Given a \(D\)-dimensional simplicial complex \(\Delta\), we can associate with \(\Delta\) its _geometric realization_\(\|\Delta\|\) as follows. For each facet \(F\in\Delta\), build a \((|F|-1)\)-dimensional _geometric simplex_ with vertices labeled by elements in \(F\). Glue the simplices in a way that every two simplices are identified along their common (possibly empty) face. We say that \(\Delta\) is a _simplicial \(D\)-sphere_ (and respectively, a _simplicial \(D\)-ball_) if \(\|\Delta\|\) is homeomorphic to a \(D\)-sphere (\(D\)-ball). If \(\Delta\) and \(\Gamma\) are simplicial complexes on disjoint vertex sets \(V\) and \(V^{\prime}\), then the _join_ of \(\Delta\) and \(\Gamma\) is the simplicial complex \(\Delta*\Gamma=\{F\cup G:F\in\Delta,\ G\in\Gamma\}\). When one of the the complexes, say \(\Gamma\), has only a single vertex \(v\), then we call \(\Delta*\overline{\{v\}}\) (or simply, \(\Delta*v\)) the _cone_ over \(\Delta\) with apex \(v\). A pure \(D\)-dimensional simplicial complex \(\Delta\) is _shellable_ if there is an ordering of facets of \(\Delta\), \(F_{1},\ldots,F_{n}\) such that for each \(i\in[2,n]\), \(\overline{F}_{i}\cap(\overline{F}_{1}\cup\cdots\cup\overline{F}_{i-1})\) is a pure \((D-1)\)-dimensional simplicial complex. Such an ordering is called a _shelling order_. 
Any \((D-1)\)-dimensional subcomplex of a \(D\)-simplex is shellable. The join of two shellable complexes is shellable. A relaxation of shellability is the notion of _constructibility_. A pure \(D\)-dimensional simplicial complex \(\Delta\) is _constructible_ if either * \(\Delta\) is a simplex, or * there exist two \(D\)-dimensional constructible simplicial complexes \(\Delta_{1}\) and \(\Delta_{2}\) such that \(\Delta_{1}\cup\Delta_{2}=\Delta\), and \(\Delta_{1}\cap\Delta_{2}\) is a \((D-1)\)-dimensional constructible simplicial complex. We say that \(\Delta_{1}\cup\Delta_{2}\) is a _constructible decomposition_ of \(\Delta\). We refer to a \(D\)-dimensional constructible complex as \(D\)_-constructible_. It follows from the definition that any shellable complex is constructible. Simplicial complexes form a subclass of _polyhedral complexes_. A polyhedral complex \(C\) is a collection of polytopes such that * if \(P\in C\) and \(Q\) is a face of \(P\), then \(Q\in C\), and * if \(P,Q\in C\), then \(P\cap Q\) is a common face of \(P\) and \(Q\). For more information about polytopes and shellability, see [11]. Polyhedral complexes naturally come with a geometric realization. All definitions from the beginning of this section can be adapted to polyhedral complexes. For instance, a polyhedral complex is a _polyhedral \(D\)-sphere_ (_polyhedral \(D\)-ball_, respectively) if it is homeomorphic to a \(D\)-sphere (\(D\)-ball). Given a polyhedral \(D\)-ball \(C\), define its _boundary complex_\(\partial C\) to be the subcomplex of \(C\) whose facets are the \((D-1)\)-faces of \(C\) that are contained in exactly one facet of \(C\). A _triangulation_ of a polyhedral complex \(C\) is a simplicial complex \(\Delta\) such that * the geometric realization of \(\Delta\) coincides with \(C\), and * every face of \(\|\Delta\|\) is contained in a polytope in \(C\). ### The Nevo-Santos-Wilson spheres Here we present the key facts about [7, Construction 3] as well as introduce some new definitions and notation. We closely follow Nevo, Santos, and Wilson's paper [7] and state their results without proof. The reader is encouraged to check their paper for more details. A word about notation: the construction below is based on the join of \(d\) paths, each of length \(n-1\). This join is a \((2d-1)\)-dimensional complex. We let \(D:=2d-1\) and mention right away that each sphere produced in [7, Construction 3] is \(D\)-dimensional and has \(N=dn+\lceil d(n-1)\rceil/(d+2)+1\) vertices. Let \(\mathcal{T}\) be the join of \(d\) paths of length \(n-1\). For each \(\ell\in[d]\), we denote the \(\ell\)-th path by \(P(a_{1}^{(\ell)},a_{2}^{(\ell)},\ldots,a_{n}^{(\ell)})\) for \(\ell\in[d]\). Then each \((2d-1)\)-simplex \(\sigma\) in the join \(\mathcal{T}\) is of the form \[\sigma=\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}}^{(d)},a_{i_{d}+1}^ {(d)}\}.\] Equivalently, \(\sigma\) can be uniquely represented by a \(d\)-tuple of indices \((i_{1},\ldots,i_{d})\in[n-1]\times\cdots\times[n-1]\). We will use these two notations interchangeably. Each such simplex can be visualized as a single "unit" cube in the \(d\)-cube with side length \(n\) that represents \(\mathcal{T}\). See Figure 1 for an illustration in the case of \(d=2\) and \(n=8\). For convenience, for each \(\sigma=(i_{1},\ldots,i_{d})\) we denote its _index sum_\(i_{1}+\cdots+i_{d}\) by \(\sum\sigma\). The join \(\mathcal{T}\) is a simplicial \((2d-1)\)-ball. 
For each \(k\in\{1,\ldots,\lceil d(n-1)/(d+2)\rceil\}\), define \(\mathcal{T}|_{\mathbf{B}_{k}}\) to be the union of all simplices \(\sigma=(i_{1},\ldots,i_{d})\) in \(\mathcal{T}\) that satisfy \((k-1)(d+2)\leq\sum\sigma\leq k(d+2)-1\). This is a \((2d-1)\)-ball contained in \(\mathcal{T}\)[7, Lemma 5.2]; see Figure 1 for an illustration. It is easy to see that \(\mathcal{T}=\bigcup_{k}\mathcal{T}|_{\mathbf{B}_{k}}\). The idea of [7, Construction 3] is to replace each \(\mathcal{T}|_{\mathbf{B}_{k}}\) with a new ball \(\mathcal{T}_{k}\) whose boundary is the same as \(\mathcal{T}|_{\mathbf{B}_{k}}\). For each \(k\), a new vertex \(o_{k}\) is introduced as follows (assuming \(\mathcal{T}|_{\mathbf{B}_{k}}\) is not a simplex): * Consider \[\mathcal{S}_{k}^{\mathrm{low}} :=\left\{\sigma=(i_{1},\ldots,i_{d}):\sum\sigma=(k-1)(d+2)\right\},\] \[\mathcal{S}_{k}^{\mathrm{up}} :=\left\{\sigma=(i_{1},\ldots,i_{d}):\sum\sigma=k(d+2)-1\right\}, \text{ and }\mathcal{S}_{k}=\mathcal{S}_{k}^{\mathrm{low}}\cup\mathcal{S}_{k}^{ \mathrm{up}}.\] The corresponding cubes for \(k=3\) when \(d=2\) and \(n=8\) are highlighted with a darker pink in Figure 1. For each \(\sigma\in\mathcal{S}_{k}\), define \(D_{\sigma}:=\sigma\cap\partial(\mathcal{T}|_{\mathbf{B}_{k}})\). Let \(C_{\sigma}\) be a new polyhedral cell whose boundary complex is \(D_{\sigma}\cup(\partial D_{\sigma}*o_{k})\). Let \(F_{\sigma}\) be the only subset of \(V(D_{\sigma})\) not in \(D_{\sigma}\) but such that all of its proper subsets are in \(D_{\sigma}\). We call \(F_{\sigma}\) the _missing face_ of \(D_{\sigma}\). Let \(G_{\sigma}\) be the face of \(\sigma\) complementary to \(F_{\sigma}\). There are two ways to triangulate \(C_{\sigma}\) without introducing new vertices or changing the boundary of \(C_{\sigma}\); the resulting triangulations are \[T_{\sigma,1}=F_{\sigma}*\partial(G_{\sigma}*o_{k}),\quad T_{\sigma,2}=\partial F _{\sigma}*(G_{\sigma}*o_{k}).\] Both \(T_{\sigma,1}\) and \(T_{\sigma,2}\) are shellable, because they are the joins of a simplex and the boundary of a simplex. We let \(\widetilde{C}_{\sigma}\) denote either of the two triangulations of \(C_{\sigma}\), and put \[\widetilde{\mathcal{S}}_{k}^{\mathrm{low}}:=\bigcup_{\sigma\in\mathcal{S}_{k}^ {\mathrm{low}}}\widetilde{C}_{\sigma},\quad\widetilde{\mathcal{S}}_{k}^{ \mathrm{up}}:=\bigcup_{\sigma\in\mathcal{S}_{k}^{\mathrm{up}}}\widetilde{C}_{ \sigma}.\] We call these two complexes the _lower diagonal_ and _upper diagonal_ of the new ball \(\mathcal{T}_{k}\). Figure 1: A grid representing \(\mathcal{T}\) for \(d=2\) and \(n=8\) * Let \(\mathcal{C}_{k}\) be the set of all \((2d-2)\)-simplices on the boundary of \(\mathcal{T}|_{\mathbf{B}_{k}}\) not contained in any \(D_{\sigma}\) for \(\sigma\in\mathcal{S}_{k}\) (and therefore not contained in any \(\sigma\in\mathcal{S}_{k}\)). The corresponding edges for \(k=3\) when \(d=2\) and \(n=8\) are highlighted in red in Figure 1. For each \(\tau\in\mathcal{C}_{k}\), consider the simplex \(\tau*o_{k}\). Let \[\widetilde{\mathcal{C}}_{k}:=\bigcup_{\tau\in\mathcal{C}_{k}}(\tau*o_{k}).\] We call this complex the _connecting path_ of the new ball \(\mathcal{T}_{k}\). Then \(\mathcal{T}_{k}=\widetilde{\mathcal{S}}_{k}^{\mathrm{low}}\cup\widetilde{ \mathcal{C}}_{k}\cup\widetilde{\mathcal{S}}_{k}^{\mathrm{up}}\), and \(\widetilde{\mathcal{T}}=\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{\lceil d(n- 1)/(d+2)\rceil}\) is a \((2d-1)\)-ball homeomorphic to \(\mathcal{T}\). 
As the last step, a \((2d-1)\)-sphere is obtained by introducing a new vertex \(o\) and considering the complex \(\widetilde{\mathcal{T}}\cup(\partial\widetilde{\mathcal{T}}*o)\). As shown in [7, Corollary 5.5], this construction yields \(2^{\Omega(N^{d})}\) combinatorially distinct \((2d-1)\)-spheres with \(N\) vertices. Indeed, there are two ways to triangulate each \(C_{\sigma}\), so there are at least \(2^{\sum_{k}|\mathcal{S}_{k}|}>2^{2N^{d}/3d^{d+1}}\) many labeled \(N\)-vertex triangulations of the \((2d-1)\)-sphere. Since \(N!=2^{O(N\log N)}\), dividing by \(N!\) does not change the asymptotic order of the bound. The following result from [7] will be handy; we use it repeatedly throughout the main proof. **Lemma 2.1**.: [7, Lemma 3.2] Let \(\sigma,\sigma^{\prime}\in\mathcal{S}_{k}\) and \(\tau,\tau^{\prime}\in\mathcal{C}_{k}\). Then 1. \((\tau*o_{k})\cap(\tau^{\prime}*o_{k})=(\tau\cap\tau^{\prime})*o_{k}\). 2. \((\tau*o_{k})\cap\widetilde{C}_{\sigma}=(\tau\cap\sigma)*o_{k}\). 3. \(\widetilde{C}_{\sigma}\cap\widetilde{C}_{\sigma^{\prime}}=(\sigma\cap\sigma^{ \prime})*o_{k}\). ### Outline of the proof of constructibility This section provides an outline of the proof of the constructibility of the spheres \(\widetilde{\mathcal{T}}\cup(\partial\widetilde{\mathcal{T}}*o)\). The proof of shellability of the 3-spheres uses similar ideas. At the end of this section, we prove two lemmas in preparation for the main discussion. We prove the constructibility of \(\widetilde{\mathcal{T}}\cup(\partial\widetilde{\mathcal{T}}*o)\) with the following steps. * (Section 3.1) Fix \(k\in\{1,\ldots,\lceil d(n-1)/(d+2)\rceil\}\). Starting from the lower diagonal \(\widetilde{\mathcal{S}}_{k}^{\mathrm{low}}\) of \(\mathcal{T}_{k}\), we attach its subcomplexes \(\widetilde{C}_{\sigma}\) one by one, making sure that the complex at each phase is constructible. * (Section 3.2) After attaching the entire \(\widetilde{\mathcal{S}}_{k}^{\mathrm{low}}\), we attach the simplices in the connecting path \(\widetilde{\mathcal{C}}_{k}\) one by one, again maintaining that the complex at each phase is constructible. * (Section 3.3) Then we attach the subcomplexes \(\widetilde{C}_{\sigma}\) of the upper diagonal \(\widetilde{\mathcal{S}}_{k}^{\mathrm{up}}\) one at a time. When this is finished, we obtain the constructible complex \(\mathcal{T}_{k}\). * (Section 3.4) Next, we prove \(\widetilde{\mathcal{T}}\) is constructible by inductively adding the constructible complexes \(\mathcal{T}_{1},\ldots,\mathcal{T}_{\lceil d(n-1)/(d+2)\rceil}\). * (Section 3.4) Finally, we check that the union \(\widetilde{\mathcal{T}}\cup(\partial\widetilde{\mathcal{T}}*o)\) is a constructible decomposition, concluding that the sphere is constructible. The following lemmas characterize the \((2d-2)\)-simplices on the boundary of the join \(\mathcal{T}\) of \(d\) paths, as well as those on the boundary of \(\mathcal{T}|_{\mathbf{B}_{k}}\) (which has the same boundary as \(\mathcal{T}_{k}\)). **Lemma 2.2**.: _The list of the \((2d-2)\)-simplices on the boundary of \(\mathcal{T}\) consists of_ * \(\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) _for all_ \(\ell\in[d]\) _such that_ \(i_{\ell}=1\)_._ * \(\sigma\setminus\{a_{i_{\ell}}^{(\ell)}\}\) _for all_ \(\ell\in[d]\) _such that_ \(i_{\ell}=n-1\)_._ Proof.: Let \(\sigma=(i_{1},\ldots,i_{d})\). Any \(\tau=\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) is contained in \(\sigma\) and \(\sigma^{\prime}=(i_{1},\ldots,i_{\ell}^{\prime}=i_{\ell}-1,\ldots,i_{d})\). 
Therefore, \(\tau\) is on the boundary if and only if \(\sigma^{\prime}\) does not exist, or equivalently \(i_{\ell}=1\). Similarly, any \(\tau=\sigma\setminus\{a_{i_{\ell}}^{(\ell)}\}\) is contained in \(\sigma\) and \(\sigma^{\prime}=(i_{1},\ldots,i_{\ell}^{\prime}=i_{\ell}+1,\ldots,i_{d})\). Therefore, \(\tau\) is on the boundary if and only if \(\sigma^{\prime}\) does not exist, or equivalently \(i_{\ell}=n-1\). **Lemma 2.3**.: _Let \(\sigma=(i_{1},\ldots,i_{d})\in\mathcal{S}_{k}^{\text{low}}\) and \(\sigma^{\prime}=(i_{1}^{\prime},\ldots,i_{d}^{\prime})\in\mathcal{S}_{k}^{ \text{up}}\). Then \(\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) and \(\sigma^{\prime}\setminus\{a_{i_{\ell}^{\prime}}^{(\ell)}\}\) are on the boundary of \(\mathcal{T}|_{\mathbf{B}_{k}}\) for every \(\ell\in[d]\)._ Proof.: For \(\sigma\in\mathcal{S}_{k}^{\text{low}}\), if \(i_{\ell}=1\), then by Lemma 2.2, \(\tau=\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\in\partial(\mathcal{T}|_{ \mathbf{B}_{k}})\). If \(i_{\ell}>1\), then the only other \((2d-1)\)-simplex in \(\mathcal{T}\) containing \(\tau\) is \(\sigma^{*}=(i_{1},\ldots,i_{\ell}^{\prime}=i_{\ell}-1,\ldots,i_{d})\). Observe that \(\sigma^{*}\) is not in \(\mathcal{T}|_{\mathbf{B}_{k}}\), because \(\sum\sigma^{*}=\sum\sigma-1=(k-1)(d+2)-1\). Therefore again \(\tau\in\partial(\mathcal{T}|_{\mathbf{B}_{k}})\). The proof for \(\sigma^{\prime}\in\mathcal{S}_{k}^{\text{up}}\) is analogous. If \(i_{\ell}^{\prime}=n-1\), then by Lemma 2.2, \(\tau^{\prime}=\sigma\setminus\{a_{i_{\ell}^{\prime}}^{(\ell)}\}\in\partial( \mathcal{T}|_{\mathbf{B}_{k}})\). If \(i_{\ell}^{\prime}<n-1\), then the only other \((2d-1)\)-simplex in \(\mathcal{T}\) containing \(\tau^{\prime}\) is \((i_{1}^{\prime},\ldots,i_{\ell}^{\prime*}=i_{\ell}^{\prime}+1,\ldots,i_{d}^{ \prime})\), which is not in \(\mathcal{T}|_{\mathbf{B}_{k}}\) because its index sum is too large. ## 3 The constructibility of the Nevo-Santos-Wilson spheres ### The lower diagonal of \(\mathcal{T}_{k}\) Fix \(k\in\{1,\ldots,\lceil d(n-1)/(d+2)\rceil\}\). Recall that the lower diagonal of \(\mathcal{T}_{k}\) is \(\widetilde{\mathcal{S}}_{k}^{\text{low}}=\bigcup_{\sigma\in\mathcal{S}_{k}^{ \text{low}}}\widetilde{C}_{\sigma}\), where \(\widetilde{C}_{\sigma}\) is one of the two possible triangulations of \(C_{\sigma}\), and \[\mathcal{S}_{k}^{\text{low}}=\left\{\sigma=(i_{1},\ldots,i_{d}):\sum\sigma=(k- 1)(d+2)\right\}.\] The goal of this section is to show that \(\widetilde{\mathcal{S}}_{k}^{\text{low}}\) is constructible. For any subset of \([n-1]^{d}\), we can consider the _lexicographic order_ on its elements. Specifically, define \(\sigma^{\prime}<\sigma\) if the leftmost nonzero entry of \(\sigma-\sigma^{\prime}\) is positive. Let \(\sigma_{1}<\cdots<\sigma_{|\mathcal{S}_{k}^{\text{low}}|}\) denote the elements of \(\mathcal{S}_{k}^{\text{low}}\) ordered in this way. By Lemma 2.1 (3), for each \(j\in\{2,\ldots,|\mathcal{S}_{k}^{\text{low}}|\}\), \[\widetilde{C}_{\sigma_{j}}\cap(\widetilde{C}_{\sigma_{1}}\cup\cdots\cup \widetilde{C}_{\sigma_{j-1}})=(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{ j-1}))*o_{k}. \tag{2}\] Therefore, to show that the left-hand side is \((2d-2)\)-constructible, it suffices to show that \(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1})\) is \((2d-3)\)-constructible. We begin with an explicit description of this intersection. **Lemma 3.1**.: _Let \(d^{\prime}\in[(k-1)(d+2),k(d+2)-1]\). 
Let the set \(\mathcal{D}\) consist of all \((2d-1)\)-simplices in \(\mathcal{T}|_{\mathbf{B}_{k}}\) whose index sum is \(d^{\prime}\). Let \(\sigma_{1}<\cdots<\sigma_{|\mathcal{D}|}\) be elements of \(\mathcal{D}\) in lexicographic order. Let \(\sigma_{j}=(i_{1},\ldots,i_{d})\in\mathcal{D}\). Then \(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1})\) is the pure \((2d-3)\)-dimensional simplicial complex generated by all sets \(\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)},a_{i_{m}}^{(m)}\}\) such that \(\ell<m\), \(i_{\ell}>1\) and \(i_{m}<n-1\)._ Proof.: Suppose \(\sigma^{\prime}=(i^{\prime}_{1},\ldots,i^{\prime}_{d})\) satisfies \(\sum\sigma^{\prime}=\sum\sigma_{j}\) and \(\sigma^{\prime}<\sigma_{j}\). Then there exists an \(\ell\in[d]\) such that \(i_{r}=i^{\prime}_{r}\) for all \(r\leq\ell-1\) and \(i_{\ell}>i^{\prime}_{\ell}\geq 1\). However, since \(\sigma^{\prime}\) has the same index sum as \(\sigma_{j}\), there must be an \(m>\ell\) such that \(i_{m}<i^{\prime}_{m}\leq n-1\). This means \(\sigma_{j}\cap\sigma^{\prime}\) does not contain \(a^{(\ell)}_{i_{\ell}+1}\) or \(a^{(m)}_{i_{m}}\). Consider \(\sigma^{\prime\prime}=(i_{1},\ldots,i^{\prime\prime}_{\ell}=i_{\ell}-1,\ldots,i^{\prime\prime}_{m}=i_{m}+1,\ldots,i_{d})\). Note that \(\sum\sigma^{\prime\prime}=\sum\sigma_{j}\). Observe that \(\sigma^{\prime\prime}<\sigma_{j}\) and \(\sigma_{j}\cap\sigma^{\prime}\subseteq\sigma_{j}\cap\sigma^{\prime\prime}= \sigma_{j}\setminus\{a^{(\ell)}_{i_{\ell}+1},a^{(m)}_{i_{m}}\}\). Conversely, given any \(\ell<m\) such that \(i_{\ell}>1\) and \(i_{m}<n-1\), the simplex \(\sigma^{\prime}=(i_{1},\ldots,i^{\prime}_{\ell}=i_{\ell}-1,\ldots,i^{\prime}_ {m}=i_{m}+1,\ldots,i_{d})\) is an element of \(\mathcal{D}\). Moreover, \(\sigma^{\prime}<\sigma_{j}\) and \(\sigma_{j}\cap\sigma^{\prime}=\sigma_{j}\setminus\{a^{(\ell)}_{i_{\ell}+1},a^ {(m)}_{i_{m}}\}\). Notice that \(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1})\) in (2) is pure \((2d-3)\)-dimensional by taking \(d^{\prime}=(k-1)(d+2)\) in Lemma 3.1. We now prove that this intersection is shellable. **Lemma 3.2**.: _Let \(\sigma=(i_{1},\ldots,i_{d})\in\mathcal{S}^{\text{low}}_{k}\). Let \(\Delta\) be the pure simplicial complex generated by all sets \(\sigma\setminus\{a^{(\ell)}_{i_{\ell}+1},a^{(m)}_{i_{m}}\}\) such that \(\ell<m\), \(i_{\ell}>1\) and \(i_{m}<n-1\). Then \(\Delta\) is shellable._ Proof.: Let \(F_{\ell,m}\) denote a facet of the form described in the statement. Define \(F_{\ell^{\prime},m^{\prime}}\preceq F_{\ell,m}\) if \((\ell^{\prime},m^{\prime})<(\ell,m)\) in lexicographic order. We claim that \(\preceq\) is a shelling order on the facets of \(\Delta\). Indeed, if \(\ell=\ell^{\prime}\), then \(F_{\ell,m}\cap F_{\ell^{\prime},m^{\prime}}=F_{\ell,m}\setminus\{a^{(m^{\prime \prime})}_{i_{m^{\prime}}}\}\). Similarly, if \(\ell^{\prime}<\ell<m\), then \(F_{\ell^{\prime},m}\in\Delta\), \(F_{\ell^{\prime},m}\preceq F_{\ell,m}\), and \(F_{\ell,m}\cap F_{\ell^{\prime},m^{\prime}}\subseteq F_{\ell,m}\cap F_{\ell^ {\prime},m}=F_{\ell,m}\setminus\{a^{(\ell^{\prime})}_{i_{\ell^{\prime}}+1}\}\). The result follows. Therefore, for each \(\sigma_{j}\in\mathcal{S}^{\text{low}}_{k}\), the \((2d-2)\)-dimensional intersection \(\widetilde{C}_{\sigma_{j}}\cap(\widetilde{C}_{\sigma_{1}}\cup\cdots\cup \widetilde{C}_{\sigma_{j-1}})\) is constructible. By induction, \(\widetilde{\mathcal{S}}^{\text{low}}_{k}\) is constructible. 
### The connecting path of \(\mathcal{T}_{k}\) Recall that the connecting path of \(\mathcal{T}_{k}\) is \(\widetilde{\mathcal{C}}_{k}=\bigcup_{\tau\in\mathcal{C}_{k}}(\tau*o_{k})\), where \(\mathcal{C}_{k}\) is the set of all \((2d-2)\)-simplices on the boundary of \(\mathcal{T}|_{\mathbf{B}_{k}}\) that are not contained in any \(\sigma\in\mathcal{S}_{k}\). The goal of this section is to show that \(\widetilde{\mathcal{S}}^{\text{low}}_{k}\cup\widetilde{\mathcal{C}}_{k}\) is constructible. By definition, each \(\tau\in\mathcal{C}_{k}\) is contained in a unique \((2d-1)\)-simplex \(\sigma\) in \(\mathcal{T}|_{\mathbf{B}_{k}}\) such that \((k-1)(d+2)<\sum\sigma<k(d+2)-1\). By Lemma 2.2, either \(\tau=\sigma\setminus\{a^{(\ell)}_{i_{\ell}+1}\}\) with \(i_{\ell}=1\) or \(\tau=\sigma\setminus\{a^{(\ell)}_{i_{\ell}}\}\) with \(i_{\ell}=n-1\). The following two definitions give rise to a new ordering on these elements of \(\mathcal{C}_{k}\), which induces an ordering on the \((2d-1)\)-simplices in \(\widetilde{\mathcal{C}}_{k}\). Recall that \(P(a^{(r)}_{1},a^{(r)}_{2},\ldots,a^{(r)}_{n})\) denotes the \(r\)-th path in the join \(\mathcal{T}\) of \(d\) paths for \(r\in[d]\). **Definition 3.3**.: Let \(\tau\) be a simplex in the join \(\mathcal{T}\) and \(r\in[d]\). Define \(\tau(r)=\tau\cap P(a^{(r)}_{1},a^{(r)}_{2},\ldots,a^{(r)}_{n})\), the intersection of \(\tau\) with the \(r\)-th path. Let \(\tau^{\prime}\) be another simplex in \(\mathcal{T}\). Define \(\tau(r)>\tau^{\prime}(r)\) if one of the following conditions holds. 1. \(|\tau(r)|\neq 0\) and \(|\tau^{\prime}(r)|=0\). 2. \(|\tau(r)|=|\tau^{\prime}(r)|=1\) and \(m>m^{\prime}\), where \(\tau(r)=\{a^{(r)}_{m}\}\) and \(\tau^{\prime}(r)=\{a^{(r)}_{m^{\prime}}\}\). 3. \(|\tau(r)|=|\tau^{\prime}(r)|=2\) and \(i_{r}>i^{\prime}_{r}\). 4. \(|\tau(r)|=2\), \(|\tau^{\prime}(r)|=1\), and \(i_{r}\geq m^{\prime}\), where \(\tau^{\prime}(r)=\{a^{(r)}_{m^{\prime}}\}\). For example, \(\{a^{(r)}_{i_{r}},a^{(r)}_{i_{r}+1}\}>\{a^{(r)}_{i_{r}}\}\). 5. \(|\tau(r)|=1\), \(|\tau^{\prime}(r)|=2\), and \(m\geq i^{\prime}_{r}+1\), where \(\tau(r)=\{a^{(r)}_{m}\}\). For example, \(\{a^{(r)}_{i_{r}+1}\}>\{a^{(r)}_{i_{r}},a^{(r)}_{i_{r}+1}\}\). **Definition 3.4**.: Let \(\tau,\tau^{\prime}\) be simplices in \(\mathcal{T}\) such that \(|\tau|=|\tau^{\prime}|\). Define \(\tau^{\prime}\prec\tau\) if there exists an \(r\in[d]\) such that \(\tau(s)=\tau^{\prime}(s)\) for every \(s<r\) and \(\tau(r)>\tau^{\prime}(r)\). Order the elements of \(\mathcal{C}_{k}\) with respect to \(\prec\) in Definition 3.4. For each \(\tau_{j}\in\mathcal{C}_{k}\), we show that \((\tau_{j}*o_{k})\cap((\tau_{1}*o_{k})\cup\cdots\cup(\tau_{j-1}*o_{k})\cup \widetilde{\mathcal{S}}_{k}^{\text{low}})\) is \((2d-2)\)-constructible. **Lemma 3.5**.: _Let \(\tau_{j}\in\mathcal{C}_{k}\). Then \(\tau_{j}\cap(\tau_{1}\cup\cdots\cup\tau_{j-1}\cup\mathcal{S}_{k}^{\text{low}})\) is pure \((2d-3)\)-dimensional._ Proof.: Let \(\sigma=(i_{1},\ldots,i_{d})\) be the unique simplex in \(\mathcal{T}|_{\mathbf{B}_{k}}\) containing \(\tau_{j}\). Then \((k-1)(d+2)<\sum\sigma<k(d+2)-1\). Let \(\ell\in[d]\) be the unique coordinate such that \(|\tau_{j}(\ell)|=1\). We start by extracting two arguments to be used repeatedly throughout the proof. **Argument 3.5.1**.: Suppose \(m\in[d]\) satisfies \(m\neq\ell\) and \(i_{m}>1\). Let \(\sigma^{\prime\prime}=(i_{1},\ldots,i_{m}^{\prime\prime}=i_{m}-1,\ldots,i_{d})\). 
Since \(\sum\sigma^{\prime\prime}=\sum\sigma-1\geq(k-1)(d+2)\), \(\sigma^{\prime\prime}\) is in \(\mathcal{T}|_{\mathbf{B}_{k}}\). If \(\sigma^{\prime\prime}\in\mathcal{S}_{k}^{\text{low}}\), then \(\tau_{j}\cap\sigma^{\prime\prime}=\tau_{j}\setminus\{a_{i_{m}+1}^{(m)}\}\). If \(\sigma^{\prime\prime}\notin\mathcal{S}_{k}^{\text{low}}\), then consider \(\tau^{\prime\prime}=\sigma^{\prime\prime}\setminus\sigma^{\prime\prime}(\ell) \cup\tau_{j}(\ell)\in\mathcal{C}_{k}\). Observe that \(\tau^{\prime\prime}\prec\tau_{j}\) and \(\tau_{j}\cap\tau^{\prime\prime}=\tau_{j}\setminus\{a_{i_{m}+1}^{(m)}\}\). In particular, if \(\tau^{\prime}\subseteq\sigma^{\prime}=(i_{1}^{\prime},\ldots,i_{d}^{\prime})\) with \(i_{m}^{\prime}<i_{m}\), then \(a_{i_{m}+1}^{(m)}\notin\tau_{j}\cap\tau^{\prime}\). Thus \(\tau_{j}\cap\tau^{\prime}\subseteq\tau_{j}\setminus\{a_{i_{m}+1}^{(m)}\}=\tau _{j}\cap\tau^{\prime\prime}\). **Argument 3.5.2**.: Suppose \(\tau_{j}(\ell)=\{a_{i_{\ell}+1}^{(\ell)}\}\). Then \(i_{\ell}=n-1\). Let \(\sigma^{\prime\prime}=(i_{1},\ldots,i_{\ell}^{\prime\prime}=1,\ldots,i_{d})\). If \(\sum\sigma^{\prime\prime}<(k-1)(d+2)\), then there is an \(m\in[2,\ldots,n-2]\) such that \(\sigma^{\prime\prime\prime}=(i_{1},\ldots,i_{\ell}^{\prime\prime}=m,\ldots,i_{ d})\in\mathcal{S}_{k}^{\text{low}}\). This gives \(\tau_{j}\cap\sigma^{\prime\prime\prime}=\tau_{j}\setminus\{a_{i_{\ell}+1}^{( \ell)}\}\). If \(\sum\sigma^{\prime\prime}=(k-1)(d+2)\), then \(\sigma^{\prime\prime}\in\mathcal{S}_{k}^{\text{low}}\). In this case, \(\tau_{j}\cap\sigma^{\prime\prime}=\tau_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\). If \(\sum\sigma^{\prime\prime}>(k-1)(d+2)\), then consider \(\tau^{\prime\prime}=\sigma^{\prime\prime}\setminus\{a_{i_{\ell}^{\prime\prime}+ 1}^{(\ell)}\}\in\mathcal{C}_{k}\). Observe that \(\tau^{\prime\prime}\prec\tau_{j}\) and \(\tau_{j}\cap\tau^{\prime\prime}=\tau_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\). In particular, if \(\tau^{\prime}\subseteq\sigma^{\prime}=(i_{1}^{\prime},\ldots,i_{d}^{\prime})\) with \(i_{\ell}^{\prime}<i_{\ell}\), then \(a_{i_{\ell}+1}^{(\ell)}\notin\tau_{j}\cap\tau^{\prime}\). Thus \(\tau_{j}\cap\tau^{\prime}\) is contained in the intersection \(\tau_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) constructed above. In both Arguments 3.5.1 and 3.5.2, we find a simplex in \(\tau_{1}\cup\cdots\cup\tau_{j-1}\cup\mathcal{S}_{k}^{\text{low}}\) that intersects \(\tau_{j}\) at a \((2d-3)\)-face of \(\tau_{j}\). To finish the proof, we show that the intersection of \(\tau_{j}\) with any simplex in \(\tau_{1}\cup\cdots\cup\tau_{j-1}\cup\mathcal{S}_{k}^{\text{low}}\) is contained in one of these \((2d-3)\)-dimensional intersections. Let \(\sigma^{\prime}=(i_{1}^{\prime},\ldots,i_{d}^{\prime})\). We enumerate all cases below. 1. Suppose \(\tau_{j}=\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) with \(i_{\ell}=1\). 1. Suppose \(\sigma^{\prime}\in\mathcal{S}_{k}^{\text{low}}\). Then \(\sum\sigma>\sum\sigma^{\prime}\). But \(1=i_{\ell}\leq i_{\ell}^{\prime}\), so there exists an \(m\neq\ell\) such that \(i_{m}>i_{m}^{\prime}\geq 1\). Apply Argument 3.5.1. 2. Suppose \(\tau^{\prime}=\sigma^{\prime}\setminus\{a_{i_{\tau}^{\prime}+1}^{(r)}\}\in \mathcal{C}_{k}\) with \(i_{r}^{\prime}=1\) and \(\tau^{\prime}\prec\tau_{j}\). * If \(\ell=r\), then there exists an \(m\neq\ell\) such that \(\tau_{j}(m)>\tau^{\prime}(m)\). This is equivalent to \(i_{m}>i_{m}^{\prime}\geq 1\). Apply Argument 3.5.1. * If \(\ell<r\), then \(\tau_{j}(\ell)<\tau^{\prime}(\ell)\). 
Therefore, there exists an \(m<\ell<r\) such that \(\tau_{j}(m)>\tau^{\prime}(m)\), or equivalently \(i_{m}>i_{m}^{\prime}\geq 1\). Apply Argument 3.5.1. * If \(\ell>r\) and \(i_{r}>i_{r}^{\prime}=1\), apply Argument 3.5.1 by taking \(m=r\). * If \(\ell>r\) and \(i_{r}=i_{r}^{\prime}=1\), then consider \(\tau^{\prime\prime}=\sigma\setminus\{a_{i_{r}+1}^{(r)}\}\in\mathcal{C}_{k}\). Observe that \(\tau^{\prime\prime}\prec\tau_{j}\) and \(\tau_{j}\cap\tau^{\prime}\subseteq\tau_{j}\cap\tau^{\prime\prime}=\tau_{j} \setminus\{a_{i_{r}+1}^{(r)}\}\). 3. Suppose \(\tau^{\prime}=\sigma^{\prime}\setminus\{a_{i_{r}^{\prime}}^{(r)}\}\in \mathcal{C}_{k}\) with \(i_{r}^{\prime}=n-1\) and \(\tau^{\prime}\prec\tau_{j}\). Then \(\tau_{j}(r)<\tau^{\prime}(r)\), so there exists an \(m<r\) such that \(\tau_{j}(m)>\tau^{\prime}(m)\). Hence \(m\neq\ell\) and \(i_{m}>i_{m}^{\prime}\geq 1\). Apply Argument 3.5.1. 2. Suppose \(\tau_{j}=\sigma\setminus\{a_{i_{\ell}}^{(\ell)}\}\) with \(i_{\ell}=n-1\). 1. \(\sigma_{j}\setminus\{a^{(\ell)}_{i_{\ell}+1},a^{(m)}_{i_{m}+1}\}\) _for all_ \(i_{\ell}=1\) _and_ \(i_{m}>1\)_._ 2. \(\sigma_{j}\setminus\{a^{(\ell)}_{i_{\ell}},a^{(\ell)}_{i_{t}+1}\}\) _for all_ \(1<i_{\ell}\leq n-1\)_._ 3. \(\sigma_{j}\setminus\{a^{(\ell)}_{i_{\ell}},a^{(\ell)}_{i_{m}+1}\}\) _for all_ \(i_{\ell}=n-1\)_,_ \(i_{m}>1\) _and_ \(\ell\neq m\)_._ _._ 4. \(\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)},a_{i_{m}}^{(m)}\}\) _for all_ \(i_{\ell}>1\)_,_ \(i_{m}<n-1\)_, and_ \(\ell<m\)_._ Proof.: By Lemma 3.1, the facets of \(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1})\) are exactly the ones described in 4. It remains to prove that \(\sigma_{j}\cap(\mathcal{S}_{k}^{\mathrm{low}}\cup\mathcal{C}_{k})\) is generated by the first three types of facets. We present Arguments 3.6.1 and 3.6.2 to be used repeatedly throughout the proof. **Argument 3.6.1**.: Suppose \(i_{\ell}\in\{1,n-1\}\), \(i_{m}>1\), and \(m\neq\ell\). Let \(\sigma^{\prime\prime}=(i_{1},\ldots,i_{m}^{\prime\prime}=i_{m}-1,\ldots,i_{d})\). If \(i_{\ell}=1\), then let \(\tau^{\prime\prime}=\sigma^{\prime\prime}\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) and observe that \(\sigma_{j}\cap\tau^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell) },a_{i_{m}+1}^{(m)}\}\). Alternatively, if \(i_{\ell}=n-1\), then let \(\tau^{\prime\prime}=\sigma^{\prime\prime}\setminus\{a_{i_{\ell}}^{(\ell)}\}\) and observe that \(\sigma_{j}\cap\tau^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell) },a_{i_{m}+1}^{(m)}\}\). In both cases, \(\tau^{\prime\prime}\in\mathcal{C}_{k}\) because \(\sum\sigma^{\prime\prime}=\sum\sigma_{j}-1=k(d+2)-2>(k-1)(d+2)\). **Argument 3.6.2**.: First suppose \(i_{\ell}>2\). Let \(\sigma^{\prime\prime}=(i_{1},\ldots,i_{\ell}^{\prime\prime}=1,\ldots,i_{d})\). If \(\sum\sigma^{\prime\prime}<(k-1)(d+2)\), then there exists an \(m\in[2,i_{\ell}-1]\) such that \(\sigma^{\prime\prime\prime}=(i_{1},\ldots,i_{\ell}^{\prime\prime\prime}=m, \ldots,i_{d})\in\mathcal{S}_{k}^{\mathrm{low}}\). Observe that \(\sigma_{j}\cap\sigma^{\prime\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell}}^{ (\ell)},a_{i_{\ell}+1}^{(\ell)}\}\). If \(\sum\sigma^{\prime\prime}=(k-1)(d+2)\), then \(\sigma^{\prime\prime}\in\mathcal{S}_{k}^{\mathrm{low}}\), and \(\sigma_{j}\cap\sigma^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell) },a_{i_{\ell}+1}^{(\ell)}\}\). 
If \(\sum\sigma^{\prime\prime}>(k-1)(d+2)\), then \(\tau^{\prime\prime}=\sigma^{\prime\prime}\setminus\{a_{i_{\ell}^{\prime\prime }+1}^{(\ell)}\}\in\mathcal{C}_{k}\), and \(\sigma_{j}\cap\tau^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell) },a_{i_{\ell}+1}^{(\ell)}\}\). Suppose alternatively that \(i_{\ell}=2\). Let \(\sigma^{\prime\prime}=(i_{1},\ldots,i_{\ell}^{\prime\prime}=i_{\ell}-1=1, \ldots,i_{d})\). Then \(\tau^{\prime\prime}=\sigma^{\prime\prime}\setminus\{a_{i_{\ell}}^{(\ell)}\} \in\mathcal{C}_{k}\) and \(\sigma_{j}\cap\tau^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell) },a_{i_{\ell}+1}^{(\ell)}\}\). In particular, if \(\tau^{\prime}\subseteq\sigma^{\prime}=(i_{1}^{\prime},\ldots,i_{d}^{\prime})\) with \(i_{\ell}\geq 2\) and \(a_{i_{\ell}}^{(\ell)},a_{i_{\ell}+1}^{(\ell)}\notin\sigma_{j}\cap\tau^{\prime}\), then \(\sigma_{j}\cap\tau^{\prime}\) is contained in the intersection \(\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell)},a_{i_{\ell}+1}^{(\ell)}\}\) found above. Arguments 3.6.1 and 3.6.2 show that all first three types of facets described in the statement of the lemma are contained in \(\sigma_{j}\cap(\mathcal{S}_{k}^{\mathrm{low}}\cup\mathcal{C}_{k})\). Let \(\sigma^{\prime}=(i_{1}^{\prime},\ldots,i_{d}^{\prime})\). We prove the reverse containment by discussing the following cases. 1. Suppose \(\sigma^{\prime}\in\mathcal{S}_{k}^{\mathrm{low}}\). Since \(\sum\sigma_{j}-\sum\sigma^{\prime}=d+1\), there exists an \(\ell\in[d]\) such that \(i_{\ell}-i_{\ell}^{\prime}>1\). This implies \(i_{\ell}>2\) and \(a_{i_{\ell}}^{(\ell)},a_{i_{\ell}+1}^{(\ell)}\notin\sigma_{j}\cap\sigma^{\prime}\). Apply Argument 3.6.2. 2. Suppose \(\tau^{\prime}=\sigma^{\prime}\setminus\{a_{i_{\ell}^{\prime}+1}^{(\ell)}\}\in \mathcal{C}_{k}\) with \(i_{\ell}^{\prime}=1\). 1. If \(i_{\ell}=i_{\ell}^{\prime}=1\), then since \(\sum\sigma_{j}>\sum\sigma^{\prime}\), there is an \(m\neq\ell\) such that \(i_{m}>i_{m}^{\prime}\geq 1\). This implies \(a_{i_{m}+1}^{(m)}\notin\sigma_{j}\cap\tau^{\prime}\). Apply Argument 3.6.1. 2. If \(i_{\ell}\geq 2\), then \(a_{i_{\ell}}^{(\ell)},a_{i_{\ell}+1}^{(\ell)}\notin\sigma_{j}\cap\tau^{\prime}\). Apply Argument 3.6.2. 3. Suppose \(\tau^{\prime}=\sigma^{\prime}\setminus\{a_{i_{\ell}^{\prime}}^{(\ell)}\}\in \mathcal{C}_{k}\) with \(i_{\ell}^{\prime}=n-1\). 1. If \(i_{\ell}=i_{\ell}^{\prime}=n-1\), then since \(\sum\sigma_{j}>\sum\sigma^{\prime}\), there exists an \(m\neq\ell\) such that \(i_{m}>i_{m}^{\prime}\geq 1\). Apply Argument 3.6.1. 2. If \(i_{\ell}=1<n-1=i_{\ell}^{\prime}\), then \(a_{i_{\ell}+1}^{(\ell)}\notin\sigma_{j}\cap\tau^{\prime}\). But \(\sum\sigma_{j}>\sum\sigma^{\prime}\), so there exists an \(m\neq\ell\) such that \(i_{m}>i_{m}^{\prime}\geq 1\). Apply Argument 3.6.1. 3. If \(1<i_{\ell}<n-1\), then \(a_{i_{\ell}}^{(\ell)},a_{i_{\ell}+1}^{(\ell)}\notin\sigma_{j}\cap\tau^{\prime}\). Apply Argument 3.6.2. Next we prove that the intersection \(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1}\cup\mathcal{S}_{k}^{\mathrm{ low}}\cup\mathcal{C}_{k})\) is shellable. First, observe that for a fixed \(\sigma_{j}\in\mathcal{S}_{k}^{\mathrm{up}}\), the four types of facets described in Lemma 3.6 are mutually exclusive. This guarantees that the ordering introduced below is a total order. **Definition 3.7**.: Let \(\sigma_{j}=(i_{1},\ldots,i_{d})\in\mathcal{S}_{k}^{\mathrm{up}}\). 
For each \(\ell\in[d]\), define \(F_{\ell,r}\) to be one of the following: * If \(i_{\ell}=1\), then let \(F_{\ell,r}=\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)},a_{i_{m_{r}}+1}^{(m_{r})}\}\), where \(m_{r}\) is the \(r\)-th coordinate of \(\sigma_{j}\) such that \(i_{m_{r}}>1\) (Type 1 in Lemma 3.6). * If \(1<i_{\ell}<n-1\), then let \(F_{\ell,0}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell)},a_{i_{\ell}+1}^{(\ell)}\}\) (Type 2 in Lemma 3.6), and \(F_{\ell,r}=\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)},a_{i_{m_{r}}}^{(m_{r})}\}\), where \(m_{r}\) is the \(r\)-th coordinate of \(\sigma_{j}\) such that \(m_{r}>\ell\) and \(i_{m_{r}}<n-1\) (Type 4 in Lemma 3.6). * If \(i_{\ell}=n-1\), then let \(F_{\ell,r}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell)},a_{i_{m_{r}}+1}^{(m_{r}) }\}\), where \(m_{r}\) is the \(r\)-th coordinate of \(\sigma_{j}\) such that \(i_{m_{r}}>1\) (Type 3 in Lemma 3.6). Suppose there are \(R\) such coordinates, then let \(F_{\ell,R+1}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell)},a_{i_{\ell}+1}^{(\ell) }\}\) (Type 2 in Lemma 3.6), and \(F_{\ell,R+1+r}=\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)},a_{i_{s_{r}}}^{(s_ {r})}\}\), where \(s_{r}\) is the \(r\)-th coordinate of \(\sigma_{j}\) such that \(s_{r}>\ell\) and \(i_{s_{r}}<n-1\) (Type 4 in Lemma 3.6). Furthermore, define \(F_{\ell^{\prime},r^{\prime}}\preceq F_{\ell,r}\) if \((\ell^{\prime},r^{\prime})<(\ell,r)\) in lexicographic order. **Lemma 3.8**.: _Let \(\sigma_{j}=(i_{1},\ldots,i_{d})\in\mathcal{S}_{k}^{up}\). Then \(\preceq\) in Definition 3.7 is a shelling order of the facets of \(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1}\cup\mathcal{S}_{k}^{low} \cup\mathcal{C}_{k})\)._ Proof.: Let \(F,F^{\prime}\) be two facets of \(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1}\cup\mathcal{S}_{k}^{\rm low }\cup\mathcal{C}_{k})\) such that \(F^{\prime}\preceq F\). We show that \(F\cap F^{\prime}\) is either \((2d-4)\)-dimensional, or contained in a \((2d-4)\)-dimensional intersection \(F\cap F^{\prime\prime}\), where \(F^{\prime\prime}\preceq F\). First, suppose \(F=F_{\ell,-}\) and \(F^{\prime}=F_{\ell,-^{\prime}}\). If \(1\leq i_{\ell}<n-1\), then \(F\cap F^{\prime}\) is \((2d-4)\)-dimensional. If \(i_{\ell}=n-1\), then both \(F_{\ell,R+1}\cap F_{\ell,r}\) and \(F_{\ell,R+1+r}\cap F_{\ell,R+1}\) are \((2d-4)\)-dimensional for any \(r\). Moreover, in this case, \(F_{\ell,R+1+r}\cap F_{\ell,r^{\prime}}\) is contained in the \((2d-4)\)-dimensional intersection \(F_{\ell,R+1+r}\cap F_{\ell,R+1}\). Now suppose \(F=F_{\ell,-}\) and \(F^{\prime}=F_{\ell^{\prime},-^{\prime}}\) for \(\ell\neq\ell^{\prime}\). By Definition 3.7, \(\ell^{\prime}<\ell\). For each case below, we find an \(F^{\prime\prime}\preceq F\) such that \(F\cap F^{\prime\prime}\) is \((2d-4)\)-dimensional and \(F\cap F^{\prime}\subseteq F\cap F^{\prime\prime}\). 1. Suppose \(F=\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)},a_{i_{m}+1}^{(m)}\}\) with \(i_{\ell}=1\) and \(i_{m}>1\). 1. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_ {m^{\prime}}+1}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}=1\) and \(i_{m^{\prime}}>1\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a _{i_{m}+1}^{(m)}\}\). 2. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_ {\ell^{\prime}}+1}^{(\ell^{\prime})}\}\) with \(1<i_{\ell^{\prime}}\leq n-1\). * If \(m=\ell^{\prime}\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})}\}\). 
* If \(m<\ell^{\prime}<\ell\) and \(i_{\ell}^{\prime}<n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a _{i_{m}+1}^{(\ell^{\prime})}\}\). * If \(m<\ell^{\prime}<\ell\) and \(i_{\ell}^{\prime}<n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a _{i_{m}+1}^{(\ell^{\prime})}\}\). * Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_ {m^{\prime}}+1}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}=n-1\), \(i_{m^{\prime}}>1\) and \(\ell^{\prime}\neq m^{\prime}\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a _{i_{m}+1}^{(m)}\}\). 4. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_ {m^{\prime}}}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}>1\), \(i_{m^{\prime}}<n-1\), and \(\ell^{\prime}<m^{\prime}\). * If \(m=\ell^{\prime}\), then \(F\cap F^{\prime}=F\setminus\{a_{i_{m^{\prime}}}^{(m^{\prime})}\}\). * If \(m>\ell^{\prime}\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell+1}}^{(\ell^{\prime})},a_{i_ {\ell^{\prime}}+1}^{(\ell^{\prime})}\}\). * If \(m<\ell^{\prime}<\ell,m^{\prime}\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{m+1}}^{(m)},a_{i_{m^{\prime}}+1}^{(m^{ \prime})}\}\). 2. Suppose \(F=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell)},a_{i_{\ell}+1}^{(\ell)}\}\) with \(1<i_{\ell}\leq n-1\). 1. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_{m^ {\prime}}+1}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}=1\) and \(i_{m^{\prime}}>1\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})}, a_{i_{\ell}+1}^{(\ell)}\}\). 2. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{ \ell^{\prime}}+1}^{(\ell^{\prime})}\}\) with \(1<i_{\ell^{\prime}}\leq n-1\). * If \(i_{\ell}=n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell)},a_{i_{ \ell^{\prime}}+1}^{(\ell^{\prime})}\}\). * If \(i_{\ell}<n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})}, a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})}\}\) with \(i_{\ell^{\prime}}=n-1\), \(i_{m^{\prime}}>1\) and \(\ell^{\prime}\neq m^{\prime}\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})}, a_{i_{\ell}+1}^{(\ell)}\}\). * Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{ m^{\prime}}}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}>1\), \(i_{m^{\prime}}<n-1\), and \(\ell^{\prime}<m^{\prime}\). * If \(i_{\ell}=n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})}, a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})}\}\). * If \(i_{\ell}<n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})}, a_{i_{\ell}}^{(\ell)}\}\). 3. Suppose \(F=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell)},a_{i_{m+1}}^{(\ell)}\}\) with \(i_{\ell}=n-1\), \(i_{m}>1\) and \(\ell\neq m\). * Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_ {m^{\prime}}+1}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}=1\) and \(i_{m^{\prime}}>1\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})}, a_{i_{m}+1}^{(m)}\}\). * Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_ {\ell^{\prime}}+1}^{(\ell^{\prime})}\}\) with \(1<i_{\ell^{\prime}}\leq n-1\). 
* If \(m=\ell^{\prime}\), then \(F\cap F^{\prime}=F\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})}\}\). * If \(m>\ell^{\prime}\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell)},a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})}\}\). * If \(m<\ell^{\prime}<\ell\) and \(i_{\ell^{\prime}}=n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{m}+1}^{(m)}\}\). * If \(m<\ell^{\prime}<\ell\) and \(i_{\ell^{\prime}}<n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{m}+1}^{(m)},a_{i_{\ell^{\prime}}}^{(\ell^{\prime})}\}\). 3. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{m^{\prime}}+1}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}=n-1\), \(i_{m^{\prime}}>1\) and \(\ell^{\prime}\neq m^{\prime}\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{m}+1}^{(m)}\}\). 4. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_{m^{\prime}}}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}>1\), \(i_{m^{\prime}}<n-1\), and \(\ell^{\prime}<m^{\prime}\). Then \(m>\ell>\ell^{\prime}\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell}}^{(\ell)},a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})}\}\). 4. Suppose \(F=\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)},a_{i_{m}}^{(m)}\}\) with \(i_{\ell}>1\), \(i_{m}<n-1\), and \(\ell<m\). 1. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_{m^{\prime}}+1}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}=1\) and \(i_{m^{\prime}}>1\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_{\ell}+1}^{(\ell)}\}\). 2. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})}\}\) with \(1<i_{\ell^{\prime}}\leq n-1\). * If \(i_{\ell^{\prime}}=n-1\), then take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{\ell}+1}^{(\ell)}\}\). * If \(i_{\ell^{\prime}}<n-1\), then \(m>\ell>\ell^{\prime}\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_{m}}^{(m)}\}\). 3. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{m^{\prime}}+1}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}=n-1\), \(i_{m^{\prime}}>1\) and \(\ell^{\prime}\neq m^{\prime}\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}}^{(\ell^{\prime})},a_{i_{\ell}+1}^{(\ell)}\}\). 4. Suppose \(F^{\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_{m^{\prime}}}^{(m^{\prime})}\}\) with \(i_{\ell^{\prime}}>1\), \(i_{m^{\prime}}<n-1\), and \(\ell^{\prime}<m^{\prime}\). Then \(m>\ell>\ell^{\prime}\). Take \(F^{\prime\prime}=\sigma_{j}\setminus\{a_{i_{\ell^{\prime}}+1}^{(\ell^{\prime})},a_{i_{m}}^{(m)}\}\). By Lemma 2.1, the result of Lemma 3.8 implies that \(\widetilde{C}_{\sigma_{j}}\cap(\widetilde{C}_{\sigma_{1}}\cup\cdots\cup\widetilde{C}_{\sigma_{j-1}}\cup\widetilde{\mathcal{S}}_{k}^{\rm low}\cup\widetilde{\mathcal{C}}_{k})\) is \((2d-2)\)-constructible.
Moreover, the complex \(\widetilde{C}_{\sigma_{j}}\) is \((2d-1)\)-constructible. Hence so is \(\mathcal{T}_{k}=\widetilde{\mathcal{S}}_{k}^{\rm low}\cup\widetilde{\mathcal{ C}}_{k}\cup\widetilde{\mathcal{S}}_{k}^{\rm up}\) by induction. **Remark 3.9**.: Note that \(\widetilde{\mathcal{S}}_{k}^{\rm low}=\varnothing\) only when \(k=1\). In this case, the simplices \((i_{1},\ldots,i_{\ell}^{\prime\prime}=1,\ldots,i_{d})\) and \((i_{1},\ldots,i_{m}-1,\ldots,i_{d})\) mentioned in Arguments 3.5.1, 3.5.2, 3.6.1, and 3.6.2 will simply not be in \(\mathcal{S}_{k}^{\rm low}\) but fall into the other discussed scenarios. In addition, \(\widetilde{\mathcal{S}}_{k}^{\rm up}=\varnothing\) or \(\widetilde{\mathcal{C}}_{k}\cup\widetilde{\mathcal{S}}_{k}^{\rm up}=\varnothing\) is only possible when \(k\) is the largest. Our proof is unaffected by these special cases. ### The constructible sphere \(\widetilde{\mathcal{T}}\cup(\partial\widetilde{\mathcal{T}}*o)\) We have proved that each ball \(\mathcal{T}_{k}\) is constructible. Next, we show that \(\mathcal{T}_{k}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})\) is \((2d-2)\)-constructible for each \(k\). This will imply that \(\widetilde{\mathcal{T}}=\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{\lceil d(n- 1)/(d+2)\rceil}\) is constructible. The constructibility of \(\widetilde{\mathcal{T}}\cup(\partial\widetilde{\mathcal{T}}*o)\) will then follow easily. Fix \(k\in\{1,\ldots,\lceil d(n-1)/(d+2)\rceil\}\). We first give an explicit description of \(\mathcal{T}_{k}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})\). For each \(\sigma=(i_{1},\ldots,i_{d})\), let \(\mathcal{F}(\sigma)\) denote the complex whose facets are \(\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) for \(\ell\in[d]\). **Lemma 3.10**.: \(\mathcal{T}_{k}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})=\bigcup_{ \sigma\in\mathcal{S}_{k}^{\rm low}}\mathcal{F}(\sigma)\)_._ Proof.: Since \(\partial\mathcal{T}_{j}=\partial\mathcal{T}|_{\mathbf{B}_{j}}\) for all \(j\), it suffices to show that \[\mathcal{T}|_{\mathbf{B}_{k}}\cap(\mathcal{T}|_{\mathbf{B}_{1}}\cup\cdots\cup \mathcal{T}|_{\mathbf{B}_{k-1}})=\bigcup_{\sigma\in\mathcal{S}_{k}^{\rm low}} \mathcal{F}(\sigma).\] The containment \(\bigcup_{\sigma\in\mathcal{S}_{k}^{\rm low}}\mathcal{F}(\sigma)\subseteq \mathcal{T}|_{\mathbf{B}_{k}}\cap(\mathcal{T}|_{\mathbf{B}_{1}}\cup\cdots\cup \mathcal{T}|_{\mathbf{B}_{k-1}})\) is clear. Suppose \(\sigma=(i_{1},\ldots,i_{d})\in\mathcal{S}_{k}^{\rm low}\). Then each \(\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) is contained in both \(\sigma\in\mathcal{S}_{k}^{\rm low}\subseteq\mathcal{T}|_{\mathbf{B}_{k}}\) and \((i_{1},\ldots,i_{\ell}-1,\ldots,i_{d})\in\mathcal{S}_{k-1}^{\rm up}\subseteq \mathcal{T}|_{\mathbf{B}_{k-1}}\). For the reverse containment, let \(\sigma=(i_{1},\ldots,i_{d})\in\mathcal{T}|_{\mathbf{B}_{k}}\) and \(\sigma^{\prime}=(i_{1}^{\prime},\ldots,i_{d}^{\prime})\in\mathcal{T}|_{ \mathbf{B}_{1}}\cup\cdots\cup\mathcal{T}|_{\mathbf{B}_{k-1}}\). Since \(\sum\sigma>\sum\sigma^{\prime}\), every \(i_{m}<i_{m}^{\prime}\) is compensated by some (possibly multiple) \(i_{\ell}>i_{\ell}^{\prime}\). Let \(\sigma^{\prime*}=(i_{1}^{\prime*},\ldots,i_{d}^{\prime*})=\sigma^{\prime}\). We modify the indices of \(\sigma^{\prime*}\) as follows: first, replace \(i_{m}^{\prime*}\) with \(i_{m}\) for each \(m\in[d]\) such that \(i_{m}<i_{m}^{\prime}\). 
Then, starting from the smallest \(\ell\) such that \(i_{\ell}>i_{\ell}^{\prime}\), replace \(i_{\ell}^{\prime*}\) with \(i_{\ell}^{\prime}+r\), where \(r\) is the largest possible integer in \([0,i_{\ell}-i_{\ell}^{\prime}]\) that keeps \(\sum\sigma^{\prime*}\leq(k-1)(d+2)-1\). As a result, \(\sigma^{\prime*}\in\mathcal{S}_{k-1}^{\rm up}\) and \(i_{\ell}\geq i_{\ell}^{\prime*}\) for all \(\ell\in[d]\). Furthermore, let \(\sigma^{*}=(i_{1}^{*},\ldots,i_{d}^{*})=\sigma\). Starting from the smallest \(\ell\) such that \(i_{\ell}>i_{\ell}^{\prime*}\), replace \(i_{\ell}^{*}\) with \(i_{\ell}-r\), where \(r\) is the largest possible integer in \([0,i_{\ell}-i_{\ell}^{\prime*}]\) that keeps \(\sum\sigma^{*}\geq(k-1)(d+2)\). As a result, \(\sigma^{*}\in\mathcal{S}_{k}^{\rm low}\) and \(i_{\ell}^{*}\geq i_{\ell}^{\prime*}\) for all \(\ell\in[d]\). Observe that \(\sigma\cap\sigma^{\prime}\subseteq\sigma^{*}\cap\sigma^{\prime*}\). Because \(\sum\sigma^{*}-\sum\sigma^{\prime*}=1\) and \(i_{\ell}^{*}\geq i_{\ell}^{\prime*}\) for all \(\ell\in[d]\) by construction, \(\sigma^{*}\) and \(\sigma^{\prime*}\) only differ by a single \(\ell\in[d]\). In particular, \(i_{\ell}^{*}=i_{\ell}^{\prime*}+1\). Hence \(\sigma^{*}\cap\sigma^{\prime*}=\sigma^{*}\setminus\{a_{i_{\ell}^{*}+1}^{(\ell)}\}\), which is in \(\mathcal{F}(\sigma^{*})\). Let \(\sigma_{1}<\cdots<\sigma_{|\mathcal{S}_{k}^{\rm low}|}\) be elements of \(\mathcal{S}_{k}^{\rm low}\) in lexicographic order as usual. The complex \(\bigcup_{\sigma\in\mathcal{S}_{k}^{\rm low}}\mathcal{F}(\sigma)\) in Lemma 3.10 is constructible by the following observation. **Lemma 3.11**.: _Let \(\sigma_{j}\in\mathcal{S}_{k}^{low}\). Then \(\mathcal{F}(\sigma_{j})\cap(\mathcal{F}(\sigma_{1})\cup\cdots\cup\mathcal{F}(\sigma_{j-1}))=\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1})\)._ Proof.: Denote \(\sigma_{j}=(i_{1},\ldots,i_{d})\). Let \(F\) be a facet of \(\sigma_{j}\cap(\sigma_{1}\cup\cdots\cup\sigma_{j-1})\). By Lemma 3.1, \(F=\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)},a_{i_{m}}^{(m)}\}\) for some \(\ell<m\), \(i_{\ell}>1\) and \(i_{m}<n-1\). Consider \(\sigma^{\prime}=(i_{1},\ldots,i_{\ell}^{\prime}=i_{\ell}-1,\ldots,i_{m}^{\prime}=i_{m}+1,\ldots,i_{d})\in\mathcal{S}_{k}^{\rm low}\). Then \(\sigma^{\prime}<\sigma_{j}\) and \(\sigma^{\prime}\setminus\{a_{i_{m}+2}^{(m)}\}\in\mathcal{F}(\sigma^{\prime})\). Observe that \(F=(\sigma_{j}\setminus\{a_{i_{\ell}+1}^{(\ell)}\})\cap(\sigma^{\prime}\setminus\{a_{i_{m}+2}^{(m)}\})\in\mathcal{F}(\sigma_{j})\cap(\mathcal{F}(\sigma_{1})\cup\cdots\cup\mathcal{F}(\sigma_{j-1}))\). Together with Lemma 3.2, we conclude that \(\mathcal{F}(\sigma_{j})\cap(\mathcal{F}(\sigma_{1})\cup\cdots\cup\mathcal{F}(\sigma_{j-1}))\) is shellable. Each \(\mathcal{F}(\sigma_{j})\) is clearly shellable. Therefore, \(\bigcup_{\sigma\in\mathcal{S}_{k}^{\mathrm{low}}}\mathcal{F}(\sigma)=\mathcal{T}_{k}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})\) is constructible, and so is \(\widetilde{\mathcal{T}}=\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{\lceil d(n-1)/(d+2)\rceil}\) by induction. Recall that the last step of [7, Construction 3] is to join the boundary of \(\widetilde{\mathcal{T}}\) with a new vertex \(o\), resulting in the \((2d-1)\)-sphere \(\widetilde{\mathcal{T}}\cup(\partial\widetilde{\mathcal{T}}*o)\). We are now ready to prove the first part of Theorem 1.1, which states that this sphere is constructible. Proof of Theorem 1.1 (1).: It suffices to show that \(\partial\widetilde{\mathcal{T}}\) is shellable.
We have characterized the facets of \(\partial\widetilde{\mathcal{T}}\) in Lemma 2.2, as \(\partial\widetilde{\mathcal{T}}=\partial\mathcal{T}\). The proof that \(\prec\) in Definition 3.4 is a shelling order of the facets of \(\partial\widetilde{\mathcal{T}}\) is almost identical to that of Lemma 3.5. We only need to slightly modify Arguments 3.5.1 and 3.5.2. For Argument 3.5.1, even if \(\sigma^{\prime\prime}\in\mathcal{S}_{k}^{\mathrm{low}}\), we can still consider \(\tau^{\prime\prime}=\sigma^{\prime\prime}\setminus\sigma^{\prime\prime}(\ell)\cup\tau_{j}(\ell)\) because it is contained in \(\partial\widetilde{\mathcal{T}}\). For Argument 3.5.2, \(\sigma^{\prime\prime}=(i_{1},\ldots,i_{\ell}^{\prime\prime}=1,\ldots,i_{d})\) is always in \(\widetilde{\mathcal{T}}\), so \(\tau^{\prime\prime}=\sigma^{\prime\prime}\setminus\{a_{i_{\ell}^{\prime\prime}+1}^{(\ell)}\}\in\partial\widetilde{\mathcal{T}}\) is the desired preceding facet. Combining Theorem 1.1(1) with the lower bound in (1), we obtain \[s_{\mathrm{constr}}(D,N)\geq 2^{\Omega(N^{\lceil D/2\rceil})}\text{ for all }D\geq 3.\] On the other hand, we can compute an upper bound for \(s_{\mathrm{shell}}(D,N)\) and \(s_{\mathrm{constr}}(D,N)\). This uses the result of Benedetti and Ziegler [2] on LC \(D\)-spheres with \(M\) facets along with the Upper Bound Theorem [10] for simplicial spheres: \[s_{\mathrm{shell}}(D,N)\leq s_{\mathrm{constr}}(D,N)\leq\sum_{M=1}^{O(N^{\lceil D/2\rceil})}2^{D^{2}M}=\frac{2^{D^{2}}(2^{D^{2}O(N^{\lceil D/2\rceil})}-1)}{2^{D^{2}}-1}=2^{O(N^{\lceil D/2\rceil})}. \tag{3}\] The first part of Corollary 1.2 then follows immediately: the number of combinatorially distinct constructible \(D\)-spheres with \(N\) vertices is asymptotically given by \[s_{\mathrm{constr}}(D,N)=2^{\Theta(N^{\lceil D/2\rceil})}\text{ for all }D\geq 3.\] ## 4 The shellability of the \(3\)-dimensional case Nevo, Santos and Wilson constructed many \(3\)-spheres with \(N\) vertices in [7, Construction 1] as a special case of [7, Construction 3]. We prove that these \(3\)-spheres are all shellable. The main task is to show that the \(3\)-ball \(\widetilde{\mathcal{T}}=\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{\lceil(n-1)/2\rceil}\) is shellable. We prove this by induction. Let \(k\in\{1,\ldots,\lceil(n-1)/2\rceil\}\) and suppose \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1}\) is shellable. Order the elements of \(\mathcal{S}_{k}^{\mathrm{low}}\) and \(\mathcal{S}_{k}^{\mathrm{up}}\) respectively using lexicographic order. The idea is to obtain a shelling of \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k}\) by concatenating the following shellings, in the given order, to any shelling of \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1}\) (a schematic sketch of this concatenation appears below): * The shelling of \(\widetilde{C}_{\sigma_{j}}\) for each \(\sigma_{j}\in\mathcal{S}_{k}^{\mathrm{low}}\) specified in Table 1. * The shelling of the connecting path \(\widetilde{\mathcal{C}}_{k}\) given by \(\prec\) in Definition 3.4. * The shelling of \(\widetilde{C}_{\sigma_{j}}\) for each \(\sigma_{j}\in\mathcal{S}_{k}^{\mathrm{up}}\) specified in Table 2. Lemmas 4.1-4.4 provide an explicit description of the facets of \(\widetilde{C}_{\sigma}\) for all \(D=2d-1\geq 3\). The facets of the case \(d=2\) are listed in Tables 1 and 2 in the intended shelling order.
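Schematically, this induction is plain list concatenation. The following sketch (ours, purely illustrative; the paper itself contains no code, and all names are hypothetical) assembles the candidate shelling order from the three ingredient orders above:

```python
def shelling_order(K, lower_blocks, connectors, upper_blocks):
    """Concatenate sub-shellings into a candidate shelling of T_1 u ... u T_K.

    lower_blocks[k] / upper_blocks[k]: facet lists of the complexes C~_sigma for
    sigma in S_k^low / S_k^up, each block given in lexicographic order of sigma;
    connectors[k]: facets of the connecting path C~_k in the order of Definition 3.4.
    """
    order = []
    for k in range(1, K + 1):
        for block in lower_blocks[k]:   # shellings of C~_sigma, sigma in S_k^low
            order.extend(block)
        order.extend(connectors[k])     # shelling of the connecting path C~_k
        for block in upper_blocks[k]:   # shellings of C~_sigma, sigma in S_k^up
            order.extend(block)
    return order
```

Verifying that this concatenated order really is a shelling, by checking that each appended facet meets the union of its predecessors in a \(2\)-dimensional complex, is exactly the content of Lemmas 4.6 and 4.7 below.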
Recall that for \(\sigma\in\mathcal{S}_{k}\), we consider two triangulations of \(C_{\sigma}\): \[T_{\sigma,1}=F_{\sigma}*\partial(G_{\sigma}*o_{k}),\quad T_{\sigma,2}=\partial F_{\sigma}*(G_{\sigma}*o_{k}).\] Here, \(F_{\sigma}\) is the missing face of \(D_{\sigma}=\sigma\cap\partial(\mathcal{T}|_{\mathbf{B}_{k}})\), which is the only subset of \(V(D_{\sigma})\) not in \(D_{\sigma}\) but such that all of its proper subsets are in \(D_{\sigma}\). Moreover, \(G_{\sigma}\) is the face of \(\sigma\) complementary to \(F_{\sigma}\). Let \(\sigma=(i_{1},\ldots,i_{d})\in\mathcal{S}_{k}^{\text{low}}\). The first possibility of its missing face is \(F_{\sigma}=\{a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}+1}^{(d)}\}\) [7, Lemma 5.3]. In this case, \(G_{\sigma}*o_{k}=\{a_{i_{1}}^{(1)},\ldots,a_{i_{d}}^{(d)},o_{k}\}\) and Lemma 2.2 implies \(n-1\notin\{i_{1},\ldots,i_{d}\}\). Therefore, we can characterize \(\sigma\) and the facets of \(T_{\sigma,1}\) and \(T_{\sigma,2}\) as follows. **Lemma 4.1**.: [Type I of \(\mathcal{S}_{k}^{\text{low}}\)] If \(F_{\sigma}=\{a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}+1}^{(d)}\}\), then \(n-1\notin\{i_{1},\ldots,i_{d}\}\). The facets of \(T_{\sigma,1}\) and \(T_{\sigma,2}\) are respectively * \(T_{\sigma,1}:\sigma\) and \(\sigma\setminus\{a_{i_{\ell}}^{(\ell)}\}\cup\{o_{k}\}\) for \(\ell\in[d]\). * \(T_{\sigma,2}:\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\cup\{o_{k}\}\) for \(\ell\in[d]\). The next result discusses the other possibilities of \(F_{\sigma}\). **Lemma 4.2**.: [Type II of \(\mathcal{S}_{k}^{\text{low}}\)] If \(F_{\sigma}\neq\{a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}+1}^{(d)}\}\), then \(n-1\in\{i_{1},\ldots,i_{d}\}\). Let \(\ell_{1},\ldots,\ell_{\beta}\in[d]\) denote all coordinates such that \(i_{\ell_{1}}=\cdots=i_{\ell_{\beta}}=n-1\), and let \(m_{1},\ldots,m_{\gamma}\) denote the rest of the coordinates. Then \(F_{\sigma}=\{a_{i_{\ell_{1}}}^{(\ell_{1})},\ldots,a_{i_{\ell_{\beta}}}^{(\ell_{\beta})}\}\cup\{a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}+1}^{(d)}\}\). The facets of \(T_{\sigma,1}\) and \(T_{\sigma,2}\) are respectively * \(T_{\sigma,1}:\sigma\) and \(\sigma\setminus\{a_{i_{m_{r}}}^{(m_{r})}\}\cup\{o_{k}\}\) for \(r\in[\gamma]\). * \(T_{\sigma,2}:\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\cup\{o_{k}\}\) for \(\ell\in[d]\), and \(\sigma\setminus\{a_{i_{\ell_{r}}}^{(\ell_{r})}\}\cup\{o_{k}\}\) for \(r\in[\beta]\). Proof.: By Lemma 2.3, \(\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\) is on the boundary of \(\mathcal{T}|_{\mathbf{B}_{k}}\) for every \(\ell\in[d]\). This means every proper subset of \(\{a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}+1}^{(d)}\}\) is contained in \(D_{\sigma}\). Therefore, if \(\{a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}+1}^{(d)}\}\) is not the missing face, then it must be contained in \(D_{\sigma}\). Suppose \(F\) is a facet of \(\sigma\) containing \(\{a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}+1}^{(d)}\}\). Then \(F=\sigma\setminus\{a_{i_{\ell}}^{(\ell)}\}\) for some \(\ell\in[d]\). We claim that \(F\in D_{\sigma}\) if and only if \(i_{\ell}=n-1\). Indeed, if \(i_{\ell}\neq n-1\), then \(F\) is also a facet of \(\sigma^{\prime}=(i_{1},\ldots,i_{\ell}^{\prime}=i_{\ell}+1,\ldots,i_{d})\in\mathcal{T}|_{\mathbf{B}_{k}}\), so \(F\notin\partial(\mathcal{T}|_{\mathbf{B}_{k}})\). The converse direction is clear.
It follows that \(F_{\sigma}=\{a_{i_{\ell_{1}}}^{(\ell_{1})},\ldots,a_{i_{\ell_{\beta}}}^{(\ell_{\beta})}\}\cup\{a_{i_{1}+1}^{(1)},\ldots,a_{i_{d}+1}^{(d)}\}\) and \(G_{\sigma}*o_{k}=\{a_{i_{m_{1}}}^{(m_{1})},\ldots,a_{i_{m_{\gamma}}}^{(m_{\gamma})}\}\cup\{o_{k}\}\). Below are analogous results for \(\sigma=(i_{1},\ldots,i_{d})\in\mathcal{S}_{k}^{\text{up}}\). **Lemma 4.3**.: [Type I of \(\mathcal{S}_{k}^{\text{up}}\)] If \(F_{\sigma}=\{a_{i_{1}}^{(1)},\ldots,a_{i_{d}}^{(d)}\}\), then \(1\notin\{i_{1},\ldots,i_{d}\}\). The facets of \(T_{\sigma,1}\) and \(T_{\sigma,2}\) are respectively * \(T_{\sigma,1}:\sigma\) and \(\sigma\setminus\{a_{i_{\ell}+1}^{(\ell)}\}\cup\{o_{k}\}\) for \(\ell\in[d]\). * \(T_{\sigma,2}:\sigma\setminus\{a_{i_{\ell}}^{(\ell)}\}\cup\{o_{k}\}\) for \(\ell\in[d]\). **Lemma 4.4**.: [Type II of \(\mathcal{S}_{k}^{\text{up}}\)] If \(F_{\sigma}\neq\{a_{i_{1}}^{(1)},\ldots,a_{i_{d}}^{(d)}\}\), then \(1\in\{i_{1},\ldots,i_{d}\}\). Let \(\ell_{1},\ldots,\ell_{\beta}\in[d]\) denote all coordinates such that \(i_{\ell_{1}}=\cdots=i_{\ell_{\beta}}=1\), and let \(m_{1},\ldots,m_{\gamma}\) denote the rest of the coordinates. Then \(F_{\sigma}=\{a_{i_{\ell_{1}}+1}^{(\ell_{1})},\ldots,a_{i_{\ell_{\beta}}+1}^{(\ell_{\beta})}\}\cup\{a_{i_{1}}^{(1)},\ldots,a_{i_{d}}^{(d)}\}\). The facets of \(T_{\sigma,1}\) and \(T_{\sigma,2}\) are respectively * \(T_{\sigma,1}:\sigma\) and \(\sigma\setminus\{a_{i_{m_{r}}+1}^{(m_{r})}\}\cup\{o_{k}\}\) for \(r\in[\gamma]\). * \(T_{\sigma,2}:\sigma\setminus\{a_{i_{\ell}}^{(\ell)}\}\cup\{o_{k}\}\) for \(\ell\in[d]\), and \(\sigma\setminus\{a_{i_{\ell_{r}}+1}^{(\ell_{r})}\}\cup\{o_{k}\}\) for \(r\in[\beta]\). Proof.: The proof is analogous to that of Lemma 4.2. For \(d=2\), the facets of \(\widetilde{C}_{\sigma}\) are listed in Tables 1 and 2. See also Figure 2 for an illustration. **Remark 4.5**.: In Table 1, we assume that at most one of \(i_{1},i_{2}\) is \(n-1\) for \(\sigma\in\mathcal{S}_{k}^{\text{low}}\). If \(i_{1}=i_{2}=n-1\), then \(k=\lceil(n-1)/2\rceil\) and \(\mathcal{T}|_{\mathbf{B}_{k}}=\sigma\). However, [7, Lemma 3.2] only applies when \(\mathcal{T}|_{\mathbf{B}_{k}}\) is a ball that is not a simplex. For this edge case, simply take \(\mathcal{T}_{k}=\mathcal{T}|_{\mathbf{B}_{k}}\). Since \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1}\) is assumed to be shellable, \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k}\) is clearly shellable. For \(\sigma\in\mathcal{S}_{k}^{\text{up}}\), at most one of \(i_{1},i_{2}\) can be \(1\). Otherwise \(\sum\sigma=i_{1}+i_{2}=2=4k-1\), which is impossible for \(k\geq 1\). We rewrite several results from the previous sections for \(d=2\): **Lemma 4.6**.: _Let \(d=2\) and \(k\in\{1,\ldots,\lceil(n-1)/2\rceil\}\)._ 1. _The complex_ \(\mathcal{T}_{k}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})\) _is generated by the facets of the forms_ \(\{a_{i_{1}}^{(1)},a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)}\}\) _and_ \(\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}}^{(2)}\}\) _for all_ \((i_{1},i_{2})\in\mathcal{S}_{k}^{\text{low}}\) _[Lemma 3.10]_. 2. _Let_ \(\sigma_{1}=\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)}\}\in\mathcal{S}_{k}^{\text{low}}\)_, the first in lexicographic order of the lower diagonal. Then at least one of_ \(i_{1}=1\) _and_ \(i_{2}=n-1\) _holds. By 1,_ \(\widetilde{C}_{\sigma_{1}}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})=\{\{a_{i_{1}}^{(1)},a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)}\},\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}}^{(2)}\}\}\)_._ 3.
_Let_ \(\sigma_{j}=\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)}\}\in\mathcal{S}_{k}^{low}\) _with_ \(i_{1}>1\) _and_ \(i_{2}<n-1\)_. Then_ \(\widetilde{C}_{\sigma_{j}}\cap(\widetilde{C}_{\sigma_{1}}\cup\cdots\cup\widetilde{C}_{\sigma_{j-1}})=\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\) _[Lemmas 2.1 and 3.1]_. By 1,_ \(\widetilde{C}_{\sigma_{j}}\cap(\widetilde{C}_{\sigma_{1}}\cup\cdots\cup\widetilde{C}_{\sigma_{j-1}}\cup\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})=\{\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\},\{a_{i_{1}}^{(1)},a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)}\},\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}}^{(2)}\}\}\)_._ 4. _Let_ \(\tau_{j}\in\mathcal{C}_{k}\)_. Then_ \((\tau_{j}*o_{k})\cap((\tau_{1}*o_{k})\cup\cdots\cup(\tau_{j-1}*o_{k})\cup\widetilde{\mathcal{S}}_{k}^{low})\) _is_ \(2\)_-dimensional_ [Lemmas 2.1 and 3.2]_. By 1, this intersection also contains_ \((\tau_{j}*o_{k})\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})\)_._ 5. _Let_ \(\sigma_{j}=\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)}\}\in\mathcal{S}_{k}^{up}\)_. Then the list of facets of_ \(\widetilde{C}_{\sigma_{j}}\cap(\widetilde{C}_{\sigma_{1}}\cup\cdots\cup\widetilde{C}_{\sigma_{j-1}}\cup\widetilde{\mathcal{S}}_{k}^{low}\cup\widetilde{\mathcal{C}}_{k})\) _is given by_ [Lemmas 2.1 and 3.6]_:_ * _For_ \(i_{1}=1,i_{2}>1\)_:_ \(\{a_{i_{1}}^{(1)},a_{i_{2}}^{(2)},o_{k}\},\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},o_{k}\}\)_._ * _For_ \(i_{1}>1,i_{2}=1\)_:_ \(\{a_{i_{1}}^{(1)},a_{i_{2}}^{(2)},o_{k}\},\{a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)},o_{k}\},\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\)_._ * _For_ \(i_{1}=n-1,i_{2}>1\)_:_ \(\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},o_{k}\},\{a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)},o_{k}\},\{a_{i_{1}}^{(1)},a_{i_{2}}^{(2)},o_{k}\},\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\)_._ * _For_ \(1<i_{1}<n-1,i_{2}>1\)_:_ \(\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},o_{k}\},\{a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)},o_{k}\},\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\)_._ _By 1, this intersection also contains_ \(\widetilde{C}_{\sigma_{j}}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})\)_._ We concatenate the shellings of the different subcomplexes of \(\mathcal{T}_{k}\) to a shelling of \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1}\) as described at the beginning of Section 4. Lemma 4.6 reduces the number of intersections to check each time we add a facet of \(\mathcal{T}_{k}\). This leads to the following result.

Figure 2: The different types of diagonal simplices in \(\mathcal{T}\) for \(d=2\) and \(n=8\)

**Lemma 4.7**.: _The \(3\)-ball \(\widetilde{\mathcal{T}}=\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{\lceil(n-1)/2\rceil}\) is shellable._ _Outline of the proof._ Suppose \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1}\) is shellable. Add the facets of \(\mathcal{T}_{k}\) in the following way: * Start from \(\sigma_{1}=(i_{1},i_{2})\in\mathcal{S}_{k}^{\mathrm{low}}\). For each of the possible scenarios in Table 1 (note that it cannot be of Type II with \(i_{1}=n-1\)), add the facets of \(\widetilde{C}_{\sigma_{1}}\) in the order listed. For each new facet \(F\), let \(C(F)\) denote the union of all preceding facets of \(\widetilde{C}_{\sigma_{1}}\). Check that \(F\cap(C(F)\cup(\widetilde{C}_{\sigma_{1}}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})))\) is \(2\)-dimensional, using the description of \(\widetilde{C}_{\sigma_{1}}\cap(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1})\) in Lemma 4.6(2).
* Repeat the same process for \(\sigma_{2},\sigma_{3},\cdots\in\mathcal{S}_{k}^{\mathrm{low}}\), except now we use Lemma 4.6(3) instead of Lemma 4.6(2). Finish adding all facets of \(\widetilde{\mathcal{S}}_{k}^{\mathrm{low}}\) in this way. At this point, we obtain a shelling of the complex \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1}\cup\widetilde{\mathcal{S}}_{k}^{\mathrm{low}}\). * Add the facets of \(\widetilde{\mathcal{C}}_{k}\) one by one. By Lemma 4.6(4), the intersection of each facet with the existing complex is \(2\)-dimensional. At the end of this step, we obtain a shelling of the complex \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k-1}\cup\widetilde{\mathcal{S}}_{k}^{\mathrm{low}}\cup\widetilde{\mathcal{C}}_{k}\). * Repeat the first two steps for \(\widetilde{\mathcal{S}}_{k}^{\mathrm{up}}\), utilizing Lemma 4.6(5) instead of Lemma 4.6(2)(3). In the end, this yields a shelling of \(\mathcal{T}_{1}\cup\cdots\cup\mathcal{T}_{k}\). For example, suppose we would like to add the second facet \(\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\) listed in Table 1 under Type I, \(T_{\sigma,1}\). According to the second step above, we check the following: * \(\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\cap\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)}\}=\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}+1}^{(2)}\}\). * \(\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\cap\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}=\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\). * \(\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\cap\{a_{i_{1}}^{(1)},a_{i_{2}}^{(2)},a_{i_{2}+1}^{(2)}\}=\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)}\}\subseteq\{a_{i_{1}}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\). * \(\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}+1}^{(2)},o_{k}\}\cap\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}}^{(2)}\}=\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)}\}\subseteq\{a_{i_{1}}^{(1)},a_{i_{1}+1}^{(1)},a_{i_{2}+1}^{(2)}\}\). The same detailed checking can be done for all other facets. It is clear that the base case \(\mathcal{T}_{1}\) is shellable, with \(\mathcal{S}_{1}^{\mathrm{low}}=\varnothing\). We are now ready to derive Theorem 1.1(2), which asserts that the \(3\)-spheres by Nevo, Santos, and Wilson in [7, Construction 1] are shellable. Proof of Theorem 1.1(2).: By the proof of Theorem 1.1(1), \(\partial\widetilde{\mathcal{T}}\) is shellable. Therefore, we can concatenate the shelling of \(\partial\widetilde{\mathcal{T}}*o\) to that of \(\widetilde{\mathcal{T}}\) to obtain a shelling of the sphere \(\widetilde{\mathcal{T}}\cup(\partial\widetilde{\mathcal{T}}*o)\). Using the bounds in (1) and (3), the second part of Corollary 1.2 follows: the number of combinatorially distinct shellable \(D\)-spheres with \(N\) vertices is \(s_{\mathrm{shell}}(D,N)=2^{\Theta(N^{\lceil D/2\rceil})}\) for all even \(D\geq 4\) and \(D=3\). It is natural to ask if the higher-dimensional spheres constructed by Nevo, Santos, and Wilson in [7, Construction 3] are not only constructible but also shellable. While this is still unknown, we note that if the answer is affirmative, then the asymptotics of Corollary 1.2 will also apply to the number of shellable spheres for all \(D\geq 3\). ## Acknowledgements I am extremely grateful to Isabella Novik for her detailed and invaluable comments on the drafts as well as her encouragement and guidance throughout the writing process.
2308.12077
Analysis of XLS-R for Speech Quality Assessment
In online conferencing applications, estimating the perceived quality of an audio signal is crucial to ensure high quality of experience for the end user. The most reliable way to assess the quality of a speech signal is through human judgments in the form of the mean opinion score (MOS) metric. However, such an approach is labor intensive and not feasible for large-scale applications. The focus has therefore shifted towards automated speech quality assessment through end-to-end training of deep neural networks. Recently, it was shown that leveraging pre-trained wav2vec-based XLS-R embeddings leads to state-of-the-art performance for the task of speech quality prediction. In this paper, we perform an in-depth analysis of the pre-trained model. First, we analyze the performance of embeddings extracted from each layer of XLS-R and also for each size of the model (300M, 1B, 2B parameters). Surprisingly, we find two optimal regions for feature extraction: one in the lower-level features and one in the high-level features. Next, we investigate the reason for the two distinct optima. We hypothesize that the lower-level features capture characteristics of noise and room acoustics, whereas the high-level features focus on speech content and intelligibility. To investigate this, we analyze the sensitivity of the MOS predictions with respect to different levels of corruption in each category. Afterwards, we try fusing the two optimal feature depths to determine if they contain complementary information for MOS prediction. Finally, we compare the performance of the proposed models and assess the generalizability of the models on unseen datasets.
Bastiaan Tamm, Rik Vandenberghe, Hugo Van hamme
2023-08-23T11:52:49Z
http://arxiv.org/abs/2308.12077v1
# Analysis of XLS-R for Speech Quality Assessment ###### Abstract In online conferencing applications, estimating the perceived quality of an audio signal is crucial to ensure high quality of experience for the end user. The most reliable way to assess the quality of a speech signal is through human judgments in the form of the mean opinion score (MOS) metric. However, such an approach is labor intensive and not feasible for large-scale applications. The focus has therefore shifted towards automated speech quality assessment through end-to-end training of deep neural networks. Recently, it was shown that leveraging pre-trained wav2vec-based XLS-R embeddings leads to state-of-the-art performance for the task of speech quality prediction. In this paper, we perform an in-depth analysis of the pre-trained model. First, we analyze the performance of embeddings extracted from each layer of XLS-R and also for each size of the model (300M, 1B, 2B parameters). Surprisingly, we find two optimal regions for feature extraction: one in the lower-level features and one in the high-level features. Next, we investigate the reason for the two distinct optima. We hypothesize that the lower-level features capture characteristics of noise and room acoustics, whereas the high-level features focus on speech content and intelligibility. To investigate this, we analyze the sensitivity of the MOS predictions with respect to different levels of corruption in each category. Afterwards, we try fusing the two optimal feature depths to determine if they contain complementary information for MOS prediction. Finally, we compare the performance of the proposed models and assess the generalizability of the models on unseen datasets. Bastiaan Tamm,\({}^{1,2}\)+ Rik Vandenberghe,\({}^{1}\) Hugo Van hamme,\({}^{2}\)+\({}^{1}\) Laboratory for Cognitive Neurology (LCN), Department of Neurosciences, KU Leuven, Belgium \({}^{2}\) Processing Speech and Images (PSI), Department of Electrical Engineering, KU Leuven, Belgium speech quality assessment, MOS prediction Footnote †: 979-8-3503-2372-6/23/$31.00 ©2023 IEEE ## 1 Introduction Given the increased dependence on online conferencing applications in recent years, the demand for a reliable automated method to assess perceived speech quality has grown. Common factors that can degrade conversational quality include jitter, latency, echo, packet loss, and distortion [1]. The ground truth for perceived speech quality is derived from human judgments, usually in the form of Absolute Category Ratings (ACR). These ratings are used to calculate the mean opinion score (MOS), which is used as the ground truth for the perceived speech quality of a given audio sample. However, the collection of human judgments is extremely time- and labor-intensive, making it impractical for large-scale evaluations of speech quality. Many objective metrics such as the speech-to-reverberation modulation energy ratio do not necessarily correlate with the perceived speech quality [2]. Efforts have therefore been dedicated towards machine learning approaches for speech quality assessment [3, 4], for example based on long short-term memory (LSTM) networks [5]. In recent work on the ConferencingSpeech 2022 challenge [6], it was shown that leveraging pre-trained wav2vec-based XLS-R [7] embeddings leads to state-of-the-art performance for the MOS prediction task [8]. 
Most notably, the model performed exceptionally well on the unseen TUB dataset, outperforming the next-closest competitor by 27.4% and the overall second-place model by 42.9% for the RMSE metric [6, 8]. Whether the embeddings generalize well to other unseen datasets has not yet been investigated. This paper aims to perform an in-depth analysis of the wav2vec-based XLS-R model for the task of speech quality assessment. We first note that the original paper [8] simply used the final hidden layer of the pre-trained 300M parameter XLS-R to train a MOS-prediction model instead of determining a feature depth that is optimal for the downstream task. We will therefore analyze the performance of embeddings extracted from each layer of XLS-R and also for each size of the pre-trained model (300M, 1B, 2B parameters). Also, we aim to validate the performance of the model on other unseen datasets.

Figure 1: Layer-wise performance of pre-trained XLS-R models on speech quality assessment task. The performance of each layer's activations is plotted for the three model sizes. This is measured using the best validation RMSE from all model configurations. This analysis was done on 35% of the full dataset.

## 2 Experimental Setup ### Datasets We use the same four corpora as the ConferencingSpeech 2022 challenge [6], specifically * the **Tencent corpus**, consisting of around 14,000 Chinese speech clips with and without reverberation, ranging from 5 to 13.5 seconds long; * the **Public switched telephone network (PSTN) corpus**, consisting of 80,000 English speech samples based on LibriVox with a length of 10 seconds each, containing both clean samples as well as samples with artificial background noise; * the **Non-intrusive speech quality assessment (NISQA) corpus** [9], a collection of more than 14,000 English and German speech clips with real as well as simulated noise; * the **IU Bloomington (IUB)** corpus, comprising 36,000 English speech samples from the VOiCES [10] and COSINE [11] datasets, ranging from three to six seconds long. Each corpus is labeled with MOS values in a rating range of 1-5, which are derived from ACRs based on the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) recommendation P.808 [12]. The only exception is the IU Bloomington corpus, which follows ITU-R BS.1534 [13] and has a range of 0-100. These are converted to the range 1-5 for the experiments. Additionally, the count and standard deviation of the ACRs is provided for each audio sample in all corpora except for Tencent. ### Dataset Division We use the same approach to dataset division as [8]. We define the challenge subset as the combination of the Tencent and PSTN corpora and the full dataset as the combination of all four corpora. The full training and validation sets are constructed by shuffling the samples in the full dataset and using 85% for training and 15% for validation. Finally, the training and validation subsets are constructed by keeping only the Tencent and PSTN samples from the original training and validation sets. We define an additional dataset, which will be referred to as the unseen dataset, which only contains the samples from NISQA and IUB. Thus, the models will be trained on the challenge subset and evaluated on the "unseen" datasets. ### Model Architecture The model takes as input a sequence of 384 extracted features, which can be pre-trained XLS-R embeddings from a specific layer or MFCC features for comparison.
The features are extracted as a preprocessing step and are not finetuned. The speech quality prediction model consists of three modules: a linear down-projection to the size of the hidden space, a bidirectional LSTM (Bi-LSTM) or transformer module to model temporal dependency, and an attention-based pooling module [9] to map to the output space. The number of Bi-LSTM or transformer layers as well as the hidden size are varied across models. The best transformer models use a hidden size of 32, 4 attention heads and 4 layers; the best Bi-LSTM models use a hidden size of 32 in each direction and 2 layers. We apply batch normalization at the input and after the Bi-LSTM/transformer. Subsequently, the outputs from the attention pooling module are mapped to the range (0,1) with a sigmoid function. These final outputs are referred to as normalized MOS values. This intermediate normalization step ensures that the output range of the attention pooling layer is unrestricted. During the evaluation of the model, the normalized MOS predictions are mapped to the original 1-5 MOS range. ### Training Details For MFCC calculation, we use the implementation by torchaudio [14] with the default parameters and a sample rate of 16 kHz. The XLS-R feature extraction uses the _facebook/wav2vec2-xls-r-{300m,1b,2b}_ models available on HuggingFace [15]. The models are implemented using the PyTorch (v.1.11.0) and PyTorch Lightning (v.1.8.6) libraries in Python 3.9. Training is performed using the PyTorch Lightning trainer. The network is trained using the ADAM optimizer [16], a learning rate of \(3\times 10^{-3}\), batch size of 60, and MSE loss. Each model is trained for a total of 30 epochs, and the model with the lowest validation loss is selected. ## 3 XLS-R layer-wise performance First, we look at the performance of embeddings extracted from each layer of XLS-R and also for each size of the model (300M, 1B, 2B parameters). The results of the training are shown in Figure 1. On the horizontal axis, the layer from which the activations are taken is displayed. This includes the output of the CNN (layer 0) and all transformer layers (layer 1-48). We interpolate the results of XLS-R 300M for this visualization since this model only has 24 transformer layers. On the vertical axis, we display the performance of the downstream speech quality prediction model on the validation split of the full dataset (RMSE metric, lower is better). We hypothesized that the performance would rapidly improve over the first few layers and reach an optimum in the lower- or mid-level features, where room acoustics and noise characteristics are best modeled. Then, we believed that the performance would gradually degrade as the layer index further increased, as the highly contextualized speech representations would probably be less suitable to detect localized sources of speech degradation. The first part of the hypothesis appears to be validated across the three model sizes, as the models reach an optimum around layer 10 (layer 5 for XLS-R 300M). Surprisingly, there seems to be a second local optimum around layer 41 (layer 21 for XLS-R 300M). It appears that there is a certain level of contextualization that is beneficial for speech quality assessment. In the following sections, we will investigate the properties of the low-level and high-level XLS-R embeddings.
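As a concrete reference for this pipeline, the snippet below sketches both stages: pulling the activations of a chosen XLS-R layer with the HuggingFace transformers library, and a prediction head of the kind described in Section 2.3 (down-projection, Bi-LSTM, attention pooling, sigmoid). This is a minimal illustrative sketch rather than the exact implementation; the attention-pooling form, the default feature dimension (1024 for the 300M checkpoint), and all identifiers are assumptions, while the model name and the stated hyperparameters come from the text above.

```python
import torch
import torch.nn as nn
from transformers import AutoFeatureExtractor, Wav2Vec2Model

def extract_layer_embeddings(waveform, layer=5, name="facebook/wav2vec2-xls-r-300m"):
    """Return (frames x dim) activations of one XLS-R layer for a 16 kHz waveform.
    hidden_states has num_layers + 1 entries: index 0 is the (projected) CNN output
    fed to the encoder; indices 1..24 are the transformer blocks of the 300M model."""
    fe = AutoFeatureExtractor.from_pretrained(name)
    model = Wav2Vec2Model.from_pretrained(name).eval()
    inputs = fe(waveform, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer].squeeze(0)

class MosHead(nn.Module):
    """Sketch of the prediction head: down-projection -> Bi-LSTM -> attention pooling."""
    def __init__(self, feat_dim=1024, hidden=32, lstm_layers=2):
        super().__init__()
        self.norm_in = nn.BatchNorm1d(feat_dim)
        self.proj = nn.Linear(feat_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=lstm_layers,
                            bidirectional=True, batch_first=True)
        self.norm_out = nn.BatchNorm1d(2 * hidden)
        self.att = nn.Linear(2 * hidden, 1)   # simple additive attention (an assumption)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                      # x: (batch, frames, feat_dim)
        x = self.norm_in(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(self.proj(x))
        h = self.norm_out(h.transpose(1, 2)).transpose(1, 2)
        w = torch.softmax(self.att(h), dim=1)  # attention weights over frames
        pooled = (w * h).sum(dim=1)
        return torch.sigmoid(self.out(pooled))  # normalized MOS in (0, 1)
```

A normalized prediction `y` is then rescaled to the original rating range as `1 + 4 * y` at evaluation time.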
## 4 Corruption sensitivity analysis Next, we investigate the reason for the two distinct optima in Figure 1. We hypothesize that the lower-level features better model the more typical conditions that affect the quality of audio in online applications (e.g., room acoustics, echo, packet loss, distortion) and that the higher-level features capture some sort of speech content and intelligibility. Moreover, we expect the two levels of representation to have complementary information and that a fusion model will outperform each feature level individually. This section focuses on the first hypothesis, while Section 5 addresses the latter. To investigate what types of information are represented at each level, we artificially inject different types of noise and corruption and observe what effect this has on the predictions. We test a variety of corruption techniques: white Gaussian noise, overlapping speech, simulated reverb, low-/high-pass filter, time masking, and MP3 compression. The goal is to determine if a particular feature depth is more sensitive to certain types of degradation. The implementation is done using the audiomentations1 Python package. Footnote 1: [https://github.com/iver56/audiomentations](https://github.com/iver56/audiomentations) To estimate the sensitivity, we keep the ground-truth labels fixed and calculate the RMSE between the predictions of the corrupted audio and the ground-truth labels. Naturally, as the level of corruption increases, the predictions will be affected and the RMSE will increase. For visualization, we plot the relative performance of the corrupted predictions compared to the original predictions. A constant value of 1 indicates that the model is insensitive to the injected corruption, whereas a steep negative slope means the predictions are highly sensitive to the corruption. The results are shown in Figure 2.

Figure 2: Corruption sensitivity analysis of 2B-parameter model for full dataset.

It can be seen that the model based on high-level XLS-R embeddings (orange line) is more sensitive to all types of corruption, a finding which does not directly support the hypothesis. It seems that perturbations in a given hidden layer of XLS-R propagate and may slightly magnify in later layers. This result can most likely be attributed to the fact that the wav2vec2 training procedure is not designed to generate embeddings that are insensitive to degraded audio. During pre-training, the model is trained to predict masked quantized representations derived from raw audio. The only robustness we would expect is to masking in the quantized representation space. Apparently, this does not translate to insensitivity to time masking, but this could also be due to the relatively long mask window. This gives an idea of why the XLS-R embeddings are useful for speech quality assessment in the first place. We expect that a noise-robust version of wav2vec2, such as those proposed in [17, 18, 19], would not be useful for speech quality assessment. We have seen in our experiments, for example, that the version of XLS-R finetuned on multilingual speech translation2 performs very poorly when embeddings are extracted from the final hidden layer, presumably because these embeddings focus on speech content and are more invariant to noise characteristics. Footnote 2: [https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16)
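A minimal sketch of this sensitivity probe is given below, using Gaussian noise at a controlled SNR as a stand-in for the full set of corruptions listed above. This is illustrative only, not the exact experimental code; in particular, the audiomentations keyword names vary slightly across package versions, so treat them as assumptions.

```python
import numpy as np
from audiomentations import AddGaussianSNR  # kwarg names differ across versions

def relative_performance(predict_mos, clips, labels, snrs=(40, 30, 20, 10, 5)):
    """Relative performance curve for one corruption type.
    A flat curve at 1.0 means the model is insensitive to the corruption;
    a steep drop means the predictions react strongly to it."""
    labels = np.asarray(labels, dtype=float)
    rmse = lambda preds: float(np.sqrt(np.mean((np.asarray(preds) - labels) ** 2)))
    base = rmse([predict_mos(c) for c in clips])  # RMSE of uncorrupted predictions
    curve = []
    for snr in snrs:  # lower SNR = heavier corruption
        corrupt = AddGaussianSNR(min_snr_db=snr, max_snr_db=snr, p=1.0)
        noisy = [predict_mos(corrupt(samples=c, sample_rate=16_000)) for c in clips]
        curve.append(base / rmse(noisy))
    return curve
```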
## 5 Model Comparison To assess if the two feature depths contain complementary information for speech quality assessment, we developed layer-fusion models and compared their performance to the single-layer models. We also included the performance of the baseline model from [8] and the popular DNSMOS [20] model for comparison. As a final metric for comparison, we calculate the RMSE of human annotations with respect to the mean opinion score. This can be derived for all corpora except for Tencent since the count and standard deviation of the ACRs are provided per audio sample. \[RMSE_{\mathrm{human}}=\sqrt{\frac{\sum_{j}s_{j}^{2}\cdot(N_{j}-1)}{\sum_{j}N_{j}}} \tag{1}\] The expression \(s_{j}^{2}\cdot(N_{j}-1)\) is equal to the sum of squared errors for a given audio sample \(j\) with respect to the MOS (\(s_{j}\) is the Bessel-corrected standard deviation). We sum this expression over all samples \(j\) to obtain the global sum of squared errors and divide by the total number of votes to obtain the mean squared error. This is followed by a square root operation to obtain the desired metric. Contrary to the model predictions, humans are generally restricted to integer scores, so this must be considered as an extra source of variance. Finally, a histogram of the number of votes per audio sample is shown in Figure 3.

Figure 3: MOS vote count statistics for PSTN, IUB and NISQA validation sets. No counts were available for the Tencent corpus.

The model comparison results are shown in Table 1. Values of interest are shown in bold, and the overall best model per column is underlined as well. We achieve slightly better performance than the baseline on the challenge subset (1.1% better). More importantly, we validate the claim that the XLS-R-based model indeed performs exceptionally well on unseen data [8]. We show that the baseline model achieves an RMSE of 0.5323 on the validation set of the unseen NISQA+IUB corpora. This already outperforms DNSMOS (0.6565) and the RMSE of human annotations (0.6629). XLS-R 1B Layer41 performs even better with an RMSE of 0.4966 (6.7 / 24.4 / 25.1 % better than baseline / DNSMOS / human respectively). Figure 4 shows a visualization of the model predictions compared to DNSMOS and the MFCC model. Regarding layer fusion, we do not see a consistent improvement by applying early fusion to the two feature depths (weighted sum of inputs). The model attends to both inputs with weights -0.75/0.10 for layers 10/41 respectively. ## 6 Conclusion In this paper, we have performed an analysis of the pre-trained XLS-R models for the task of speech quality assessment. We found that using specific layer activations results in improved performance compared to using the final hidden layer. Specifically, there are two local optima for feature depth selection around layers 10 and 41 (layers 5 and 21 for XLS-R 300M); however, the reason for the two distinct optima is still unclear. Finally, we showed that the proposed models substantially outperform DNSMOS and have lower variance than human annotators.
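As a quick reference for the "Human" rows in Table 1 below, the metric of Eq. (1) can be computed directly from the per-sample vote counts \(N_{j}\) and Bessel-corrected standard deviations \(s_{j}\). The helper below is a minimal sketch we provide for clarity, not part of any released code:

```python
import numpy as np

def human_rmse(stds, counts):
    """Eq. (1): pooled RMSE of individual human votes around each sample's MOS."""
    stds, counts = np.asarray(stds, dtype=float), np.asarray(counts, dtype=float)
    sse = np.sum(stds ** 2 * (counts - 1))  # per-sample sums of squared errors
    return float(np.sqrt(sse / np.sum(counts)))
```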
\begin{table} \begin{tabular}{c c c|c c|c c|c c} \hline \hline **Group** & **Train Set** & **XLS-R Layer** & **Tencent** & **PSTN** & **NISQA** & **IUB** & **Subset** & **Unseen** & **Full** \\ \hline **Baseline [8]** & subset & 24 & **0.3037** & **0.5022** & **0.5907** & **0.5067** & **0.4759** & **0.5323** & (0.5000) \\ **DNSMOS [20]** & / & / & (0.8850) & (0.7087) & **0.8718** & **0.5452** & (0.7282) & **0.6565** & (0.6982) \\ \hline **MFCC** & full & / & 0.5932 & 0.5924 & (0.6743) & (0.3846) & 0.5925 & (0.6845) & **0.5511** \\ **Transformer** & subset & / & 0.5762 & 0.5992 & 0.8280 & 0.7775 & 0.5955 & 0.7924 & (0.6846) \\ \hline & full & 5 & 0.3340 & 0.5002 & (0.4251) & (0.7711) & 0.4774 & (0.3875) & 0.4423 \\ & full & 21 & 0.3119 & **0.4953** & (0.4143) & (0.7580) & **0.4706** & (0.9252) & 0.4400 \\ **XLS-R 300M** & full & 5+21 & 0.3115 & 0.4976 & (0.4146) & (0.7080) & 0.4726 & (0.3824) & **0.4396** \\ **Transformer** & subset & 5 & 0.3212 & 0.5036 & 0.6256 & 0.5049 & 0.4790 & 0.5425 & (0.5600) \\ & subset & 21 & 0.3003 & 0.5068 & 0.5694 & 0.5025 & 0.4796 & 0.5227 & (0.4679) \\ & subset & 5+21 & **0.2948** & 0.5055 & **0.5683** & **0.4886** & 0.4779 & **0.5129** & (0.4927) \\ \hline & full & 10 & 0.3127 & **0.4988** & (0.4285) & (0.3650) & **0.4738** & (0.3062) & **0.4396** \\ & full & 41 & **0.3014** & 0.5007 & (0.4149) & (0.3869) & 0.4743 & (0.3060) & 0.4415 \\ **XLS-R 1B** & full & 10+41 & 0.3188 & 0.5021 & (0.4651) & (0.3060) & 0.4774 & (0.4149) & 0.4541 \\ **Transformer** & subset & 10 & 0.3198 & 0.5126 & **0.5456** & 0.5815 & 0.4868 & 0.57713 & (0.5218) \\ & subset & 41 & 0.3168 & 0.5118 & 0.5657 & **0.4656** & 0.4858 & **0.4966** & (0.4060) \\ & subset & 10+41 & 0.3380 & 0.5050 & 0.5748 & 0.5288 & 0.4821 & 0.5425 & (0.5800) \\ \hline & full & 10 & 0.3520 & 0.5139 & (0.4717) & (0.2799) & 0.4915 & (0.4046) & 0.4575 \\ & full & 41 & 0.3236 & **0.4992** & (0.4857) & (0.3812) & **0.4754** & (0.3859) & **0.4442** \\ **XLS-R 2B** & full & 10+41 & 0.3111 & 0.5037 & (0.4277) & (0.3820) & 0.4780 & (0.4355) & 0.4494 \\ **Transformer** & subset & 10 & 0.3034 & 0.5175 & 0.6277 & 0.4899 & 0.4894 & 0.5334 & (0.5801) \\ & subset & 41 & **0.2977** & 0.5054 & **0.5724** & 0.4897 & 0.4781 & 0.5150 & (0.4037) \\ & subset & 10+41 & 0.3069 & 0.5031 & 0.6036 & **0.4743** & 0.4770 & **0.5150** & (0.4031) \\ \hline **Human** & / & / & / & (0.2899) & **0.6738** & **0.6573** & / & **0.6629** & / \\ without quantization & / & / & / & (0.7542) & 0.6088 & 0.6571 & / & / & / \\ \hline \hline \end{tabular} \end{table} Table 1: Model comparison for each corpus individually and the challenge subset (Tencent+PSTN), unseen (NISQA+IUB), and full datasets. The metric is RMSE on the respective validation set, lower is better. Values of interest are shown in bold. The overall best model per column is underlined. Some values are not relevant for the discussion but are provided for completeness: these are displayed in a smaller font between parentheses. For example, models trained on the full dataset have technically seen the so-called “unseen” dataset. Also, comparing DNSMOS, which has not been trained on the challenge subset, to models where this is the case would not be a fair comparison. The final rows display the “RMSE” of the human annotations and the estimated human RMSE without integer limitation (modeled as uniform quantization noise). Figure 4: Visualization of MOS predictions on unseen corpora. The human ACRs are also visualized for the IUB corpus.
2305.13395
BioDEX: Large-Scale Biomedical Adverse Drug Event Extraction for Real-World Pharmacovigilance
Timely and accurate extraction of Adverse Drug Events (ADE) from biomedical literature is paramount for public safety, but involves slow and costly manual labor. We set out to improve drug safety monitoring (pharmacovigilance, PV) through the use of Natural Language Processing (NLP). We introduce BioDEX, a large-scale resource for Biomedical adverse Drug Event Extraction, rooted in the historical output of drug safety reporting in the U.S. BioDEX consists of 65k abstracts and 19k full-text biomedical papers with 256k associated document-level safety reports created by medical experts. The core features of these reports include the reported weight, age, and biological sex of a patient, a set of drugs taken by the patient, the drug dosages, the reactions experienced, and whether the reaction was life threatening. In this work, we consider the task of predicting the core information of the report given its originating paper. We estimate human performance to be 72.0% F1, whereas our best model achieves 62.3% F1, indicating significant headroom on this task. We also begin to explore ways in which these models could help professional PV reviewers. Our code and data are available: https://github.com/KarelDO/BioDEX.
Karel D'Oosterlinck, François Remy, Johannes Deleu, Thomas Demeester, Chris Develder, Klim Zaporojets, Aneiss Ghodsi, Simon Ellershaw, Jack Collins, Christopher Potts
2023-05-22T18:15:57Z
http://arxiv.org/abs/2305.13395v2
# BioDEX: Large-Scale Biomedical Adverse Drug Event Extraction ###### Abstract Timely and accurate extraction of Adverse Drug Events (ADE) from biomedical literature is paramount for public safety, but involves slow and costly manual labor. We set out to improve drug safety monitoring (pharmacovigilance, PV) through the use of Natural Language Processing (NLP). We introduce BioDEX, a large-scale resource for Biomedical adverse Drug Event Extraction, rooted in the historical output of drug safety reporting in the U.S. BioDEX consists of 65k abstracts and 19k full-text biomedical papers with 256k associated document-level safety reports created by medical experts. The core features of these reports include the reported weight, age, and biological sex of a patient, a set of drugs taken by the patient, the drug dosages, the reactions experienced, and whether the reaction was life threatening. In this work, we consider the task of predicting the core information of the report given its originating paper. We estimate human performance to be 72.0% F1, whereas our best model achieves 62.3% F1, indicating significant headroom on this task. We also begin to explore ways in which these models could help professional PV reviewers. Our code and data are available at [https://github.com/KarelDO/BioDEX](https://github.com/KarelDO/BioDEX). ## 1 Introduction In the United States, the Food and Drug Administration (FDA) mandates drug producers to monitor and report Adverse Drug Events (ADE) described in the biomedical literature. Such a report, called an Individual Case Safety Report (ICSR), is stored in the FDA Adverse Event Reporting System (FAERS; Food and Drug Administration 2017), which is a cornerstone resource for drug safety research, also called pharmacovigilance (PV). Figure 1 briefly summarizes the core information PV workers must extract from papers while constructing these reports. This includes a description of the patient in terms of reported weight, age, and biological sex, a list of drugs taken by the patient, and a list of adverse reactions experienced and whether they are considered serious.

Figure 1: BioDEX consists of 65k PubMed abstracts and 19k full text papers, accompanied by 256k document-level drug safety reports. The schematic illustrates the core information that constitutes a drug safety report (they often contain much more detailed information as well). These reports are created by pharmacovigilance experts and are vital for drug safety monitoring.

Drug manufacturers employ teams of experts to continually triage new papers and submit these reports. This is challenging work since it requires experts to survey entire biomedical papers and utilize their pre-existing knowledge about a drug of interest, its conventional indications, and its known adverse reactions. Furthermore, manufacturers are placed under constant time pressure to keep up with the latest publications, since failure to report in a timely manner can lead to hefty fines and compromise public safety. This pressure has potential to increase in the near future: there has been a steady acceleration of biomedical research over the last few years (Figure 2), and drug events are consistently under-reported (Alatawi and Hansen, 2017). In this work, we set out to improve the scalability and accuracy of PV using Natural Language Processing (NLP). As a first step, we introduce BioDEX, a large-scale dataset for document-level Biomedical adverse Drug Event Extraction. BioDEX consists of biomedical papers with associated expert-created drug safety reports. These reports were submitted to the FDA between 2012
and 2022 as part of real-world PV efforts. Thus, BioDEX is grounded in the historical and regulatory context of drug safety monitoring in the U.S. BioDEX contains PubMed articles published between 1968 and 2022, with 65,648 articles having an abstract available and 19,433 featuring a full-text paper. In total, 256,240 reports are included (there can be multiple reports per article). We evaluate the ability of language models (LMs) to fill out the core information of a report given a full-text article that is known to describe at least one ADE. We estimate a lower bound on human performance to be 72.0% F1. Our best model (a fine-tuned FLAN-T5-Large; Chung et al., 2022) attains 62.3% F1, indicating substantial additional room for improvement while also suggesting that models trained on BioDEX are on a path to being useful tools for PV workers. Additionally, we evaluate the capability of OpenAI's GPT models (text-davinci-002, text-davinci-003, gpt-3.5-turbo, gpt-4; Brown et al., 2020) but find that they severely struggle with this task, attaining at most 51% F1. Our models can aid drug safety research efforts today. An important use-case for drug safety research is efficiently finding papers that describe an adverse event with regard to a specific drug or reaction. Conventional search baselines suffer from low precision, since mentioned drugs and reactions are only rarely involved in an adverse event. Our models are specifically trained to extract adverse events, leading to better performance. All our code and data is made available online: [https://github.com/KarelDO/BioDEX](https://github.com/KarelDO/BioDEX). ## 2 Pharmacovigilance Reporting Pharmaceutical companies are required to participate in drug safety reporting for the drugs they produce. Regulations differ across regions of the world. In this work, we focus on the pharmacovigilance process as defined by U.S. regulations. The reporting process starts with a PV literature review stage. Periodically, a vast database of biomedical literature is queried to retrieve new publications that could describe an adverse event with regard to a drug of interest. Conventionally this is done by matching the trade name of the drug or names of its active substances. These queries are designed by experts and depend on the specific use-case, but they always aim for wide coverage; there are strong regulatory fines associated with missing reports, which creates strong incentives for very high recall. Reports can also originate from other modalities such as forms, emails, and social media. In this work, we only focus on reports originating from biomedical publications. Once a set of candidate publications is found, a triaging process begins. For example, papers that mention a serious adverse event should be prioritized, as these reports need to be submitted in a strict time window. This is often done via another high recall system that matches words such as 'serious' and 'life threatening' via a lexicon-based approach. Each resulting publication is investigated by expert PV workers in a multi-stage pipeline, which can differ across companies. Typically, the initial flagging of potential ADEs is done by non-clinician PV workers. Evidence is flagged and can be mapped to a standardized ontology to introduce uniformity in downstream stages. Subsequently, clinicians review the report and refine the event details before the report is submitted.
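To make the first two stages concrete, the sketch below caricatures the high-recall retrieval and lexicon-based triage described above. It is purely illustrative; real PV systems are far more elaborate, and the lexicon, the article schema, and all function names here are our assumptions:

```python
import re

# Illustrative seriousness terms; production lexicons are curated by experts.
SERIOUSNESS_LEXICON = {"serious", "life threatening", "fatal", "hospitalization"}

def flag_candidates(articles, drug_names):
    """High-recall first pass: keep articles mentioning a drug of interest,
    and put lexicon-matched (potentially serious) hits first."""
    drug_pat = re.compile("|".join(map(re.escape, drug_names)), re.IGNORECASE)
    candidates = []
    for art in articles:  # art: dict with "title" and "abstract" keys (assumed schema)
        text = f"{art['title']} {art['abstract']}".lower()
        if drug_pat.search(text):
            urgent = any(term in text for term in SERIOUSNESS_LEXICON)
            candidates.append((art, urgent))
    # Urgent hits first, mirroring the strict reporting window for serious events.
    return sorted(candidates, key=lambda pair: not pair[1])
```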
In this work, we abstract away the details of this human-based workflow and model the task as taking in a biomedical publication and outputting the final pharmacovigilance report. Systems that perform well at this task could go a long way towards automating pharmacovigilance. ## 3 Related Work Biomedical NLP. LMs have pushed the frontiers of biomedical NLP. These models generally follow the Transformer architecture (Vaswani et al., 2017; Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020; Raffel et al., 2020; Nori et al., 2023). Figure 2: The number of peer-reviewed biomedical papers published each year is accelerating (as indexed in Medline). The total number of drug safety reports originating from articles is on the rise as well, but the trend indicates stagnation (reports submitted to FAERS from 2012 onwards). LMs, sometimes specifically tailored towards the biomedical domain, achieve state-of-the-art results across a range of biomedical benchmarks Yasunaga et al. (2022); Luo et al. (2022); Singhal et al. (2022). For example, LMs have achieved single-human performance on PubMedQA Jin et al. (2019), an expert-labeled biomedical question answering task with yes/no/maybe labels. Potentially, such models could be useful for PV as well. A key challenge is that PV requires processing entire biomedical publications, which PubMedQA does not support but BioDEX does. Recently, Zhao et al. (2022) introduced PMC-Patients, a large-scale dataset for patient-to-patient or patient-to-article retrieval built on top of PubMed. BioDEX can be seen as complementing this effort; instead of retrieving relevant papers, BioDEX aims to extract structured patient information from biomedical publications for pharmacovigilance purposes. Both the extraction of the information as well as the retrieval of relevant articles are highly relevant for Evidence-Based Medicine (EBM; Sackett 1997) and pharmacovigilance. Adverse Drug Event Extraction. Previous work has focused on ADE extraction. However, almost all ADE datasets utilize some form of span-level annotations created by medical experts Wallace et al. (2016); Roberts et al. (2017); Nye et al. (2018); Kang et al. (2019); Dirkson et al. (2022). This severely limits the scale of these approaches Basile et al. (2019). Nye et al. (2018) annotate an impressive 5000 abstracts but in part utilize non-expert annotations. Wallace et al. (2016) combine a document-level resource for Randomized Control Trial reports with their supporting literature and use distant supervision to derive pseudo span-level labels. BioDEX relies on the historical output of safety reporting in the U.S. Thus, it is orders of magnitude larger than these resources without requiring any additional expert labels, and it can automatically be expanded over time when new reports become available. This grounding in historical data entails that BioDEX closely matches the real-world clinical and regulatory task of PV. In addition, since we consider adverse drug event extraction at the document level, we circumvent the need for span-level labels. FDA Adverse Event Reporting System. The FDA Adverse Event Reporting System (FAERS; Food and Drug Administration 2017) is used as a cornerstone resource for drug safety research. Previous work has focused on pre-processing FAERS, which can include grounding drug and reaction mentions to medical ontologies and detecting duplicate reports Banda et al. (2016); Hauben et al. (2021); Khaleel et al. (2022); Kreimeyer et al. (2022); Hung et al. (2022).
In contrast, BioDEX is focused on improving the process of entering drug safety reports into FAERS, starting from the biomedical literature. Xu and Wang (2014) combine both FAERS and biomedical literature for enhanced drug safety signal mining. We go one step further and explicitly link reports from FAERS with their originating documents, which allows us to create a document-level drug event extraction task. ## 4 The BioDEX Dataset ### Dataset Description Each entry of BioDEX consists of one article and a list of associated reports. Articles and reports both contain many different features and metadata. In this section we limit ourselves to discussing only the most prominent features of our dataset. A full enumeration of all fields is given in Appendix A (for reports) and Appendix B (for articles). #### 4.1.1 PubMed Articles Each article contains a title and an abstract. If the full-text paper is openly accessible, it is also included together with its corresponding license. Articles also feature lists of keywords, Medical Subject Headings (MeSH; Lipscomb 2000), and a list of chemical substances mentioned in the publication. The abstract and article metadata were parsed from the Medline distribution (National Library of Medicine (US), 2021) using the pubmed-parser package Achakulvisut et al. (2020). If available, the full-text paper was pulled from the PubMed Central Open Access Subset, using their provided API.1 Footnote 1: [https://www.ncbi.nlm.nih.gov/pmc/tools/openflist/](https://www.ncbi.nlm.nih.gov/pmc/tools/openflist/) #### 4.1.2 Drug Safety Reports A report contains clinically-relevant information about the described patient in the form of reported patient biological sex, weight, age group, and the age at which the event first occurred. Not all information is always present in the reports; this depends on what exactly the authors described in their article. Each report features a list of drugs, each with their own set of fields. Every drug consists of one active ingredient. If available, the drug may feature additional details such as the product name of the drug, the drug administration route, the (cumulative) dosage taken, the action taken with this drug (e.g., dose increased), and whether the drug was considered a potential cause of the adverse reaction by the authors or not. If provided in the article, the reports can even describe the exact lot number of the drug product taken by the patient. Each report also features a list of reactions. Each reaction is characterized by an entry from the standardized MedDRA ontology (Medical Dictionary for Regulatory Activities; Brown et al. 1999), as well as a field describing the outcome (e.g., recovered, recovering, fatal). ### Dataset Analysis BioDEX features articles published between 1968 and 2022, with a stark increase in articles from 2013 onwards, corresponding to new PV-related legislation in Europe in 2012 (Fornasier et al., 2018). Figure 3 displays the article distribution starting from 2000. The associated reports all originate from a period between 2012 and 2022. BioDEX covers a broad range of topics. In total, 55,951 unique article keywords are included. Figure 4 shows the most prominent ones. The median full-text paper in BioDEX is about 20k characters long. Table 1 displays the quartiles for both the abstract and full-text length in number of characters and tokens. We note that the average full-text paper is much longer than the context window used in many present-day LMs.
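The token counts of Table 1 can be reproduced with a short sketch like the following; the percentile helper and the placeholder input are our own simplifications, while the tokenizer choice follows the Table 1 caption:

```python
import tiktoken  # tokenizer named in the Table 1 caption

enc = tiktoken.encoding_for_model("text-davinci-002")

def length_stats(texts):
    """25th/50th/75th percentiles of character and token counts, as in Table 1."""
    chars = sorted(len(t) for t in texts)
    toks = sorted(len(enc.encode(t)) for t in texts)
    pick = lambda xs, p: xs[int(p / 100 * (len(xs) - 1))]  # nearest-rank percentile
    return {p: {"chars": pick(chars, p), "tokens": pick(toks, p)} for p in (25, 50, 75)}

# `abstracts` is a placeholder list; in practice it would hold the 65k abstracts.
abstracts = ["A 62-year-old male developed interstitial pneumonia after infliximab."]
print(length_stats(abstracts))
```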
While BioDEX is rooted in a U.S.-based resource, other countries are represented as well. Figure 5 illustrates from which countries the reports originated. Some regions are underrepresented, indicating an avenue for future work. ### Dataset Creation BioDEX is created by matching articles parsed from Medline with drug safety reports entered in FAERS. To avoid ambiguity, we only consider articles with a unique PubMed identifier and a unique title. Only reports containing an explicit reference to a supporting paper are considered. Unfortunately, this reference to the supporting literature is not structured. We parse the article title out of this unstructured reference. If we find a title that exactly matches a title in our set of articles, we enter both the article and associated report in BioDEX. Otherwise, we drop the report. When creating BioDEX, we prioritized creating high-precision matches. Future work could expand the size of our dataset by considering a more sophisticated procedure to match articles and reports, e.g., by using metadata other than the article titles. ## 5 Task and Metrics In this work, we focus on the task of predicting the core information of a report given a full-text paper, which we call Report-Extraction. Accurate and autonomous extraction of drug safety reports can have a large impact on PV by increasing the quality of safety signals and decreasing the time required to surface new signals. Figure 4: Histogram of the 30 most frequent keywords associated with publications in BioDEX and their frequency of occurrence. Figure 3: Number of BioDEX abstracts and full-text papers published over time. Articles published 1968–2000 are not visualized because they are not frequent. ### Core Reports We reduce the complexity of the detailed reports by only predicting the 4 core attributes: 1. Serious: The seriousness of the adverse event. Equal to 1 if the adverse event resulted in death, a life threatening condition, hospitalization, disability, congenital anomaly, or any other serious condition. If none of the above occurred, equal to 2. 2. Patientsex: The reported biological sex of the patient. 0 for unknown, 1 for male, 2 for female. 3. Drugs: The set of all active substance names of the drugs discussed in the report. For example: azathioprine, infliximab, mesalamine, prednisolone. 4. Reactions: The set of all reaction terms discussed in the report. For example: Epstein-Barr virus infection reactivation, Idiopathic interstitial pneumonia. For the Report-Extraction task, we only consider reports where all these 4 attributes are present. While BioDEX reports contain more detailed attributes as well, we leave predicting these details as future work. ### The Report-Extraction Dataset We create a new dataset specifically for this task by manipulating BioDEX. First, we restrict ourselves to only articles with a full-text paper available. Additionally, we only consider articles with fewer than 10 associated reports, since we found that the few articles with more were often very large survey papers discussing a broad range of adverse effects. If multiple reports per article are available, one report is sampled to act as the gold label of our task. We leave the task of predicting a variable number of reports per publication, which BioDEX supports, as future work. We divide the data into train/test splits by taking articles published before 2021 as training instances and the rest as testing instances. This adds a temporal generalization component to our task.
Finally, we create a validation split by uniformly holding-out 20% of the training samples. We deliberately created a test scenario that simulates the real-world situation these models will face: they will have been developed on data up to a specific time point and then, by necessity, they will encounter reports from later time periods. It is vital that we study how models behave in this challenging scenario. The resulting dataset sizes and article dates are given in Table 2. We distribute this subset of our dataset in structured format as well. \begin{table} \begin{tabular}{l r r r} \hline \hline percentile: & 25th & 50th & 75th \\ \hline abstract length & & & \\ \# characters & 825 & 1,263 & 1,679 \\ \# tokens & 177 & 275 & 383 \\ full text length & & & \\ \# characters & 14,801 & 19,935 & 29,531 \\ \# tokens & 3,761 & 5,152 & 7,890 \\ \hline \hline \end{tabular} \end{table} Table 1: Abstract and full text length percentiles of BioDEX in number of characters and tokens. Tokenization done with OpenAI's tiktoken package, using the vocabulary of the text-davinci-002 model. Figure 5: Number of drug safety reports in BioDEX originating from a given country. Colors follow a log scale. ### Report-Extraction Performance To estimate performance, we need to define a similarity metric between two core reports. This is achieved by taking a weighted average over the 4 attribute similarities.2 For serious and patientsex, the similarity is the conventional classification accuracy. For drugs and reactions, the set precision and recall metrics are used. Every predicted drug or reaction in these sets is either correct or wrong, based on an exact string match. This is a strict metric, since multiple correct ways of describing the same drug or reaction are not taken into account. Medical ontologies can be used to normalize drug and reaction mentions; we plan to share code and results for these more lenient metrics in the future. Footnote 2: The weight factors are \(1/6\) for the serious and patientsex scores, and \(1/3\) for the drugs and reactions scores. ### Inter-Annotator Agreement A single article can be linked to multiple reports. Often, these reports comment on the same underlying adverse event but were submitted by independent people or institutions. These situations can be used to estimate a lower-bound on the Inter-Annotator Agreement (IAA). For every article with multiple reports available, we randomly validate one core report against another. Using our Report-Extraction Performance, this produces an IAA score of 72.04% F1. As a random baseline, we consider validating a core report against another report uniformly sampled from the entire dataset. This produces an F1 of 24.28% and serves as a lower bar for non-trivial performance. This score is significantly larger than 0% mainly due to high random guessing accuracy on the serious and patientsex attributes. ## 6 Experiments and Results Motivated by the recent success of LLMs, we choose to model Report-Extraction as a sequence-to-sequence problem.3 Given a full-text paper as input, we train models to predict the core report in a stringified format, such as "serious: 1 patientsex: 1 drugs: azathioprine, infliximab, mesalamine, prednisolone reactions: epstein-barr virus infection reactivation, idiopathic interstitial pneumonia". Footnote 3: Different views of the Report-Extraction task are possible. For example, it could be defined as a set of (multi-label) classification tasks.
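To make the scoring concrete, here is a minimal Python sketch of the Report-Extraction Performance just described; how the weighted attribute scores are folded into a single F1 number is our assumption (accuracy on the categorical fields counts towards both precision and recall), and the example reports are invented:

```python
def set_pr(pred: set, gold: set):
    """Set precision/recall with exact string matching, as described above."""
    if not pred and not gold:
        return 1.0, 1.0
    p = len(pred & gold) / len(pred) if pred else 0.0
    r = len(pred & gold) / len(gold) if gold else 0.0
    return p, r

def report_extraction_performance(pred: dict, gold: dict) -> float:
    """Weighted average over the 4 attribute similarities (weights from footnote 2)."""
    w = {"serious": 1/6, "patientsex": 1/6, "drugs": 1/3, "reactions": 1/3}
    prec = rec = 0.0
    for attr in ("serious", "patientsex"):      # classification accuracy
        acc = float(pred[attr] == gold[attr])
        prec += w[attr] * acc
        rec += w[attr] * acc
    for attr in ("drugs", "reactions"):         # set precision / recall
        p, r = set_pr(set(pred[attr]), set(gold[attr]))
        prec += w[attr] * p
        rec += w[attr] * r
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = {"serious": 1, "patientsex": 1,
        "drugs": {"azathioprine", "infliximab"},
        "reactions": {"idiopathic interstitial pneumonia"}}
pred = {"serious": 1, "patientsex": 2,
        "drugs": {"infliximab"},
        "reactions": {"idiopathic interstitial pneumonia"}}
print(round(report_extraction_performance(pred, gold), 4))  # 0.7407
```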
In this preprint, we only report results on the validation split, to avoid prematurely evaluating on the test split. ### Few-shot In-context Learning First, we evaluate the few-shot in-context learning performance on our dataset achieved by OpenAI's text-davinci-002, text-davinci-003, gpt-3.5-turbo, and gpt-4 models (Brown et al., 2020). A key limitation of in-context learning is that both the few-shot demonstrations and the actual input need to fit in the same context window. Given the average length of our inputs, the context window becomes a constraint: most of the full-text papers do not fit the text-davinci-003 context window of 4,096 tokens (see Table 1). Thus, we aim to maximally utilize the available context window. Given a fixed natural description prompt of the task (see Appendix C for the full prompt), we investigate the trade-off between the number of tokens dedicated to in-context demonstrations and the number of tokens of the input paper. Since it is prohibitive to include entire papers, we use only the abstracts for the demonstrations and truncate the full-text input paper to maximally fill the context window. Table 3 summarizes the experiments. We find the optimal trade-off to consist of 7 abstract-level demonstrations, which results in incorporating around 1,660 tokens of the final paper. On the validation set, this achieves a performance of 45.78% F1 for text-davinci-002, 50.44% F1 for text-davinci-003, and 50.62% F1 for gpt-4.4 While this performance is certainly non-trivial, especially given only 7 labeled examples, it is far from expert-level. We explored using the context window of gpt-4 beyond 4096 tokens, but found no improvements when further scaling the amount of demonstrations or the amount of paper input tokens. The cheaper gpt-3.5-turbo model performs sub-par and struggles to properly format its generations. Footnote 4: All default hyperparameter settings were used for the OpenAI API calls. To save on costs, we validate on the first 100 validation examples for the experiments involving the davinci and gpt-3.5-turbo models. We use only the first 20 examples for the gpt-4 experiments. We conclude that, at least in our standard use of the methods, few-shot learning achieves non-trivial but unsatisfactory performance on our Report-Extraction task. See Appendix D for 10 examples. \begin{table} \begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{split} & \multirow{2}{*}{size} & \multicolumn{2}{c}{article date} \\ & & min. & max. \\ \hline train & 9,624 (62\%) & 1990 & 2020 \\ validation & 2,407 (15\%) & 1985 & 2020 \\ test & 3,628 (23\%) & 2021 & 2022 \\ \hline \hline \end{tabular} \end{table} Table 2: Sizes of the Report-Extraction splits and corresponding ranges of article publish dates. ### Fine-tuned Models We further experiment with fine-tuning our own specialized models for the Report-Extraction task. We consider the suite of FLAN-T5 models Chung et al. (2022), which are based on the encoder-decoder Transformer architecture Vaswani et al. (2017). Table 4 summarizes the experiments. The most successful run consisted of fine-tuning FLAN-T5-Large on a source context window of 2048 tokens and a target context window of 256 tokens. This achieves 62.28% F1 on our task. Given a fixed context window of 512 or 1,024 tokens, the larger FLAN-T5-XL model performs better. For a given model size, longer context windows improve performance. We leave the further scaling of model sizes and context windows as future work.
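A minimal sketch of this fine-tuning setup, using the Hugging Face transformers library, might look as follows; the checkpoint name google/flan-t5-large, the single-example training step, and the omission of the linear learning-rate schedule are simplifying assumptions of ours, while the optimizer and decoding choices match the next paragraph:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor

tok = AutoTokenizer.from_pretrained("google/flan-t5-large")   # assumed checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
# Fixed learning rate here; the paper additionally uses a linear LR schedule.
opt = Adafactor(model.parameters(), lr=1e-4, relative_step=False, scale_parameter=False)

def train_step(paper: str, target_report: str) -> float:
    x = tok(paper, truncation=True, max_length=2048, return_tensors="pt")          # source window
    y = tok(target_report, truncation=True, max_length=256, return_tensors="pt")   # target window
    loss = model(**x, labels=y.input_ids).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
    return loss.item()

def predict(paper: str) -> str:
    x = tok(paper, truncation=True, max_length=2048, return_tensors="pt")
    out = model.generate(**x, max_new_tokens=256)   # greedy decoding by default
    return tok.decode(out[0], skip_special_tokens=True)
```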
Models were trained for up to 5 epochs with a starting learning rate of \(0.0001\), linearly scheduled. We used the Adafactor optimizer with default hyperparameters Shazeer and Stern (2018). We used greedy decoding to form the generations. Beam search decoding, with a beam width of 8, did not further improve performance. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline model & \# demos & \begin{tabular}{c} \# input paper \\ tokens (avg) \\ \end{tabular} & \begin{tabular}{c} REP \\ (\% F1) \\ \end{tabular} & \begin{tabular}{c} Parse \\ percentage \\ \end{tabular} & \begin{tabular}{c} \# generation \\ tokens (avg) \\ \end{tabular} & \begin{tabular}{c} \# context \\ tokens (avg) \\ \end{tabular} \\ \hline text-davinci-002 & 5 & 2347 & 44.15 & 100 & 41 & 3871 \\ text-davinci-002 & 7 & 1669 & 45.78 & 97 & 35 & 3956 \\ text-davinci-002 & 10 & 845 & 45.91 & 98 & 43 & 3965 \\ text-davinci-002 & 12 & 385 & 45.80 & 98 & 36 & 3968 \\ \hline text-davinci-003 & 6 & 2070 & 48.13 & 100 & 50 & 3968 \\ **text-davinci-003** & **7** & **1669** & **50.45** & **99** & **47** & **3968** \\ text-davinci-003 & 8 & 1440 & 47.16 & 100 & 54 & 3959 \\ \hline gpt-3.5-turbo-0310 & 7 & 1710 & 30.55 & 76 & 29 & 3955 \\ \hline **gpt-4-0312 (4k context)** & **7** & **1665** & **50.62** & **100** & **44** & **3953** \\ gpt-4-0312 (8k context) & 7 & 3638 & 49.69 & 100 & 43 & 5925 \\ gpt-4-0312 (8k context) & 14 & 3151 & 48.00 & 100 & 38 & 7215 \\ \hline \hline \end{tabular} \end{table} Table 3: Few-shot in-context learning results on the BioDEX Report-Extraction task (validation split). For each model, we vary the combination of the number of few-shot demos and the number of tokens dedicated to the input paper. REP denotes the Report-Extraction Performance. Parse percentage denotes the frequency of times the model formed a well-structured generation. For cost reasons, all models except gpt-4-0312 were only evaluated on the first 100 examples in the validation split. gpt-4-0312 was evaluated on the first 20 examples. ## 7 Improving Pharmacovigilance Our primary goal is to improve the scalability and accuracy of PV using NLP. The above experiments highlighted the potential for LMs to autonomously fill in ADE reports. However, fully autonomous drug event reporting systems are unlikely to achieve widespread adoption _today_. Mainly because of the challenging nature of this task and the high cost of errors, human experts will remain vital for effective solutions in the years to come. However, our models can still deliver tangible value by augmenting existing expert-based work
Figure 6 shows the results for the 30 most frequent reactions in the validation split. High-recall baselines still have a valuable place in the PV review process, but our system could be used to more efficiently prioritize effort. Appendix E describes the same experiment for drugs. Future work could utilize all details of BioDEX reports or incorporate the 65k abstract-level datapoints during training to further improve utility for PV. For example, BioDEX would support fine-tuning a question answering model for PV. ## 8 Conclusion We introduced BioDEX, a large-scale document-level Biomedical adverse Drug Event Extraction dataset. BioDEX covers an important and challenging real-world task: extracting detailed drug safety reports from full-text biomedical publications. We find that LLMs struggle to get traction on this task using in-context learning. Fine-tuned models are more successful, but expert-level performance remains elusive. Nevertheless, our models have the potential to make drug safety research more efficient, and we demonstrated their utility in a conventional PV use-case. We release all data and models. We hope that BioDEX stimulates new research in the high-impact area of drug safety monitoring. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{model} & \# source & \# target & REP & Parse & \# generation \\ & tokens & tokens & (\% F1) & Percentage & tokens (avg) \\ \hline **FLAN-T5-Large** & **2048** & **256** & **62.28** & **98.96** & **59.60** \\ FLAN-T5-Large & 2048 & 128 & 61.39 & 99.58 & 52.96 \\ FLAN-T5-Large & 1024 & 256 & 55.88 & 96.05 & 75.08 \\ FLAN-T5-Large & 512 & 128 & 50.92 & 94.72 & 53.60 \\ \hline FLAN-T5-XL & 1024 & 256 & 58.32 & 99.46 & 48.82 \\ FLAN-T5-XL & 512 & 256 & 53.19 & 97.55 & 64.69 \\ \hline \hline \end{tabular} \end{table} Table 4: Fine-tuning results on the BioDEX Report-Extraction task (validation split). REP denotes the Report-Extraction Performance. Parse percentage denotes the frequency of well-structured model outputs. Figure 6: Reaction classification performance across the 30 most frequent reactions in the BioDEX validation set. Support in parentheses. Average performance in bold. Reactions are sorted by baseline performance. ## 9 Limitations and Ethical Considerations Drug Safety Reporting is an important real-world task. Submitting faulty reports or consistently underreporting specific adverse events could have profound impacts for public safety. LMs are known to make mistakes and fabricate evidence, they are almost invariably biased towards specific predictions, and they can be prone to adversarial attacks (Bender et al., 2021; Zhang et al., 2020). Thus, the resources put forth in this paper should not be naively applied to automate safety reporting. Rather, we suggest that these systems could be integrated as an additional tool at the disposal of PV workers, and we encourage careful study of how to best empower these experts to work more efficiently and effectively. Different countries can face different health issues. When we develop biomedical language systems, it is important they work for everyone. Some countries are underrepresented in our dataset. Subsequent data collection efforts should focus on these countries to alleviate this issue. ## Acknowledgements KD is funded by an FWO Fundamental Research PhD Fellowship (11632223N).
2302.00428
Lateral constraint for thin glass shell: analysis of the requirements and conceptual design for a segmented active mirror
The latest high-performance telescopes for deep space observation employ very large primary mirrors that are made of smaller segments, like the JWST, which employs monolithic beryllium hexagonal segments. A very promising development stage of these systems is to make them active and to operate on their reflective surfaces to change their shape and compensate for aberrations, as well as to perform a very precise alignment. This is possible by employing a reference body that stores actuators to modify the shape of the shell, like in the SPLATT project where voice coil actuators are used. However, along with the many advantages related to the physical decoupling of the two bodies, the lack of physical contact between the main body and the shell raises some concerns related to the retention of the shell under all the possible acceleration conditions affecting the system during the mission lifetime. This paper aims to study the acceleration environment affecting the spacecraft during its lifetime and to use it as a baseline for the operational requirements of a retaining system for the shells. No solution is selected in this paper, to leave complete freedom for the development of a constraining system; a few are only qualitatively discussed.
Marcello Agostino Scalera, Runa Briguglio, Ciro Del Vecchio, Marco Xompero, Marco Riva
2023-02-01T13:24:07Z
http://arxiv.org/abs/2302.00428v1
Lateral constraint for thin glass shell: analysis of the requirements and conceptual design for a segmented active mirror. ###### Abstract The latest high-performance telescopes for deep space observation employ very large primary mirrors that are made of smaller segments, like the JWST, which employs monolithic beryllium hexagonal segments. A very promising development stage of these systems is to make them active and to operate on their reflective surfaces to change their shape and compensate for aberrations, as well as to perform a very precise alignment. This is possible by employing a reference body that stores actuators to modify the shape of the shell, like in the SPLATT project where voice coil actuators are used. However, along with the many advantages related to the physical decoupling of the two bodies, the lack of physical contact between the main body and the shell raises some concerns related to the retention of the shell under all the possible acceleration conditions affecting the system during the mission lifetime. This paper aims to study the acceleration environment affecting the spacecraft during its lifetime and to use it as a baseline for the operational requirements of a retaining system for the shells. No solution is selected in this paper, to leave complete freedom for the development of a constraining system; a few are only qualitatively discussed. active optics, space telescopes, space science missions, accelerations, thin shell, voice coil actuators Further author information: Marcello Agostino Scalera: E-mail: [email protected], Telephone: +39 348 5204516 ## 1 Introduction In the LATT project, an ESA-funded TRP concluded in 2015, the concept of an active primary mirror (or segment) for a space telescope was investigated. The team manufactured a 40 cm diameter demonstrator, called OBB (Optical Breadboard), actively shaped by 19 actuators. The technology is that used for the large format adaptive secondary mirrors currently deployed at the Large Binocular Telescope[1] (Arizona) and the Very Large Telescope[2] (Chile). The concept is based on the very favourable mix (as demonstrated on those adaptive optics systems on the ground) of voice coil actuators (VCM) and a thin glass shell (TS). The TS is a 1.5 to 2 mm thick Zerodur meniscus with magnets bonded on its back; the VCM are encapsulated in a rigid Reference Body (RB) and the TS is shaped thanks to the coupling between the VCM and the magnets. In addition, the VCM are controlled in a local closed loop, fed by co-located position sensors, to drive the current in the coil and keep the wanted position at a frequency much faster than the optical closed loop with the wavefront sensor (WFS). The core of such technology (which is also adopted for the ELT M4[3] adaptive mirror and for the GMT adaptive M2) is then the TS, whose manufacturing process is now well mastered both in the USA and in Europe. For the M4 mirror, 8 TS have been manufactured so far to realize the 2.5 m diameter, 2 mm thick optical surface, and the typical manufacturing residual is lower than 20 nm RMS WF. A TS is an extremely fragile piece of optics and its use on a space telescope is considered critical. One of the outcomes of the LATT project was the development and test of a constraint mechanism, based on an electrostatic force applied to the TS, to let it survive the launch accelerations. The test was successful, yet the question "would you adopt a TS-based active mirror on a space telescope?" is open.
The answer could be positive when we focus on the global-level benefits of such technology, for instance the very low areal density: the OBB has an areal density lower than 16 kg/m2, including the support, actuators and position sensors. In addition, VCM are contactless actuators: the mechanical gap between the RB (or the payload) and the optical surface would be a natural insulator for mechanical vibrations. Additional questions arise. Contactless actuation means that the TS are _formation flying_ in front of the telescope: is there a risk of losing them? Do we need a retaining system to compensate for such a risk? What are the functional requirements of such a system to satisfy a safety need while preserving a low areal density and a mechanical insulation of the optical surfaces? Addressing such questions in the correct perspective could open the way to the use of very lightweight, large stroke active mirrors, with global benefit for the mission. Referring to the prototype of a thin shell actuated by voice coil actuators used in the SPLATT project, this paper studies the loads that affect the system during a JWST-like space mission, highlighting the most critical events like launch, orbital maneuvers during the orbital transfer, attitude maneuvers and orbital maintenance. The study is based mainly on the mission and operation profile data of the JWST and secondarily on those of the LUVOIR space telescope. Future JWST-like telescopes, such as Luvoir, are expected to take the largest advantage from the thin active shell technology used on the segmented primary mirror, but other applications can be foreseen for high-resolution telescopes for Earth observation or for solar system exploration, since the SPLATT prototype is potentially adaptable to any space payload. Following a system engineering conceptual approach, this paper aims to define the loads that the thin shell system shall sustain, placing some clear technical requirements to drive the development of the constraint system, without imposing any solution; only some qualitative hints are given at the end of the paper. The outcome of this study shall propose a guideline for the accelerations acting on thin shell active systems, useful for designing dedicated constraint systems or for strengthening the actuation technology. A clear distinction between one-off loads (launch, orbital transfer) and recurrent loads (attitude and orbital maintenance maneuvers) is highlighted, since it may lead to different approaches and developments. ## 2 One-off Accelerations During the Mission Lifetime Launch accelerations are usually the highest ones experienced by a space system along its complete lifetime, and they must be sustained only once. Being a one-off event, it is possible to think about non-permanent solutions that employ high energies that can be provided by the launcher and its batteries. A different situation is encountered while manoeuvring during the orbital transfer trajectory. Now, the accelerations are weaker than during launch but they are scheduled to happen in a much wider time frame and employ only the onboard resources without any external support. Once the orbital transfer is done, only the station keeping manoeuvres will affect the orbit of the spacecraft as recurrent manoeuvres. Due to these profound operational differences, the study regarding the launch and manoeuvring phases is split and analyzed separately.
### Launch The launch accelerations are summarised in the Ariane V launcher manual[4], where it is clearly stated that the longitudinal acceleration does not exceed 4.55g (value met at the solid boosters' burnout) and the transverse accelerations do not exceed 0.25g. These values were the ones experienced by the JWST, but future spacecraft will use different launchers, like the new version of the SLS already selected for the Luvoir launch. The available data for these new generation launchers are scarce and focused on specific mission profiles, like lunar missions, where they suggest lower accelerations than those generated by the Ariane V. Because of this, the data of the Ariane V have been used. The accelerations act differently on the shells according to the launch configuration inside the fairing. Referring to the launch configuration of the JWST and the expected one of Luvoir, the strongest longitudinal acceleration would be transferred to the shells as a force acting approximately on the plane of the shell surface, similarly to a shear force. This force tries to make the shell slide away from the reference body along the flight direction. The transverse accelerations act both on the shell surface and perpendicular to it. Another interesting analysis refers to the dynamical environment generated by the launcher. It is well described in the Ariane V user manual and can be used to evaluate the response of the shell to the vibration loads and, more importantly to this study, to compute the constraint forces to keep the shell in position. ### Orbital transfer manoeuvres The orbital transfer manoeuvres are planned along the nominal orbital path considering the uncertainties related to the engine performance and thrust times. As a result, the manoeuvres are defined in terms of statistical distributions of the thrust times \(\Delta T\) and \(\Delta V\). The variation of the \(\Delta T\) is less effective than that of the \(\Delta V\), leading to the decision of using the mean \(\Delta T\) and the worst-case scenario \(\Delta V\) to compute the accelerations. This introduces a great simplification in the acceleration computation that avoids statistical considerations but still provides meaningful results, shown in Tab.1. A 5% margin over the maximum \(\Delta V\) is applied to make the computations even more robust. The actual computation of the accelerations is the simple ratio \(a_{MCC}=\frac{1.05\,\Delta V_{max}}{\Delta T_{mean}}\frac{1}{9.81}[g]\) between the total \(\Delta V\) and the \(\Delta T\), under the reasonable (and likely true) assumption that the manoeuvre is performed at constant engine thrust. These acceleration values are specific to the JWST mission since its data can be retrieved from the literature. The deep orbital manoeuvres for future missions will very likely be different, but the many similarities between JWST and future telescopes like Luvoir lead to the expectation of accelerations of the same order of magnitude for these types of space missions. However, the direction of the forces generated by these accelerations on the shells may change significantly depending on the relative direction of the thrust vector with respect to the shells during manoeuvres. The definitive direction of these forces can be known only once the attitude profile during the deep space transfer is fully defined. Finally, the vibration profile to evaluate the dynamic loads is not available in the literature for this phase of the mission.
Also, future telescopes like Luvoir may employ systems that decouple the dynamic environment of the spacecraft from that of the telescope, making this evaluation potentially unnecessary. However, the vibrations induced by the chemical thrusters, if these are used, are for sure way lower than those generated during launch, causing fewer issues. \begin{table} \begin{tabular}{l|c|c|c|c} & Mean \(\Delta V[m/s]\) & Max \(\Delta V[m/s]\) & Mean \(\Delta T[s]\) & \(a_{MCC}[g]\) \\ \hline \hline MCC-1a & 22.279 & 23.4 & 4952.28 & \(4.82\,10^{-4}\) \\ MCC-1b & 1.967 & 4.5 & 455.68 & 0.001 \\ MCC-2 & 0.712 & 3.0 & 149.4 & 0.002 \\ \end{tabular} \end{table} Table 1: Accelerations generated on the JWST during the Mid Course Correction Manoeuvres (MCCs). Values computed starting from [5]. Figure 1: Representation of the deep space transfer orbit from Earth to the L2 operational orbit. The manoeuvres are identified as Mid Course Corrections (MCC). Picture from the work of Petersen et al. [5] ## 3 Repeated Accelerations During the Mission Lifetime ### Station keeping manoeuvres JWST must perform station keeping (SK) manoeuvres every 21 days in order to maintain its designed halo orbit around L2. The total foreseen \(\Delta V\) for the whole operative life of 10.5 years is 24.88 m/s. This translates into around 2.37 m/s yearly and consequently around 0.14 m/s per SK manoeuvre, according to the work of Dichmann et al.[6] Information about the burning time for this operation cannot be found, so it has been hypothesized based on the fact that MCC2 can be considered as the first station-keeping operation. A linear assumption based on the acceleration has been carried out according to the following equation, where the assumption of constant thrust during the whole station keeping manoeuvre holds: \[\Delta V_{MCC2}:\Delta T_{MCC2}=\Delta V_{SK}:\Delta T_{SK}\to 0.712:149.4=0.14:\Delta T_{SK}\rightarrow\Delta T_{SK}=\frac{0.14*149.4}{0.712}\approx 30s\] \[a_{SK}=\frac{\Delta V_{SK}}{\Delta T_{SK}}\frac{1}{g}=\frac{0.14}{30}\frac{1}{9.81}=4.75\,10^{-4}g\] The direction of this acceleration is variable with the attitude of the spacecraft and of the shell with respect to the thrust direction. Due to the agility and the re-pointing capability of Luvoir, this acceleration may act along all possible directions, tangential to the shell surface or perpendicular to it, in all possible combinations. The worst case scenario is probably when the acceleration acts completely tangentially to the shell, but a comparative analysis shall be performed among the various loading conditions to quantitatively find the worst case operative scenario and use it for the constraint system design. A note shall be made: the computations of the acceleration follow a linear assumption based on the MCC2 strategy that may be relatively far from reality, especially on a quantitative level. However, the resulting operative time for a single SK manoeuvre looks realistic. In the case of Luvoir, the value of the needed \(\Delta V\) will probably be larger due to its bigger dimensions, which directly influence the effect of the solar radiation pressure on the orbit and consequently the actions to counteract it and the frequency of the momentum dumping manoeuvres. SK thrusters can also have a higher specific impulse, leading to shorter burning times and higher accelerations. However, the order of magnitude of the SK accelerations previously computed should be realistic and reliable for the preliminary dimensioning of a retaining system.
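The ratio computations above can be reproduced with a few lines of Python; this is our own sketch, with the 5% margin applied as described in the text (for MCC-1a, Table 1 appears to quote the value without the margin):

```python
G0 = 9.81  # m/s^2, conversion from m/s^2 to g

def manoeuvre_acc_g(dv_max: float, dt_mean: float, margin: float = 1.05) -> float:
    """a_MCC = (1.05 * dV_max / dT_mean) / 9.81 [g], constant-thrust assumption."""
    return margin * dv_max / dt_mean / G0

# Mid Course Corrections: max dV [m/s] and mean dT [s] from Table 1
for name, dv, dt in [("MCC-1a", 23.4, 4952.28), ("MCC-1b", 4.5, 455.68), ("MCC-2", 3.0, 149.4)]:
    print(f"{name}: {manoeuvre_acc_g(dv, dt):.2e} g")

# Station keeping: burn time scaled linearly from MCC2, as in the text
dv_sk = 0.14                     # m/s per SK manoeuvre
dt_sk = dv_sk * 149.4 / 0.712    # ~30 s
print(f"SK: dT = {dt_sk:.0f} s, a_SK = {dv_sk / dt_sk / G0:.2e} g")
```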
In any case, the computation using real SK strategy data and the associated attitude shall be used for a high-fidelity evaluation of the SK accelerations on the shells. ### Slew manoeuvres Precise values of the attitude accelerations acting on the JWST can be found in the work by Karpenko et al.[7] related to optimization methods for attitude control and slewing speed. These values are summarised in Tab.2, directly extracted from the previously cited paper, and they give a clear idea of the order of magnitude of the applied accelerations and the consequent times for slewing. The literature also gives some interesting information about Luvoir, especially on its strategies to maximise the coverage of the anti-Sun hemisphere. A presentation by Dewell et al.[8] places a clear requirement on the re-pointing performance of Luvoir, which shall access any location in the anti-sun hemisphere in a maximum of 45 minutes, with 30 minutes as the design goal. This, together with the geometry of the telescope and the distance from the center of rotation, is enough to compute the in-plane accelerations acting on the mirrors that shall be counteracted in order to avoid the misalignment of the magnets and the consequent loss of performance or of the shell. \begin{table} \begin{tabular}{c|c|c|c} \multicolumn{4}{l}{} \\ Sizing rule & \(\alpha_{max}[deg/s^{2}]\) & \(\omega_{max}[deg/s]\) & 90° slew time [min] \\ \multicolumn{4}{l}{} \\ Two-norm arousal & \(0.9\,10^{-4}\) & 0.037 & 47.0 \\ \(L_{\infty}\) conventional & \(1.08\,10^{-4}\) & 0.045 & 40.2 \\ \(L_{\infty}\) arousal & \(1.14\,10^{-4}\) & 0.048 & 38.4 \\ \(L_{\infty}\) arousal & \(1.45\,10^{-4}\) & 0.060 & 31.9 \\ \end{tabular} \end{table} Table 2: Agility parameters of the JWST according to different modelling and optimization techniques, data extracted from the work by Karpenko et al.[7] The gimbaling system of Luvoir permits a pitch movement of \(90^{\circ}\) of the primary mirror and of the tower where the instruments are mounted (discussed in the paper by Tajdaran et al.[9]), as shown in Fig.2. However, the gimbal performance and its acceleration cannot be found in the literature. The values for this acceleration are nevertheless computed considering the data related to the JWST listed in Tab.2, combined with the operative time requirements listed in the work by Dewell et al.[8] The roll motion is not considered since it would expose the telescope to the sunlight; it would rarely happen and at negligible angular accelerations. Based on the images of the Luvoir telescope, some dimensions of interest can be retrieved. Also, considering that the yaw rotation happens around the central point of the whole spacecraft (a very plausible assumption derived from a symmetric AOCS configuration), the maximum distance between the primary mirror and the centre of rotation is \(R_{y}\approx 2.7m\). The same reasoning holds for the pitch rotation, but considering that it happens at the gimbal point, the distance between the center of rotation and the primary mirror is then \(R_{p}\approx 5.45m\). These distances are highlighted in Fig.3. All the data are now available to evaluate the linear accelerations acting on the mirror due to yaw (\(a_{y}\)) and pitch (\(a_{p}\)), and the combined acceleration (\(a_{t}\)), increased by a 10% margin to compensate for the unknowns related to the pitch rotation and to the AOCS system of Luvoir.
This is a very serious worst-case scenario, probably not even feasible due to the limitations of the attitude control system, since all the accelerations are at their maximum at the same time. To grant the full coverage of the observable hemisphere, the slew capability shall grant a yaw slew range of \(180^{\circ}\) and a pitch rotation of \(90^{\circ}\), both within a 30 minute time window. Based on the assumption of a linear angular motion at constant acceleration and deceleration (uniformly accelerated angular motion), some iterations to define the yaw angular acceleration led to the following values: \(\omega_{y,max}=0.135\,\frac{deg}{s}\) is the maximum angular velocity during the yaw rotation, and \(\alpha_{y}=3\,10^{-4}\,\frac{deg}{s^{2}}=5.2\,10^{-6}\,\frac{rad}{s^{2}}\). Using the uniformly accelerated angular motion assumption, it is possible to compute the acceleration times \(t_{acc}\) and the time spent at constant angular speed \(t_{\omega}\) to verify that the time constraint of 30 minutes is respected: \[t_{acc}=\frac{\omega_{y,max}}{\alpha_{y}}\approx 450s=7.5\,mins\rightarrow\theta_{acc}=\frac{1}{2}\alpha_{y}t_{acc}^{2}=30.375^{\circ}\] \[t_{\omega}=\frac{180^{\circ}-2\theta_{acc}}{\omega_{y,max}}\approx 885s=15\,mins\to t_{180^{\circ}}=t_{\omega}+2t_{acc}=30\,mins\rightarrow\alpha_{y},\omega_{y,max}\,OK\] Figure 2: Representation of the gimbal rotation of Luvoir to improve the accessibility time to the complete anti-Sun hemisphere. Here the rotation is up to \(60^{\circ}\) but the maximum capability is \(90^{\circ}\), making the main mirror point straight to Nadir. Picture from the work of Tajdaran et al.[9] As discussed earlier, the pitch rotation of the telescope is performed with the gimbal rotation of the telescope alone. The computations related to its angular acceleration are performed like those of the yaw computation. \(\omega_{p,max}=0.0818\,\frac{deg}{s}\) is the maximum angular velocity during the pitch rotation, and \(\alpha_{p}=2\,10^{-4}\,\frac{deg}{s^{2}}=3.5\,10^{-6}\,\frac{rad}{s^{2}}\). Again, \(t_{acc}\) and \(t_{\omega}\) for the pitch rotation can be computed to verify that the time constraint of 30 minutes is respected. \[t_{acc}=\frac{\omega_{p,max}}{\alpha_{p}}\approx 414s=7\,mins\rightarrow\theta_{acc}=\frac{1}{2}\alpha_{p}t_{acc}^{2}=17.14^{\circ}\] \[t_{\omega}=\frac{90^{\circ}-2\theta_{acc}}{\omega_{p,max}}\approx 681s=11.35\,mins\to t_{90^{\circ}}=t_{\omega}+2t_{acc}\approx 25\,mins\rightarrow\alpha_{p},\omega_{p,max}\,OK\] Finally, the linear accelerations acting on the shells can be computed. The maximum acceleration due to the yaw rotation is computed in Eq.(1), the one related to pitch in Eq.(2), where the 10% margin is added. In Eq.(3), the composed acceleration when the shells are subjected to both the yaw and pitch accelerations is computed. \[a_{y}=1.1\,R_{y}[m]\alpha_{y}[\frac{rad}{s^{2}}]=1.1*2.7*5.2\,10^{-6}\approx 15\,\mu g \tag{1}\] \[a_{p}=1.1\,R_{p}[m]\alpha_{p}[\frac{rad}{s^{2}}]=1.1*5.45*3.5\,10^{-6}\approx 21\,\mu g \tag{2}\] \[a_{t}=\sqrt{a_{p}^{2}+a_{y}^{2}}\approx 26\,\mu g \tag{3}\] The computed total acceleration \(a_{t}\) represents the absolute worst-case scenario. The related acceleration is the one that shall be counteracted to avoid any sort of misalignment between the shells and their main bodies during the slew manoeuvres.
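The slew timing and the resulting shell accelerations can be checked with a short sketch like the following (our own; small rounding differences with respect to the values quoted above are expected):

```python
import math

def trapezoidal_slew(theta_deg, alpha_deg_s2, omega_max_deg_s):
    """Accelerate / coast / decelerate profile used above for yaw and pitch."""
    t_acc = omega_max_deg_s / alpha_deg_s2          # time to reach omega_max
    theta_acc = 0.5 * alpha_deg_s2 * t_acc ** 2     # angle swept while accelerating
    t_coast = (theta_deg - 2 * theta_acc) / omega_max_deg_s
    return 2 * t_acc + t_coast                      # total slew time [s]

# Yaw: 180 deg; pitch: 90 deg; both must fit in 30 minutes (values from the text)
for name, theta, alpha, w_max in [("yaw", 180.0, 3e-4, 0.135), ("pitch", 90.0, 2e-4, 0.0818)]:
    print(f"{name}: total slew time = {trapezoidal_slew(theta, alpha, w_max) / 60:.1f} min")

# Linear accelerations on the shells with the 10% margin, as in Eqs. (1)-(3)
a_y = 1.1 * 2.7 * 5.2e-6     # yaw,   R_y = 2.7 m,  alpha_y in rad/s^2
a_p = 1.1 * 5.45 * 3.5e-6    # pitch, R_p = 5.45 m, alpha_p in rad/s^2
a_t = math.hypot(a_y, a_p)   # combined worst case, quoted as ~26 micro-g in Eq. (3)
print(f"a_t = {a_t:.1e}")
```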
Because of the geometry of the system, these accelerations act on the shell surface and not perpendicular to it, in a first approximation where the primary is considered as flat. The generated forces are shear forces that make the shell slide away from the reference body, and a constraining system shall be considered to avoid this effect. Figure 3: Representation of the distances of the primary mirror from the centers of rotation of the Luvoir telescope. Picture elaborated starting from the final design report of Luvoir.[10] ## 4 Requirements Definition for the Shell Retaining System This section places some first-level requirements to design the retaining system. They can obviously be further refined or developed, but they place a good solid base. We identified the following: 1. **POWER CONSUMPTION**: the retaining system shall not reduce the available power to science instruments while in operations†. Footnote †: The available power is usually designed for the worst case usage scenario adding margin, leaving some power for the retaining system if needed. Alternatively, some power can be dedicated in the design phase. 2. **DYNAMIC BEHAVIOUR**: the retaining system shall not transmit vibrations from the reference body to the shell, apart from low frequency vibrations (< 0.1 Hz) and vibrations that induce deformations that are controllable by the actuators‡. Footnote ‡: the retaining system shall not introduce excitations of the intra-actuator modes. 3. **RIGIDITY**: the retaining system shall limit the translational and rotational displacements of the thin shell within a given bounding box of TBD dimensions*. The static accelerations that the retaining system shall counteract are listed in Tab.3 for the various expected conditions during the lifetime. Footnote *: The dimensions of this box are to be defined according to the assembled system and to the various elements that may enter in contact during the various phases of the lifetime. \begin{table} \begin{tabular}{c|c|c|c} **Life phase** & **Max acceleration** & **Frequency of acceleration event** & **Acceleration direction** \\ \hline \hline Launch & \(a_{long}\approx 4.5g\), \(a_{lat}\approx 0.25g\) & Once in the lifetime & The main acceleration acts longitudinally wrt the flight direction, as a shear force that makes the shell slide away from the reference body. The lateral accelerations act both tangentially and perpendicularly to the shell surface \\ \hline Orbital transfer & \(a_{trf}\approx 2mg\) & Limited number of times in the first months of the mission. JWST performed 3 such manoeuvres in the first month & Variable: depends on flight configuration \\ \hline Orbital maintenance & \(a_{mnt}\approx 0.5mg\) & Every few days during the whole operative life. JWST performed this manoeuvre once every 21 days & Variable: depends on flight configuration. Expected mainly perpendicular to the shell surface \\ \hline Slew manoeuvres & \(a_{slew}\approx 26\mu g\) & Thousands of times & Tangential to shell surface.
Introduces a shear force \\ \end{tabular} \end{table} Table 3: Static accelerations affecting the system during the expected phases of the lifetime of a JWST-like telescope. ## 5 Short Summary of Possible Solutions As discussed and shown in Tab.3, the loads that act on the shell/reference body system range from g levels to \(\mu g\) levels during the lifetime. The operations happening frequently during the complete operative lifetime do not exceed the mg levels, while g-level accelerations are experienced only during launch. These levels of acceleration can be counteracted using properly designed mechanical retainers or relying on the holding capabilities of the magnetic system. Large voice coil mirrors for ground-based telescopes are usually designed to sustain static loads up to 1.5g, using bias magnets to sustain the weight of the mirror when the actuation magnets are down, as discussed in the works by Riccardi et al.[1] about the LBT, Briguglio et al.[2] about the VLT, Briguglio et al.[11] about the Magellan telescope and Biasi et al.[3] about M4 of the E-ELT. This solution can be implemented also in space applications, but it shall be carefully tuned and designed to satisfy the much lower loads imposed by operations in space, avoiding useless increases of the required power and mass. Also, the use of diamagnetic materials can be investigated to increase the generated magnetic field at the same power use. The launch loads can be counteracted using the electrostatic locking discussed in Briguglio et al.[12] and Briguglio et al.[11] This strategy requires the use of large amounts of power to feed the coils and to lock the shell to its reference body. It would be possible only using dedicated power sources, like primary batteries on the launcher. ## 6 Future Developments The load analysis can be further refined by including the dynamic study of the vibration environment. Experimental studies are currently carried out to analyse the vibration damping provided by the voice coil actuation between the reference body and the thin shell. This damping is key to understanding the loads that would really affect the shell. Any simulation that does not include this effect may lead to much larger values than the real ones, leading to a suboptimal design process for the loads during operations. ## Acknowledgements The view expressed herein can in no way be taken to reflect the official opinion of the European Space Agency. The LATT prototype is property of ESA and has been kindly made available by ESA for laboratory testing with a loan agreement. The SPLATT project is funded by INAF - Istituto Nazionale di Astrofisica under the TECNO-PRIN INAF 2019 program.
2303.03208
Counterexamples to Minkowski's Uniqueness Conjecture and Escape of Mass in Positive Characteristic
We show that there are infinitely many counterexamples to Minkowski's conjecture in positive characteristic regarding uniqueness of the upper bound of the multiplicative covering radius, $\mu$, by constructing a sequence of compact $A$ orbits where $\mu$ obtains its conjectured upper bound. In addition, we show that these orbits, as well as a slightly larger sequence of orbits, must exhibit complete escape of mass.
Noy Soffer Aranov
2023-03-06T15:06:35Z
http://arxiv.org/abs/2303.03208v2
# Counterexamples to Minkowski's conjecture and escape of mass in positive characteristic ###### Abstract. We show that there are infinitely many counterexamples to Minkowski's conjecture in positive characteristic regarding uniqueness of the upper bound of the multiplicative covering radius, \(\mu\), by constructing a sequence of compact \(A\) orbits where \(\mu\) obtains its conjectured upper bound. In addition, we show that these orbits, as well as a slightly larger sequence of orbits, must exhibit complete escape of mass. ## 1. Introduction Let \(d\geq 2\) be an integer, let \(G=\mathrm{SL}_{d}(\mathbb{R})\), let \(\Gamma=\mathrm{SL}_{d}(\mathbb{Z})\), and let \(X_{d}=G/\Gamma\). Then \(X_{d}\) can be identified with the space of unimodular lattices in \(\mathbb{R}^{d}\) through the identification \(g\Gamma\mapsto g\mathbb{Z}^{d}\). Given a lattice \(x\in X_{d}\) and a function \(F:\mathbb{R}^{d}\to\mathbb{R}^{+}\), we define the \(\mathrm{CovRad}_{F}(x)\) to be the infimal \(r\geq 0\), such that for every \(R>r\), \[x+\{\mathbf{v}\in\mathbb{R}^{d}:F(\mathbf{v})<R\}=\mathbb{R}^{d} \tag{1.1}\] This value has been well studied for several functions \(F\), such as the multiplicative function \(N:\mathbb{R}^{d}\to\mathbb{R}^{+}\) defined by \(N\left((v_{1},\ldots v_{d})\right)=\prod_{i=1}^{d}|v_{i}|\). This function is dynamically significant, since it is invariant under the group of diagonal matrices with determinant \(1\), which we denote by \(A\). We define Minkowski's function as \(\mu(x)=\mathrm{CovRad}_{N}(x)\). Since \(N\) is \(A\) invariant, then \(\mu\) is \(A\) invariant as well. Hence ergodicity of the \(A\) action on \(X_{d}\) implies that \(\mu\) is constant almost everywhere, and in [11], Shapira proved that for \(d\geq 3\), \(\mu(x)=0\) for Haar almost every \(x\in X_{d}\). Furthermore, it is interesting to understand the set of values that \(\mu\) obtains, and in particular, to understand what is the upper bound of \(\mu\). A famous conjecture attributed to Minkowski claims the following: **Conjecture 1.1** (Minkowski's Conjecture).: _For every \(d\geq 2\), and for every \(x\in X_{d}\),_ 1. \(\mu(x)\leq 2^{-d}=\mu(\mathbb{Z}^{d})\)__ 2. \(\mu(x)=2^{-d}\) _if and only if_ \(x\in A\mathbb{Z}^{d}\)__ Conjecture 1.1 has been proved for \(d\leq 10\) (see for example [10], [11], [12], [13], [14], [15], [16], [17], and [18]). Furthermore, in [10], Cassels proved that \(2^{-d}\) is not isolated in the Minkowski spectrum \[\mathcal{S}_{d}=\{\mu(x):x\in X_{d}\}\] In fact he proved a stronger fact, which relates to the structure of \(A\) orbits. It is well known that \(\mathbb{Z}^{d}\) has a divergent \(A\) orbit, that is the function \(a\in A\mapsto a\mathbb{Z}^{d}\) is a proper function. In particular \(A\mathbb{Z}^{d}\) is not compact, but yet Cassels proved that \(\mu(\mathbb{Z}^{d})=2^{-d}\) can be approximated by evaluating \(\mu\) at a sequence compact \(A\) orbits. **Theorem 1.2** (Main Theorem of [10]).: _There exists a sequence of compact \(A\) orbits, \(Ax_{n}\subseteq X_{d}\) such that \(\mu(x_{n})\to 2^{-d}\)._ The proof of Theorem 1.2 is constructive, and it raises the following question - what can be the limit points of sequences of compact \(A\) orbits? In [11], Shapira provided a partial answer to this question by generalizing Cassels' construction. 
**Theorem 1.3** (Theorem 1.1 in [20]).: _For any \(d\geq 2\), there exists a sequence of compact \(A\) orbits \(Ax_{n}\subseteq X_{d}\) such that any accumulation point of the form \(x=\lim_{k\to\infty}a_{k}x_{k}\), where \(a_{k}\in A\), must satisfy \(x\in A\mathbb{Z}^{d}\)._ Moreover, Shapira proved that the lattices satisfying the conclusion of Theorem 1.3 must exhibit full escape of mass. It is well known that every compact \(A\) orbit \(Ax_{n}\) supports a unique \(A\) invariant probability measure \(\mu_{Ax_{n}}\). We say that \(Ax_{n}\) exhibits escape of mass if every limit point of \(\mu_{Ax_{n}}\) gives mass \(<1\) to \(X_{d}\), and we say that \(Ax_{n}\) exhibits full escape of mass if \(\mu_{Ax_{n}}\to 0\). **Corollary 1.4** (Corollary 1.2 in [20]).: _The lattices satisfying the conclusion of Theorem 1.3 must satisfy \(\mu_{Ax_{n}}\to 0\)._ In this paper, we shall prove a positive characteristic analogue of Theorem 1.3, as well as of Corollary 1.4. This will lead to a positive characteristic analogue of Theorem 1.2, which will show that the conjectured upper bound of the Minkowski spectrum in positive characteristic is not unique, contrary to Conjecture 1.1(2). ### The Positive Characteristic Setting We first introduce the positive characteristic setting. Let \(d\geq 2\), let \(p\) be a prime, let \(q\) be a power of \(p\), and let \(\mathcal{R}=\mathbb{F}_{q}[x]\) be the ring of polynomials over \(\mathbb{F}_{q}\). Let \(\mathcal{K}=\mathbb{F}_{q}(x)\) be the field of rational functions over \(\mathbb{F}_{q}\). We define an absolute value on \(\mathcal{R}\) by \(|f|=q^{\deg(f)}\) and extend it to an absolute value on \(\mathcal{K}\) by \(\left|\frac{f}{g}\right|=q^{\deg(f)-\deg(g)}\). Then the topological completion of \(\mathcal{K}\) with respect to the metric \(d(f,g)=|f-g|\) is the field of Laurent series \(\tilde{\mathcal{K}}\) defined by \[\tilde{\mathcal{K}}=\mathbb{F}_{q}\left(\left(x^{-1}\right)\right)=\left\{\sum_{n=-N}^{\infty}a_{n}x^{-n}:a_{n}\in\mathbb{F}_{q},N\in\mathbb{Z}\right\}\] Let \(\mathcal{O}\) be the maximal compact subring of \(\tilde{\mathcal{K}}\), that is \[\mathcal{O}=\mathbb{F}_{q}\left[\left[x^{-1}\right]\right]=\left\{f\in\tilde{\mathcal{K}}:|f|\leq 1\right\}\] Denote by \(\mathbf{U}\) the group of units, that is \[\mathbf{U}=\left\{f\in\tilde{\mathcal{K}}:|f|=1\right\}=\left\{\sum_{n=0}^{\infty}a_{n}x^{-n}:a_{n}\in\mathbb{F}_{q},a_{0}\in\mathbb{F}_{q}^{*}\right\}=\mathcal{O}^{*}\] We can view \(\tilde{\mathcal{K}}^{*}\) as the direct product \(\tilde{\mathcal{K}}^{*}\cong\mathbb{Z}\times\mathbf{U}\) in the following way: \[f\mapsto\left(\log_{q}|f|,\frac{f}{x^{\log_{q}|f|}}\right)\] Define the functions \(\rho(f)=\log_{q}|f|\) and \(\pi(f)=\frac{f}{x^{\log_{q}|f|}}\). We often abuse notation and write \(\rho(\mathbf{v})=(\rho(v_{1}),\ldots\rho(v_{d}))\) and similarly \(\pi(\mathbf{v})=(\pi(v_{1}),\ldots\pi(v_{d}))\) for vectors \(\mathbf{v}\in\tilde{\mathcal{K}}^{d}\). Similarly, for \(g\in\operatorname{GL}_{d}(\tilde{\mathcal{K}})\) we define \((\rho(g))_{ij}=\rho(g_{ij})\) and \((\pi(g))_{ij}=\pi(g_{ij})\). Let \(G=\operatorname{GL}_{d}(\tilde{\mathcal{K}})\) be the group of invertible \(d\times d\) matrices over \(\tilde{\mathcal{K}}\) and let \[[G]=\operatorname{PGL}_{d}(\tilde{\mathcal{K}})\cong\operatorname{GL}_{d}(\tilde{\mathcal{K}})/\tilde{\mathcal{K}}^{*}I\cong\operatorname{GL}_{d}(\tilde{\mathcal{K}})/\tilde{\mathcal{K}}^{*}\] be the group of invertible \(d\times d\) matrices over \(\tilde{\mathcal{K}}\) up to homothety.
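For readers who like to experiment, the absolute value and the \(\rho\), \(\pi\) decomposition can be sketched computationally; the dictionary representation of truncated Laurent series below is our own illustrative choice, with \(q\) prime for simplicity:

```python
q = 2  # q prime for simplicity; coefficients live in F_q = {0, ..., q-1}

def rho(f: dict) -> int:
    """rho(f) = log_q |f|: the largest exponent of x carrying a nonzero coefficient."""
    return max(e for e, c in f.items() if c % q)

def abs_val(f: dict) -> int:
    """|f| = q^{rho(f)}."""
    return q ** rho(f)

def pi(f: dict) -> dict:
    """pi(f) = f / x^{rho(f)}: the unit part, whose leading exponent is 0."""
    r = rho(f)
    return {e - r: c for e, c in f.items() if c % q}

f = {3: 1, 0: 1, -2: 1}   # f = x^3 + 1 + x^{-2}, a truncated element of F_2((x^{-1}))
print(abs_val(f))         # 8 = q^3
print(pi(f))              # {0: 1, -3: 1, -5: 1}, i.e. 1 + x^{-3} + x^{-5} in U
```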
Let \(\Pi:G\to[G]\) be the quotient map. Denote

\[[g]=\Pi(g)=\{cg:c\in\tilde{\mathcal{K}}^{*}\}\]

Since \(G\) is a topological group, \([G]\) inherits the quotient topology from \(G\). Let \(\Gamma=\operatorname{GL}_{d}(\mathcal{R})<G\) be the group of invertible \(d\times d\) matrices with entries in \(\mathcal{R}\) and let \([\Gamma]\) be its image under \(\Pi\). Since \(\Gamma\) is the stabilizer of \(\mathcal{R}^{d}\) in \(G\), \([\Gamma]\) is the stabilizer of \(\left[\mathcal{R}^{d}\right]\) in \([G]\), and thus \([\Gamma]\cong\operatorname{GL}_{d}(\mathcal{R})/\mathbb{F}_{q}^{*}=\operatorname{PGL}_{d}(\mathcal{R})\). Let \(\mathcal{L}_{d}=G/\Gamma\) and let \([\mathcal{L}_{d}]=[G]/[\Gamma]\). Since \([G]\) is a topological group and \([\Gamma]\) is a lattice in \([G]\) (see sections 2 and 3 of [21]), \([\mathcal{L}_{d}]\) inherits the quotient topology from \([G]\). Furthermore, \([\mathcal{L}_{d}]\) is identified with the space of lattices in \(\tilde{\mathcal{K}}^{d}\) up to homothety via the identification

\[[g][\Gamma]\mapsto[g]\left[\mathcal{R}^{d}\right]\]

The determinant map \(\det:G\to\tilde{\mathcal{K}}^{*}\) descends to a determinant map

\[[\det]:[G]\to\tilde{\mathcal{K}}^{*}/(\tilde{\mathcal{K}}^{*})^{d}\]

through the quotient map \(\Pi\). Since \(\tilde{\mathcal{K}}^{*}\cong\mathbb{Z}\times\mathbf{U}\), we have

\[\tilde{\mathcal{K}}^{*}/(\tilde{\mathcal{K}}^{*})^{d}\cong(\mathbb{Z}/d\mathbb{Z})\times(\mathbf{U}/\mathbf{U}^{d})\]

Therefore, the image of \(\left|[\det]\right|\) is \(q^{\mathbb{Z}}/q^{d\mathbb{Z}}\). Thus, the set \(\{1,q,q^{2},\ldots q^{d-1}\}\) is a set of representatives for

\[\left\{\left|[\det]([g])\right|:[g]\in[G]\right\}\]

Let \([A]\) be the group of diagonal matrices in \([G]\) and let \(A_{1}<G\) be the group of diagonal matrices \(a\) with \(|\det(a)|=1\). Let \([A_{1}]\) be the group of diagonal matrices \([a]\in[A]\) which have a representative \(a^{\prime}\in[a]\) with \(|\det(a^{\prime})|=1\). We identify \([A]\) with \(A\), the group of diagonal matrices whose determinant has absolute value lying in the set \(\{1,q,q^{2},\ldots q^{d-1}\}\), by choosing a representative of each homothety class with the fitting determinant. For \(j=0,1,\ldots d-1\), we say that a lattice \(\mathfrak{x}\) has covolume \(q^{j}\) if there exists a representative of \(\mathfrak{x}\) of the form \(g\Gamma\) with \(|\det(g)|=q^{j}\). We thus view \([\mathcal{L}_{d}]\) as \(d\) copies of the space of lattices of fixed covolume, one for each covolume \(q^{j}\), \(j=0,1,\ldots d-1\).

**Definition 1.5**.: Given a lattice \(\mathfrak{x}=[g][\Gamma]\), we define the length of the shortest non-zero vector in \(\mathfrak{x}\) as

\[\ell(\mathfrak{x})=\frac{1}{|\det(g)|^{\frac{1}{d}}}\min\left\{\|\mathbf{v}\|:\mathbf{v}\in g\mathcal{R}^{d}\setminus\{0\}\right\} \tag{1.2}\]

where \(\|(v_{1},\ldots v_{d})^{t}\|=\max_{i}|v_{i}|\).

In \(\mathcal{L}_{d}\), Mahler's compactness criterion gives a necessary and sufficient condition for compactness (see [10] for the real case). Since \([\mathcal{L}_{d}]\) inherits the function \(\ell:[\mathcal{L}_{d}]\to q^{\frac{1}{d}\mathbb{Z}}\) from \(\mathcal{L}_{d}\), Mahler's compactness criterion also holds in \([\mathcal{L}_{d}]\) (see [11] for a version of Mahler's compactness criterion for general \(S\)-adic fields).
**Theorem 1.6** (Mahler's Compactness Criterion).: _A set of lattices \(Y\subseteq[\mathcal{L}_{d}]\) is compact if and only if there exists \(\varepsilon>0\) such that \(\inf_{\mathfrak{x}\in Y}\ell(\mathfrak{x})>\varepsilon\)._

_Remark 1.7_.: In the positive characteristic setting, we have to take lattices up to homothety instead of unimodular lattices, since there is no convenient normalization of lattices over \(\tilde{\mathcal{K}}\). In \(\mathbb{R}^{d}\), we can make any lattice \(g\mathbb{Z}^{d}\subseteq\mathbb{R}^{d}\) unimodular by normalizing by \(|\det(g)|^{1/d}\). On the other hand, if \(\mathfrak{x}=[g][\Gamma]\in[\mathcal{L}_{d}]\) is a lattice, then \(\det(g)\) may not have a \(d\)-th root in \(\tilde{\mathcal{K}}\). For instance, if \(\det(g)=x\), then \(x^{1/d}\notin\tilde{\mathcal{K}}\). Therefore, it is more natural to work with lattices up to homothety.

### Main Results

Fix an integer \(d\geq 2\) and a prime power \(q\). We first define Minkowski's function in positive characteristic. Define the function \(N:\tilde{\mathcal{K}}^{d}\to\mathbb{R}^{+}\) by

\[N(\mathbf{v})=\prod_{i=1}^{d}|v_{i}|\]

We define \([\mathcal{G}_{d}]\) to be the space of translated lattices, that is

\[[\mathcal{G}_{d}]=\left\{\mathfrak{x}+\mathbf{v}:\mathfrak{x}\in[\mathcal{L}_{d}]\,,\mathbf{v}\in\tilde{\mathcal{K}}^{d}\right\}\]

We identify \([\mathcal{G}_{d}]\) with the space

\[\left\{g\mathcal{R}^{d}+\mathbf{v}:g\in G,|\det(g)|\in\{1,q,\ldots q^{d-1}\},\mathbf{v}\in\tilde{\mathcal{K}}^{d}\right\}\]

We define the projection \(\pi:[\mathcal{G}_{d}]\to[\mathcal{L}_{d}]\) by \(\mathfrak{x}+\mathbf{v}\mapsto\mathfrak{x}\), and we identify the fiber \(\pi^{-1}(\mathfrak{x})\) with the torus \(\tilde{\mathcal{K}}^{d}/\mathfrak{x}\). Given \(y=g\mathcal{R}^{d}+\mathbf{v}\in[\mathcal{G}_{d}]\), we define the product set of \(y\) as

\[P(y)=\{N(\mathbf{w}):\mathbf{w}\in y\}=\left\{N(\mathbf{u}+\mathbf{v}):\mathbf{u}\in g\mathcal{R}^{d}\right\}\]

and we define \(N(y)=\inf P(y)\). Given \(\mathfrak{x}\in[\mathcal{L}_{d}]\), we define

\[\mu(\mathfrak{x})=\mathrm{CovRad}_{N}(\mathfrak{x})=\frac{1}{|\det(g)|}\sup_{\mathbf{v}\in\tilde{\mathcal{K}}^{d}}\inf_{\mathbf{u}\in g\mathcal{R}^{d}}N(\mathbf{v}-\mathbf{u})=\frac{1}{|\det(g)|}\sup_{y\in\pi^{-1}(\mathfrak{x})}N(y)\]

where \(g\mathcal{R}^{d}\) is a representative of the homothety class of \(\mathfrak{x}\). We define the Minkowski spectrum by

\[\mathcal{S}_{d}=\{\mu(\mathfrak{x}):\mathfrak{x}\in[\mathcal{L}_{d}]\}\]

It is easy to see that \(\mu\) is \([A]\) invariant and that \(\mu\left(\left[\mathcal{R}^{d}\right]\right)=q^{-d}\). This provides us with a guess for the upper bound of \(\mu\).

**Conjecture 1.8**.: _For every \(\mathfrak{x}\in[\mathcal{L}_{d}]\), \(\mu(\mathfrak{x})\leq q^{-d}=\mu\left(\left[\mathcal{R}^{d}\right]\right)\)._

One natural question pertains to the uniqueness of the upper bound. That is, if \(\mu(\mathfrak{x})=q^{-d}\), is it true that \(\mathfrak{x}\in[A]\left[\mathcal{R}^{d}\right]\)? In this paper we shall show that, in contrast to the real case, \(q^{-d}\), the conjectured upper bound of \(\mu\), is attained at lattices outside \([A]\left[\mathcal{R}^{d}\right]\).
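Since the value \(q^{-d}\) plays a central role in what follows, we include, for the reader's convenience, a short verification that \(\mu\left(\left[\mathcal{R}^{d}\right]\right)=q^{-d}\) (this computation is standard and is only sketched here). Given \(\mathbf{v}\in\tilde{\mathcal{K}}^{d}\), let \(u_{i}\in\mathcal{R}\) be the polynomial part of \(v_{i}\), so that \(|v_{i}-u_{i}|\leq q^{-1}\) for each \(i\), and hence

\[\inf_{\mathbf{u}\in\mathcal{R}^{d}}N(\mathbf{v}-\mathbf{u})\leq\prod_{i=1}^{d}|v_{i}-u_{i}|\leq q^{-d}\]

so \(\mu\left(\left[\mathcal{R}^{d}\right]\right)\leq q^{-d}\). Conversely, take \(\mathbf{v}=(x^{-1},\ldots,x^{-1})\). For any \(\mathbf{u}\in\mathcal{R}^{d}\), each factor \(|x^{-1}-u_{i}|\) equals \(q^{-1}\) if \(u_{i}=0\) and is at least \(1\) otherwise, so \(N(\mathbf{v}-\mathbf{u})\geq q^{-d}\), with equality at \(\mathbf{u}=0\). Thus \(\mu\left(\left[\mathcal{R}^{d}\right]\right)=q^{-d}\).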
**Theorem 1.9**.: _There exist infinitely many compact \([A]\) orbits \([A]\mathfrak{x}\) such that \(\mu(\mathfrak{x})=q^{-d}\)._

In order to prove Theorem 1.9, we shall prove a positive characteristic analogue of Theorem 1.2 and use the discreteness of the absolute value around non-zero points, as well as the fact that the product sets \(P(y)\) satisfy the following inheritance lemma (see [21] for the real analogue).

**Lemma 1.10** (Inheritance).: _If \(y,y_{0}\in[\mathcal{G}_{d}]\) are such that \(y_{0}\in\overline{Ay}\), then \(\overline{P(y_{0})}\subseteq\overline{P(y)}\)._

_Remark 1.11_.: A consequence of Lemma 1.10 is the upper semicontinuity of \(\mu\), that is, if \(\mathfrak{x}_{n}\to\mathfrak{x}\) in \([\mathcal{L}_{d}]\), then \(\limsup\mu(\mathfrak{x}_{n})\leq\mu(\mathfrak{x})\). Moreover, if \(\mathfrak{x}_{0}\in\overline{A\mathfrak{x}}\), then \(\mu(\mathfrak{x}_{0})\geq\mu(\mathfrak{x})\). Ergodicity of the \(A\) action on \([\mathcal{L}_{d}]\) with respect to the Haar measure implies that \(\mu\) is constant almost everywhere. Furthermore, upper semicontinuity of \(\mu\) implies that the generic value of \(\mu\) is its minimal value.

In order to prove Theorem 1.9, we shall prove a positive characteristic analogue of Theorem 1.3.

**Theorem 1.12**.: _Let \(d\geq 2\). Then, there exists a sequence of lattices \(\mathfrak{x}_{k}\in[\mathcal{L}_{d}]\) such that_

1. \([A]\mathfrak{x}_{k}\) _is compact for each_ \(k\)_, and_
2. _Any limit point of the form_ \(\mathfrak{x}=\lim_{k\to\infty}a_{k}\mathfrak{x}_{k}\) _with_ \(a_{k}\in[A]\) _satisfies_ \(\mathfrak{x}\in[A]\left[\mathcal{R}^{d}\right]\)_._

From Theorem 1.12, we can obtain the following corollary, which pertains to escape of mass. This corollary can be viewed as an analogue of Corollary 1.2 in [21].

**Corollary 1.13**.: _Let \([A]\mathfrak{x}_{k}\) be the sequence of compact orbits satisfying the conclusion of Theorem 1.12. Let \(\mu_{[A]\mathfrak{x}_{k}}\) be the unique \([A]\)-invariant probability measure supported on \([A]\mathfrak{x}_{k}\). Then \(\mu_{[A]\mathfrak{x}_{k}}\) converges to the zero measure._

Proof of Corollary 1.13.: If \(\mu\) is an accumulation point of \(\mu_{[A]\mathfrak{x}_{k}}\), then by Theorem 1.12, \(\mu\) must be supported on \([A]\left[\mathcal{R}^{d}\right]\). By Poincaré recurrence, no \([A]\)-invariant probability measure can be supported on the divergent orbit \([A]\left[\mathcal{R}^{d}\right]\), and thus, \(\mu=0\). 

In order to prove Theorem 1.12, we shall provide precise bounds on the rate of convergence and the rate of escape of mass. For \(\delta>0\) define the compact sets

\[[\mathcal{L}_{d}]^{\geq\delta}=\left\{\mathfrak{x}\in[\mathcal{L}_{d}]:\ell(\mathfrak{x})\geq\delta\right\}\]

In §2.2, we show that a certain family of compact orbits \(\mathcal{F}\) satisfies the following conditions:

1. \(\mu_{[A]\mathfrak{x}}\left([\mathcal{L}_{d}]^{\geq\delta}\right)\ll f([A]\mathfrak{x})\)
2. \(\forall\mathfrak{y}\in[A]\mathfrak{x}\cap[\mathcal{L}_{d}]^{\geq\delta}\), \(d\left(\mathfrak{y},[A]\left[\mathcal{R}^{d}\right]\right)\ll g([A]\mathfrak{x})\)

where \(f,g:\mathcal{F}\to\mathbb{R}\) are explicit functions satisfying \(f([A]\mathfrak{x}),g([A]\mathfrak{x})\to 0\) as we vary \([A]\mathfrak{x}\in\mathcal{F}\). From (1) and Theorem 1.6, we deduce that the orbits in \(\mathcal{F}\) must satisfy the conclusion of Corollary 1.13. Furthermore, the lattices satisfying (2) must satisfy the conclusion of Theorem 1.12. All of these results are stated and proved in an effective manner, as done in [21].
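The following toy computation (ours, included only to illustrate the definitions; it plays no role in the construction below) shows the mechanism by which orbits leave the compact sets \([\mathcal{L}_{d}]^{\geq\delta}\). For \(d=2\), consider \(\mathbf{a}_{k}=\operatorname{diag}(x^{k},x^{-k})\in A_{1}\). Since \(x^{-k}\mathbf{e}_{2}\in\mathbf{a}_{k}\mathcal{R}^{2}\) and \(|\det(\mathbf{a}_{k})|=1\),

\[\ell\left(\mathbf{a}_{k}\left[\mathcal{R}^{2}\right]\right)\leq\|x^{-k}\mathbf{e}_{2}\|=q^{-k}\xrightarrow[k\to\infty]{}0\]

so for any fixed \(\delta>0\), \(\mathbf{a}_{k}\left[\mathcal{R}^{2}\right]\notin[\mathcal{L}_{2}]^{\geq\delta}\) for all large \(k\). By Theorem 1.6, the orbit \([A]\left[\mathcal{R}^{2}\right]\) is therefore not compact; conditions (1) and (2) above quantify how close a compact orbit can come to this degenerate behaviour.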
### Structure of this Article

In §2, we shall prove Theorem 1.12. To do so, in §2.2, we provide geometric definitions of \(A\) orbits and prove that orbits with this geometry exhibit complete escape of mass. Then, in §2.3, we shall construct lattices satisfying the properties which we defined in §2.2. In §3, we shall show that a specific subsequence of the lattices constructed in §2.3 satisfies the conclusion of Theorem 1.9.

### Acknowledgements

I would like to thank Uri Shapira for introducing this problem to me and for carefully reading drafts of this paper. Without him, this article would not have been possible.

## 2. Escape of Mass

In this section we develop the necessary concepts that will allow us to establish the topological and distributional statements claimed above for the sequences of compact \(A\) orbits we construct in §2.3. We proceed analogously to [21].

### Simplex Sets

We first introduce the notion of simplex sets, which will be useful in what follows. Let \(\|\cdot\|\) denote the supremum norm on \(\tilde{\mathcal{K}}^{d}\), i.e. \(\|\mathbf{v}\|=\max_{i}|v_{i}|\).

**Definition 2.1**.: A simplex set \(\Phi\) in \(A_{1}\) is a set of \(d\) matrices \(\mathbf{t}_{1},\ldots\mathbf{t}_{d}\in A_{1}\) such that

1. The group generated by \(\Phi=\{\mathbf{t}_{1},\ldots\mathbf{t}_{d}\}\) is a lattice in \(A_{1}\),
2. \(\prod_{i=1}^{d}\mathbf{t}_{i}=I\)

The associated lattice is \(\Gamma_{\Phi}:=\langle\Phi\rangle\). Let \(n=d-1\). Define

\[\mathbb{R}_{0}^{d}:=\left\{(v_{1},\ldots v_{d})\in\mathbb{R}^{d}:\sum_{i=1}^{d}v_{i}=0\right\}\cong\mathbb{R}^{n}\]

and \(\mathbb{Z}_{0}^{d}=\mathbb{R}_{0}^{d}\cap\mathbb{Z}^{d}\). Then, \(\rho(A_{1})=\mathbb{Z}_{0}^{d}\). For \(\mathbf{v}\in\mathbb{R}_{0}^{d}\) denote \(\left\lceil\mathbf{v}\right\rceil_{\mathbb{R}_{0}^{d}}=\max_{i}v_{i}\). For \(\mathbf{a}\in A_{1}\), define \(\left\lceil\mathbf{a}\right\rceil=q^{\left\lceil\rho(\mathbf{a})\right\rceil_{\mathbb{R}_{0}^{d}}}\). Define \(\xi_{\Phi}:=\max_{\mathbf{t}\in\Phi}\left\lceil\rho(\mathbf{t})\right\rceil_{\mathbb{R}_{0}^{d}}\). Let \(S_{\Phi}:=\operatorname{hull}(\rho(\Phi))\) be the convex hull of \(\rho(\Phi)\) in \(\mathbb{R}_{0}^{d}\) and let \(\mathbf{S}_{\Phi}:=\rho^{-1}\left(\frac{n}{2}S_{\Phi}\cap\mathbb{Z}_{0}^{d}\right)\subseteq A_{1}\). Let \(S_{\Phi}^{o}\) be the interior of \(S_{\Phi}\) in \(\mathbb{R}_{0}^{d}\) and let \(\mathbf{S}_{\Phi}^{o}:=\rho^{-1}\left(\frac{n}{2}S_{\Phi}^{o}\cap\mathbb{Z}_{0}^{d}\right)\). For convenience we often write matrices in \(A_{1}\) as vectors. Define \(A_{1}(\mathbf{U}):=A_{1}\cap\operatorname{GL}_{d}(\mathbf{U})\). Let \(\mathcal{P}_{n}\) be the group of permutations on \(n\) elements. Given a simplex set \(\Phi=\{\mathbf{t}_{1},\ldots\mathbf{t}_{d}\}\), define

\[\mathbf{w}:=\frac{1}{d}\sum_{l=1}^{d}(l-1)\rho(\mathbf{t}_{l})\in\mathbb{R}_{0}^{d} \tag{2.1}\]

For \(\tau\in\mathcal{P}_{n}\), let \(\mathbf{w}_{\tau}\in\mathbb{R}_{0}^{d}\) be the vector obtained by permuting the coordinates of \(\mathbf{w}\) by \(\tau\). Let

\[W_{\Phi}:=\{\mathbf{w}_{\tau}:\tau\in\mathcal{P}_{n}\}\subseteq\mathbb{R}_{0}^{d}\]

The following covering claim from [21] will be an essential part of our proofs.

**Proposition 2.2** (Proposition 3.8 in [21]).: _Let \(\Phi\) be a simplex set in \(A_{1}\). Then,_

1. \(\mathbb{R}_{0}^{d}=\frac{n}{2}S_{\Phi}+\rho(\Gamma_{\Phi})\)
2. \(\mathbb{R}_{0}^{d}\setminus\left(\frac{n}{2}S_{\Phi}^{o}+\rho(\Gamma_{\Phi})\right)\subseteq W_{\Phi}+\rho(\Gamma_{\Phi})\)
3. _There exists a universal constant_ \(c>0\) _such that for each_ \(\gamma\in(0,1)\)_,_ \[\mathbb{R}_{0}^{d}\setminus\left((1-\gamma)\frac{n}{2}S_{\Phi}+\rho(\Gamma_{\Phi})\right)\subseteq B_{c\gamma\xi_{\Phi}}(W_{\Phi})+\rho(\Gamma_{\Phi})\] _where_ \(B_{r}(0)\) _is the ball of radius_ \(r\) _around_ \(0\) _in_ \(\mathbb{R}_{0}^{d}\) _and_ \[B_{c\gamma\xi_{\Phi}}(W_{\Phi}):=\left\{\mathbf{v}\in\mathbb{R}_{0}^{d}:\inf_{\mathbf{u}\in W_{\Phi}}\left\lceil\mathbf{v}-\mathbf{u}\right\rceil_{\mathbb{R}_{0}^{d}}\leq c\gamma\xi_{\Phi}\right\}\]

By intersecting Proposition 2.2(1) and (3) with \(\mathbb{Z}_{0}^{d}\) and then pulling these claims back with \(\rho\), we obtain the following covering lemma in \(A_{1}\).

**Lemma 2.3**.: _Let \(\Phi\) be a simplex set in \(A_{1}\). Then,_

1. \(\Gamma_{\Phi}\mathbf{S}_{\Phi}=A_{1}\)
2. _For_ \(0<\gamma<1\)_, define_ \[\mathbf{S}_{\Phi}^{(\gamma)}:=\rho^{-1}\left((1-\gamma)\frac{n}{2}S_{\Phi}\cap\mathbb{Z}_{0}^{d}\right)\] _Then, there exists a constant_ \(c>0\) _such that for any_ \(0<\gamma<1\)_,_ \[A_{1}\setminus\left(\mathbf{S}_{\Phi}^{(\gamma)}\cdot\Gamma_{\Phi}\right)\subseteq\rho^{-1}\left(B_{c\gamma\xi_{\Phi}}(W_{\Phi})\right)\Gamma_{\Phi}\]

### Escape of Mass and Geometry of the Space of Lattices

In this section, we shall connect the covering lemmas obtained in §2.1 with the structure of the \(A_{1}\) orbit. This will provide conditions ensuring that a sequence of lattices exhibits escape of mass. Let

\[\Omega=\left\{\mathfrak{x}\in[\mathcal{L}_{d}]:A_{1}\mathfrak{x}\text{ is compact}\right\}\]

For \(\mathfrak{x}\in\Omega\), we say that a simplex set \(\Phi\) is a simplex set for \(\mathfrak{x}\) if \(\Gamma_{\Phi}:=\langle\Phi\rangle\subseteq\operatorname{stab}_{A_{1}}(\mathfrak{x})\). For \(\mathfrak{x}\in\Omega\), denote \(\Delta_{\mathfrak{x}}:=\operatorname{stab}_{A_{1}}(\mathfrak{x})\). We shall extract information about the structure of a compact \(A_{1}\) orbit, \(A_{1}\mathfrak{x}\), given that the shortest vector of \(\mathfrak{x}\) is very short and that \(\mathfrak{x}\) has a simplex set of a nice form. We will need the fact that every \(A_{1}\) orbit intersects a fixed compact set \(\mathcal{L}_{d}^{\geq q^{-d}}\).

**Theorem 2.4**.: _There exists a universal constant \(\delta_{0}>0\) such that for any \(\mathfrak{x}\in\mathcal{L}_{d}\), \(A\mathfrak{x}\cap\mathcal{L}_{d}^{\geq\delta_{0}}\neq\emptyset\). Furthermore, \(\delta_{0}\) can be taken to be \(\geq q^{-d}\)._

#### 2.2.1. Proof of Theorem 2.4

We shall first prove Theorem 2.4. Our proof is very similar to Margulis' proof of the analogous result in \(\mathbb{R}^{d}\), which can be found in the appendix of [14]. We include the proof for completeness. In order to prove Theorem 2.4, we shall need an analogue of Minkowski's Second Theorem (see [11] for example).

**Theorem 2.5**.: _Let \(\lambda_{i}(\mathfrak{x})\) be the successive minima of \(\mathfrak{x}\in[\mathcal{L}_{d}]\), that is_

\[\lambda_{i}(\mathfrak{x})=\min\{r>0:\text{there exist $i$ linearly independent vectors in $\mathfrak{x}$ of norm}\leq r\}\]

_Then,_

\[\operatorname{covol}(\mathfrak{x})=\prod_{i=1}^{d}\lambda_{i}(\mathfrak{x})\]

We shall now use Theorem 2.5 to prove the following analogue of Proposition A.1 in [14].

**Proposition 2.6**.: _For \(r>0\), denote \(B_{r}=B(0,r)\)._
Then, there exists a finite set \(F\subseteq A_{1}\) such that for every \(g\in G\) with \(|\det(g)|\in\{1,q,\ldots q^{n}\}\), there exists \(\mathbf{f}\in F\) such that_ \[\forall 0\neq\mathbf{w}\in g\mathcal{R}^{d}\cap B_{q^{-1}},\|\mathbf{ f}\mathbf{w}\|\geq q\|\mathbf{w}\| \tag{2.2}\] Proof.: By Theorem 2.5, for every \(g\in G\) with \(|\det(g)|\in\{1,q,\ldots q^{n}\}\), \(\operatorname{span}\{g\mathcal{R}^{d}\cap B_{q^{-1}}\}\) is a proper subspace. Thus, it suffices to show that there exists a finite set \(F\subseteq A_{1}\) such that for any proper subspace \(V\subseteq\tilde{\mathcal{K}}^{d}\), there exists some \(\mathbf{f}\in F\) such that for every \(0\neq\mathbf{v}\in V\), \(\|\mathbf{f}\mathbf{v}\|\geq q\|\mathbf{v}\|\). Since \(V\) is a proper subspace, there exists some \(i\) such that \(\operatorname{span}\{\mathbf{e}_{i}\}\cap V=\{0\}\). We want to choose \(i\) wisely so that for every \(\mathbf{v}\in V\), \(\|\mathbf{b}_{i}\mathbf{v}\|\geq q\|\mathbf{v}\|\), where \((\mathbf{b}_{i})_{jj}=\begin{cases}x^{-n}&i=j\\ x&\text{else}\end{cases}\in A_{1}\). For \(\mathbf{v}=(v_{1}\ldots v_{d})\in V\), define \[M_{\mathbf{v}}=\left\{l=1\ldots d:|v_{l}|=\|\mathbf{v}\|\right\} \tag{2.3}\] If there exists some \(l=1\ldots d\), such that for each \(0\neq\mathbf{v}\in V\), \(M_{\mathbf{v}}\neq\{l\}\), then, for each \(\mathbf{v}\), there exists some \(j_{\mathbf{v}}\neq l\), such that \(\|\mathbf{v}\|=|v_{j_{\mathbf{v}}}|\). Hence, \[\|\mathbf{b}_{l}\mathbf{v}\|=q\,|v_{j_{\mathbf{v}}}|=q\|\mathbf{v}\| \tag{2.4}\] Thus, it suffices to show that there exists some \(l\) such that \(M_{\mathbf{v}}\neq\{l\}\) for every \(0\neq\mathbf{v}\in V\). Assume on the contrary that for every \(l=1\ldots d\), there exists some \(\mathbf{v}^{(l)}\in V\) such that \(M_{\mathbf{v}^{(l)}}=\{l\}\). Then for every \(l=1\ldots d\) and for every \(j\neq l\), \(\left|v_{j}^{(l)}\right|<\left|v_{l}^{(l)}\right|=\left\|\mathbf{v}^{(l)}\right\|\). Hence if \(\sigma\neq I\) is some permutation in \(\mathcal{P}_{d}\), then, \[\left|\prod_{l=1}^{d}v_{\sigma(l)}^{(l)}\right|<\left|\prod_{l=1}^{d}v_{l}^{( l)}\right| \tag{2.5}\] Thus, by the ultrametric inequality, the matrix whose columns are \(\mathbf{v}^{(1)}\ldots\mathbf{v}^{(d)}\) has determinant of absolute value \[\left|\prod_{l=1}^{d}v_{l}^{(l)}+\sum_{I\neq\sigma\in\mathcal{P}_{d}}(-1)^{ \operatorname{sgn}(\sigma)}\prod_{l=1}^{d}v_{\sigma(l)}^{(l)}\right|=\prod_{l =1}^{d}\left|v_{l}^{(l)}\right|=\prod_{l=1}^{d}\left\|\mathbf{v}^{(l)}\right\|\neq 0 \tag{2.6}\] Therefore, \(\mathbf{v}^{(1)}\ldots\mathbf{v}^{(d)}\in V\) are linearly independent, which contradicts the fact that \(V\) is a proper subspace of \(\tilde{\mathcal{K}}^{d}\). Thus, there exists some \(l\) such that for every \(\mathbf{v}\in V\), \(M_{\mathbf{v}}\neq\{l\}\). Hence, the set \(F=\{\mathbf{b}_{1},\ldots\mathbf{b}_{d}\}\subseteq A_{1}\) satisfies the conditions of Proposition 2.6. Proof of Theorem 2.4.: Let \(\mathfrak{x}\in\mathcal{L}_{d}\) and let \(\mathbf{v}\in\mathfrak{x}\) satisfy \(\|\mathbf{v}\|=\min_{0\neq\mathbf{w}\in\mathfrak{x}}\|\mathbf{w}\|\). We shall show that there exists some \(\mathbf{a}\in A_{1}\) such that \(\mathbf{a}\mathfrak{x}\cap B_{q^{-d}}=\{0\}\). The radius \(q^{-d}\) was chosen since \(q^{-d}=q^{-1}\min_{\mathbf{a}\in F}\min_{i=1\ldots d}|(\mathbf{a})_{ii}|\). Thus, for every \(\mathbf{u}\notin B_{q^{-1}}\) and for every \(\mathbf{a}\in F\), \(\mathbf{au}\notin B_{q^{-d}}\). 
If \(\mathfrak{x}\cap B_{q^{-d}}=\{0\}\), then \(\mathfrak{x}\in A_{1}\mathfrak{x}\cap\mathcal{L}_{d}^{\geq q^{-d}}\neq\emptyset\). Now assume that \(\mathfrak{x}\cap B_{q^{-d}}\neq\{0\}\). Since \(\mathfrak{x}\cap B_{q^{-1}}\) spans a proper subspace of \(\tilde{\mathcal{K}}^{d}\), Proposition 2.6 implies that there exists some \(\mathbf{a}_{1}\in F\) such that \(\|\mathbf{a}_{1}\mathbf{u}\|\geq q\|\mathbf{u}\|\) for every \(\mathbf{u}\in\mathfrak{x}\cap B_{q^{-1}}\). We shall now use Proposition 2.6 to define a sequence \(\mathbf{a}_{k}\in F\) satisfying \(\|\mathbf{a}_{k}\mathbf{v}\|\geq q\|\mathbf{v}\|\) for every \(\mathbf{v}\in\mathbf{a}_{k-1}\ldots\mathbf{a}_{1}\mathfrak{x}\cap B_{q^{-1}}\). Assume that we have already chosen \(\mathbf{a}_{1},\ldots\mathbf{a}_{k}\) and denote \(\tilde{\mathbf{a}}_{k}=\mathbf{a}_{k}\mathbf{a}_{k-1}\ldots\mathbf{a}_{1}\). In addition, assume that for each \(j\leq k\), \(B_{q^{-d}}\cap\tilde{\mathbf{a}}_{j}\mathfrak{x}\neq\{0\}\), since otherwise \(\tilde{\mathbf{a}}_{j}\mathfrak{x}\in\mathcal{L}_{d}^{\geq q^{-d}}\) and we can terminate this algorithm. Use Proposition 2.6 to choose some \(\mathbf{a}_{k+1}\in F\) such that for every \(\mathbf{v}\in B_{q^{-1}}\cap\tilde{\mathbf{a}}_{k}\mathfrak{x}\), \(\|\mathbf{a}_{k+1}\mathbf{v}\|\geq q\|\mathbf{v}\|\). Let \(\mathbf{v}\in B_{q^{-d}}\cap\tilde{\mathbf{a}}_{k+1}\mathfrak{x}\). Then, there exists \(\mathbf{u}\in\tilde{\mathbf{a}}_{k}\mathfrak{x}\) such that \(\mathbf{v}=\mathbf{a}_{k+1}\mathbf{u}\). Moreover, since

\[q^{-n}\|\mathbf{u}\|\leq\|\mathbf{a}_{k+1}\mathbf{u}\|=\|\mathbf{v}\|\leq q^{-d} \tag{2.7}\]

then \(\mathbf{u}\in B_{q^{-1}}\). Hence,

\[B_{q^{-d}}\cap\tilde{\mathbf{a}}_{k+1}\mathfrak{x}\subseteq\mathbf{a}_{k+1}\left(B_{q^{-1}}\cap\tilde{\mathbf{a}}_{k}\mathfrak{x}\right) \tag{2.8}\]

Let \(\mathbf{v}\) be the shortest non-zero vector in \(\tilde{\mathbf{a}}_{k+1}\mathfrak{x}\). If \(\|\mathbf{v}\|\leq q^{-d}\), then (2.8) implies that there exists \(\mathbf{u}\in\tilde{\mathbf{a}}_{k}\mathfrak{x}\) such that

\[\ell(\tilde{\mathbf{a}}_{k+1}\mathfrak{x})=\|\mathbf{v}\|=\|\mathbf{a}_{k+1}\mathbf{u}\|\geq q\|\mathbf{u}\|\geq q\ell(\tilde{\mathbf{a}}_{k}\mathfrak{x}) \tag{2.9}\]

Therefore, this process increases the length of the shortest vector of \(\tilde{\mathbf{a}}_{k}\mathfrak{x}\), so that for \(k\) large enough, \(\ell(\tilde{\mathbf{a}}_{k+1}\mathfrak{x})\geq q^{-d}\). Hence \(A_{1}\mathfrak{x}\cap\mathcal{L}_{d}^{\geq q^{-d}}\neq\emptyset\), and in particular \(A\mathfrak{x}\cap\mathcal{L}_{d}^{\geq q^{-d}}\neq\emptyset\). 

#### 2.2.2. The Structure of the \(A\) Orbit

From now on, let \(\mathfrak{x}\in\Omega\), let \(\Phi\) be a simplex set for \(\mathfrak{x}\), and let \(\Gamma_{\Phi}\) be the associated lattice. We can now interpret the results of §2.1 in terms of the structure of the \(A_{1}\) orbit.

**Lemma 2.7**.: _Let \(\Phi\) be a simplex set for \(\mathfrak{x}\) and let \(\Gamma_{\Phi}\) be the corresponding lattice. Then,_

\[A_{1}\mathfrak{x}=\left\{\mathbf{a}\mathfrak{x}:\mathbf{a}\in\mathbf{S}_{\Phi}\right\}\]

Proof.: By Lemma 2.3, each \(\mathbf{a}\in A_{1}\) can be written as \(\mathbf{a}^{\prime}\mathbf{t}\) where \(\mathbf{t}\in\Gamma_{\Phi}\) and \(\mathbf{a}^{\prime}\in\mathbf{S}_{\Phi}\).
Thus, since \(\mathbf{t}\in\Gamma_{\Phi}\subseteq\Delta_{\mathfrak{x}}\) stabilizes \(\mathfrak{x}\),

\[\mathbf{a}\mathfrak{x}=\mathbf{a}^{\prime}\mathbf{t}\mathfrak{x}=\mathbf{a}^{\prime}\mathfrak{x}\in\left\{\mathbf{a}\mathfrak{x}:\mathbf{a}\in\mathbf{S}_{\Phi}\right\}\]

We shall now use Theorem 2.4 and Lemma 2.7 to bound the length of the shortest vector in \(\mathfrak{x}\) with respect to its simplex set.

**Lemma 2.8**.: _Let \(\mathfrak{x}\in\Omega\) and let \(\Phi=\{\mathbf{t}_{1},\ldots\mathbf{t}_{d}\}\) be a simplex set for \(\mathfrak{x}\). Then,_

\[\ell(\mathfrak{x})\gg q^{-\frac{n}{2}\xi_{\Phi}} \tag{2.10}\]

Proof.: Let \(\mathbf{v}\) be a shortest non-zero vector in \(\mathfrak{x}\). Then by Lemma 2.7 and Theorem 2.4, there exists \(\mathbf{a}\in\mathbf{S}_{\Phi}\) such that

\[q^{-d}\leq\|\mathbf{a}\mathbf{v}\|\leq\lceil\mathbf{a}\rceil\cdot\ell(\mathfrak{x}) \tag{2.11}\]

Write \(\rho(\mathbf{a})=\frac{n}{2}\sum_{i=1}^{d}\alpha_{i}\rho(\mathbf{t}_{i})\) where \(\alpha_{i}\geq 0\) and \(\sum_{i=1}^{d}\alpha_{i}=1\). Then,

\[\lceil\mathbf{a}\rceil=q^{\left\lceil\rho(\mathbf{a})\right\rceil_{\mathbb{R}_{0}^{d}}}\leq q^{\frac{n}{2}\max_{i}\left\lceil\rho(\mathbf{t}_{i})\right\rceil_{\mathbb{R}_{0}^{d}}}\leq q^{\frac{n}{2}\xi_{\Phi}} \tag{2.12}\]

Thus, by plugging (2.12) into (2.11), we obtain that \(\ell(\mathfrak{x})\gg q^{-\frac{n}{2}\xi_{\Phi}}\). 

Motivated by Lemma 2.8, we make the following definition:

**Definition 2.9**.: Let \(\mathfrak{x}\in\Omega\) and \(M>1\) and let \(\Phi\) be a simplex set for \(\mathfrak{x}\). We say that \(\Phi\) is \(M\)-tight if \(\ell(\mathfrak{x})\leq Mq^{-\frac{n}{2}\xi_{\Phi}}\). Denote by \(\Omega_{M}\) the set of lattices \(\mathfrak{x}\in\Omega\) with an \(M\)-tight simplex set \(\Phi\).

We shall now reinterpret Lemma 2.3 in terms of the structure of the \(A_{1}\) orbit.

**Proposition 2.10**.: _Let \(\mathfrak{x}\in\Omega_{M}\) be a lattice with an \(M\)-tight simplex set \(\Phi\), let \(c\) be the constant from Lemma 2.3(2), and let \(\kappa\in(0,1)\). Define:_

\[\delta=Mq^{-\frac{n}{2}|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}}} \tag{2.13}\]

\[\gamma=\frac{|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}}}{\xi_{\Phi}} \tag{2.14}\]

\[r=c|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}} \tag{2.15}\]

_Define \(\mathbf{W}_{\Phi,\kappa}:=\rho^{-1}(B_{r}(W_{\Phi})\cap\mathbb{Z}_{0}^{d})\). Then,_

1. \(\left\{\mathbf{a}\mathfrak{x}:\mathbf{a}\in\mathbf{S}_{\Phi}^{(\gamma)}\cdot\Gamma_{\Phi}\right\}\subseteq[\mathcal{L}_{d}]^{<\delta}\)
2. \(\left\{\mathbf{a}\in A_{1}:\mathbf{a}\mathfrak{x}\in[\mathcal{L}_{d}]^{\geq\delta}\right\}\subseteq\mathbf{W}_{\Phi,\kappa}\cdot\Gamma_{\Phi}\)
3. \(\mu_{A_{1}\mathfrak{x}}\left([\mathcal{L}_{d}]^{\geq\delta}\right)\ll|\Delta_{\mathfrak{x}}|^{-1+\kappa}\)

We note that \(|\Delta_{\mathfrak{x}}|\ll\xi_{\Phi}^{n}\), and therefore, \(0<\gamma\ll\xi_{\Phi}^{-1+\kappa}<1\).

_Remark 2.11_.: Proposition 2.10 implies that a sequence of lattices \(\mathfrak{x}_{k}\) in \(\Omega_{M}\) with \(|\Delta_{\mathfrak{x}_{k}}|\to\infty\) must satisfy the conclusion of Corollary 1.13.

Proof of Proposition 2.10.: Let \(\mathbf{v}\) be a shortest non-zero vector in \(\mathfrak{x}\) and let \(\mathbf{a}\in\mathbf{S}_{\Phi}^{(\gamma)}\). Then,

\[\|\mathbf{a}\mathbf{v}\|\leq\lceil\mathbf{a}\rceil\cdot\|\mathbf{v}\|\leq Mq^{-\frac{n}{2}\xi_{\Phi}}q^{\frac{n}{2}(1-\gamma)\xi_{\Phi}} \tag{2.16}\]

\[=Mq^{-\frac{n}{2}\gamma\xi_{\Phi}}=Mq^{-\frac{n}{2}|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}}}=\delta \tag{2.17}\]

which proves (1).
By (1),

\[\{\mathbf{a}\in A_{1}:\mathbf{a}\mathfrak{x}\in[\mathcal{L}_{d}]^{\geq\delta}\}\subseteq A_{1}\setminus\left(\mathbf{S}_{\Phi}^{(\gamma)}\cdot\Gamma_{\Phi}\right) \tag{2.18}\]

Thus, by Lemma 2.3(2) and (2.18),

\[\{\mathbf{a}\in A_{1}:\mathbf{a}\mathfrak{x}\in[\mathcal{L}_{d}]^{\geq\delta}\}\subseteq\rho^{-1}\left(B_{c\gamma\xi_{\Phi}}(W_{\Phi})\right)\Gamma_{\Phi}=\mathbf{W}_{\Phi,\kappa}\Gamma_{\Phi} \tag{2.19}\]

which proves (2). Thus,

\[\mu_{A_{1}\mathfrak{x}}\left([\mathcal{L}_{d}]^{\geq\delta}\right)\ll\frac{r^{n}}{|\Delta_{\mathfrak{x}}|}\ll|\Delta_{\mathfrak{x}}|^{-1+\kappa} \tag{2.20}\]

which proves (3). 

We shall make the following definition, which pertains to the structure of the \(A_{1}\) orbit during the times \(\mathbf{W}_{\Phi,\kappa}\).

**Definition 2.12**.: Let \(\varepsilon>0\), \(M,J>0\). Denote by \(\Omega_{M}(\varepsilon,J)\) the set of \(\mathfrak{x}\in\Omega_{M}\) with an \(M\)-tight simplex set \(\Phi\), such that for any \(\mathbf{w}\in\mathbf{W}_{\Phi,\kappa}\), there exist \(g\in G\) and \(\mathbf{a},\mathbf{a}^{\prime}\in A_{1}\) such that

1. \(\mathbf{w}\mathfrak{x}=\mathbf{a}g\mathbf{a}^{\prime}\mathcal{R}^{d}\)
2. \(\|g-I\|=\max_{i,j}|g_{ij}-I_{ij}|\leq Jq^{-|\Delta_{\mathfrak{x}}|^{\varepsilon}}\)
3. \(\lceil\mathbf{a}\rceil\leq q^{r}\)

We shall now show that a sequence of lattices \(\{\mathfrak{x}_{k}\}\) in \(\Omega_{M}(\varepsilon,J)\) with \(|\Delta_{\mathfrak{x}_{k}}|\to\infty\) must satisfy the conclusion of Theorem 1.12.

**Proposition 2.13**.: _Fix \(M,J>0\) and \(\varepsilon>0\), and let \(\delta\) be as in (2.13). Then for any \(\kappa<\min\{n\varepsilon,1\}\), for all but finitely many \(\mathfrak{x}\in\Omega_{M}(\varepsilon,J)\) and for each \(y\in A_{1}\mathfrak{x}\cap[\mathcal{L}_{d}]^{\geq\delta}\),_

\[d(y,A_{1}\mathcal{R}^{d})\leq Jq^{-\frac{1}{2}|\Delta_{\mathfrak{x}}|^{\varepsilon}} \tag{2.21}\]

Proof.: Let \(y\in A_{1}\mathfrak{x}\cap[\mathcal{L}_{d}]^{\geq\delta}\). Then by Proposition 2.10(2), there exists some \(\mathbf{w}\in\mathbf{W}_{\Phi,\kappa}\) such that \(y=\mathbf{w}\mathfrak{x}\). Since \(\mathfrak{x}\in\Omega_{M}(\varepsilon,J)\), there exist some \(\mathbf{a},\mathbf{a}^{\prime}\in A_{1}\) and \(g\in G\) satisfying \(\lceil\mathbf{a}\rceil\leq q^{r}\) and \(\|g-I\|\leq Jq^{-|\Delta_{\mathfrak{x}}|^{\varepsilon}}\) such that

\[y=\mathbf{w}\mathfrak{x}=\mathbf{a}g\mathbf{a}^{\prime}\mathcal{R}^{d}=\mathbf{a}g\mathbf{a}^{-1}(\mathbf{a}\mathbf{a}^{\prime}\mathcal{R}^{d}) \tag{2.22}\]

Then,

\[d_{[\mathcal{L}_{d}]}(y,A_{1}\mathcal{R}^{d})=d_{[\mathcal{L}_{d}]}\left(\mathbf{a}g\mathbf{a}^{-1}\left(\mathbf{a}\mathbf{a}^{\prime}\mathcal{R}^{d}\right),A_{1}\mathcal{R}^{d}\right) \tag{2.23}\]

\[\leq d_{G}(I,\mathbf{a}g\mathbf{a}^{-1})\leq Jq^{dr}q^{-|\Delta_{\mathfrak{x}}|^{\varepsilon}}=Jq^{dc|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}}-|\Delta_{\mathfrak{x}}|^{\varepsilon}}=Jq^{|\Delta_{\mathfrak{x}}|^{\varepsilon}\left(dc|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}-\varepsilon}-1\right)} \tag{2.24}\]

If \(\kappa<n\varepsilon\), then for \(|\Delta_{\mathfrak{x}}|\) large enough, \(dc|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}-\varepsilon}<\frac{1}{2}\). Thus,

\[d(y,A_{1}\mathcal{R}^{d})\leq Jq^{-\frac{1}{2}|\Delta_{\mathfrak{x}}|^{\varepsilon}} \tag{2.25}\]

#### 2.2.3. Generating Simplex Sets and Visit Times to \([\mathcal{L}_{d}]^{\geq\delta}\)

Given a simplex set \(\Phi\) for a lattice \(\mathfrak{x}\in\Omega\), it is desirable to determine whether \(\Phi\) generates \(\Delta_{\mathfrak{x}}\). In practice, it can be difficult to determine this.
Therefore, in this section we shall provide conditions ensuring that \(\langle\Phi\rangle=\Delta_{\mathfrak{x}}\), and we shall also show that under certain conditions the number of visit times to the compact part of the \(A_{1}\) orbit is large. Given a lattice \(\mathfrak{x}\in\Omega\), we say that \(\mathbf{t}\in A_{1}\) is a visit time to \([\mathcal{L}_{d}]^{\geq\delta}\) if \(\mathbf{t}\mathfrak{x}\in[\mathcal{L}_{d}]^{\geq\delta}\). We first define when two visit times are considered distinct.

**Definition 2.14**.: Let \(\mathbf{t}_{1},\mathbf{t}_{2}\in A_{1}\) be two visit times to \([\mathcal{L}_{d}]^{\geq\delta}\). We say that \(\mathbf{t}_{1}\) and \(\mathbf{t}_{2}\) are equivalent visit times if there exists a path

\[\mathbf{t}_{1}\mathfrak{x}=\mathbf{a}_{0}\mathfrak{x},\mathbf{a}_{1}\mathfrak{x},\ldots,\mathbf{a}_{m}\mathfrak{x}=\mathbf{t}_{2}\mathfrak{x} \tag{2.26}\]

such that

1. \(\mathbf{a}_{i}\mathfrak{x}\in[\mathcal{L}_{d}]^{\geq\delta}\) for each \(i=0,1,\ldots m\), and
2. \(d_{[\mathcal{L}_{d}]}(\mathbf{a}_{i}\mathfrak{x},\mathbf{a}_{i+1}\mathfrak{x})\leq q\)

If two visit times are not equivalent, then we say that they are distinct visit times.

**Lemma 2.15**.: _Let \(\mathfrak{x}\in\Omega\), let \(B_{1},B_{2}\subseteq A_{1}\) be two balls in \(A_{1}\) and let \(\delta>0\). Assume that_

1. \(\mathbf{t}_{1}\in B_{1}\) _and_ \(\mathbf{t}_{2}\in B_{2}\) _are equivalent visit times to_ \([\mathcal{L}_{d}]^{\geq\delta}\)_, and that_
2. _If_ \(B_{i}=B(\mathbf{a}_{i}^{\prime},r_{i})\)_, then, for any_ \(\mathbf{t}\in B(\mathbf{a}_{i}^{\prime},qr_{i})\setminus B_{i}\)_,_ \(\mathbf{t}\mathfrak{x}\in[\mathcal{L}_{d}]^{<\delta}\)_, for_ \(i=1,2\)_._

_Then, there exist \(\mathbf{s}_{i}\in B_{i}\) such that \(\mathbf{s}_{1}\mathbf{s}_{2}^{-1}\in\Delta_{\mathfrak{x}}\)._

Proof.: Since \(\mathbf{t}_{1}\) and \(\mathbf{t}_{2}\) are equivalent visit times to \([\mathcal{L}_{d}]^{\geq\delta}\), there exists a path

\[\mathbf{t}_{1}\mathfrak{x}=\mathbf{a}_{0}\mathfrak{x},\mathbf{a}_{1}\mathfrak{x},\ldots,\mathbf{a}_{m}\mathfrak{x}=\mathbf{t}_{2}\mathfrak{x}\in[\mathcal{L}_{d}]^{\geq\delta} \tag{2.27}\]

such that \(d_{[\mathcal{L}_{d}]}(\mathbf{a}_{i}\mathfrak{x},\mathbf{a}_{i+1}\mathfrak{x})\leq q\). Since \(\mathbf{t}_{1}\in B_{1}\), conditions (1) and (2) imply that we can choose \(\mathbf{a}_{i}\in B_{1}\) for each \(i=0,\ldots m\). Then, (2.27) implies that \(\mathbf{a}_{m}\mathbf{t}_{2}^{-1}\in\Delta_{\mathfrak{x}}\), and therefore we may choose \(\mathbf{s}_{1}=\mathbf{a}_{m}\in B_{1}\) and \(\mathbf{s}_{2}=\mathbf{t}_{2}\in B_{2}\). 

We need some definitions regarding simplex sets. We define the standard simplex sets for \(k\geq 1\) as

\[\Phi_{*}^{k}:=\{\mathbf{b}_{1}^{k},\ldots\mathbf{b}_{d}^{k}\},\quad\text{where }(\mathbf{b}_{l})_{jj}=\begin{cases}x^{-n}&j=l\\ x&\text{else}\end{cases} \tag{2.28}\]

and denote \(\Gamma_{*}:=\Gamma_{\Phi_{*}}\), \(\Delta_{*}:=\Delta_{\Phi_{*}}\) and \(\Delta_{*}^{k}:=\Delta_{\Phi_{*}^{k}}\). By equation 3.4 in [20],

\[\mathbf{S}_{k}:=\mathbf{S}_{\Phi_{*}^{k}}=\rho^{-1}\left(\frac{n}{2}\operatorname{hull}\left(\rho\left(\Phi_{*}^{k}\right)\right)\cap\mathbb{Z}_{0}^{d}\right)=\left\{\mathbf{a}\in A_{1}:\lceil\mathbf{a}\rceil\leq q^{\frac{n}{2}k}\right\}\]

**Definition 2.16**.: Let \(C\in\mathbb{N}\).
We say that a simplex set \(\Phi\) is \((k,C)\)-standard if there exist \(\mathbf{c}_{i}\in A_{1}\), \(i=1,\ldots d\), with \(\|\mathbf{c}_{i}\|\leq q^{C}\), such that

\[\Phi=\left\{\mathbf{b}_{1}^{k}\mathbf{c}_{1},\ldots\mathbf{b}_{d}^{k}\mathbf{c}_{d}\right\} \tag{2.29}\]

We say that the associated lattice \(\Gamma_{\Phi}=\langle\Phi\rangle\) is a \((k,C)\)-standard lattice. We denote by \(\Omega_{M}^{(C)}\) the set of lattices \(\mathfrak{x}\in\Omega_{M}\) such that there exists \(k\) for which \(\mathfrak{x}\) has a \((k,C)\)-standard simplex set \(\Phi\) which is \(M\)-tight.

We shall now show that if \(\mathfrak{x}_{k}\in\Omega_{M}^{(C)}\) are lattices with \(M\)-tight, \((k,C)\)-standard simplex sets \(\Phi_{k}\), then, modulo units, \(\Gamma_{k}=\langle\Phi_{k}\rangle\) generates \(\Delta_{\mathfrak{x}_{k}}\), and there are at least \(n!\) distinct visit times to the compact part of \(A_{1}\mathfrak{x}_{k}\).

**Theorem 2.17**.: _Fix \(M>0\), \(C\geq 0\), \(0<\kappa<1\) and \(0<\delta<1\). Let \(\mathfrak{x}_{k}\in\Omega_{M}^{(C)}\) be such that there exists an \(M\)-tight simplex set \(\Phi_{k}\) which is \((k,C)\)-standard. Let \(\Gamma_{k}=\langle\Phi_{k}\rangle\), let \(W_{k}=W_{\Phi_{k}}\), \(\mathbf{W}_{k,\kappa}=\mathbf{W}_{\Phi_{k},\kappa}=\rho^{-1}(B_{r}(W_{k}))\), where \(r\) is as in (2.15), and let_

\[W_{k}^{\prime}=\left\{\mathbf{w}_{\tau}\in W_{k}:\exists\mathbf{t}\in\rho^{-1}(B_{r}(\mathbf{w}_{\tau}))\subseteq\mathbf{W}_{k,\kappa}:\mathbf{t}\mathfrak{x}_{k}\in[\mathcal{L}_{d}]^{\geq\delta}\right\}\subseteq W_{k} \tag{2.30}\]

_Then, there exists some \(k_{0}\) depending on \(\kappa\) such that for each \(k\geq k_{0}\),_

1. \([\Delta_{\mathfrak{x}_{k}}:\Gamma_{k}]\leq n!\)
2. _If_ \(W_{k}^{\prime}=W_{k}\)_, then_ \(\Delta_{\mathfrak{x}_{k}}A_{1}(\mathbf{U})=\Gamma_{k}A_{1}(\mathbf{U})\)
3. _If_ \(W_{k}^{\prime}=W_{k}\)_, then, for any_ \(\delta_{1}\in(0,1)\)_, there are at least_ \(n!\) _distinct visit times to_ \([\mathcal{L}_{d}]^{\geq\delta_{1}}\)_._

Proof.: To save on notation, we denote \(\mathfrak{x}_{k}=\mathfrak{x}\) and assume that \(k\) is large enough so that the conclusion of Proposition 2.10 holds. Consider \(\mathbf{t}\in A_{1}\) such that \(\mathbf{t}\mathfrak{x}\in[\mathcal{L}_{d}]^{\geq\delta}\). By Proposition 2.10(2) and the definition of \(W_{k}^{\prime}\),

\[\mathbf{t}\Delta_{\mathfrak{x}}\subseteq\rho^{-1}\left(B_{r}(W_{k}^{\prime})\right)\Gamma_{k} \tag{2.31}\]

where \(r\asymp|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}}\). Equation (2.31) implies that

\[\rho(\mathbf{t})+\rho(\Delta_{\mathfrak{x}})\subseteq B_{r}(W_{k}^{\prime})+\rho(\Gamma_{k}) \tag{2.32}\]

We first show that for each \(\tau\in\mathcal{P}_{n}\), the coset \(\rho(\mathbf{t})+\rho(\Delta_{\mathfrak{x}})\) can contain at most one point of \(B_{r}(\mathbf{w}_{\tau})+\rho(\mathbf{v})\) for \(\mathbf{v}\in\Gamma_{k}\). Statement (1) will follow from this claim along with Lemma 4.11 in [11] when applied to \(\rho(\Delta_{\mathfrak{x}})\) and \(\rho(\Gamma_{k})\). Assume that for some \(\tau\in\mathcal{P}_{n}\) and some \(\mathbf{v}\in\Gamma_{k}\),

\[|(\rho(\mathbf{t})+\rho(\Delta_{\mathfrak{x}}))\cap(B_{r}(\mathbf{w}_{\tau})+\rho(\mathbf{v}))|\geq 2 \tag{2.33}\]

Then, there exist \(\mathbf{u}_{i}\in\Delta_{\mathfrak{x}}\) and \(\mathbf{s}_{i}\) with \(\left\lceil\mathbf{s}_{i}\right\rceil_{\mathbb{R}_{0}^{d}}\leq r\) such that \(\rho(\mathbf{t})+\rho(\mathbf{u}_{i})=\mathbf{s}_{i}+\rho(\mathbf{v})\), for \(i=1,2\).
Hence, \(\rho(\mathbf{u}_{2})-\rho(\mathbf{u}_{1})=\mathbf{s}_{2}-\mathbf{s}_{1}\in\rho(\Delta_{\mathfrak{x}})\) has norm at most \(dr\). On the other hand, the distance between the balls composing \(B_{r}(W_{k})+\rho(\Gamma_{k})\) is greater than or equal to

\[\inf_{\substack{\mathbf{v}_{i}\in\Gamma_{k},\\ \sigma,\tau\in\mathcal{P}_{n}}}d\left(B_{r}(\mathbf{w}_{\tau})+\rho(\mathbf{v}_{1}),B_{r}(\mathbf{w}_{\sigma})+\rho(\mathbf{v}_{2})\right)=\inf_{\substack{\mathbf{v}_{i}\in\Gamma_{k},\ \sigma,\tau\in\mathcal{P}_{n},\\ \left\lceil\mathbf{s}_{i}\right\rceil_{\mathbb{R}_{0}^{d}}\leq r}}d\left(\mathbf{w}_{\tau}+\mathbf{s}_{1}+\rho(\mathbf{v}_{1}),\mathbf{w}_{\sigma}+\mathbf{s}_{2}+\rho(\mathbf{v}_{2})\right)\geq\inf_{\mathbf{v}\in\Gamma_{k},\left\lceil\mathbf{s}\right\rceil\leq dr}\left\lceil\rho(\mathbf{v})+\mathbf{s}\right\rceil\gg k+C-dr \tag{2.34}\]

Since \(|\Delta_{\mathfrak{x}}|\ll\xi_{\Phi_{k}}^{n}\ll(k+C)^{n}\), (2.34) is \(\gg|\Delta_{\mathfrak{x}}|^{\frac{1}{n}}\left(1-dc|\Delta_{\mathfrak{x}}|^{-\frac{1-\kappa}{n}}\right)\), which is larger than \(dr=dc|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}}\) as \(|\Delta_{\mathfrak{x}}|\to\infty\), since \(0<\kappa<1\). Thus, we obtain a contradiction to (2.32).

Let \(\mathbf{u}\in\Delta_{\mathfrak{x}}\). Assume that \(\mathbf{t}\mathbf{u}\in\rho^{-1}\left(B_{r}(\mathbf{w}_{\tau})\right)\mathbf{v}\) where \(\mathbf{v}\in\Gamma_{k}\) and \(\tau\in\mathcal{P}_{n}\). Then, \(\rho(\mathbf{t})+\rho(\mathbf{u})\) is the unique point of \(\rho(\mathbf{t})+\rho(\Delta_{\mathfrak{x}})\) which is in \(B_{r}(\mathbf{w}_{\tau})+\rho(\mathbf{v})\). Hence, there exists a unique \(\mathbf{s}\in B_{r}(\mathbf{w}_{\tau})\) such that \(\rho(\mathbf{t})+\rho(\mathbf{u})=\mathbf{s}+\rho(\mathbf{v})\). Thus, we can apply Lemma 4.11 in [11] to \(\rho(\Delta_{\mathfrak{x}})\) and \(\rho(\Gamma_{k})\) to obtain that \([\rho(\Delta_{\mathfrak{x}}):\rho(\Gamma_{k})]\leq n!\). Hence, \([\Delta_{\mathfrak{x}}:\Gamma_{k}]\leq n!\), which proves (1).

We now prove (2). Let \(\mathbf{t}\) be such that \(\mathbf{t}\mathfrak{x}\in[\mathcal{L}_{d}]^{\geq\delta}\). By (2.31), we can assume that \(\mathbf{t}\in\rho^{-1}\left(B_{r}(\mathbf{w}_{\tau})\right)\) for some \(\mathbf{w}_{\tau}\in W_{k}^{\prime}\). We shall show that \(\rho(\mathbf{t})+\rho(\Delta_{\mathfrak{x}})\subseteq B_{r}(\mathbf{w}_{\tau})+\rho(\Gamma_{k})\), which together with the fact that each ball composing \(B_{r}(\mathbf{w}_{\tau})+\rho(\Gamma_{k})\) contains at most one point of \(\rho(\mathbf{t})+\rho(\Delta_{\mathfrak{x}})\) will imply that \(\rho(\Gamma_{k})=\rho(\Delta_{\mathfrak{x}})\).
Assume on the contrary that there exists some \(\tau\neq\sigma\in\mathcal{P}_{n}\) such that

\[\left(\rho(\mathbf{t})+\rho(\Delta_{\mathfrak{x}})\right)\cap\left(B_{r}(\mathbf{w}_{\sigma})+\rho(\Gamma_{k})\right)\neq\emptyset \tag{2.35}\]

Thus, there exist some \(\mathbf{v}\in\Delta_{\mathfrak{x}}\) and \(\mathbf{s}_{1}\) with \(\left\lceil\mathbf{s}_{1}\right\rceil_{\mathbb{R}_{0}^{d}}\leq r\) such that

\[\rho(\mathbf{t})=\mathbf{w}_{\sigma}+\mathbf{s}_{1}+\rho(\mathbf{v}) \tag{2.36}\]

Hence,

\[\rho(\mathbf{t})-\mathbf{w}_{\sigma}-\mathbf{s}_{1}\in\rho(\Delta_{\mathfrak{x}}) \tag{2.37}\]

On the other hand, since \(\mathbf{t}\in\rho^{-1}\left(B_{r}(\mathbf{w}_{\tau})\right)\), there exists some \(\mathbf{s}_{2}\) with \(\left\lceil\mathbf{s}_{2}\right\rceil_{\mathbb{R}_{0}^{d}}\leq r\) such that

\[\rho(\mathbf{t})=\mathbf{w}_{\tau}+\mathbf{s}_{2} \tag{2.38}\]

Hence, if we denote \(\mathbf{s}=\mathbf{s}_{2}-\mathbf{s}_{1}\), then \(\left\lceil\mathbf{s}\right\rceil_{\mathbb{R}_{0}^{d}}\leq nr\) and

\[\mathbf{w}_{\tau}-\mathbf{w}_{\sigma}+\mathbf{s}=\rho(\mathbf{v})\in\rho(\Delta_{\mathfrak{x}}) \tag{2.39}\]

Define \(\tau^{\prime}\in\mathcal{P}_{n}\) by \(\tau^{\prime-1}(j)=\sigma^{-1}(j)-\tau^{-1}(j)\mod d\). Then,

\[\mathbf{w}_{\tau^{\prime}}+\mathbf{w}_{\tau}-\mathbf{w}_{\sigma}=\frac{1}{d}\sum_{j=1}^{d}\left(\tau^{\prime-1}(j)-1+\tau^{-1}(j)-\sigma^{-1}(j)\right)\rho(\mathbf{b}_{j}^{k}\mathbf{c}_{j})=-\frac{1}{d}\sum_{j=1}^{d}\rho(\mathbf{b}_{j}^{k}\mathbf{c}_{j})=0\mod\rho(\Gamma_{k}) \tag{2.40}\]

Hence, \(\mathbf{w}_{\tau^{\prime}}+\mathbf{w}_{\tau}-\mathbf{w}_{\sigma}\in\rho(\Gamma_{k})\), so that \(\mathbf{w}_{\tau^{\prime}}+\mathbf{w}_{\tau}-\mathbf{w}_{\sigma}\notin W_{k}+\rho(\Gamma_{k})\). Moreover, since \(|\Delta_{\mathfrak{x}}|\ll(k+C)^{n}\) and, for each \(\mathbf{w}\in W_{k}\), \(\left\lceil\mathbf{w}\right\rceil_{\mathbb{R}_{0}^{d}}\asymp\frac{n}{2}(k+C)\), we have

\[d\left(\mathbf{w}_{\tau^{\prime}}+\mathbf{w}_{\tau}-\mathbf{w}_{\sigma},W_{k}+\rho(\Gamma_{k})\right)=d\left(0,W_{k}+\rho(\Gamma_{k})\right)\gg\frac{n}{2}(k+C)\gg\frac{n}{2}|\Delta_{\mathfrak{x}}|^{\frac{1}{n}}\gg c|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}}=r \tag{2.41}\]

Hence, \(\mathbf{w}_{\tau^{\prime}}+\mathbf{w}_{\tau}-\mathbf{w}_{\sigma}\notin B_{r}(W_{k})+\rho(\Gamma_{k})\). Since \(W_{k}=W_{k}^{\prime}\), we have \(\mathbf{w}_{\tau^{\prime}}\in W_{k}^{\prime}\). Thus, there exists \(\mathbf{t}^{\prime}\) such that

\[\mathbf{t}^{\prime}\in\rho^{-1}\left(B_{r}(\mathbf{w}_{\tau^{\prime}})\right)\text{ and }\mathbf{t}^{\prime}\mathfrak{x}\in\left[\mathcal{L}_{d}\right]^{\geq\delta} \tag{2.42}\]

Therefore, there exists some \(\mathbf{s}^{\prime}\) with \(\left\lceil\mathbf{s}^{\prime}\right\rceil_{\mathbb{R}_{0}^{d}}\leq r\) such that \(\rho(\mathbf{t}^{\prime})=\mathbf{w}_{\tau^{\prime}}+\mathbf{s}^{\prime}\).
Thus, (2.39) and (2.42) imply that

\[\mathbf{w}_{\tau^{\prime}}+\mathbf{w}_{\tau}-\mathbf{w}_{\sigma}=\rho(\mathbf{t}^{\prime})-\mathbf{s}^{\prime}+\mathbf{s}-\rho(\mathbf{v})\]

If we write \(\tilde{\mathbf{s}}=\mathbf{s}-\mathbf{s}^{\prime}\), then \(\left\lceil\tilde{\mathbf{s}}\right\rceil_{\mathbb{R}_{0}^{d}}\leq 2nr\), so that (2.41) implies that

\[d\left(\rho(\mathbf{t}^{\prime}),W_{k}+\rho(\Gamma_{k})\right)=d\left(\mathbf{w}_{\tau^{\prime}}+\mathbf{w}_{\tau}-\mathbf{w}_{\sigma}+\tilde{\mathbf{s}}+\rho(\mathbf{v}),W_{k}+\rho(\Gamma_{k})\right)\gg\frac{n}{2}|\Delta_{\mathfrak{x}}|^{\frac{1}{n}}-2nr\gg c|\Delta_{\mathfrak{x}}|^{\frac{\kappa}{n}}=r \tag{2.43}\]

This shows that for all but finitely many \(\mathfrak{x}\in\Omega_{M}^{(C)}\),

\[\rho(\mathbf{t}^{\prime})\notin B_{r}+W_{k}+\rho(\Gamma_{k})=B_{r}(W_{k})+\rho(\Gamma_{k})\]

Thus, by Proposition 2.10(2), \(\mathbf{t}^{\prime}\mathfrak{x}\in\left[\mathcal{L}_{d}\right]^{<\delta}\), which contradicts (2.42). It follows that \(\mathbf{t}\Delta_{\mathfrak{x}}\subseteq\rho^{-1}\left(B_{r}(\mathbf{w}_{\tau})\right)\Gamma_{k}\), and hence, we obtain (2).

We now prove (3). Let \(\mathbf{w}_{\tau}\in W_{k}^{\prime}=W_{k}\) and let \(\mathbf{s}_{\tau}\in B_{r}\) be such that \(\rho^{-1}\left(\mathbf{w}_{\tau}+\mathbf{s}_{\tau}\right)\mathfrak{x}\subseteq\left[\mathcal{L}_{d}\right]^{\geq\delta_{1}}\). Assume that there exist \(\sigma\neq\tau\in\mathcal{P}_{n}\) and \(\mathbf{s}_{\sigma}\in B_{r}\) such that for each \(\mathbf{t}_{\tau}\in\rho^{-1}\left(\mathbf{w}_{\tau}+\mathbf{s}_{\tau}\right)\) and \(\mathbf{t}_{\sigma}\in\rho^{-1}\left(\mathbf{w}_{\sigma}+\mathbf{s}_{\sigma}\right)\), \(\mathbf{t}_{\sigma}\) and \(\mathbf{t}_{\tau}\) are equivalent visit times. Then by Lemma 2.15, there exist \(\mathbf{s}_{1}\in\rho^{-1}\left(B_{r}(\mathbf{w}_{\tau})\right)\) and \(\mathbf{s}_{2}\in\rho^{-1}\left(B_{r}(\mathbf{w}_{\sigma})\right)\) such that \(\mathbf{s}_{1}^{-1}\mathbf{s}_{2}\in\Delta_{\mathfrak{x}}\). Thus, there exist \(\mathbf{s}_{1}^{\prime},\mathbf{s}_{2}^{\prime}\in B_{r}\) with \(\rho(\mathbf{s}_{1})=\mathbf{w}_{\tau}+\mathbf{s}_{1}^{\prime}\) and \(\rho(\mathbf{s}_{2})=\mathbf{w}_{\sigma}+\mathbf{s}_{2}^{\prime}\). Therefore, taking \(\mathbf{s}=\mathbf{s}_{2}^{\prime}-\mathbf{s}_{1}^{\prime}\), we have \(\left\lceil\mathbf{s}\right\rceil_{\mathbb{R}_{0}^{d}}\leq dr\) and also

\[\mathbf{w}_{\sigma}-\mathbf{w}_{\tau}+\mathbf{s}=\rho(\mathbf{s}_{1}^{-1}\mathbf{s}_{2})=\rho(\mathbf{s}_{2})-\rho(\mathbf{s}_{1})\in\rho\left(\Delta_{\mathfrak{x}}\right)\]

However, this results in a contradiction, as shown above following (2.39). Hence, visit times lying above distinct elements of \(W_{k}\) are pairwise distinct, so there are at least \(|W_{k}|=n!\) distinct visit times to \([\mathcal{L}_{d}]^{\geq\delta_{1}}\), which proves (3). 

In conclusion, we obtain the following:

**Corollary 2.18**.: _Fix \(M,J,\varepsilon>0\) and some \(C\geq 0\). Then,_

1. _By Proposition_ 2.10_, any sequence of distinct compact orbits_ \(A_{1}\mathfrak{x}_{k}\) _where_ \(\mathfrak{x}_{k}\in\Omega_{M}\) _must satisfy the conclusion of Corollary_ 1.13_._
2. _By Proposition_ 2.13_, any sequence of distinct compact orbits_ \(A_{1}\mathfrak{x}_{k}\) _with_ \(\mathfrak{x}_{k}\in\Omega_{M}(\varepsilon,J)\) _must satisfy the conclusion of Theorem_ 1.12_._
3. _If_ \(\mathfrak{x}_{k}\in\Omega_{M}^{(C)}\cap\Omega_{M}(\varepsilon,J)\) _is a sequence of lattices with_ \(M\)_-tight simplex sets_ \(\Phi_{k}\) _generating the lattices_ \(\Gamma_{k}\)_, such that the_ \(\mathbf{a}^{\prime}\) _from Definition_ 2.12 _are uniformly bounded, then, by Theorem_ 2.17_, for all sufficiently large_ \(k\)_,_ \(\Gamma_{k}\) _generates_ \(\Delta_{\mathfrak{x}_{k}}\) _modulo_ \(A_{1}(\mathbf{U})\)_. Moreover, for any_ \(\delta\in(0,1)\)_, there are at least_ \(n!\) _distinct visit times to_ \([\mathcal{L}_{d}]^{\geq\delta}\)_._

Proof.: Parts (1) and (2) are direct consequences of Propositions 2.10 and 2.13 respectively. Part (3) follows after observing that if \(\mathfrak{x}_{k}\in\Omega_{M}^{(C)}\cap\Omega_{M}(\varepsilon,J)\) and the \(\mathbf{a}^{\prime}\) from Definition 2.12 are uniformly bounded, then \(W_{k}=W_{k}^{\prime}\). 

### Construction of the Lattices

#### 2.3.1. The Polynomials

We first construct the polynomials which we use to construct the lattices exhibiting escape of mass. For every \(\eta>0\), we define the following set of vectors in \(\mathcal{R}^{d}\):

\[\mathcal{R}^{d}(\eta):=\left\{\mathbf{Q}\in\mathcal{R}^{d}:\forall i\neq j,\frac{|Q_{i}-Q_{j}|}{\|\mathbf{Q}\|}\geq\eta\right\}\]

Now for each \(\mathbf{Q}\in\mathcal{R}^{d}(\eta)\), we can define a polynomial:

\[P_{\mathbf{Q}}(T):=\prod_{i=1}^{d}(T-Q_{i})-1\]

We now prove a positive characteristic analogue of Lemma 5.1 in [11].

**Lemma 2.19**.: _Fix \(\eta>0\). Then, for all but finitely many \(\mathbf{Q}\in\mathcal{R}^{d}(\eta)\), the polynomial \(P_{\mathbf{Q}}\) is irreducible over \(\mathcal{K}\) and has \(d\) distinct roots \(\theta_{j}=\theta_{j}(\mathbf{Q})\) which all lie in \(\tilde{\mathcal{K}}\). Moreover,_

\[\theta_{j}=Q_{j}+O_{\eta}\left(\|\mathbf{Q}\|^{-n}\right)\]

Proof.: Let \(\mathbf{Q}\in\mathcal{R}^{d}(\eta)\), let \(L\) be the splitting field of \(P_{\mathbf{Q}}\) over \(\tilde{\mathcal{K}}\), and let \(\theta\in L\) be a root of \(P_{\mathbf{Q}}\). Then,

\[\prod_{i=1}^{d}(\theta-Q_{i})=1 \tag{2.44}\]

By Theorem 1.1 in [11], the absolute value on \(\tilde{\mathcal{K}}\) can be extended to a unique absolute value on \(L\), which satisfies the ultrametric inequality. We abuse notation slightly and denote this absolute value on \(L\) by \(|\cdot|\). Thus, (2.44) implies that

\[\prod_{i=1}^{d}|\theta-Q_{i}|=1 \tag{2.45}\]

If there exist \(i\neq j\) such that \(|\theta-Q_{i}|\leq 1\) and \(|\theta-Q_{j}|\leq 1\), then by the ultrametric inequality, \(|Q_{i}-Q_{j}|\leq 1\). This results in a contradiction for large enough \(\|\mathbf{Q}\|\), since \(\mathbf{Q}\in\mathcal{R}^{d}(\eta)\). Thus, there exists a unique \(j\) such that \(|\theta-Q_{j}|\leq 1\). Denote this unique \(j\) by \(j_{\theta}\). Then, if \(j\neq j_{\theta}\),

\[\theta-Q_{j}=(\theta-Q_{j_{\theta}})+(Q_{j_{\theta}}-Q_{j}) \tag{2.46}\]

Since \(|\theta-Q_{j_{\theta}}|\leq 1\), for large enough \(\|\mathbf{Q}\|\) we have \(|\theta-Q_{j}|=|Q_{j_{\theta}}-Q_{j}|\). Therefore, the definition of \(\mathcal{R}^{d}(\eta)\) implies that

\[|\theta-Q_{j}|=|Q_{j}-Q_{j_{\theta}}|\asymp_{\eta}\|\mathbf{Q}\| \tag{2.47}\]

Thus,

\[|\theta-Q_{j_{\theta}}|=\prod_{i\neq j_{\theta}}|\theta-Q_{i}|^{-1}\asymp_{\eta}\|\mathbf{Q}\|^{-(d-1)}=\|\mathbf{Q}\|^{-n} \tag{2.48}\]

Let \(\theta_{j}\) for \(j=1,\ldots d\) be the roots of \(P_{\mathbf{Q}}\). We shall now show that \(\theta\mapsto j_{\theta}\) is an injective map from the set of roots of \(P_{\mathbf{Q}}\) to \(\{1,\ldots d\}\). Assume that the map \(\theta\mapsto j_{\theta}\) is not injective.
Write \(P_{\mathbf{Q}}(T)=\prod_{j=1}^{d}(T-\theta_{j})\). Then there exist \(j_{1}\neq j_{2}\) such that \(l=j_{\theta_{j_{1}}}=j_{\theta_{j_{2}}}\). Therefore,

\[1=\left|\prod_{j=1}^{d}(Q_{l}-Q_{j})-1\right|=|P_{\mathbf{Q}}(Q_{l})|=\prod_{j=1}^{d}|Q_{l}-\theta_{j}|\ll_{\eta}\|\mathbf{Q}\|^{-2n+(d-2)}=\|\mathbf{Q}\|^{-d} \tag{2.49}\]

For large enough \(\|\mathbf{Q}\|\), this results in a contradiction. Thus, we can order the \(\theta_{j}\) so that

\[|\theta_{j}-Q_{j}|\asymp_{\eta}\|\mathbf{Q}\|^{-n} \tag{2.50}\]

\[\forall l\neq j,\ |\theta_{j}-Q_{l}|\asymp_{\eta}\|\mathbf{Q}\| \tag{2.51}\]

In particular, this shows that \(P_{\mathbf{Q}}\) has \(d\) distinct roots, since for \(i\neq j\),

\[|\theta_{i}-\theta_{j}|=|\theta_{i}-Q_{j}+Q_{j}-\theta_{j}|=|\theta_{i}-Q_{j}|\asymp_{\eta}\|\mathbf{Q}\| \tag{2.52}\]

We shall now show that \(\theta_{j}\in\tilde{\mathcal{K}}\) for each \(j\). It is well known that roots in \(L\setminus\tilde{\mathcal{K}}\) come in conjugate sets of size at least \(2\) (see Chapter 1.14 and Chapter 2 of [10]). Thus, if \(\theta_{j}\notin\tilde{\mathcal{K}}\), then there exists an automorphism \(\tau:L\to L\) which fixes \(\tilde{\mathcal{K}}\), and some \(i\neq j\), such that \(\tau(\theta_{j})=\theta_{i}\). It is well known that an automorphism of an extension of a local field equipped with an appropriate norm is an isometry (see Theorem 1.1 in [11]). Since \(\tau(\theta_{j})=\theta_{i}\), we have \(|\theta_{i}|=|\theta_{j}|\), so that

\[|\theta_{j}-Q_{i}|=|\tau(\theta_{j}-Q_{i})|=|\theta_{i}-Q_{i}|\]

But, since \(i\neq j\),

\[\|\mathbf{Q}\|^{-n}\asymp_{\eta}|\theta_{j}-Q_{j}|=|\tau(\theta_{j})-Q_{j}|=|\theta_{i}-Q_{j}|\asymp_{\eta}\|\mathbf{Q}\|\]

which is a contradiction for \(\|\mathbf{Q}\|\) large enough. Thus, the \(\theta_{j}\) must all lie in \(\tilde{\mathcal{K}}\).

We now prove that \(P_{\mathbf{Q}}\) is irreducible over \(\mathcal{K}\). Since \(\mathbb{F}_{q}\) is a field, \(\mathcal{R}=\mathbb{F}_{q}[x]\) is a unique factorization domain. Thus, by Gauss' lemma it suffices to prove that \(P_{\mathbf{Q}}\) is irreducible over \(\mathcal{R}\). If \(P_{\mathbf{Q}}\) is reducible over \(\mathcal{R}\), then there exists some proper subset \(I\subset\{1,\ldots d\}\) such that \(F(T)=\prod_{j\in I}(T-\theta_{j})\) is a polynomial over \(\mathcal{R}\). Let \(l\in I\). Then, \(0\neq F(Q_{l})\in\mathcal{R}\). On the other hand,

\[0\neq|F(Q_{l})|=\prod_{j\in I}|Q_{l}-\theta_{j}|=O_{\eta}\left(\|\mathbf{Q}\|^{|I|-1-n}\right)\]

Therefore, \(|F(Q_{l})|<1\) for \(\|\mathbf{Q}\|\) large enough, which contradicts the assumption that \(F(T)\) has coefficients in \(\mathcal{R}\). Thus, for \(\|\mathbf{Q}\|\) large enough, \(P_{\mathbf{Q}}\) must be irreducible over \(\mathcal{R}\), and thus also irreducible over \(\mathcal{K}\).

#### 2.3.2. The Lattices

Fix \(\eta>0\) and let \(\mathbf{Q}\in\mathcal{R}^{d}(\eta)\) be such that \(\|\mathbf{Q}\|\) is large enough so that \(\mathbf{Q}\) satisfies the conclusion of Lemma 2.19. Let \(\theta\) be a root of \(P_{\mathbf{Q}}\) and let \(\mathbb{F}_{\mathbf{Q}}=\mathcal{K}(\theta)\). Then by Lemma 2.19, \(\mathcal{K}<\mathbb{F}_{\mathbf{Q}}\leq\tilde{\mathcal{K}}\) is an extension of degree \(d\) over \(\mathcal{K}\). Moreover, by (2.50) and (2.51), we can order the embeddings \(\sigma_{1},\ldots\sigma_{d}:\mathbb{F}_{\mathbf{Q}}\to\tilde{\mathcal{K}}\) so that \(\theta_{j}=\sigma_{j}(\theta)\) satisfies \(\theta_{j}=Q_{j}+O_{\eta}(\|\mathbf{Q}\|^{-n})\).
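Before constructing the lattices, the following small example (ours, included only as an illustration of Lemma 2.19) may be helpful. Take \(d=2\) (so \(n=1\)), \(\eta\leq 1\), and \(\mathbf{Q}=(0,x^{m})\in\mathcal{R}^{2}(\eta)\) with \(m\geq 1\), so that \(P_{\mathbf{Q}}(T)=T(T-x^{m})-1\). The root \(\theta\) with \(j_{\theta}=1\) satisfies \(\theta=\frac{1}{\theta-x^{m}}\), and iterating this identity starting from \(0\) yields the Laurent expansion

\[\theta=-x^{-m}+x^{-3m}+\cdots\in\tilde{\mathcal{K}}\]

where the omitted terms have absolute value at most \(q^{-5m}\). In particular, \(|\theta-Q_{1}|=q^{-m}=\|\mathbf{Q}\|^{-n}\), in accordance with (2.50), while the second root \(x^{m}-\theta\) lies within \(q^{-m}\) of \(Q_{2}=x^{m}\). Irreducibility is also visible directly here: a monic factorization of \(P_{\mathbf{Q}}\) over \(\mathcal{R}\) would force \(\theta\in\mathcal{R}\), which is impossible since \(0<|\theta|<1\), whereas every non-zero element of \(\mathcal{R}\) has absolute value at least \(1\).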
Let

\[\boldsymbol{\sigma}=\begin{pmatrix}\sigma_{1}\\ \vdots\\ \sigma_{d}\end{pmatrix}:\mathbb{F}_{\mathbf{Q}}\to\tilde{\mathcal{K}}^{d}\]

and let

\[\Lambda_{\mathbf{Q}}:=\operatorname{span}_{\mathcal{R}}\{1,\theta,\ldots\theta^{n}\}\]

Let \(\mathfrak{x}_{\mathbf{Q}}:=[\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}})]\in[\mathcal{L}_{d}]\). In order to conclude the proof of Theorem 1.12 and Corollary 1.13, we shall show that the lattices \(\mathfrak{x}_{\mathbf{Q}}\) satisfy the conditions of Corollary 2.18.

**Proposition 2.20**.: _For any \(\eta>0\), there exist \(M,J>0\), \(C\geq 0\), and \(\varepsilon>0\) such that for all but finitely many \(\mathbf{Q}\in\mathcal{R}^{d}(\eta)\), \(\mathfrak{x}_{\mathbf{Q}}\in\Omega_{M}(\varepsilon,J)\cap\Omega_{M}^{(C)}\). Moreover, for all but finitely many \(\mathbf{Q}\in\mathcal{R}^{d}(\eta)\), there exists an \(M\)-tight simplex set \(\Phi_{\mathbf{Q}}\) such that \(\Phi_{\mathbf{Q}}\) generates \(\Delta_{\mathfrak{x}_{\mathbf{Q}}}\) up to \(A_{1}(\mathbf{U})\), and for each \(\delta_{1}\in(0,1)\), \([\mathcal{L}_{d}]^{\geq\delta_{1}}\) contains at least \(n!\) distinct visit times of \(A_{1}\mathfrak{x}_{\mathbf{Q}}\)._

Proof.: Let \(\|\mathbf{Q}\|\) be large enough so that the conclusion of Lemma 2.19 holds. We shall first compute the covolume of \(\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}})\). Notice that \(\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}})\) is the lattice spanned over \(\mathcal{R}\) by the columns of the Vandermonde matrix \(\left(\theta_{i}^{j-1}\right)\). Thus,

\[\operatorname{covol}\left(\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}})\right)=\left|\det\begin{pmatrix}1&\theta_{1}&\theta_{1}^{2}&\ldots&\theta_{1}^{n}\\ 1&\theta_{2}&\ldots&\ldots&\theta_{2}^{n}\\ \vdots&\vdots&\ddots&\ldots&\vdots\\ 1&\theta_{d}&\ldots&\ldots&\theta_{d}^{n}\end{pmatrix}\right|=\prod_{i<j}|\theta_{j}-\theta_{i}| \tag{2.53}\]

Since \(\theta_{j}=Q_{j}+O_{\eta}(\|\mathbf{Q}\|^{-n})\), by (2.50) and (2.51),

\[|\theta_{i}-\theta_{j}|=|\theta_{i}-Q_{j}+Q_{j}-\theta_{j}|=|\theta_{i}-Q_{j}|\asymp_{\eta}\|\mathbf{Q}\| \tag{2.54}\]

Thus, (2.53) gives

\[\operatorname{covol}(\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}}))=\prod_{i<j}|\theta_{j}-\theta_{i}|\asymp_{\eta}\|\mathbf{Q}\|^{\binom{d}{2}} \tag{2.55}\]

Since \(\mathbf{1}=(1,\ldots 1)^{t}\in\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}})\), by Definition 1.5,

\[\ell(\mathfrak{x}_{\mathbf{Q}})\leq\frac{\|\mathbf{1}\|}{\operatorname{covol}(\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}}))^{\frac{1}{d}}}\ll_{\eta}\|\mathbf{Q}\|^{-\frac{1}{d}\binom{d}{2}}=\|\mathbf{Q}\|^{-\frac{n}{2}} \tag{2.56}\]

The link to the diagonal group stems from the following relationship: for any \(\alpha,\beta\in\mathbb{F}_{\mathbf{Q}}\),

\[\operatorname{diag}(\boldsymbol{\sigma}(\alpha))\cdot\boldsymbol{\sigma}(\beta)=\begin{pmatrix}\sigma_{1}(\alpha)\sigma_{1}(\beta)\\ \vdots\\ \sigma_{d}(\alpha)\sigma_{d}(\beta)\end{pmatrix}=\begin{pmatrix}\sigma_{1}(\alpha\beta)\\ \vdots\\ \sigma_{d}(\alpha\beta)\end{pmatrix}=\boldsymbol{\sigma}(\alpha\beta) \tag{2.57}\]

Note that \(\Lambda_{\mathbf{Q}}\) is the ring \(\mathcal{R}[\theta]\). Denote \(\omega_{l}=\theta-Q_{l}\) and note that \(\omega_{l}\in\mathcal{R}[\theta]\). Furthermore, \(\omega_{l}\) is a unit in \(\mathcal{R}[\theta]=\Lambda_{\mathbf{Q}}\), since \(\prod_{l=1}^{d}\omega_{l}=\prod_{l=1}^{d}(\theta-Q_{l})=1\).
Therefore, \(\omega_{l}\Lambda_{\mathbf{Q}}=\Lambda_{\mathbf{Q}}\), so that if \(\beta\in\Lambda_{\mathbf{Q}}\), then

\[\operatorname{diag}(\boldsymbol{\sigma}(\omega_{l}))\boldsymbol{\sigma}(\beta)=\boldsymbol{\sigma}(\omega_{l}\beta)\in\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}}) \tag{2.58}\]

Due to Dirichlet's units theorem (see chapter 3 in [10]), the group of units in \(\mathcal{O}_{\mathbb{F}_{\mathbf{Q}}}\) is of rank \(d-1\). Thus, \(\operatorname{stab}_{[A]}(\mathfrak{x}_{\mathbf{Q}})\) has rank \(d-1\), and hence, \(\mathfrak{x}_{\mathbf{Q}}\) has a compact \([A]\) orbit. We now show that \(\mathfrak{x}_{\mathbf{Q}}\in\Omega_{M}^{(C)}\) for some constants \(M,C>0\) (see Definition 2.16). Denote \(\mathbf{t}_{l}=\operatorname{diag}\left(\boldsymbol{\sigma}(\omega_{l})\right)\) and note that \(\Phi_{\mathbf{Q}}=\{\mathbf{t}_{1},\ldots\mathbf{t}_{d}\}\) is a simplex set, since (2.50) and (2.51) imply that the matrix with the \(\rho(\mathbf{t}_{l})\) in its columns has rank \(n\). Then (2.50) and (2.51) imply that

\[\xi_{\Phi_{\mathbf{Q}}}=\max_{l}\log_{q}\lceil\mathbf{t}_{l}\rceil=\log_{q}\|\mathbf{Q}\|+O_{\eta}(1) \tag{2.59}\]

To ease notation, let \(\xi_{\mathbf{Q}}:=\xi_{\Phi_{\mathbf{Q}}}\) and let \(\Delta_{\mathbf{Q}}=\Delta_{\mathfrak{x}_{\mathbf{Q}}}\). Then, (2.50), (2.51) and (2.59) imply that there exist some \(C=C_{\eta}\geq 0\) and diagonal matrices \(\mathbf{c}_{l}\) with \(\lceil\mathbf{c}_{l}\rceil\leq q^{C}\) such that

\[\mathbf{t}_{l}=\mathbf{b}_{l}^{\xi_{\mathbf{Q}}}\mathbf{c}_{l} \tag{2.60}\]

Thus, for large enough \(\|\mathbf{Q}\|\), \(\Phi_{\mathbf{Q}}\) is \((\xi_{\mathbf{Q}},C)\)-standard. By combining (2.56) and (2.59), we obtain that

\[\ell(\mathfrak{x}_{\mathbf{Q}})\ll_{\eta}\|\mathbf{Q}\|^{-\frac{n}{2}}\ll_{\eta}q^{-\frac{n}{2}\xi_{\mathbf{Q}}} \tag{2.61}\]

Thus, there exists some \(M>0\) such that \(\Phi_{\mathbf{Q}}\) is an \(M\)-tight simplex set for \(\mathfrak{x}_{\mathbf{Q}}\). Hence, \(\mathfrak{x}_{\mathbf{Q}}\in\Omega_{M}^{(C)}\) for large enough \(\|\mathbf{Q}\|\).
We now show that for all but finitely many \(\mathbf{Q}\), \(\mathfrak{x}_{\mathbf{Q}}\in\Omega_{M}(\varepsilon,J)\) (see Definition 2.12). By (2.50) and (2.51), the matrix \((c_{ij})\) arising in Definition 2.12 satisfies \(|c_{ij}|\ll_{\eta}\|\mathbf{Q}\|^{-1}|c_{jj}|\) whenever \(i\neq j\). Therefore, we can write \[(c_{ij})=\begin{pmatrix}\frac{c_{11}}{c_{11}}&\frac{c_{12}}{c_{22}}&\ldots&\frac{c_{1d}}{c_{dd}}\\ \frac{c_{21}}{c_{11}}&\frac{c_{22}}{c_{22}}&\ldots&\frac{c_{2d}}{c_{dd}}\\ \vdots&\ldots&\ddots&\vdots\\ \frac{c_{d1}}{c_{11}}&\ldots&\ldots&\frac{c_{dd}}{c_{dd}}\end{pmatrix}\cdot\begin{pmatrix}c_{11}&&\\ &\ddots&\\ &&c_{dd}\end{pmatrix} \tag{2.70}\] Denote the right hand side of (2.70) as \(g\cdot\mathbf{a}^{\prime}\). Notice that \[g_{ij}\ll_{\eta}\begin{cases}1&i=j\\ \|\mathbf{Q}\|^{-1}&i\neq j\end{cases} \tag{2.71}\] and therefore, \(\|g-Id\|\ll_{\eta}\|\mathbf{Q}\|^{-1}\ll_{\eta}q^{-\xi_{\mathbf{Q}}}\) by (2.59). Since \(|\Delta_{\mathbf{Q}}|\ll_{\eta}\xi_{\mathbf{Q}}^{n}\), by taking \(\varepsilon\) and \(J\) appropriately we can ensure that \(\|g-Id\|\leq Jq^{-|\Delta_{\mathbf{Q}}|^{\varepsilon}}\). This completes the verification of Definition 2.12, and thus \(\mathfrak{x}_{\mathbf{Q}}\in\Omega_{M}(\varepsilon,J)\). Finally, the remaining part of the statement follows from Corollary 2.18(3) after observing that the \(\mathbf{a}^{\prime}\) from Definition 2.12 are uniformly bounded. ## 3. Proof of Theorem 1.9
In this section, we shall show that a subsequence of the lattices \(\{\mathfrak{x}_{\mathbf{Q}}\}\) constructed in §2.3 must satisfy \(\mu(\mathfrak{x}_{\mathbf{Q}})=q^{-d}\). To do so, we shall prove a positive characteristic analog of Theorem 1.2 by showing that there exists a sequence of lattices \(\mathfrak{x}_{\mathbf{Q}}\) such that \(\mu(\mathfrak{x}_{\mathbf{Q}})\to q^{-d}\). Thus, by discreteness of \(\mu\) around non-zero values, upper semicontinuity of \(\mu\) and Theorem 1.12, we obtain that for every \(\|\mathbf{Q}\|\) large enough, \(\mu(\mathfrak{x}_{\mathbf{Q}})=q^{-d}\). We first return to the notation of §2.3 to better understand the lattices \(\mathfrak{x}_{\mathbf{Q}}\). Fix distinct polynomials \(a_{1},\ldots,a_{d}\) such that \(a_{j}\equiv 0\mod x^{2}\). For \(Q\in\mathcal{R}\), let \(\mathbf{Q}:=(Qa_{1},\ldots,Qa_{d})\). Assume that \(\eta\) is small enough so that for each \(Q\in\mathcal{R}\), \(\mathbf{Q}\in\mathcal{R}_{\eta}^{d}\). Let \(\theta\) be a root of \[P_{\mathbf{Q}}(t)=\prod_{i=1}^{d}(t-Qa_{i})-1\] Let \(\mathfrak{x}_{\mathbf{Q}}=\boldsymbol{\sigma}(\Lambda_{\mathbf{Q}})\) be as in the notation of §2.3. Notice that (2.47) and (2.48) imply that, in the notation of §2.3, \[\forall i\neq j,\;|\theta_{i}-Qa_{j}|\asymp_{\eta}\|\mathbf{Q}\| \tag{3.1}\] and \[|\theta_{i}-Qa_{i}|\asymp_{\eta}\|\mathbf{Q}\|^{-n} \tag{3.2}\] Thus, (3.1) and (3.2) along with (2.53) imply together that \[\operatorname{covol}(\mathfrak{x}_{\mathbf{Q}})\asymp_{\eta}\|\mathbf{Q}\|^{\binom{d}{2}} \tag{3.3}\] Let \(\theta=\theta_{\mathbf{Q}}\) be a root of \(P_{\mathbf{Q}}\) and let \(\omega_{i}=\theta-Qa_{i}\). Since \(P_{\mathbf{Q}}(\theta)=0\), then \(\prod_{i=1}^{d}\omega_{i}=1\), and therefore, the \(\omega_{i}\) are units in \(\mathcal{O}_{\mathbb{F}_{\mathbf{Q}}}\). Denote \(u_{1}=1\) and for \(j=2,\ldots,d\), denote \(u_{j}=\omega_{1}\cdots\omega_{j-1}\). Since \(\theta^{k}\) is a linear combination of \(u_{1},\ldots,u_{k+1}\) with coefficients in \(\mathcal{R}\), the set \(\{u_{1},\ldots,u_{d}\}\) is an \(\mathcal{R}\) basis for \(\Lambda_{\mathbf{Q}}=\mathcal{R}[\theta]\). Notice that by (2.64), (3.2) and (3.1), for each \(j=1,\ldots,d\), \[|\sigma_{i}(u_{j})|\asymp_{\eta}\begin{cases}\|\mathbf{Q}\|^{j-d-1}&i\leq j-1\\ \|\mathbf{Q}\|^{j-1}&i\geq j\end{cases} \tag{3.4}\] Denote \(\sigma_{i}(u_{j})=u_{ij}\) and let \(M_{\mathbf{Q}}=(u_{ij})_{i,j}\) be as in (2.63). Then, by (3.4), (2.63), and the equality case of the ultrametric inequality, \[\begin{split}|\det(M_{\mathbf{Q}})|=\left|\sum_{\tau\in\mathcal{P}_{d}}(-1)^{\operatorname{sgn}(\tau)}\prod_{j=1}^{d}\sigma_{j}(u_{\tau(j)})\right|=\prod_{j=1}^{d}|\sigma_{j}(u_{j})|\\ =\prod_{i=1}^{d}\left|\sum_{j=1}^{d}\sigma_{i}(u_{j})\right|\asymp_{\eta}\|\mathbf{Q}\|^{\binom{d}{2}}\end{split} \tag{3.5}\] By Theorem 1.12 and upper-semicontinuity of \(\mu\), for \(\|\mathbf{Q}\|\) large enough, \(\mu(\mathfrak{x}_{\mathbf{Q}})\leq q^{-d}\). Therefore, to prove Theorem 1.9, it suffices to show that for each \(\|\mathbf{Q}\|\) large enough, \(\mu(\mathfrak{x}_{\mathbf{Q}})\geq q^{-d}\). To prove that \(\mu(\mathfrak{x}_{\mathbf{Q}})\geq q^{-d}\) for sufficiently large \(\|\mathbf{Q}\|\), we follow the proof of the main result of [10]. Let \(\Theta_{\mathbf{Q}}:=(\theta_{i}^{j-1})_{i,j}\) and let \(y_{\mathbf{Q}}:=\Theta_{\mathbf{Q}}\left(\mathcal{R}^{d}+\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix}\right)\). To prove Theorem 1.9, it suffices to prove the following theorem.
**Theorem 3.1**.: _For every \(\mathbf{v}\in y_{\mathbf{Q}}\), \(N(\mathbf{v})=\prod_{i=1}^{d}|v_{i}|\geq q^{-d}|\det(M_{\mathbf{Q}})|\), for all but finitely many \(\mathbf{Q}\)._ Proof of Theorem 1.9.: By definition, \[\mu(\mathfrak{x}_{\mathbf{Q}})=\frac{1}{|\det(M_{\mathbf{Q}})|}\sup_{y\in\pi^{-1}(\mathfrak{x}_{\mathbf{Q}})}\inf_{\mathbf{v}\in y}N(\mathbf{v}) \tag{3.6}\] By Theorem 3.1, for all but finitely many \(\mathbf{Q}\), \(\inf_{\mathbf{v}\in y_{\mathbf{Q}}}N(\mathbf{v})\geq q^{-d}|\det(M_{\mathbf{Q}})|\), and therefore, (3.6) is greater than or equal to \(q^{-d}\). On the other hand, by Theorem 1.12 and upper semicontinuity of \(\mu\), \(\mu(\mathfrak{x}_{\mathbf{Q}})\leq q^{-d}\), and hence \(\mu(\mathfrak{x}_{\mathbf{Q}})=q^{-d}\). **Lemma 3.2**.: \[y_{\mathbf{Q}}=M_{\mathbf{Q}}\left(\mathcal{R}^{d}+\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix}\right)\] Proof.: Notice that \(u_{1}=1\) and for each \(j=2,\ldots,d\), \[u_{j}=\prod_{l=1}^{j-1}(\theta-Qa_{l})=\theta^{j-1}+c_{j,j-2}\theta^{j-2}+\cdots+c_{j,1}\theta+c_{j,0}\] where the \(c_{j,l}\) are sums of products of \(Qa_{1},\ldots,Qa_{j-1}\). Since \(a_{l}\equiv 0\mod x^{2}\), then \(c_{j,l}\equiv 0\mod x^{2}\). Therefore, the change of basis matrix between the \(\mathcal{R}\) basis \(\{u_{1},\ldots,u_{d}\}\) of \(\Lambda_{\mathbf{Q}}\) and \(\{1,\theta,\ldots,\theta^{n}\}\) is given by \[P=\begin{pmatrix}1&c_{2,0}&\ldots&c_{d,0}\\ 0&1&\ldots&c_{d,1}\\ \vdots&\ldots&\ddots&\vdots\\ 0&\ldots&\ldots&1\end{pmatrix} \tag{3.7}\] Since \(P\) is upper triangular with \(1\)'s on the diagonal and entries in \(x^{2}\mathcal{R}\) above the diagonal, \(P^{-1}\) is of the same form as well. Hence, \[P^{-1}\left(\mathcal{R}^{d}+\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix}\right)\subseteq\mathcal{R}^{d}+\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix}\] Hence, \[M_{\mathbf{Q}}\left(\mathcal{R}^{d}+\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix}\right)=P\Theta_{\mathbf{Q}}P^{-1}\left(\mathcal{R}^{d}+\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix}\right)\subseteq\Theta_{\mathbf{Q}}\mathcal{R}^{d}+P\Theta_{\mathbf{Q}}\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix} \tag{3.8}\] Notice that if \(\xi=\sum_{i=0}^{n}\frac{1}{x}\theta^{i}\) then, since \(c_{j,l}\equiv 0\mod x^{2}\), there exist some \(b_{0},\ldots,b_{n}\in\mathcal{R}\) such that \(P\boldsymbol{\sigma}(\xi)=\sum_{i=0}^{n}\left(b_{i}+\frac{1}{x}\right)\boldsymbol{\sigma}(\theta^{i})\in\Theta_{\mathbf{Q}}\left(\mathcal{R}^{d}+\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix}\right)\). Hence, the right hand side of (3.8) is contained in \(y_{\mathbf{Q}}\). On the other hand, by doing the same procedure and writing \(\Theta_{\mathbf{Q}}=P^{-1}M_{\mathbf{Q}}P\), we obtain that \(y_{\mathbf{Q}}=M_{\mathbf{Q}}\left(\mathcal{R}^{d}+\begin{pmatrix}\frac{1}{x}\\ \vdots\\ \frac{1}{x}\end{pmatrix}\right)\). Let \(\boldsymbol{T}=M_{\mathbf{Q}}^{-1}\). We will need the following bound on the entries of \(\boldsymbol{T}\), which can be viewed as an analogue of Lemma 2 in [10]. **Lemma 3.3**.: _Write \(\boldsymbol{T}=(T_{ij})\). Then, \(T_{ij}=O(\|\mathbf{Q}\|^{1-i})\) and \(T_{ij}=O(\|\mathbf{Q}\|^{1-j})\)._ Proof.: We use the estimate for the adjugate matrix.
Since \(\operatorname{adj}(M_{\mathbf{Q}})M_{\mathbf{Q}}=\det(M_{\mathbf{Q}})I\), then \[\boldsymbol{T}=\operatorname{adj}(M_{\mathbf{Q}})\det(M_{\mathbf{Q}})^{-1} \tag{3.9}\] By the definition of the adjugate matrix, \(\operatorname{adj}(M_{\mathbf{Q}})_{ij}\) is given by the determinant of the matrix obtained by removing the \(i\)-th row and the \(j\)-th column from (2.63). Notice that (2.68) implies that \[\begin{split}\left(\operatorname{adj}(M_{\mathbf{Q}})\right)_{ij}&\ll\prod_{l\neq j}\max_{k}|(M_{\mathbf{Q}})_{kl}|=\prod_{l\neq j}|\sigma_{l}(u_{l})|\\ &\ll\prod_{l\neq j}\|\mathbf{Q}\|^{l-1}=\|\mathbf{Q}\|^{\binom{d}{2}-(j-1)}\end{split} \tag{3.10}\] and similarly, \[\left(\operatorname{adj}(M_{\mathbf{Q}})\right)_{ij}\ll\prod_{l\neq i}\|\mathbf{Q}\|^{l-1}=\|\mathbf{Q}\|^{\binom{d}{2}-(i-1)} \tag{3.11}\] Hence, by (3.10), \[|T_{ij}|=|(M_{\mathbf{Q}}^{-1})_{ij}|=|\det(M_{\mathbf{Q}})|^{-1}|\operatorname{adj}(M_{\mathbf{Q}})_{ij}|\ll\|\mathbf{Q}\|^{1-j}\] and similarly, by (3.11), \[|T_{ij}|\ll\|\mathbf{Q}\|^{1-i}\] From Lemma 3.3, we obtain that every \(\mathbf{v}\in y_{\mathbf{Q}}\) avoids a certain box. **Lemma 3.4**.: _There exists some \(c>0\) such that for each \(\mathbf{v}\in y_{\mathbf{Q}}\),_ \[\mathbf{v}\notin B(0,c\|\mathbf{Q}\|^{n})\times\cdots\times B(0,c\|\mathbf{Q}\|^{n})\] Proof.: Firstly, write \(\mathbf{v}=\sum_{i=1}^{d}\beta_{i}\boldsymbol{\sigma}(u_{i})\) where \(\beta_{i}\equiv\frac{1}{x}\mod\mathcal{R}\). Thus, for each \(j\), \(|\beta_{j}|\geq\left|\frac{1}{x}\right|=\frac{1}{q}\). Notice that by Lemma 3.3, \[\frac{1}{q}\leq|\beta_{d}|=\left|\sum_{j=1}^{d}T_{dj}v_{j}\right|\leq\max_{j}|v_{j}|\cdot|T_{dj}|\ll\|\mathbf{Q}\|^{-n}\max_{j}|v_{j}| \tag{3.12}\] Thus, there exists some \(c>0\) such that \(\max_{j}|v_{j}|>c\|\mathbf{Q}\|^{n}\), and hence the claim follows. We now show that the matrices \(\operatorname{diag}(\boldsymbol{\sigma}(\omega_{i}))\) stabilize \(y_{\mathbf{Q}}\). **Lemma 3.5**.: _For every \(i=1,\ldots,d\), \(\operatorname{diag}(\boldsymbol{\sigma}(\omega_{i}))\in\operatorname{stab}_{A_{1}}(y_{\mathbf{Q}})\)._ Proof.: We shall show that the \(\omega_{j}\) preserve the grid \[\iota_{\mathbf{Q}}:=\left\{\,\sum_{i=1}^{d}\left(b_{i}+\frac{1}{x}\right)\theta^{i-1}:b_{i}\in\mathcal{R}\right\}\] This will imply that \(\operatorname{diag}\left(\boldsymbol{\sigma}(\omega_{j})\right)\) preserves the grid \(y_{\mathbf{Q}}=\boldsymbol{\sigma}(\iota_{\mathbf{Q}})\), so that \(\operatorname{diag}(\boldsymbol{\sigma}(\omega_{j}))\mathbf{v}\in y_{\mathbf{Q}}\) for every \(\mathbf{v}\in y_{\mathbf{Q}}\). If \(\xi=\sum_{i=0}^{n}\alpha_{i}\theta^{i}\in\iota_{\mathbf{Q}}\) then, for each \(j\), \[(\theta-Qa_{j})\sum_{i=0}^{n}\alpha_{i}\theta^{i}=\alpha_{n}\theta^{d}+\sum_{i=1}^{n}\theta^{i}(\alpha_{i-1}-Qa_{j}\alpha_{i})-Qa_{j}\alpha_{0} \tag{3.13}\] Due to (2.44), \(\theta^{d}=c_{0}+c_{1}\theta+\cdots+c_{n}\theta^{n}\), where the \(c_{i}\) are composed of sums of products of the polynomials \(Qa_{i}\) and \(c_{0}=1+Q^{d}a_{1}\cdots a_{d}\). In particular, \(c_{i}\equiv 0\mod x^{2}\) for each \(i=1,\ldots,n\) and \(c_{0}\equiv 1\mod x^{2}\). Hence, \[(\theta-Qa_{j})\sum_{i=0}^{n}\alpha_{i}\theta^{i}=\sum_{i=1}^{n}\theta^{i}(\alpha_{i-1}-Qa_{j}\alpha_{i}+c_{i}\alpha_{n})+(c_{0}\alpha_{n}-Qa_{j}\alpha_{0}) \tag{3.14}\] Since \(\alpha_{i}\equiv\frac{1}{x}\mod\mathcal{R}\), \(a_{j}\equiv 0\mod x^{2}\), and \(c_{i}\equiv 0\mod x^{2}\), then \(Qa_{j}\alpha_{i}\equiv 0\mod\mathcal{R}\) and \(c_{i}\alpha_{n}\equiv 0\mod\mathcal{R}\) for each \(i=0,\ldots,n\).
Therefore, for each \(i=1,\ldots,n\), \[\alpha_{i-1}-Qa_{j}\alpha_{i}+c_{i}\alpha_{n}\equiv\alpha_{i-1}\mod\mathcal{R}\equiv\frac{1}{x}\mod\mathcal{R}\] In addition, since \(c_{0}=1+Q^{d}a_{1}\cdots a_{d}\), then \[c_{0}\alpha_{n}-Qa_{j}\alpha_{0}\equiv\alpha_{n}\mod\mathcal{R}\equiv\frac{1}{x}\mod\mathcal{R}\] Hence, the units \(\omega_{i}\) preserve the grid \(\iota_{\mathbf{Q}}\), so that \(\operatorname{diag}\left(\boldsymbol{\sigma}(\omega_{i})\right)\in\operatorname{stab}_{A_{1}}(y_{\mathbf{Q}})\). To conclude the proof of Theorem 1.9, we shall use the following proposition. **Proposition 3.6**.: _For every \(\varepsilon<\frac{1}{2}\), there exists some \(C^{\prime}>0\) such that for all but finitely many \(\mathbf{Q}\), and for each \(\mathbf{v}\in y_{\mathbf{Q}}\) such that_ \[\prod_{i=1}^{d}|v_{i}|\leq q^{-d}|\det(M_{\mathbf{Q}})| \tag{3.15}\] _there exist some \(b_{1},\ldots,b_{n}\in\mathbb{Z}\) and a permutation \(\tau\in\mathcal{P}_{d}\) such that the vector_ \[\mathbf{v}^{\prime}=\operatorname{diag}(\boldsymbol{\sigma}(\omega_{1}))^{b_{1}}\cdots\operatorname{diag}(\boldsymbol{\sigma}(\omega_{n}))^{b_{n}}\mathbf{v}\] _satisfies_ \[\mathbf{v}^{\prime}\in B\left(0,C^{\prime}\|\mathbf{Q}\|^{\tau(1)-\varepsilon}\right)\times B\left(0,C^{\prime}\|\mathbf{Q}\|^{\tau(2)-\varepsilon}\right)\times\cdots\times B\left(0,C^{\prime}\|\mathbf{Q}\|^{\tau(d)-\varepsilon}\right) \tag{3.16}\] In addition, we shall use the following lemma. **Lemma 3.7**.: _There exists some \(C^{\prime}>0\) such that for all but finitely many \(\mathbf{Q}\), if \(\mathbf{v}\) satisfies (3.16) for some \(\tau\in\mathcal{P}_{d}\) and some \(\varepsilon\in(0,1)\), then_ \[\prod_{i=1}^{d}|v_{i}|\geq q^{-d}|\det(M_{\mathbf{Q}})|(1+o(1)) \tag{3.17}\] Proof.: Without loss of generality, assume that \(\tau=Id\) and write \(\mathbf{v}=\sum_{i=1}^{d}\beta_{i}\boldsymbol{\sigma}(u_{i})\). By Lemma 3.3 and (3.16), \[|\beta_{i}|=\left|\sum_{j=1}^{d}T_{ij}v_{j}\right|\leq\max_{j=1,\ldots,d}|T_{ij}|\cdot|v_{j}|\ll\max_{j=1,\ldots,d}\|\mathbf{Q}\|^{1-j+j-\varepsilon}=O(\|\mathbf{Q}\|^{1-\varepsilon}) \tag{3.18}\] Thus, by (3.4) and (3.18), \[\begin{split}&|\beta_{i}\sigma_{i}(u_{i})|-|v_{i}|\leq|v_{i}-\beta_{i}\sigma_{i}(u_{i})|=\left|\sum_{j\neq i}\beta_{j}\sigma_{i}(u_{j})\right|\\ \leq&\max_{j\neq i}|\beta_{j}|\cdot|\sigma_{i}(u_{j})|\ll\|\mathbf{Q}\|^{1-\varepsilon}\|\mathbf{Q}\|^{i-2}=O(\|\mathbf{Q}\|^{i-1-\varepsilon})\end{split} \tag{3.19}\] Thus, (3.4), (3.18), (3.19) and the fact that \(|\beta_{i}|\geq\frac{1}{q}\) imply together that \[\begin{split}|v_{i}|&\geq|\beta_{i}\sigma_{i}(u_{i})|-O(\|\mathbf{Q}\|^{i-1-\varepsilon})\\ &\geq\frac{1}{q}|\sigma_{i}(u_{i})|-O(\|\mathbf{Q}\|^{i-1-\varepsilon})\\ &\qquad=\frac{1}{q}|\sigma_{i}(u_{i})|(1+o(1))\end{split} \tag{3.20}\] Hence, (3.20) and (3.5) imply that \[\prod_{i=1}^{d}|v_{i}|\geq(1+o(1))\prod_{i=1}^{d}\frac{1}{q}|\sigma_{i}(u_{i})|=\frac{1}{q^{d}}|\det(M_{\mathbf{Q}})|\left(1+o(1)\right) \tag{3.21}\] which implies the claim. Proof of Theorem 3.1.: Let \(\mathbf{v}\in y_{\mathbf{Q}}\). If \[\prod_{i=1}^{d}|v_{i}|>\frac{1}{q^{d}}|\det(M_{\mathbf{Q}})| \tag{3.22}\] then there is nothing to check. Hence, we shall assume that \(\mathbf{v}\) satisfies (3.15).
By Proposition 3.6, there exist some \(b_{1},\ldots,b_{n}\in\mathbb{Z}\) such that \[\mathbf{v}^{\prime}=\operatorname{diag}(\boldsymbol{\sigma}(\omega_{1}))^{b_{1}}\cdots\operatorname{diag}(\boldsymbol{\sigma}(\omega_{n}))^{b_{n}}\mathbf{v}\] satisfies (3.16) for some \(\tau\in\mathcal{P}_{d}\). Hence, Lemma 3.5 and Lemma 3.7 imply together that \[N(\mathbf{v})=N(\mathbf{v}^{\prime})\geq q^{-d}|\det(M_{\mathbf{Q}})|(1+o(1))\] Thus, by discreteness of the value set of \(N\) around non-zero points, for all but finitely many \(\mathbf{Q}\) and for every \(\mathbf{v}\in y_{\mathbf{Q}}\), \(N(\mathbf{v})\geq q^{-d}|\det(M_{\mathbf{Q}})|\). Hence, to conclude the proof of Theorem 3.1, it suffices to prove Proposition 3.6. ### Proof of Proposition 3.6 Let \(\mathbf{v}\in y_{\mathbf{Q}}\) be such that \(\mathbf{v}\) satisfies (3.15). Then, there exist \(t\in\{0,1,\ldots,n\}\) and \(s\in\mathbb{Z}\) such that \(|v_{1}\cdots v_{d}|=q^{t}\cdot q^{ds}\). Thus, there exist a diagonal matrix \(\mathbf{g}=\operatorname{diag}(x^{t},1,\ldots,1)\) and a vector \(\mathbf{v}^{(0)}\) which satisfies \(\prod_{i=1}^{d}\left|\mathbf{v}_{i}^{(0)}\right|=1\), such that \[\mathbf{v}=x^{s}\mathbf{g}\mathbf{v}^{(0)}=x^{s}\begin{pmatrix}x^{t}&&\\ &1&&\\ &&\ddots&\\ &&&1\end{pmatrix}\mathbf{v}^{(0)} \tag{3.23}\] Since \(\mathbf{v}\) satisfies (3.15), \[q^{sd+t}=N(\mathbf{v})=\prod_{i=1}^{d}|v_{i}|\leq\frac{1}{q^{d}}|\det(M_{\mathbf{Q}})| \tag{3.24}\] **Theorem 3.8**.: _Let \(b_{1},\ldots,b_{d}\in\mathbb{Z}\) and define_ \[\tilde{\mathbf{v}}^{(0)}=\left(\prod_{j=1}^{d}\mathrm{diag}(\boldsymbol{\sigma}(\omega_{j}))^{b_{j}}\right)\mathbf{v}^{(0)}\] _Then,_ 1. \(\prod_{i=1}^{d}\left|\tilde{v}_{i}^{(0)}\right|=\prod_{i=1}^{d}\left|v_{i}^{(0)}\right|=1\)__ 2. _The vector_ \(\tilde{\mathbf{v}}^{(0)}\) _arises from some vector_ \(\tilde{\mathbf{v}}\) _through (3.23), where_ \(\tilde{\mathbf{v}}\in y_{\mathbf{Q}}\)_._ Proof.: Since the \(\omega_{j}\) are units in \(\mathcal{O}_{\mathbb{F}_{\mathbf{Q}}}\), then \(\mathrm{diag}(\boldsymbol{\sigma}(\omega_{j}))\in A_{1}\) for each \(j\). Hence, \[\prod_{i=1}^{d}\left|\tilde{v}_{i}^{(0)}\right|=\prod_{i=1}^{d}\left|v_{i}^{(0)}\right|\cdot\prod_{j=1}^{d}\prod_{i=1}^{d}|\theta_{i}-Qa_{j}|^{b_{j}}=\prod_{i=1}^{d}\left|v_{i}^{(0)}\right|=1 \tag{3.25}\] Define \(\tilde{\mathbf{v}}=\mathrm{diag}\left(\boldsymbol{\sigma}\left(\prod_{j=1}^{d}\omega_{j}^{b_{j}}\right)\right)\mathbf{v}\). Then, by Lemma 3.5, \(\tilde{\mathbf{v}}\in y_{\mathbf{Q}}\). Moreover, since diagonal matrices commute, \[\tilde{\mathbf{v}}=\mathrm{diag}\left(\prod_{j=1}^{d}\boldsymbol{\sigma}(\omega_{j})^{b_{j}}\right)\mathbf{v}=x^{s}\mathbf{g}\left(\prod_{j=1}^{d}\mathrm{diag}(\boldsymbol{\sigma}(\omega_{j}))^{b_{j}}\right)\mathbf{v}^{(0)}=x^{s}\mathbf{g}\tilde{\mathbf{v}}^{(0)} \tag{3.26}\] We shall now reinterpret Proposition 2.2 for real simplex sets very close to the standard simplex set. Let \(\Phi_{*}\) be the standard simplex set (see (2.28)) and let \(\psi_{*}=\rho(\Phi_{*})\subseteq\mathbb{R}_{0}^{d}\). Then we can reinterpret Proposition 2.2, or equivalently the corollary to Lemma 2 in [10], in the following manner. **Lemma 3.9**.: _For any \(\varepsilon>0\), there exists \(\delta>0\) such that if \(\psi\subseteq\mathbb{R}_{0}^{d}\) is a simplex set satisfying \(\psi=T\psi_{*}\), where \(T\in\mathrm{GL}_{d}(\mathbb{R})\) satisfies \(\|T-I\|<\delta\), then:_ 1. \(\langle\psi\rangle+\left\{\mathbf{u}\in\mathbb{R}_{0}^{d}:\left\lceil\mathbf{u}\right\rceil_{\mathbb{R}_{0}^{d}}\leq\frac{n}{2}(1+\varepsilon)\right\}=\mathbb{R}_{0}^{d}\)__ 2. 
\(\mathbb{R}_{0}^{d}\setminus\left(\langle\psi\rangle+\left\{\mathbf{u}\in\mathbb{R}_{0}^{d}:\left\lceil\mathbf{u}\right\rceil\leq\frac{n}{2}(1-\delta)\right\}\right)\subseteq B_{\varepsilon}(W_{*})+\langle\psi\rangle\)__ _where \(W_{*}\) is a set of \(n!\) vectors obtained by permuting the coordinates of_ \[\mathbf{w}=\frac{1}{d}\sum_{l=1}^{n}(l-1)\rho(\mathbf{t}_{l})=\begin{pmatrix}\frac{n}{2}\\ \frac{n}{2}-1\\ \vdots\\ -\frac{n}{2}\end{pmatrix} \tag{3.27}\] _Remark 3.10_.: Lemma 3.9 holds, since as \(\psi\to\psi_{*}\), \(W_{\psi}\to W_{*}\) and \(S_{\psi}\to S_{*}\). Proof of Proposition 3.6.: Let \(\varepsilon<\frac{1}{2}\). Let \(\mathbf{v}\in y_{\mathbf{Q}}\) satisfy (3.24). Then by (3.23), there exist \(s\in\mathbb{Z}\) and \(\mathbf{v}^{(0)}\) with \(\prod_{i=1}^{d}\left|\mathbf{v}_{i}^{(0)}\right|=1\) such that \(\mathbf{v}=x^{s}\mathbf{g}\mathbf{v}^{(0)}\). Furthermore, by Lemma 3.4, \[q^{s+n}\max_{i=1,\ldots,d}\left|\mathbf{v}_{i}^{(0)}\right|\geq q^{s}\max_{i=1,\ldots,d}|\mathbf{g}_{i}|\cdot\left|\mathbf{v}_{i}^{(0)}\right|=\max_{i=1,\ldots,d}\left|v_{i}\right|\gg\|\mathbf{Q}\|^{n} \tag{3.28}\] Due to (3.24) and (3.5), \[q^{-s}\geq q^{\frac{t}{d}+1}|\det(M_{\mathbf{Q}})|^{-\frac{1}{d}}\geq q|\det(M_{\mathbf{Q}})|^{-\frac{1}{d}}\gg\|\mathbf{Q}\|^{-\frac{n}{2}} \tag{3.29}\] Thus, (3.28) and (3.29) imply that \[\max_{i=1,\ldots,d}\left|\mathbf{v}_{i}^{(0)}\right|\gg\|\mathbf{Q}\|^{\frac{n}{2}} \tag{3.30}\] By (2.50) and (2.51), the simplex set \(\psi_{\mathbf{Q}}=\left\{\frac{1}{\log\|\mathbf{Q}\|}\rho\left(\boldsymbol{\sigma}(\omega_{1})\right),\ldots,\frac{1}{\log\|\mathbf{Q}\|}\rho\left(\boldsymbol{\sigma}(\omega_{d})\right)\right\}\) converges to the simplex set \(\psi_{*}\). Thus, by Lemma 3.9(1), for all but finitely many \(\mathbf{Q}\), by multiplying \(\mathbf{v}^{(0)}\) by some unit \(\prod_{i=1}^{d}\boldsymbol{\sigma}(\omega_{i})^{b_{i}}\), we can assume that \(\left\lceil\frac{1}{\log\|\mathbf{Q}\|}\rho\left(\mathbf{v}^{(0)}\right)\right\rceil\leq\frac{n}{2}+\varepsilon\). On the other hand, for all but finitely many \(\mathbf{Q}\), (3.30) implies that \(\frac{1}{\log\|\mathbf{Q}\|}\rho\left(\mathbf{v}^{(0)}\right)\) belongs to the left hand side of Lemma 3.9(2) (where \(\delta\) is the corresponding number from Lemma 3.9 for our fixed \(\varepsilon\)). Hence, by Lemma 3.9(2), there exist a unit \(\mathbf{b}=\prod_{i=1}^{d}\operatorname{diag}(\boldsymbol{\sigma}(\omega_{i}))^{d_{i}}\), where \(d_{i}\in\mathbb{Z}\), and a permutation \(\tau\in\mathcal{P}_{d}\), such that \[\left\lceil\frac{1}{\log\|\mathbf{Q}\|}\rho\left(\mathbf{v}^{(0)}\right)+\frac{1}{\log\|\mathbf{Q}\|}\rho(\mathbf{b})-\tau(\mathbf{w})\right\rceil<\varepsilon \tag{3.31}\] Since the \(\operatorname{diag}(\boldsymbol{\sigma}(\omega_{i}))\) preserve \(y_{\mathbf{Q}}\) for each \(i\), we can assume for simplicity that \(\mathbf{b}=Id\) and that \(\tau=Id\).
Then, (3.31) implies that \[\left\lceil\rho\left(\mathbf{v}^{(0)}\right)-\begin{pmatrix}\frac{n}{2}\\ \frac{n}{2}-1\\ \vdots\\ -\frac{n}{2}\end{pmatrix}\log\|\mathbf{Q}\|\right\rceil<\varepsilon\log\|\mathbf{Q}\| \tag{3.32}\] Hence, for each \(i=1,\ldots,d\), \[\rho\left(v_{i}^{(0)}\right)\leq\left(\frac{n}{2}-i+\varepsilon\right)\log\|\mathbf{Q}\| \tag{3.33}\] Therefore, \[\left|v_{i}^{(0)}\right|\leq\|\mathbf{Q}\|^{\frac{n}{2}-i+\varepsilon} \tag{3.34}\] Thus, (3.29) and (3.34) imply that \[|v_{i}|\leq q^{s+n}\left|v_{i}^{(0)}\right|\ll\|\mathbf{Q}\|^{n-i+\varepsilon} \tag{3.35}\] Define \(\tau^{\prime}(i):=\begin{cases}d-i&i=1,\ldots,n\\ d&i=d\end{cases}\). Then, since \(\varepsilon\leq\frac{1}{2}\), \(n-i+\varepsilon\leq\tau^{\prime}(i)-\varepsilon\). Thus, by (3.35), \[|v_{i}|\ll\|\mathbf{Q}\|^{\tau^{\prime}(i)-\varepsilon} \tag{3.36}\] This shows that for each \(\|\mathbf{Q}\|\) large enough and for each \(\mathbf{v}\) satisfying (3.24), there exists a diagonal matrix \(\prod_{i=1}^{d}\operatorname{diag}(\boldsymbol{\sigma}(\omega_{i}))^{m_{i}}\) such that \(\mathbf{u}=\prod_{i=1}^{d}\operatorname{diag}(\boldsymbol{\sigma}(\omega_{i}))^{m_{i}}\mathbf{v}\) satisfies (3.16) with \(\varepsilon\). This concludes the proof of Proposition 3.6, and hence the proof of Theorem 1.9.
2302.03121
Value Distributions of Perfect Nonlinear Functions
In this paper, we study the value distributions of perfect nonlinear functions, i.e., we investigate the sizes of image and preimage sets. Using purely combinatorial tools, we develop a framework that deals with perfect nonlinear functions in the most general setting, generalizing several results that were achieved under specific constraints. For the particularly interesting elementary abelian case, we derive several new strong conditions and classification results on the value distributions. Moreover, we show that most of the classical constructions of perfect nonlinear functions have very specific value distributions, in the sense that they are almost balanced. Consequently, we completely determine the possible value distributions of vectorial Boolean bent functions with output dimension at most 4. Finally, using the discrete Fourier transform, we show that in some cases value distributions can be used to determine whether a given function is perfect nonlinear, or to decide whether given perfect nonlinear functions are equivalent.
Lukas Kölsch, Alexandr Polujan
2023-02-06T21:03:57Z
http://arxiv.org/abs/2302.03121v2
# Value distributions of perfect nonlinear functions ###### Abstract In this paper, we study the value distributions of perfect nonlinear functions, i.e., we investigate the sizes of image and preimage sets. Using purely combinatorial tools, we develop a framework that deals with perfect nonlinear functions in the most general setting, generalizing several results that were achieved under specific constraints. For the particularly interesting elementary abelian case, we derive several new strong conditions and classification results on the value distributions. Moreover, we show that most of the classical constructions of perfect nonlinear functions have very specific value distributions, in the sense that they are almost balanced. Consequently, we completely determine the possible value distributions of vectorial Boolean bent functions with output dimension at most \(4\). Finally, using the discrete Fourier transform, we show that in some cases value distributions can be used to determine whether a given function is perfect nonlinear, or to decide whether given perfect nonlinear functions are equivalent. **Keywords:** Perfect nonlinear function, bent function, planar function, image sets, value distribution. ## 1 Introduction Let \(G\) and \(H\) be two additively written finite groups. A mapping \(L\colon G\to H\) is called a _homomorphism_ if \(L(x+a)-L(x)=L(a)\) for all \(x,a\in G\). Homomorphisms \(L\colon G\to H\) are essentially _linear mappings_ between the finite groups \(G\) and \(H\), which can be equivalently characterized by the property \[|\{x\in G\colon L(x+a)-L(x)=b\}|\in\{0,|G|\}.\] In this article, we consider functions \(F\colon G\to H\) which are as far as possible from all homomorphisms; such functions can be introduced with the notion of perfect nonlinearity as follows [26]. A function \(F\colon G\to H\) is said to be _perfect nonlinear_ (or simply _bent_) if \[|\{x\in G\colon F(x+a)-F(x)=b\}|=\frac{|G|}{|H|}\quad\text{holds for all $a\in G \setminus\{0\}$ and $b\in H$}.\] In general, the terms "perfect nonlinear" and "bent" are considered to be synonymous. However, in this paper, we will use the term "bent" for mappings between two elementary abelian groups. Bent functions considered in this setting play a very important role in finite geometry (they give rise to commutative semifields [12]), combinatorics (one can use them to construct skew Hadamard difference sets [10]), and applications due to their rich connections to coding theory and cryptography [2, 20]. ### Preliminaries Let \(G\) and \(H\) be two finite groups and let \(F\colon G\to H\) be a function. For an element \(\beta\in H\), we denote by \(F^{-1}(\beta)\) the _preimage set_ of \(\beta\). By _value distribution_ of the function \(F\colon G\to H\) we understand the multiset \(\{*\,|F^{-1}(\beta)|\colon\beta\in H*\}\). We use no special notation for this multiset, since determining the value distribution just boils down to determining the sizes of all preimages. In the following, we will frequently consider functions \(F\colon G\to H\), where \(G\) and \(H\) are two elementary abelian groups. In this case, we use the notation \(G=\mathbb{F}_{p}^{n}\) and \(H=\mathbb{F}_{p}^{m}\), where \(\mathbb{F}_{p}\) is the finite field with \(p\) elements and \(\mathbb{F}_{p}^{n}\) is the vector space of dimension \(n\) over the prime field \(\mathbb{F}_{p}\). 
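The defining condition of perfect nonlinearity from the Introduction is easy to test exhaustively on small groups. The following minimal Python sketch (an illustration only; the choice \(G=H=\mathbb{Z}_{7}\) and \(F(x)=x^{2}\) is made here, so \(|G|/|H|=1\)) confirms that squaring is perfect nonlinear on \(\mathbb{Z}_{p}\) for an odd prime \(p\):

```python
# Brute-force check of perfect nonlinearity for F(x) = x^2 on Z_p, p odd.
# Here |G| = |H| = p, so every difference equation must have exactly one solution.
p = 7
G = range(p)

def F(x):
    return (x * x) % p

is_pn = all(
    sum(1 for x in G if (F((x + a) % p) - F(x)) % p == b) == 1
    for a in range(1, p) for b in G
)
print(is_pn)   # True: F(x + a) - F(x) = 2ax + a^2 is a bijection in x for a != 0
```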
For \(x=(x_{1},\ldots,x_{n}),y=(y_{1},\ldots,y_{n})\in\mathbb{F}_{p}^{n}\), we define the scalar product of \(\mathbb{F}_{p}^{n}\) by \(\langle x,y\rangle_{n}=x_{1}y_{1}+\cdots+x_{n}y_{n}\). If necessary, we endow the vector space \(\mathbb{F}_{p}^{n}\) with the structure of the finite field \(\mathbb{F}_{p^{n}}\); in this case, we define the scalar product of \(\mathbb{F}_{p^{n}}\) by \(\langle x,y\rangle_{n}=\operatorname{Tr}(xy)\), where \(\operatorname{Tr}(z):=\operatorname{Tr}_{1}^{n}(z)\) is the absolute trace and \(\operatorname{Tr}_{m}^{n}(z)=\sum_{i=0}^{\frac{n}{m}-1}z^{p^{im}}\) is the relative trace of \(z\in\mathbb{F}_{p^{n}}\) from \(\mathbb{F}_{p^{n}}\) into the subfield \(\mathbb{F}_{p^{m}}\). If \(n=2k\) is even, the vector space \(\mathbb{F}_{p}^{n}\) can be identified with \(\mathbb{F}_{p^{k}}\times\mathbb{F}_{p^{k}}\); in this case, we define the scalar product \(\langle\left(u_{1},u_{2}\right),\left(v_{1},v_{2}\right)\rangle_{n}=\operatorname{Tr}_{1}^{k}\left(u_{1}v_{1}+u_{2}v_{2}\right)\). For an odd prime \(p\), the mappings \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) are called _\(p\)-ary functions_, and for \(p=2\), _Boolean functions_. For \(m\geq 2\), the mappings \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) are called _vectorial functions_. Any vectorial function \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) can be uniquely described by _m coordinate functions_ \(f_{i}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) for \(1\leq i\leq m\) as a vector \(F(x):=(f_{1}(x),\ldots,f_{m}(x))\). For \(b\in\mathbb{F}_{p}^{m}\), the function \(F_{b}(x):=\langle b,F(x)\rangle_{m}\) is called a _component function_ of \(F\). Vectorial and \(p\)-ary functions can also be represented with the help of multivariate polynomials in the ring \(\mathbb{F}_{p}[x_{1},\ldots,x_{n}]/(x_{1}-x_{1}^{p},\ldots,x_{n}-x_{n}^{p})\). This representation is unique and is called the _algebraic normal form_ (_ANF_, for short); namely, for \(p\)-ary functions \(f\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) it is formally defined as \(f(x)=\sum_{a\in\mathbb{F}_{p}^{n}}c_{a}\left(\prod_{i=1}^{n}x_{i}^{a_{i}}\right)\), where \(x=(x_{1},\ldots,x_{n})\in\mathbb{F}_{p}^{n}\), \(c_{a}\in\mathbb{F}_{p}\) and \(a=(a_{1},\ldots,a_{n})\in\mathbb{F}_{p}^{n}\), while for vectorial functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) it is defined coordinate-wise. Besides the coordinate representation and algebraic normal form, we will also consider trace representations. Identifying \(\mathbb{F}_{p}^{n}\) with \(\mathbb{F}_{p^{n}}\), we can uniquely represent any function \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{n}\) as a polynomial \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{n}}\) of the form \(F(x)=\sum_{i=0}^{p^{n}-1}a_{i}x^{i}\) with coefficients \(a_{i}\in\mathbb{F}_{p^{n}}\). Clearly, when \(m|n\), any function \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) can be written as a polynomial \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{m}}\) given by \(F(x)=\operatorname{Tr}_{m}^{n}\left(\sum_{i=0}^{p^{n}-1}a_{i}x^{i}\right)\). This representation is called the _univariate (trace) representation_; however, it is not unique in general. Now, we define the following equivalence relation, which preserves perfect nonlinearity of functions on elementary abelian groups.
Functions \(F,F^{\prime}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) are called _equivalent_ (_extended-affine equivalent_, to be more precise), if \(F^{\prime}=A_{1}\circ F\circ A_{2}+A\) for some affine permutations \(A_{1}\), \(A_{2}\) and an affine mapping \(A\). Clearly, for affine permutations \(A_{1}\) and \(A_{2}\), the functions \(F^{\prime}=A_{1}\circ F\circ A_{2}\) and \(F\) have the same value distributions, while the functions \(F^{\prime}=F+A\) and \(F\), where \(A\) is an affine mapping, generally do not have the same value distributions; the latter will be illustrated with extensive examples in the following sections. Our main tool for dealing with perfect nonlinear functions defined on elementary abelian groups is the _discrete Fourier transform_. In this specific setting, it is often called the _Walsh transform_, which is the term we will use throughout the paper. For a \(p\)-ary function \(f\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\), the Walsh transform is the complex-valued function \(W_{f}\colon\mathbb{F}_{p}^{n}\to\mathbb{C}\) defined by \[W_{f}(b)=\sum_{x\in\mathbb{F}_{p}^{n}}\zeta_{p}^{f(x)-\langle b,x\rangle_{n}},\quad\text{where }\zeta_{p}=e^{2\pi i/p}\quad\text{and }i^{2}=-1.\] For vectorial functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\), the Walsh transform is defined using the notion of component functions as \(W_{F}(b,a)=W_{F_{b}}(a)\) for all \(a\in\mathbb{F}_{p}^{n},b\in\mathbb{F}_{p}^{m}\). ### Value distributions of bent functions: the known cases With the Walsh transform, bent functions can be equivalently defined in the following way; for details we refer to [15, 19, 21]. **Definition 1.1**.: A function \(f\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) is called a _bent_ function, if the Walsh transform satisfies \(|W_{f}(b)|=p^{n/2}\) for all \(b\in\mathbb{F}_{p}^{n}\). First, we consider in detail the Walsh transform of single-output bent functions. In the Boolean case, i.e., \(p=2\), we have \(\zeta_{2}=-1\), from which it follows that \(W_{f}(b)\) is an integer. Consequently, for every \(b\in\mathbb{F}_{2}^{n}\) a Boolean bent function \(f\) on \(\mathbb{F}_{2}^{n}\) satisfies \(W_{f}(b)=2^{n/2}(-1)^{f^{*}(b)}\), where \(f^{*}\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}\) is called the _dual_ of \(f\); this implies that \(n\) must be even. The dual function \(f^{*}\) is bent [28]; moreover, the equality \((f^{*})^{*}=f\) holds. In the case of odd \(p\), the Walsh transform \(W_{f}(a)\) at \(a\in\mathbb{F}_{p}^{n}\) of a \(p\)-ary bent function \(f\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) satisfies [15]: \[W_{f}(a)=\left\{\begin{array}{ll}\pm\zeta_{p}^{f^{*}(a)}p^{n/2}&\text{ if }p^{n}\equiv 1\bmod 4\\ \pm i\zeta_{p}^{f^{*}(a)}p^{n/2}&\text{ if }p^{n}\equiv 3\bmod 4\end{array}\right.,\] where \(f^{*}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) is called the _dual_ of \(f\). In contrast to the Boolean case, \(p\)-ary bent functions \(f\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) exist for all integers \(n\in\mathbb{N}\); however, the dual of a \(p\)-ary bent function is not necessarily bent. A bent function \(f\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) is called _dual-bent_ if the dual \(f^{*}\) is bent as well; otherwise, it is called _non-dual-bent_. Consider the following important classes of dual-bent functions, namely weakly regular and regular bent functions.
A bent function \(f\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) is called _weakly regular_ if for all \(a\in\mathbb{F}_{p}^{n}\), we have \(W_{f}(a)=\epsilon\zeta_{p}^{f^{*}(a)}p^{n/2}\) for some fixed \(\epsilon\in\{\pm 1,\pm i\}\). If \(\epsilon=1\), a bent function \(f\) is called _regular_. If no such fixed \(\epsilon\in\{\pm 1,\pm i\}\) exists, then \(f\) is called _non-weakly regular_ bent; such functions can be either dual-bent or non-dual-bent. For further references on \(p\)-ary bent functions and their duals, we refer to [18]. For the sake of simplicity, we will include Boolean functions when talking about regular functions from \(\mathbb{F}_{p}^{n}\) to \(\mathbb{F}_{p}^{m}\). With the notion of component functions, vectorial bent functions can be defined in the following way [15, 19, 21]. **Definition 1.2**.: A function \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) is called a _vectorial bent_ function, if for all \(b\in\mathbb{F}_{p}^{m}\setminus\{0\}\) the component function \(F_{b}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) is bent. For vectorial Boolean bent functions \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{m}\), we have necessarily \(m\leq n/2\) (this fact is also known as _Nyberg's bound_, see [21]), while for \(p\)-ary vectorial bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\), it is possible that \(n=m\); in this case bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{n}\) are called _planar_. For a survey on bent and planar functions, we refer to [27]. Note that bent functions belong to a larger class of plateaued functions. A function \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) is called _plateaued_, if for every \(b\in\mathbb{F}_{p}^{m}\setminus\{0\}\) the Walsh transform of \(F_{b}\) at \(a\in\mathbb{F}_{p}^{n}\) satisfies \(|W_{F_{b}}(a)|\in\left\{0,p^{(n+s_{b})/2}\right\}\) for an integer \(s_{b}\) with \(0\leq s_{b}\leq n\). Bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) are exactly \(0\)-plateaued functions, i.e., \(s_{b}=0\) for all \(b\in\mathbb{F}_{p}^{m}\setminus\{0\}\). Now, we survey the known results about value distributions of bent functions. The sizes of the preimage sets of Boolean bent functions were determined by Dillon in his thesis [9], whereas the case of \(p\)-ary bent functions was addressed by Nyberg [21].
**Theorem 1.3**.: _[_21_, Theorems 3.2-3.5]_ _Let \(p\) be a prime and \(f\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) be a bent function, and for \(l\in\mathbb{F}_{p}\), let \(b_{l}=\left|f^{-1}(l)\right|\), where \(f^{-1}(l)=\left\{x\in\mathbb{F}_{p}^{n}:f(x)=l\right\}\)._ * _If_ \(n\) _is even, then there exists a unique_ \(c\in\mathbb{F}_{p}\) _such that_ \[\begin{split}b_{c}=& p^{n-1}\pm(p-1)p^{\frac{n}{2}-1},\\ b_{l}=& p^{n-1}\mp p^{\frac{n}{2}-1}\quad\text{ for all }l\in\mathbb{F}_{p}\backslash\{c\}\end{split}\] (1.1) _Moreover, a regular bent function has the upper signs._ * _If_ \(p\) _and_ \(n\) _are odd, then the value distribution of a regular bent function is given by_ \((b_{0},b_{1},\ldots,b_{p-1})\) _or a cyclic shift of_ \((b_{0},b_{1},\ldots,b_{p-1})\)_, where_ \(b_{0}=p^{n-1}\) _and_ \[\begin{split}b_{l}=& p^{n-1}+\left(\frac{l}{p}\right)p^{\frac{n-1}{2}}\text{ for all }l\in\mathbb{F}_{p}\setminus\{0\},\text{ or }\\ b_{l}=& p^{n-1}-\left(\frac{l}{p}\right)p^{\frac{n-1}{2}}\text{ for all }l\in\mathbb{F}_{p}\setminus\{0\},\end{split}\] (1.2) _and_ \[\left(\frac{l}{p}\right)=\begin{cases}1&\text{ if }l\text{ is a quadratic residue modulo }p\text{ and }l\not\equiv 0\pmod{p}\\ -1&\text{ if }l\text{ is a non-quadratic residue modulo }p\\ 0&\text{ if }l\equiv 0\pmod{p}\end{cases}\] _is the Legendre symbol._ Value distributions of vectorial bent functions were considered mostly for the classes of bent functions with certain prescribed properties. For instance, Nyberg [22, Theorem 3.2] proved that for a bent function \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) whose non-zero component functions are all regular, all preimage set sizes are divisible by \(p^{n/2-m}\), and derived both lower and upper bounds on preimage set sizes in this setting. Recently, preimage sets of vectorial bent functions have attracted a lot of attention due to the connection with partial difference sets observed in [6]. For instance, in [6, 29], the value distributions of bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) with the following properties have been considered: * \(l\)_-forms_, i.e., \(F\) satisfies \(F(\alpha x)=\alpha^{l}F(x)\) for all \(\alpha\in\mathbb{F}_{p^{m}}\) and some fixed integer \(l\) with \(\gcd(p^{m}-1,l-1)=1\), and * _vectorial dual-bent_ functions, i.e., the set of the dual functions of the component functions of \(F\) together with the zero function forms a vector space of bent functions of dimension \(m\). Particularly, in [29, Corollary 1], it was shown that a vectorial dual-bent function \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\), which satisfies \(F(0)=0\), \(F(-x)=F(x)\), and whose component functions are all regular (in this case, \(\varepsilon=+1\)) or all weakly regular but not regular (in this case, \(\varepsilon=-1\)), satisfies \[\big{|}F^{-1}(0)\big{|}=p^{n-m}+\varepsilon\,(p^{m}-1)\,p^{\frac{n}{2}-m}\text{ and }\big{|}F^{-1}(\beta)\big{|}=p^{n-m}-\varepsilon p^{\frac{n}{2}-m},\text{ for }\beta\in\mathbb{F}_{p}^{m}\setminus\{0\}. \tag{1.3}\] Finally, in the case \(n=m\), it was shown in [16, Theorem 2] that planar functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{n}\) with the minimal image set, i.e., \(|\operatorname{Im}(F)|=(p^{n}+1)/2\), have special value distributions, namely, they are _2-to-1 mappings_.
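Both the Walsh condition of Definition 1.1 and the distributions above are straightforward to confirm numerically on small examples. The following minimal Python sketch (our illustration; the parameters \(p=5\), \(n=2\) and the Maiorana-McFarland-type function \(f(x,y)=xy\) are chosen here) verifies bentness by brute force and reproduces the even-\(n\) distribution (1.1):

```python
from itertools import product
from collections import Counter
from cmath import exp, pi

p, n = 5, 2
zeta = exp(2j * pi / p)
V = list(product(range(p), repeat=2))

def f(v):
    return (v[0] * v[1]) % p       # the p-ary bent function f(x, y) = x*y

# Definition 1.1: |W_f(b)| = p^(n/2) for every b
for b in V:
    W = sum(zeta ** ((f(x) - b[0] * x[0] - b[1] * x[1]) % p) for x in V)
    assert abs(abs(W) - p ** (n / 2)) < 1e-9

# Theorem 1.3 with the upper signs: one value (here c = 0) is hit
# p^(n-1) + (p-1)p^(n/2-1) = 2p - 1 times, every other value p - 1 times.
counts = Counter(f(v) for v in V)
assert counts[0] == 2 * p - 1 and all(counts[l] == p - 1 for l in range(1, p))
print(dict(counts))
```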
As these results show, the value distributions of bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) are well-understood in the extremal cases, namely in the single-output case \(m=1\) as well as in the planar case \(m=n\), for \(p\) odd. However, the knowledge of value distributions in the "in-between" cases \(1<m<n\) is limited to the bent functions with specific additional properties (e.g., vectorial dual-bent, \(l\)-forms). Moreover, the non-elementary abelian case has not been addressed at all. In this paper, we develop a purely combinatorial general framework for the study of value distributions of perfect nonlinear functions. With our approach, we are able to unify the known results on value distributions of bent functions in different settings, which were previously obtained with different techniques. In the process, we strengthen many known results and prove new structural properties of functions with specific value distributions. Moreover, we show that our framework is also applicable to perfect nonlinear functions defined on non-elementary abelian groups. The rest of the paper is organized in the following way. In Section 2, we derive general, sharp upper and lower bounds on the cardinalities of the preimage sets of perfect nonlinear functions on arbitrary groups (Theorem 2.3). We show that for functions meeting these bounds, all values in the image set but one have preimages of equal size. Additionally, we investigate the surjectivity of perfect nonlinear functions. In Section 3, we introduce the notion of almost balanced perfect nonlinear functions; these are the perfect nonlinear functions which achieve the upper/lower bounds on the cardinalities of the preimage sets with equality and thus are extremal objects of particular interest. Considering the elementary abelian framework, we show in Subsection 3.1 that many primary constructions of bent functions are almost balanced. In Subsection 3.2, we demonstrate how one can construct new almost balanced bent functions from known ones using secondary constructions. In particular, combining primary and secondary constructions, we are able to show that almost balanced bent functions exist for all admissible elementary abelian groups (Theorem 3.12). In Section 4, we study the connection between value distributions and the Walsh transform of bent functions. Using these spectral properties, we generalize Nyberg's result on the possible sizes of preimage sets of bent functions, giving stronger and more general conditions on preimage set sizes in both the Boolean case as well as the general \(p\)-ary case (Theorems 4.6, 4.8, 4.9). We are also able to prove that in some cases, knowing the value distribution of two vectorial bent functions is enough to settle the (in general difficult) equivalence question (Corollary 4.4). In Section 5, we determine possible value distributions for bent functions with small output groups. In particular, we give a complete characterization of all possible value distributions for Boolean bent functions with output dimension at most \(4\). In Section 6, we consider planar functions. Using the techniques developed in this paper, we unify several known results on the characterization of planar functions with extremal value distributions and give a more precise description of planar functions with the maximum possible image set size. Finally, we provide new characterizations of planar functions of special shapes, again generalizing several well-known results.
For instance, we are able to show that plateaued 2-to-1 functions are automatically planar (Theorem 6.4). In Section 7, we conclude the paper and give a list of open problems on perfect nonlinear functions and their value distributions. ## 2 Bounds on the cardinality of preimage sets In this section, we derive upper and lower bounds on the cardinalities of the preimage sets of perfect nonlinear functions on arbitrary groups and show that in the cases when the bounds are attained, all values but one are equally distributed. We begin with the following technical result. **Proposition 2.1**.: _Let \(G\) and \(H\) be two finite groups, and let \(F\colon G\to H\) be a perfect nonlinear function. Then the following holds_ \[\sum_{\beta\in H}|F^{-1}(\beta)|^{2}=|G|+\frac{|G|}{|H|}(|G|-1).\] Proof.: We have \(\sum_{\beta\in H}|F^{-1}(\beta)|^{2}=|\{(x,y)\in G\times G\colon F(x)=F(y)\}|\). Observe that \[|\{(x,y)\in G\times G\colon F(x)=F(y)\}|=|G|+|\{(x,a)\in G\times(G\setminus\{0\})\colon F(x)=F(x+a)\}|.\] Since \(F\) is perfect nonlinear, we have that \(F(x)=F(x+a)\) holds for a fixed value \(a\neq 0\) for exactly \(|G|/|H|\) values of \(x\). In this way, \(|\{(x,a)\in G\times(G\setminus\{0\})\colon F(x)=F(x+a)\}|=|G|/|H|\cdot(|G|-1)\) and the result follows. This result can be applied to get the minimum and maximum preimage set sizes of perfect nonlinear functions. For the sake of brevity, denote for a function \(F\colon G\to H\) the preimage set sizes by \(X_{1},X_{2},\ldots,X_{|H|}\), where we use an arbitrary ordering. By Proposition 2.1, for a perfect nonlinear function, we get \[\sum_{i=1}^{|H|}X_{i}^{2} =|G|+\frac{|G|}{|H|}(|G|-1), \tag{2.1}\] \[\sum_{i=1}^{|H|}X_{i} =|G|, \tag{2.2}\] where the second equation follows from the fact that all preimages exhaust \(G\). We will now look for bounds and explicit solutions for the \(X_{i}\). **Remark 2.2**.: Note that not every solution to Equations (2.1) and (2.2) yields a preimage distribution of a perfect nonlinear function. For instance, there is no vectorial bent function from \(G=\mathbb{F}_{2}^{4}\) to \(H=\mathbb{F}_{2}^{3}\) (since Nyberg's bound is violated), but for \(|G|=16\) and \(|H|=8\) a solution to Equations (2.1) and (2.2) exists, for example \(X_{1}=5,X_{2}=3,X_{3}=X_{4}=2,X_{5}=\cdots=X_{8}=1\). The following theorem gives general bounds on the minimum and maximum preimage set sizes of perfect nonlinear functions in the most general setting. We will see later that the bounds achieved here are (at least for elementary abelian groups) sharp. **Theorem 2.3**.: _Let \(G\) and \(H\) be two finite groups, and let \(F\colon G\to H\) be a perfect nonlinear function. Then for every \(\beta\in H\) the following inequality holds_ \[\frac{|G|}{|H|}-\sqrt{|G|}+\frac{\sqrt{|G|}}{|H|}\leq|F^{-1}(\beta)|\leq\frac{|G|}{|H|}+\sqrt{|G|}-\frac{\sqrt{|G|}}{|H|}. \tag{2.3}\] _1. If_ \(|F^{-1}(\alpha)|=\frac{|G|}{|H|}-\sqrt{|G|}+\frac{\sqrt{|G|}}{|H|}\) _then_ \(|F^{-1}(\beta)|=\frac{|G|}{|H|}+\frac{\sqrt{|G|}}{|H|}\) _for each_ \(\beta\neq\alpha\)_._ 2. 
_If_ \(|F^{-1}(\alpha)|=\frac{|G|}{|H|}+\sqrt{|G|}-\frac{\sqrt{|G|}}{|H|}\) _then_ \(|F^{-1}(\beta)|=\frac{|G|}{|H|}-\frac{\sqrt{|G|}}{|H|}\) _for each_ \(\beta\neq\alpha\)_._ _If the equality takes place, then \(|H|\) divides \(\sqrt{|G|}\), and consequently \(|G|\) is a square._ Proof.: By the Cauchy-Schwarz inequality, we have \[\sum_{i=2}^{|H|}X_{i}^{2}\geq\left(\sum_{i=2}^{|H|}X_{i}\right)^{2}\cdot\frac{1}{|H|-1},\] with equality if and only if all \(X_{i}\), \(i>1\) are identical. Then, applying Proposition 2.1, \[|G|+\frac{|G|}{|H|}(|G|-1)=\sum_{i=1}^{|H|}X_{i}^{2}=X_{1}^{2}+\sum_{i=2}^{|H|}X_{i}^{2}\geq X_{1}^{2}+\frac{(|G|-X_{1})^{2}}{|H|-1}.\] This inequality is quadratic in \(X_{1}\) and can be solved with elementary techniques; the result is \[\frac{|G|}{|H|}-\sqrt{|G|}+\frac{\sqrt{|G|}}{|H|}\leq X_{1}\leq\frac{|G|}{|H|}+\sqrt{|G|}-\frac{\sqrt{|G|}}{|H|}.\] In the extremal cases, equality in the Cauchy-Schwarz inequality has to hold, so all \(X_{i}\), \(i>1\) are identical and we get \[X_{i}=\frac{|G|-X_{1}}{|H|-1}\quad\text{for all $i>1$}.\] The result follows by plugging in the extremal values for \(X_{1}\). In the case of equality, we have that \(|H|\) divides \(|G|\pm\sqrt{|G|}(|H|\mp 1)\), which is only possible if \(|H|\) divides \(\sqrt{|G|}\). The latter, in turn, implies that \(|G|\) is a square. For the sake of convenience, we give the bounds for bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\), which will be considered in detail in the following sections. **Theorem 2.4**.: _Let \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) be a bent function and \(F^{-1}(\beta)\) the preimage set of \(\beta\in\mathbb{F}_{p}^{m}\). Then for every \(\beta\in\mathbb{F}_{p}^{m}\) the following inequality holds_ \[p^{n-m}-p^{n/2}+p^{n/2-m}\leq|F^{-1}(\beta)|\leq p^{n-m}+p^{n/2}-p^{n/2-m}. \tag{2.4}\] _1. If \(|F^{-1}(\alpha)|=p^{n-m}-p^{n/2}+p^{n/2-m}\) then \(|F^{-1}(\beta)|=p^{n-m}+p^{n/2-m}\) for each \(\beta\neq\alpha\). 2. If \(|F^{-1}(\alpha)|=p^{n-m}+p^{n/2}-p^{n/2-m}\) then \(|F^{-1}(\beta)|=p^{n-m}-p^{n/2-m}\) for each \(\beta\neq\alpha\). If equality takes place, then \(m\leq n/2\) and \(n\) is even._ **Remark 2.5**.: _1._ If we look for a moment at the Boolean case, i.e., \(G=\mathbb{F}_{2}^{n}\) and \(H=\mathbb{F}_{2}\), we see that the two extremal cases in Theorem 2.4 recover the well-known value distributions of Boolean bent functions. Indeed, for a Boolean bent function \(f\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}\), we have \[W_{f}(0)=\sum_{x\in\mathbb{F}_{2}^{n}}(-1)^{f(x)}=|f^{-1}(0)|-|f^{-1}(1)|\in\{\pm 2^{n/2}\}.\] Since \(|f^{-1}(0)|+|f^{-1}(1)|=2^{n}\), this implies \(|f^{-1}(0)|=2^{n-1}\pm 2^{n/2-1}\), which are exactly the two extremal cases in Theorem 2.4. _2._ For \(p\) odd, \(n\) even and \(m=1\), the two extremal cases in Theorem 2.4 also recover the well-known value distributions of \(p\)-ary bent functions given in Theorem 1.3. For \(p\) odd, \(n\) odd and \(m=1\), compared to Theorem 1.3, we obtain bounds on the cardinality of preimage sets of bent functions that are not regular. _3._ For the special case \(p\) odd and \(n\) even, the two extremal cases in Theorem 2.4 also cover the extremal distributions obtained in Equation (1.3) found in [6, 29]. Moreover, these extremal cases not only recover the bound from [6, Corollary 2], but also show that the remaining elements in the image set are uniformly distributed between the remaining elements in \(\mathbb{F}_{p}^{n}\setminus\{0\}\).
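Both Proposition 2.1 and the bounds (2.4) are easy to verify on small cases. The following minimal Python sketch (our illustration; \(p=7\) and the planar function \(F(x)=x^{2}\) on \(\mathbb{F}_{p}\) are chosen here, so \(n=m=1\)) does so by direct enumeration:

```python
from collections import Counter

p = 7
pre = Counter((x * x) % p for x in range(p))
sizes = [pre.get(b, 0) for b in range(p)]          # preimage set sizes X_1,...,X_p

# Proposition 2.1: sum of squared preimage sizes equals |G| + |G|(|G|-1)/|H|
assert sum(s * s for s in sizes) == p + (p - 1)

# Theorem 2.4 bounds; here n = m = 1, so they read 1 -+ (sqrt(p) - sqrt(p)/p)
# and are not attained (n is odd), but every preimage size lies between them.
lo = 1 - p ** 0.5 + p ** 0.5 / p
hi = 1 + p ** 0.5 - p ** 0.5 / p
print(sizes, lo <= min(sizes) <= max(sizes) <= hi)  # True
```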
Theorem 2.3 can be used to identify a large class of surjective perfect nonlinear functions. **Corollary 2.6**.: _Let \(G\) and \(H\) be two finite groups, and let \(F\colon G\to H\) be a perfect nonlinear function. If \(|H|\leq\sqrt{|G|}\), then \(F\) is surjective. A preimage set of size \(1\) is only possible if \(|H|=\sqrt{|G|}\)._ Proof.: We apply Theorem 2.3. The function \(F\) is surjective, if for all \(\beta\in H\) we have \[|F^{-1}(\beta)|\geq\frac{|G|}{|H|}-\sqrt{|G|}+\frac{\sqrt{|G|}}{|H|}\geq 1. \tag{2.5}\] From the latter inequality we have that \[|G|+\sqrt{|G|}\geq|H|\cdot\left(\sqrt{|G|}+1\right),\] which is equivalent to \(|H|\leq\sqrt{|G|}\). A preimage set of size \(1\) is only possible if \(|H|=\sqrt{|G|}\). For perfect nonlinear functions beyond the "square root bound", the question about surjectivity becomes much more difficult to answer. In the following statement, we give a bound on the cardinality of the image set of a bent function, which has essentially been proven by Carlet [3] in a different context (namely, to give a connection between nonlinearity and the cardinality of the image sets of mappings \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{m}\)). **Proposition 2.7**.: _Let \(G\) and \(H\) be two finite groups, and let \(F\colon G\to H\) be a perfect nonlinear function. Then_ \[|\operatorname{Im}(F)|\geq\frac{|G|\cdot|H|}{|G|+|H|-1}. \tag{2.6}\] Proof.: We have from Equations (2.1) and (2.2) \[\sum_{i=1}^{|\operatorname{Im}(F)|}X_{i}^{2} =|G|+\frac{|G|}{|H|}\cdot(|G|-1)\] \[\sum_{i=1}^{|\operatorname{Im}(F)|}X_{i} =|G|,\] and again by the Cauchy-Schwarz inequality, the following holds \[|G|+\frac{|G|}{|H|}\cdot(|G|-1)=\sum_{i=1}^{|\operatorname{Im}(F)|}X_{i}^{2}\geq\frac{|G|^{2}}{|\operatorname{Im}(F)|}. \tag{2.7}\] The claim follows by solving Equation (2.7) for \(|\operatorname{Im}(F)|\). Now, we give the expression of the bound in Equation (2.6) for bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) and recover the well-known lower bound [16] for the planar case, i.e., \(p\) is odd and \(n=m\). **Corollary 2.8**.: _Let \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) be a bent function. Then the following hold._ 1. _The cardinality of the image set of_ \(F\) _satisfies_ \[|\operatorname{Im}(F)|\geq\frac{p^{2n}}{p^{n}+p^{n-m}(p^{n}-1)}>\frac{p^{n}}{1+p^{n-m}}.\] 2. _If_ \(p\) _is odd and_ \(n=m\)_, then_ \(|\operatorname{Im}(F)|\geq\frac{p^{n}+1}{2}\)_._ 3. _If_ \(m\leq n/2\)_, then_ \(F\) _is surjective._ If the "square root bound" is violated, then the expression \(\frac{|G|}{|H|}-\sqrt{|G|}+\frac{\sqrt{|G|}}{|H|}\) in Equation (2.5) becomes negative. That means that our techniques cannot shed more light on the surjectivity of perfect nonlinear functions beyond the "square root bound". In the following example, we show that for small values of \(p\), \(n\) and \(m\geq\lfloor n/2\rfloor+1\) surjective vectorial bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) exist. **Example 2.9**.: Consider the planar function \(F(x)=x^{2}\) on \(\mathbb{F}_{p^{n}}\). Denote by \(f_{1},\ldots,f_{n}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\) the coordinate functions of \(F\), i.e., \(F(x)=(f_{1}(x),\ldots,f_{n}(x))\) for \(x\in\mathbb{F}_{p^{n}}\). Let \(F_{k}\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p}^{k}\) be the vectorial bent function formed by the first \(k\) coordinate functions of \(F\), i.e., \(F_{k}(x):=(f_{1}(x),\ldots,f_{k}(x))\).
With Magma [1], we checked that for the values of \(p\) and \(n\) given in Table 2.1, the functions \(F_{k}\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p}^{k}\) with \(k=\lfloor n/2\rfloor+1\) are surjective. More generally, we expect that for a fixed \(p\) and a sufficiently large \(n\) there exists \(m>\lfloor n/2\rfloor+1\) such that the functions \(F_{k}\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p}^{k}\) are surjective for all \(\lfloor n/2\rfloor+1\leq k\leq m\), but not for \(k\geq m+1\). Consider \(p=3,n=13\) and \(m=10\). The functions \(F_{k}\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p}^{k}\) are surjective for all \(1\leq k\leq 6\) by Corollary 2.8. With Magma [1], we checked that for all \(7\leq k\leq 10\) the functions \(F_{k}\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p}^{k}\) are surjective as well. However, the functions \(F_{k}\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p}^{k}\) are not surjective for all \(11\leq k\leq 13\). Based on our observations on surjectivity of vectorial bent functions beyond the "square root bound", we formulate the following open problems for vectorial bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\), since most of the constructions are studied in this setting. Clearly, the questions asked do not lose their relevance for the case of perfect nonlinear functions beyond the "square root bound" on arbitrary groups. **Open Problem 2.10**.: Let \(p\) be odd. 1. Find constructions of surjective vectorial bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) for \(m\geq\lfloor n/2\rfloor+1\). 2. What is the maximum \(m\) such that all vectorial bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) are surjective? 3. What is the minimum \(m\) such that all vectorial bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) are not surjective? ## 3 Almost balanced perfect nonlinear functions By Theorem 2.3, we know that if one preimage set \(F^{-1}(\alpha)\) of a perfect nonlinear function \(F\colon G\to H\) has either minimum or maximum cardinality, then the remaining elements in \(H\setminus\{\alpha\}\) are uniformly distributed between the remaining elements of \(G\setminus F^{-1}(\alpha)\). This fact motivates the following definition. **Definition 3.1**.: Let \(G\) and \(H\) be two finite groups. For a perfect nonlinear function \(F\colon G\to H\), we call the first extremal value distribution in Theorem 2.3 of _type_ \((-)\), and the second extremal value distribution of _type_ \((+)\). A perfect nonlinear function \(F\colon G\to H\) is said to be _almost balanced_, if its value distribution is extremal. Particularly, we say that \(F\) is _almost balanced of type_ \((-)\), if its value distribution is extremal of type \((-)\), and _almost balanced of type_ \((+)\), if its value distribution is extremal of type \((+)\). For an almost balanced perfect nonlinear function, we say that \(F^{-1}(\alpha)\) is the _unique preimage_ of \(F\), if \(|F^{-1}(\alpha)|=\frac{|G|}{|H|}\mp\sqrt{|G|}\pm\frac{\sqrt{|G|}}{|H|}\) (where the sign depends on the type). From now on, we consider bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\), since the problem of construction of bent functions is mostly considered in this setting. In the following subsections, we prove that many primary constructions of bent functions are, in fact, almost balanced. Moreover, we show how one can construct new almost balanced bent functions from known ones using secondary constructions.
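For a concrete instance of Definition 3.1, consider the following minimal Python sketch (our illustration; the encoding of \(\mathbb{F}_{4}\) as \(\{0,1,2,3\}\) and the balanced map \(\Psi\), taken to be the identity on this encoding, are choices made here). It tabulates the value distribution of the bent function \(F(x,y)=\Psi(xy^{q-2})\) on \(\mathbb{F}_{4}\times\mathbb{F}_{4}\), viewed as a map to \(\mathbb{F}_{2}^{2}\) (so \(p=2\), \(n=4\), \(m=2\)), and exhibits the \((+)\) type: \(p^{n-m}+p^{n/2}-p^{n/2-m}=7\) and \(p^{n-m}-p^{n/2-m}=3\):

```python
from itertools import product
from collections import Counter

# GF(4) = {0, 1, w, w+1} encoded as 0..3; MUL is its multiplication table.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def F(x, y):
    # x * y^(q-2) with q = 4, i.e. x * y^2 (= x/y for y != 0, and 0 for y = 0);
    # the balanced map Psi: GF(4) -> F_2^2 is the identity on our encoding.
    return MUL[x][MUL[y][y]]

counts = Counter(F(x, y) for x, y in product(range(4), repeat=2))
print(dict(counts))        # {0: 7, 1: 3, 2: 3, 3: 3}: the (+) type of Definition 3.1
assert counts[0] == 7 and all(counts[b] == 3 for b in (1, 2, 3))
```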
### Primary constructions First, we consider three general classes of vectorial bent functions, namely the Maiorana-McFarland, the Desarguesian partial spread and the o-polynomial construction. We show that all these constructions yield almost balanced bent functions of the \((+)\) type. **Proposition 3.2**.: _Let \(n\) be even._ 1. _Let_ \(F\colon\mathbb{F}_{p^{n/2}}\times\mathbb{F}_{p^{n/2}}\to\mathbb{F}_{p}^{m}\) _be a Maiorana-McFarland bent function defined by_ \[F(x,y)=L(x\pi(y))+\rho(y),\] _where_ \(\pi\colon\mathbb{F}_{p^{n/2}}\to\mathbb{F}_{p^{n/2}}\) _is a permutation,_ \(\rho\colon\mathbb{F}_{p^{n/2}}\to\mathbb{F}_{p}^{m}\) _is an arbitrary function, and_ \(L\colon\mathbb{F}_{p^{n/2}}\to\mathbb{F}_{p}^{m}\) _is a surjective linear mapping. Let_ \(\pi(y^{*})=0\) _and_ \(\alpha:=\rho(y^{*})\) _for_ \(y^{*}\in\mathbb{F}_{p^{m}}\)_. Then_ \(|F^{-1}(\alpha)|=p^{n-m}+p^{n/2}-p^{n/2-m}\) _and_ \(|F^{-1}(\beta)|=p^{n-m}-p^{n/2-m}\) _for each_ \(\beta\neq\alpha\)_, and hence_ \(F\) _is almost balanced of_ \((+)\) _type._ 2. _Let_ \(F\colon\mathbb{F}_{p^{n/2}}\times\mathbb{F}_{p^{n/2}}\to\mathbb{F}_{p}^{m}\) _be a Desarguesian partial spread bent function defined by_ \[F(x,y)=\Psi(xy^{p^{n/2}-2}),\] _where_ \(\Psi\colon\mathbb{F}_{p^{n/2}}\to\mathbb{F}_{p}^{m}\) _is an arbitrary balanced function. Then_ \(|F^{-1}(0)|=p^{n-m}+p^{n/2}-p^{n/2-m}\) _and_ \(|F^{-1}(\beta)|=p^{n-m}-p^{n/2-m}\) _for each_ \(\beta\neq 0\)_, and hence_ \(F\) _is almost balanced of_ \((+)\) _type._ Proof.: In the both cases, it is enough to find an element \(\alpha\in\mathbb{F}_{p}^{m}\), such that for the function \(F\colon\mathbb{F}_{p^{n/2}}\times\mathbb{F}_{p^{n/2}}\to\mathbb{F}_{p}^{m}\) we have \(|F^{-1}(\alpha)|=p^{n-m}+p^{n/2}-p^{n/2-m}\), since the uniformity of the other preimages follows immediately from Theorem 2.4. 1. For \(\alpha=\rho(y^{*})\), the equation \(L(x\pi(y))+\rho(y)=\alpha\) has \(p^{n/2}\) solutions \((x,y^{*})\), where \(x\in\mathbb{F}_{p^{n/2}}\). Now let \(y\neq y^{*}\) be fixed. The equation \(L(z)=\alpha-\rho(y)\) has \(p^{n/2-m}\) solutions \(z\in\mathbb{F}_{p^{n/2}}\), since \(L\) is linear and surjective, and hence it is balanced, i.e., \(|L^{-1}(\gamma)|=p^{n/2-m}\) for all \(\gamma\in\mathbb{F}_{p^{m}}\). In turn, for every fixed \(z\in\mathbb{F}_{p^{n/2}}\), the equation \(z=x\pi(y)\) has a unique solution \(x\in\mathbb{F}_{p^{n/2}}\) given by \(x=z(\pi(y))^{-1}\). Hence, the equation \(L(x\pi(y))+\rho(y)=\alpha\) has \(p^{n/2-m}(p^{n/2}-1)\) additional solutions \((x,y)\), where \(y\neq y^{*}\), and thus \(p^{n-m}+p^{n/2}-p^{n/2-m}\) solutions in total. 2. Let \(\alpha=\Psi(0)\). Consider the equation \(\Psi(z)=\alpha\). Since \(\Psi\) is balanced, we have \(p^{n/2-m}\) solutions \(z\in\mathbb{F}_{p^{n/2}}\). If \(z\neq 0\), then for a fixed \(y\in\mathbb{F}_{p^{n/2}}^{*}\), the equation \(z=xy^{p^{n/2-2}}=xy^{-1}\) has a unique solution \(x=zy\), and hence the equation \(\Psi(xy^{p^{n/2}-2})=\alpha\) has \((p^{n/2-m}-1)(p^{n/2}-1)\) solutions \((x,y)\), where \(x=zy\) and \(y\neq 0\). If \(z=0\), then the set \(\{(x,0)\colon x\in\mathbb{F}_{p^{n}}\}\cup\{(0,y)\colon y\in\mathbb{F}_{p^{n}}\}\) gives \(p^{n/2+1}-1\) more solutions of the equation \(\Psi(xy^{p^{n/2}-2})=\alpha\), and \(p^{n-m}+p^{n/2}-p^{n/2-m}\) solutions in total. Recall the following definition of an o-polynomial [4]. For an extensive summary of the known o-polynomials and in particular their relations to (hyper)ovals in finite geometry, we refer to [20]. 
**Definition 3.3**.: Let \(k\) be any positive integer. A permutation polynomial \(\Psi\) over \(\mathbb{F}_{2^{k}}\) is called an _o-polynomial (an oval polynomial)_ if, for every \(a\in\mathbb{F}_{2^{k}}^{*}\), the function \[z\in\mathbb{F}_{2^{k}}\mapsto\begin{cases}\frac{\Psi(z+a)+\Psi(a)}{z},&\text{ if }z\neq 0\\ 0,&\text{ if }z=0\end{cases}\] is a permutation of \(\mathbb{F}_{2^{k}}\). In the following statement, we show that bent functions obtained with the o-polynomial construction are also almost balanced of the \((+)\) type. **Proposition 3.4**.: _Let \(n\) be even and \(F\colon\mathbb{F}_{2^{n/2}}\times\mathbb{F}_{2^{n/2}}\to\mathbb{F}_{2^{n/2}}\) be an o-polynomial bent function defined by_ \[F(x,y)=x\Psi(yx^{2^{n/2-1}}), \tag{3.1}\] _where \(\Psi\colon\mathbb{F}_{2^{n/2}}\to\mathbb{F}_{2^{n/2}}\) is an o-polynomial. Then \(|F^{-1}(0)|=2^{n/2+1}-1\) and \(|F^{-1}(\beta)|=2^{n/2}-1\) for each \(\beta\neq 0\), and hence \(F\) is almost balanced of \((+)\) type._ Proof.: We have \(F(x,y)=x\Psi(yx^{2^{n/2}-1})\), where \(\Psi\) is an o-polynomial. Recall that as an o-polynomial \(\Psi\) satisfies \(\Psi(x)=0\) if and only if \(x=0\). The equation \(F(x,y)=0\) is thus only solvable if \(xy=0\), so it has \(2^{n/2+1}-1\) solutions. The uniformity of the other preimages follows then immediately from Theorem 2.4. Now, we consider some monomial bent functions in even and odd characteristics. Again we find that these infinite families give almost balanced bent functions, but this time both \((+)\) and \((-)\) types occur. **Proposition 3.5**.: _Let \(n\) be even._ 1. _Let_ \(n=2^{r+1}s\) _with_ \(r\geq 0\) _and_ \(s\) _odd. Define_ \(F\colon\mathbb{F}_{2^{n}}\to\mathbb{F}_{2^{n/2}}\) _by_ \(F(x)=\operatorname{Tr}_{n/2}^{n}(\lambda x^{2^{2^{r}}+1})\) _where_ \(\lambda\) _is not a_ \((2^{2^{r}}+1)\)_-st power in_ \(\mathbb{F}_{2^{n}}^{*}\)_. Then_ \(|F^{-1}(0)|=1\) _and_ \(|F^{-1}(\beta)|=2^{n/2}+1\) _for each_ \(\beta\in\mathbb{F}_{2^{n/2}}^{*}\)_, and hence_ \(F\) _is almost balanced of type_ \((-)\)_._ 2. _Let_ \(n/2\) _be odd and define_ \(F\colon\mathbb{F}_{2^{n}}\to\mathbb{F}_{2^{n/2}}\) _by_ \(F(x)=\operatorname{Tr}_{n/2}^{n}(\lambda x^{4^{i}-2^{i}+1})\) _where_ \(\gcd(i,n)=1\) _and_ \(\lambda\) _is a non-cube in_ \(\mathbb{F}_{2^{n}}^{*}\)_. Then_ \(|F^{-1}(0)|=1\) _and_ \(|F^{-1}(\beta)|=2^{n/2}+1\) _for each_ \(\beta\in\mathbb{F}_{2^{n/2}}^{*}\)_, and hence_ \(F\) _is almost balanced of type_ \((-)\)_._ 3. _Let_ \(p\) _be odd and_ \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{n/2}}\) _be a vectorial bent function defined via_ \(F(x)=\operatorname{Tr}_{n/2}^{n}(\lambda x^{d})\) _with_ \(\gcd(d,p^{n/2}-1)=2\)_. If_ \(p^{n/2}\equiv 3\pmod{4}\) _and_ \(\lambda\) _is a square, or_ \(p^{n/2}\equiv 1\pmod{4}\) _and_ \(\lambda\) _is a non-square, then_ \(F\) _is almost balanced of type_ \((+)\)_. In the other cases,_ \(F\) _is almost balanced of type_ \((-)\)_._ Proof.: _1._ These are well-known Gold vectorial bent functions (see, e.g., [11, Theorem 6]). We have \(F(x)=0\) if and only if \(\operatorname{Tr}_{n/2}^{n}(\lambda x^{2^{2^{r}}+1})=0\). Observe that \(\operatorname{Tr}_{n/2}^{n}(x)=0\) if and only if \(x\in\mathbb{F}_{2^{n/2}}\). We have \(\gcd(2^{2^{r}}+1,2^{n/2}-1)=1\) since \((n/2)/\gcd(2^{r},n/2)=1\) so \(x\mapsto x^{2^{2^{r}}+1}\) is bijective on \(\mathbb{F}_{2^{n/2}}\). In particular, each element in \(\mathbb{F}_{2^{n/2}}\) is a \((2^{r}+1)\)-st power. 
Now note that \(\lambda x^{2^{2^{r}}+1}\) is never a \((2^{2^{r}}+1)\)-st power, since \(\lambda\) is a not a \((2^{2^{r}}+1)\)-st power. So \(\lambda x^{2^{2^{r}}+1}\in\mathbb{F}_{2^{n/2}}\) if and only if \(x=0\). Thus \(F(x)=0\) if and only if \(x=0\). Now consider \(F(x)=y\) with \(y\neq 0\). We can write each \(x\in\mathbb{F}_{2^{n/2}}^{*}\) uniquely as \(x=ab\) where \(a\in\mathbb{F}_{2^{n/2}}^{*}\), \(b\in U_{2^{m}+1}=\{x\in\mathbb{F}_{2^{n}}\colon x^{2^{n/2}+1}=1\}\) since \((2^{n/2}-1)(2^{n/2}+1)=2^{n}-1\) and \(\gcd(2^{n/2}-1,2^{n/2}+1)=1\). Then \[F(ab)=a^{2^{2^{r}}+1}\operatorname{Tr}_{n/2}^{n}(\lambda b^{2^{2^{r}}+1})=y\] has for each \(b\in U_{2^{n/2}+1}\) one unique solution \(a\) (again since \(x\mapsto x^{2^{2^{r}}+1}\) is bijective on \(\mathbb{F}_{2^{n/2}}\)). We conclude that each \(y\neq 0\) has \(2^{n/2}+1\) preimages. (The uniformity also follows immediately from Theorem 2.4.) _2._ These are the well-known Kasami vectorial bent functions (see, e.g., [11, Theorem 7]). We have \(F(x)=0\) if and only if \(\operatorname{Tr}_{n/2}^{n}(\lambda x^{4^{i}-2^{i}+1})=0\). Observe that \(\operatorname{Tr}_{n/2}^{n}(x)=0\) if and only if \(x\in\mathbb{F}_{2^{n/2}}\). Since \(n/2\) is odd, \(\gcd(4^{i}-2^{i}+1,2^{n/2}-1)=1\) (see, e.g., [17, Lemma 3.8.]) and \(x\mapsto x^{4^{i}-2^{i}+1}\) is bijective on \(\mathbb{F}_{2^{n/2}}.\) In particular, each element in \(\mathbb{F}_{2^{n/2}}\) is a \((4^{i}-2^{i}+1)\)-th power. Now note that \(\lambda x^{4^{i}-2^{i}+1}\) is never a \((4^{i}-2^{i}+1)\)-th power, since \(\gcd(4^{i}-2^{i}+1,2^{n}-1)=3\) (which can be readily checked) and \(\lambda\) is a non-cube, so not a \((4^{i}-2^{i}+1)\)-th power. So \(\lambda x^{4^{i}-2^{i}+1}\in\mathbb{F}_{2^{n/2}}\) if and only if \(x=0\). Thus \(F(x)=0\) if and only if \(x=0\). The uniformity follows immediately from Theorem 2.4 _3._ Without loss of generality, we can assume \(d=2\). We then have \(F(x)=0\) if and only if \(\operatorname{Tr}_{n/2}^{n}(\lambda x^{2})=\lambda x^{2}+\lambda^{p^{n/2}}x^{2p^ {n/2}}=0\). The non-zero roots \(r\) must satisfy the equation \(r^{2(p^{n/2}-1)}=-1/(\lambda^{p^{n/2}-1})\). If \(-\lambda^{p^{n/2}-1}\) is a \(2(p^{n/2}-1)\)-st power, this has \(2(p^{n/2}-1)\) non-zero solutions, meaning we have in total \(2p^{n/2}-1\) solutions, so \(F\) is of type \((+)\). If \(-\lambda^{p^{n/2}-1}\) is not a \(2(p^{n/2}-1)\)-st power, we conclude \(F(x)=0\) if and only if \(x=0\), i.e., we only have one solution and thus a type \((-)\) function. The uniformity for the other preimage sizes follows from Theorem 2.4. Note that \(-\lambda^{p^{n/2}-1}\) is a \(2(p^{n/2}-1)\)-st power if and only if either \(-1\) is a \(2(p^{n/2}-1)\)-st power and \(\lambda\) is a square or \(-1\) is not a \(2(p^{n/2}-1)\)-st power and \(\lambda\) is a non-square. The result follows since \(-1\) is a \(2(p^{n/2}-1)\)-st power if and only if \(4(p^{n/2}-1)|p^{n}-1\) which is equivalent to \(4|p^{n/2}+1\). **Remark 3.6**.: Note that all planar monomials \(x\mapsto x^{d}\) on \(\mathbb{F}_{p^{n}}\) satisfy \(\gcd(d,p^{n}-1)=2\) (for a proof, see Corollary 6.6 later) so Proposition 3.5 in particular holds for the vectorial bent functions derived from all planar monomials. This means that Case 3 of Proposition 3.5 yields almost balanced functions for all odd \(p\) and all even \(n\), since \(x\mapsto x^{2}\) always yields a planar function (among other examples). ### Secondary constructions In this subsection, we show how one can construct almost balanced bent functions from the known ones. 
First, we consider the direct sum construction. **Definition 3.7**.: For two functions \(F_{1}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) and \(F_{2}\colon\mathbb{F}_{p}^{k}\to\mathbb{F}_{p}^{m}\), the function \(F\colon\mathbb{F}_{p}^{n}\times\mathbb{F}_{p}^{k}\to\mathbb{F}_{p}^{m}\) defined by \(F(x,y):=F_{1}(x)+F_{2}(y)\) is called the _direct sum_ of the functions \(F_{1}\) and \(F_{2}\). In the following statement, we give an expression of the cardinality of a preimage set of a direct sum \(F(x,y)=F_{1}(x)+F_{2}(y)\) in terms of cardinalities of preimage sets of \(F_{1}\) and \(F_{2}\). **Proposition 3.8**.: _Let \(F_{1}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m},F_{2}\colon\mathbb{F}_{p}^ {k}\to\mathbb{F}_{p}^{m}\) and \(F\colon\mathbb{F}_{p}^{n}\times\mathbb{F}_{p}^{k}\to\mathbb{F}_{p}^{m}\) be defined as the direct sum of \(F_{1}\) and \(F_{2}\), i.e., \(F(x,y)=F_{1}(x)+F_{2}(y)\) for \(x\in\mathbb{F}_{p}^{n}\) and \(y\in\mathbb{F}_{p}^{k}\). Then, for \(c\in\mathbb{F}_{p}^{m}\) we have_ \[|F^{-1}(c)|=\sum_{a\in\mathbb{F}_{p}^{m}}|F_{1}^{-1}(a)|\cdot|F_{2}^{-1}(c-a)|.\] Proof.: Let \(c\in\mathbb{F}_{p}^{m}\). Clearly, \(F(x,y)=c\) can be written as \(F(x,y)=a+b\), where \(a=F_{1}(x)\), \(b=F_{2}(y)\) and \(c=a+b\). In this way, the cardinality of the preimage set \(F^{-1}(c)\) is given by \[|F^{-1}(c)|=\sum_{\begin{subarray}{c}a,b\in\mathbb{F}_{p,a},\\ a+b=c\in\mathbb{F}_{p}^{m}\end{subarray}}|F_{1}^{-1}(a)|\cdot|F_{2}^{-1}(b)|= \sum_{a\in\mathbb{F}_{p}^{m}}|F_{1}^{-1}(a)|\cdot|F_{2}^{-1}(c-a)|, \tag{3.2}\] completing the proof. Recall the following well-known result on the direct sum of two bent functions. **Proposition 3.9**.: _[_28_]_ _Let \(F_{1}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m},F_{2}\colon\mathbb{F}_{p}^ {k}\to\mathbb{F}_{p}^{m}\) be two bent functions and let \(F\colon\mathbb{F}_{p}^{n}\times\mathbb{F}_{p}^{k}\to\mathbb{F}_{p}^{m}\) be defined as \(F(x,y)=F_{1}(x)+F_{2}(y)\) for \(x\in\mathbb{F}_{p}^{n}\) and \(y\in\mathbb{F}_{p}^{k}\). Then \(F\) is bent if and only if both \(F_{1}\) and \(F_{2}\) are bent._ In the following statement, we show that the direct sum of two almost balanced bent functions is almost balanced again. **Proposition 3.10**.: _Let \(F_{1}\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m},F_{2}\colon\mathbb{F}_{p}^ {k}\to\mathbb{F}_{p}^{m}\) be two almost balanced bent functions and let \(F\colon\mathbb{F}_{p}^{n}\times\mathbb{F}_{p}^{k}\to\mathbb{F}_{p}^{m}\) be defined as \(F(x,y)=F_{1}(x)+F_{2}(y)\) for \(x\in\mathbb{F}_{p}^{n}\) and \(y\in\mathbb{F}_{p}^{k}\). Then the following hold._ 1. _If_ \(F_{1}\) _and_ \(F_{2}\) _are both of the_ \((+)\) _type, then the direct sum_ \(F\) _is of the_ \((+)\) _type as well._ 2. _If_ \(F_{1}\) _and_ \(F_{2}\) _are both of the_ \((-)\) _type, then the direct sum_ \(F\) _is of the_ \((+)\) _type._ 3. _If_ \(F_{1}\) _is of the_ \((+)\) _type, and_ \(F_{2}\) _is of the_ \((-)\) _type, then the direct sum_ \(F\) _is of the_ \((-)\) _type._ Proof.: Let \(F_{1}^{-1}(a_{1})\) and \(F_{2}^{-1}(a_{2})\) be the unique preimages of \(F_{1}\) and \(F_{2}\), respectively. Define \(c^{*}\in\mathbb{F}_{p}^{m}\) as \(c^{*}=a_{1}+a_{2}\). Since \(F\) is bent by Proposition 3.9, it is enough to show that \(|F^{-1}(c^{*})|=p^{n+k-m}-p^{(n+k)/2}+p^{(n+k)/2-m}\), i.e., \(F\) is almost balanced of \((-)\) type, or \(|F^{-1}(c^{*})|=p^{n+k-m}+p^{(n+k)/2}-p^{(n+k)/2-m}\), i.e., \(F\) is almost balanced of \((+)\) type, since by Theorem 2.4 the uniformity of the other preimages is forced automatically. 
From Equation 3.2, we have \[|F^{-1}(c^{*})|= \sum_{a\in\mathbb{F}_{p}^{m}}|F_{1}^{-1}(a)|\cdot|F_{2}^{-1}(c^{ *}-a)| \tag{3.3}\] \[= |F_{1}^{-1}(a_{1})|\cdot|F_{2}^{-1}(a_{2})|+\sum_{a\in\mathbb{F} _{p}^{m}\setminus\{a_{1}\}}|F_{1}^{-1}(a)|\cdot|F_{2}^{-1}(c^{*}-a)|\] \[= |F_{1}^{-1}(a_{1})|\cdot|F_{2}^{-1}(a_{2})|+(p^{m}-1)\cdot|F_{1}^ {-1}(a)|\cdot|F_{2}^{-1}(c^{*}-a)|,\] where \(a\in\mathbb{F}_{p}^{m}\setminus\{a_{1}\}\). _1._ Since \(F_{1}\) and \(F_{2}\) are both of the \((+)\) type, we get from Equation (3.3) that the cardinality of \(F^{-1}(c^{*})\) is given by \[|F^{-1}(c^{*})|= \left(-p^{\frac{k}{2}-m}+p^{k-m}+p^{\frac{k}{2}}\right)\cdot\left( -p^{\frac{n}{2}-m}+p^{n-m}+p^{\frac{n}{2}}\right)\] \[+ \left(p^{m}-1\right)\cdot\left(p^{k-m}-p^{\frac{k}{2}-m}\right) \cdot\left(p^{n-m}-p^{\frac{n}{2}-m}\right)\] \[= p^{n+k-m}+p^{\frac{n+k}{2}}-p^{\frac{n+k}{2}-m},\] from what follows that \(F\) is almost balanced of the \((+)\) type. _2._ Since \(F_{1}\) and \(F_{2}\) are both of the \((-)\) type, we get from Equation (3.3) that the cardinality of \(F^{-1}(c^{*})\) is given by \[|F^{-1}(c^{*})|= \left(p^{\frac{k}{2}-m}+p^{k-m}-p^{\frac{k}{2}}\right)\cdot\left( p^{\frac{n}{2}-m}+p^{n-m}-p^{\frac{n}{2}}\right)\] \[+ \left(p^{m}-1\right)\cdot\left(p^{\frac{k}{2}-m}+p^{k-m}\right) \cdot\left(p^{\frac{n}{2}-m}+p^{n-m}\right)\] \[= p^{n+k-m}+p^{\frac{n+k}{2}}-p^{\frac{n+k}{2}-m},\] from what follows that \(F\) is almost balanced of the \((+)\) type. _3._ Since \(F_{1}\) is of the \((+)\) type, and \(F_{2}\) is of the \((-)\) type, we get from Equation (3.3) that the cardinality of \(F^{-1}(c^{*})\) is given by \[|F^{-1}(c^{*})|= \left(-p^{\frac{k}{2}-m}+p^{k-m}+p^{\frac{k}{2}}\right)\cdot\left( p^{\frac{n}{2}-m}+p^{n-m}-p^{\frac{n}{2}}\right)\] \[+ \left(p^{m}-1\right)\cdot\left(p^{k-m}-p^{\frac{k}{2}-m}\right) \cdot\left(p^{\frac{n}{2}-m}+p^{n-m}\right)\] \[= p^{n+k-m}-p^{\frac{n+k}{2}}+p^{\frac{n+k}{2}-m},\] from what follows that \(F\) is almost balanced of the \((-)\) type. Finally, we show that all possible bent functions which are "contained" in a given almost bent function are almost balanced of the same type. **Proposition 3.11**.: _Let \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) be an almost balanced surjective bent function and let \(L\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}^{k}\) be a surjective linear mapping. Then the following hold._ 1. _If_ \(F\) _is of the_ \((+)\) _type, then_ \(L\circ F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{k}\) _is of the_ \((+)\) _type as well._ 2. _If_ \(F\) _is of the_ \((-)\) _type, then_ \(L\circ F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{k}\) _is of the_ \((-)\) _type as well._ _Consequently, if \(F\) is of the \((+)\) type (resp. \((-)\) type), then every component bent function is of the \((+)\) type (resp. \((-)\) type)._ Proof.: For \(b\in\mathbb{F}_{p}^{m}\), let \(F^{-1}(b)\) be the unique preimage of \(F\). Denote by \(c=L(b)\in\mathbb{F}_{p}^{k}\). Since every non-zero component function of \(F\) is bent, we have that \(L\circ F\) is bent as well, and hence it is enough to show that \(|(L\circ F)^{-1}(c)|=p^{n-k}\pm p^{n/2}\mp p^{n/2-k}\), since by Theorem 2.4 the uniformity of the other preimages is forced automatically. Since \(L\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}^{k}\) is a surjective linear mapping, it is balanced, and hence \(|L^{-1}(a)|=p^{m-k}\) for all \(a\in\mathbb{F}_{p}^{k}\). 
In this way, the cardinality of \((L\circ F)^{-1}(c)\) is given by \[|(L\circ F)^{-1}(c)|=|F^{-1}(b)|+(p^{m-k}-1)|F^{-1}(a)|, \tag{3.4}\] where \(a\in\mathbb{F}_{p}^{m}\setminus\{b\}\). _1._ If \(F\) is of the \((+)\) type, then \(|F^{-1}(b)|=p^{n-m}+p^{n/2}-p^{\frac{n}{2}-m}\) and \(|F^{-1}(a)|=p^{n-m}-p^{\frac{n}{2}-m}\) for \(a\in\mathbb{F}_{p}^{n}\setminus\{b\}\). Then, from Equation (3.4), we have \(|(L\circ F)^{-1}(c)|=p^{\frac{1}{2}(n-2k)}\left(p^{k}+p^{n/2}-1\right)\). _2._ If \(F\) is of the \((-)\) type, then \(|F^{-1}(b)|=p^{n-m}-p^{n/2}+p^{\frac{n}{2}-m}\) and \(|F^{-1}(a)|=p^{n-m}+p^{\frac{n}{2}-m}\) for \(a\in\mathbb{F}_{p}^{m}\setminus\{b\}\). Then, from Equation (3.4), we have \(|(L\circ F)^{-1}(c)|=p^{\frac{1}{2}(n-2k)}\left(-p^{k}+p^{n/2}+1\right)\). The last claim follows by considering the composition \(L\circ F\), where \(L\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) is linear and surjective. In particular, these results show that almost balanced bent functions of both types exist for all possible choices of \(p,n,m\) (of course, the trivial restrictions \(n\) even and \(m\leq n/2\) follow immediately from the definition of the two types). **Theorem 3.12**.: _Almost balanced bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) of types \((+)\) and \((-)\) exist for all \(m\leq n/2\), where \(n\in\mathbb{N}\) is an arbitrary even number and \(p\) is an arbitrary prime number._ Proof.: Follows from the application of Proposition 3.11 to almost balanced vectorial bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{n/2}\) from primary constructions given in Propositions 3.2, 3.4 and 3.5, all of which are surjective. ## 4 Value distributions and the Walsh transform In this section, we develop the connection between the Walsh transform of a bent function and its value distribution. Particularly, we show that with the knowledge of the Walsh transform, one can get precise information about the value distribution, and vice versa. Recall the following well-known result, which will play an important role in the connection between value distributions and the Walsh transform. **Lemma 4.1**.: _[_21_]_ _Let \(p\) be an odd prime. Then_ \[\sum_{r=1}^{p-1}a_{r}\zeta_{p}^{r}=\begin{cases}\sqrt{p},&\text{if }p\equiv 1 \pmod{4}\\ i\sqrt{p},&\text{if }p\equiv 3\pmod{4}\end{cases}\] _has a (unique) integer solution \(a_{r}=\left(\frac{r}{p}\right)\) for all \(r\), where \(\left(\frac{r}{p}\right)\in\{-1,1\}\) is the Legendre symbol._ As we have seen previously, many of the known constructions yield almost balanced bent functions. Interestingly, these preimage set distributions actually force a plateaued function to be bent. Since plateaued functions are much more prevalent than bent functions, this again underscores the special nature of the almost balanced bent functions we introduced. **Theorem 4.2**.: _Let \(F\colon\mathbb{F}_{p^{m}}\to\mathbb{F}_{p^{m}}\) be a plateaued function with the preimage distribution of type \((+)\) or \((-)\) where the unique preimage is \(F^{-1}(0)\). Then \(F\) is bent. More precisely, we have \(W_{F}(b,0)=-p^{n/2}\) for all \(b\in\mathbb{F}_{p^{m}}^{*}\) for the type \((-)\) function and \(W_{F}(b,0)=p^{n/2}\) for all \(b\in\mathbb{F}_{p^{m}}^{*}\) for the type \((+)\) function._ Proof.: Let us first consider the type \((-)\) distribution, so \(|F^{-1}(0)|=p^{n-m}-p^{n/2}+p^{n/2-m}\) and \(|F^{-1}(y)|=p^{n-m}+p^{n/2-m}\) for non-zero \(y\). 
Then \(|\{x\notin F^{-1}(0)\colon\operatorname{Tr}(bF(x))=c\}|\) is divisible by \(p^{n-m}+p^{n/2-m}\) for any \(c\in\mathbb{F}_{p}\) and any \(b\in\mathbb{F}_{p^{m}}^{*}\). In particular, \[W_{F}(b,0)\equiv p^{n-m}-p^{n/2}+p^{n/2-m}\equiv-p^{n/2}\pmod{p^{n-m}+p^{n/2 -m}}.\] Since \(F\) is plateaued, \(W_{F}(b,0)\) can only attain the values with absolute value \(p^{(n+k)/2}\) with \(k\geq 0\) or \(0\) for \(b\neq 0\). Clearly, \(W_{F}(b,0)=0\) is not possible. By the congruence above, \(p^{n/2}+W_{F}(b,0)=C(p^{n-m}+p^{n/2-m})\) for some \(C\in\mathbb{Z}[\zeta_{p}]\). This simplifies to \[C(p^{n/2}+1)=p^{m}(1+W_{F}(b,0)/p^{n/2}). \tag{4.1}\] Let us first focus on the case \(p=2\), so \(C\in\mathbb{Z}\). Then \(W_{F}(b,0)=\pm 2^{n/2+k}\) with \(k\leq n/2\). Observe that \(2^{n/2}+1\) can divide \(2^{k}\pm 1\) only if \(k\in\{0,n/2\}\). If \(k=n/2\) then \(\operatorname{Tr}(bF(x))\) is constant \(0\), in particular, the image sets of \(bF(x)\) would be contained in a hyperplane of size \(2^{n-1}\). This contradicts Proposition 2.7. We conclude that \(k=0\) and \(W_{F}(b,0)=-2^{n/2}\) for all \(b\neq 0\). Since \(F\) is plateaued, it is thus bent. Now consider \(p>2\). Then \(C=\sum_{r=1}^{p-1}a_{r}\zeta_{p}^{r}\) with integer coefficients \(a_{r}\). Further, recall that \(\sum_{r=1}^{p-1}\zeta_{p}^{r}=-1\), so Equation (4.1) becomes \[\sum_{r=1}^{p-1}(p^{m}+(p^{n/2}+1)a_{r})\zeta_{p}^{r}=p^{m-n/2}W_{F}(b,0) \tag{4.2}\] By [13, Theorem 2], we have \(W_{F}(b,0)=\epsilon p^{\frac{n+k}{2}}\zeta_{p}^{t}\) for some \(0\leq k\leq n\) and \(t\) where \(\epsilon\in\{1,-1\}\) if \(n+k\) is even or \(n+k\) is odd and \(p\equiv 1\pmod{4}\) and \(\epsilon\in\{i,-i\}\) if \(n+k\) is odd and \(p\equiv 3\pmod{4}\). Let us first deal with the case that \(W_{F}(b,0)=\pm p^{\frac{n+k}{2}}\in\mathbb{Z}\). Equation (4.2) then states \[\sum_{r=1}^{p-1}(p^{m}+(p^{n/2}+1)a_{r}\mp p^{m+k/2})\zeta_{p}^{r}=0.\] We conclude that \(p^{m}+(p^{n/2}+1)a_{r}\mp p^{m+k/2}=0\) for all \(r\), that is, \[p^{m}\frac{\pm p^{k/2}-1}{p^{n/2}+1}=a_{r}\in\mathbb{Z}.\] We can now argue as in the \(p=2\) case that, for divisibility reasons, we must have the minus sign in the equation and \(k=0\), leading to a bent function and \(W_{F}(b,0)=-p^{n/2}\). It remains to exclude the case \(W_{F}(b,0)\notin\mathbb{Z}\). Let us first deal with the case \(\epsilon=\pm 1\). Then, again from Equation (4.2), we have that \[\sum_{r=1}^{p-1}(p^{m}+(p^{n/2}+1)a_{r})\zeta_{p}^{r}-p^{\frac{n+k}{2}}\epsilon \zeta_{p}^{t}=0\] for some \(1\leq t\leq p-1\), leading to \(p^{m}+(p^{n/2}+1)a_{r}=0\) for any \(r\neq t\). This is clearly never satisfied for \(a_{r}\in\mathbb{Z}\), so this case cannot occur. Let us now assume \(\epsilon=\pm i\). Then \[\sum_{r=1}^{p-1}(p^{m}+(p^{n/2}+1)a_{r})\zeta_{p}^{r}=\pm p^{\frac{n+k-1}{2}} \left(\sum_{r=1}^{p-1}c_{r}\zeta_{p}^{r+t}\right), \tag{4.3}\] using Lemma 4.1, where \(c_{r}=\left(\frac{r}{p}\right)\). If \(t=0\) this means \(p^{m}+(p^{n/2}+1)a_{r}\mp p^{\frac{n+k-1}{2}}c_{r}=0\) for all \(r\). This is equivalent to \(a_{r}=p^{m}c_{r}\frac{\pm p^{\frac{n+k-m}{2}-1}-c_{r}}{p^{n/2}+1}-c_{r}\). Since both \(c_{r}=-1,1\) occur, this cannot always be an integer for a fixed \(k\), yielding a contradiction. If \(t\neq 0\), then \(\zeta_{p}^{n}\) occurs on the right hand side of Equation (4.3), yielding \(p^{m}+(p^{n/2}+1)a_{r}\mp p^{\frac{n+k-1}{2}}c_{r}=\pm 1\) for all but one \(r\). Since both \(c_{r}=-1,1\) occur, \(2p^{\frac{n+k-1}{2}}\) must be divisible by \(p^{n/2}+1\), which is clearly not the case. 
We get again a contradiction. This concludes the \((-)\) case. The second extremal case \(|F^{-1}(0)|=p^{n-m}+p^{n/2}-p^{n/2-m}\) and \(|F^{-1}(y)|=p^{n-m}-p^{n/2-m}\) for each \(y\neq 0\) can be dealt with in a similar fashion. This time, we have \[W_{F}(b,0)\equiv p^{n/2}\pmod{p^{n-m}-p^{n/2-m}}\] and with the same argumentation as above with only a change of signs throughout, \(F\) is bent where \(W_{F}(b,0)=p^{n/2}\) for all \(b\neq 0\). **Remark 4.3**.: Note that the condition in Theorem 4.2 that the unique preimage is the preimage of \(0\) is not restrictive. Indeed, one can shift a plateaued function always to achieve this without changing the preimage set sizes or losing the plateaued property. Of course, such a shift will however change the signs of the Walsh transform. The following result is a direct consequence of Theorem 4.2, which gives a purely combinatorial way (via preimage set sizes) to check for the regularity of bent functions. **Corollary 4.4**.: _Let \(p\) be odd and \(n\) be even. Let \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) be a bent function._ 1. _If_ \(F\) _is of type_ \((+)\) _it is inequivalent to a function of the_ \((-)\) _type._ 2. _If_ \(F\) _is of the_ \((+)\) _type (or equivalent to a_ \((+)\) _type function) then each weakly regular component function of_ \(F\) _is regular._ 3. _If_ \(F\) _is of the_ \((-)\) _type (or equivalent to a_ \((-)\) _type function) then each weakly regular component function of_ \(F\) _is not regular._ Proof.: This follows from Theorem 4.2 and the fact that for \(p\) odd and \(n\) even regularity is preserved under equivalence, see [5, p. 233] Theorem 4.2 also shows that almost balanced bent functions of type \((+)\) or \((-)\) have a very special Walsh transform in the sense that (potentially after a shift) \(W_{F}(b,0)\) is always plus or minus \(p^{n/2}\). Interestingly, this is a precise characterization of these distributions, i.e., the converse also holds: **Proposition 4.5**.: _Let \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{m}}\) be a bent function such that \(W_{F}(b,0)=p^{n/2}\) (resp. \(-p^{n/2}\)) for all \(b\in\mathbb{F}_{p^{m}}^{*}\). Then \(F\) is almost balanced of type \((+)\) (resp. \((-)\)) and the unique preimage is \(F^{-1}(0)\)._ Proof.: We only deal with the \((+)\) case, the \((-)\) case works identically up to changing the sign throughout. We have \[\sum_{b\in\mathbb{F}_{p^{m}}}W_{F}(b,0)=\sum_{x\in\mathbb{F}_{p^{n}}}\sum_{b \in\mathbb{F}_{p^{m}}}\zeta_{p}^{\operatorname{Tr}(bF(x))}=p^{m}\cdot|\{x\in \mathbb{F}_{p^{n}}\colon F(x)=0\}|.\] Counting another way, we also have \[\sum_{b\in\mathbb{F}_{p^{m}}}W_{F}(b,0)=p^{n}+\sum_{b\in\mathbb{F}_{p^{m}}^{* }}W_{F}(b,0)=p^{n}+p^{n/2}(p^{m}-1).\] Comparing these two equations yields \[|\{x\in\mathbb{F}_{p^{n}}\colon F(x)=0\}|=\frac{p^{n}+p^{n/2}(p^{m}-1)}{p^{m} }=p^{n-m}+p^{n/2}-p^{n/2-m}.\] By Theorem 2.4, it follows that \(F\) is necessarily of type \((+)\). For bent functions, where every component function is weakly regular with the same sign (for instance, if all component functions are regular), the possible preimage set sizes are actually very limited. Note that this in particular holds for all Boolean bent functions, where we consider all component functions to be regular by default. **Theorem 4.6**.: _Let \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{m}}\) be a bent function where \(W_{F}(b,0)=\epsilon p^{n/2}\zeta_{p}^{r_{b}}\) with \(\epsilon\in\{\pm 1,\pm i\}\) for all \(b\in\mathbb{F}_{p^{m}}^{*}\). 
Then \(n\) is even, \(m\leq n/2\) and for any \(a\in\mathbb{F}_{p^{m}}\) we have_ * _If_ \(\epsilon\in\{1,i\}\)_:_ \[|F^{-1}(a)|=p^{n-m}+p^{n/2}-p^{n/2-m}(pk_{a}+1),\] _where_ \(0\leq k_{a}\leq\frac{p^{m}-1}{p-1}\)_. In particular,_ \(|F^{-1}(a)|\geq p^{n-m}-\frac{p^{m}-1}{p-1}p^{n/2-m}\)_._ * _If_ \(\epsilon\in\{-1,-i\}\)_: :_ \[|F^{-1}(a)|=p^{n-m}-p^{n/2}+p^{n/2-m}(pk_{a}+1),\] _where_ \(0\leq k_{a}\leq\frac{p^{m}-1}{p-1}\)_. In particular,_ \(|F^{-1}(a)|\leq p^{n-m}+\frac{p^{m}-1}{p-1}p^{n/2-m}\)_._ _Additionally, \(k_{0}=|\{b\in\mathbb{F}_{p^{m}}^{*}\colon W_{F}(b,0)=p^{n/2}\gamma\zeta_{p} \}|\)._ Proof.: We start with the \(\epsilon\in\{1,i\}\) case. We have \[\sum_{b\in\mathbb{F}_{p^{m}}}W_{F}(b,0)=\sum_{x\in\mathbb{F}_{p^{m}}}\sum_{b \in\mathbb{F}_{p^{m}}}\zeta_{p}^{\operatorname{Tr}(bF(x))}=p^{m}\cdot|\{x\in \mathbb{F}_{p^{m}}\colon F(x)=0\}|.\] Counting another way, we also have \[\sum_{b\in\mathbb{F}_{p^{m}}}W_{F}(b,0)=p^{n}+\sum_{b\in\mathbb{F}_{p^{m}}^{* }}W_{F}(b,0)=p^{n}+p^{n/2}\left(\sum_{r=0}^{p-1}\epsilon k_{r}\zeta_{p}^{r} \right), \tag{4.4}\] where \(k_{r}=|\{b\in\mathbb{F}_{p^{m}}^{*}\colon F_{b}^{*}(0)=r\}|\). Comparing these two equations yields \[|\{x\in\mathbb{F}_{p^{n}}\colon F(x)=0\}|=p^{n-m}+p^{n/2-m}\left(\sum_{r=0}^{ p-1}\epsilon k_{r}\zeta_{p}^{r}\right). \tag{4.5}\] Observe that the left-hand side of Equation (4.5) is an integer, so the right-hand side also has to be an integer. We divide the proof now into two cases: \(n\) **even:** Now \(\epsilon=1\) and \(p^{n/2-m}\) is rational, implying that \(\sum_{r=1}^{p-1}k_{r}\zeta_{p}^{r}\) has to be an integer, which in turn implies that all \(k_{r}=k_{1}=:k\), for all \(r>0\) and \(\sum_{r=1}^{p-1}k_{r}\zeta_{p}^{r}=-k\). Clearly, \(\sum_{r=0}^{p-1}k_{r}=(p-1)k+k_{0}=p^{m}-1\), leading to \(k_{0}=p^{m}-1-(p-1)k\). Substituting this into Equation (4.5) yields \[|\{x\in\mathbb{F}_{p^{n}}\colon F(x)=0\}|=p^{n-m}+p^{n/2-m}(p^{m}-1-pk)=p^{n-m} +p^{n/2}-p^{n/2-m}(pk+1)\] as claimed where \(0\leq k\leq\frac{p^{m}-1}{p-1}\). Note that \(pk+1\) is not divisible by \(p\), so \(p^{n/2-m}\) has to be an integer, leading to \(m\leq n/2\). \(n\) **odd:** Now \(p^{n/2-m}\) is not rational, so Equation (4.5) implies that \(\sum_{r=0}^{p-1}k_{r}\zeta_{p}^{r}\) has to be \(0\) or \(\sqrt{p}\) times an integer. In the first case, all \(k_{r}\) have to be the same, contradicting \(p^{m}-1=\sum_{r=0}^{p-1}k_{r}\). In the other case, we have necessarily by Lemma 4.1 that \(k_{r}=k\cdot\left(\frac{r}{p}\right)+k_{0}\) for all \(r>0\) and some \(k\). Then \[p^{m}-1=\sum_{r=0}^{p-1}k_{r}=k_{0}+\sum_{r=1}^{p-1}k\left(\frac{r}{p}\right)+ (p-1)k_{0}=pk_{0}.\] This is a contradiction since \(p\) does not divide the left-hand side. Clearly, shifting the function preserves the bentness as well as the sizes of the preimage sets, so we get the result not only for the preimage of \(0\) but for all preimages. For \(\epsilon\in\{-1,-i\}\), we get on the right-hand side of Equation (4.4) \(p^{n}-p^{n/2}\left(\sum_{r=0}^{p-1}\epsilon k_{r}\zeta_{p}^{r}\right)\), i.e., just a change of signs. Then, the same argument as for the regular case leads to the result. 
The minimal image set size in the regular case is clearly reached by setting \(k=\frac{p^{m}-1}{p-1}\) and substituting this into the equation yields \[|F^{-1}(a)| =p^{n-m}+p^{n/2}-p^{n/2-m}\left(\frac{p}{p-1}(p^{m}-1)+1\right)\] \[=p^{n-m}-\frac{1}{p-1}\left(p^{n/2}-p^{n/2-m}\right)=p^{n-m}- \frac{p^{m}-1}{p-1}p^{n/2-m}.\] Similarly, the maximal image set size for \(\epsilon\in\{-1,-i\}\) case is reached by setting \(k=\frac{p^{m}-1}{p-1}\) and the result follows again immediately. **Remark 4.7**.: The condition that \(W_{F}(b,0)=\epsilon p^{n/2}\zeta_{p}^{r_{b}}\) with \(\epsilon\in\{\pm 1,\pm i\}\) for all \(b\in\mathbb{F}_{p^{m}}^{*}\) is less restrictive than it might appear. It holds in particular for vectorial bent function where all component functions are regular, and for all Boolean bent functions. Note that Theorem 4.6 in particular gives a simple proof of Nyberg's bound, i.e., it shows that \(m\leq n/2\) for Boolean bent functions. In fact, the result is a stronger version of Nyberg's original result [22, Theorem 3.2.] which showed that all preimage set sizes of vectorial bent functions that have only regular component functions are of the form \(p^{n-m/2}\cdot k\) where \(k\) is not divisible by \(p\). Theorem 4.6 gives both more precise information on the preimage set sizes as well as generalizes the result to a wider set of bent functions. For \(p=2\), the constraints on the Walsh transform in Theorem 4.6 are trivial and can be dropped. In this case, the possible values coincide and the bounds coincide with the ones from Theorem 2.4 (while in the \(p\)-ary case the bounds from Theorem 4.6 are better). In the Boolean case we can in fact derive an extra condition. **Theorem 4.8**.: _Let \(F\colon\mathbb{F}_{2^{n}}\to\mathbb{F}_{2^{m}}\) be a Boolean bent function. Then for any \(a\in\mathbb{F}_{2^{m}}\) we have_ \[|F^{-1}(a)|=2^{n-m}+2^{n/2}-2^{n/2-m}(2k_{a}+1),\] _where \(k_{a}=|\{b\in\mathbb{F}_{2^{m}}^{*}\colon W_{F}(b,0)=-2^{n/2}\cdot(-1)^{\mathrm{ Tr}(ab)}\}|\) and all \(k_{a}\) have the same parity._ Proof.: The result on the preimage set sizes follows immediately from Theorem 4.6. It remains to show that all \(k_{a}\) have the same parity. We have \(k_{0}=|\{b\in\mathbb{F}_{2^{m}}^{*}\colon W_{F}(b,0)=-2^{n/2}\}|\). Further \[W_{F+a}(b,0)=(-1)^{\mathrm{Tr}(ab)}W_{F}(b,0).\] In particular, we see that \(W_{F+a}(b,0)\) coincides with \(W_{F}(b,0)\) if \(\operatorname{Tr}(ab)=0\) and does not coincide if \(\operatorname{Tr}(ab)=1\). So we have \(2^{m-1}\) sign changes. In particular, the number of \(+\) signs that get turned into \(-\) signs has the same parity as the \(-\) signs that get turned into \(+\) signs. Consequently, \(k_{0}=|\{b\in\mathbb{F}_{2^{m}}^{*}\colon W_{F}(b,0)=-2^{n/2}\}|\) and \(s_{a}=|\{b\in\mathbb{F}_{2^{m}}^{*}\colon W_{F+a}(b,0)=-2^{n/2}\}|\) have the same parity for any \(a\). But for \(a\in\mathbb{F}_{2}^{m}\) we have again \[\sum_{b\in\mathbb{F}_{2^{m}}}W_{F+a}(b,0)=\sum_{x\in\mathbb{F}_{2^{n}}}\sum_{b \in\mathbb{F}_{2^{m}}}(-1)^{\operatorname{Tr}(b(F(x)+a))}=2^{m}\cdot|F^{-1}(a )|.\] and \[\sum_{b\in\mathbb{F}_{2^{m}}}W_{F+a}(b,0)=2^{n}+\sum_{b\in\mathbb{F}_{2^{m}}^ {*}}W_{F+a}(b,0)=2^{n}+2^{n/2}\left(2^{m}-1-2s_{a}\right).\] By comparison, we see that \(s_{a}=k_{a}\), so all \(k_{a}\) have the same parity as claimed. In the case that \(n\) is odd (which necessarily implies that \(p\) is odd) we also get more precise information on the possible size of the preimages. 
Note that here we do not need any additional conditions on the Walsh transform. **Theorem 4.9**.: _Let \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{m}}\) be a bent function with \(p,n\) odd. Then for any \(a\in\mathbb{F}_{p^{m}}\) we have \(|F^{-1}(a)|=p^{n-m}\) or_ \[|F^{-1}(a)|=p^{n-m}\pm p^{(n+1)/2-m}\sum_{r=1}^{p-1}\left(k\left(\frac{r}{p} \right)+k_{0}\right),\] _where \(1\leq k\leq\frac{p^{m}-1}{p-1}-(p-1)k_{0}\) and \(k_{0}\) is a non-negative integer._ Proof.: We get again (like Equation (4.5) just without assuming additional conditions on the Walsh transform) \[|F^{-1}(0)|=p^{n-m}+p^{n/2-m}\left(\sum_{r=0}^{p-1}\epsilon_{r}\delta k_{r} \zeta_{p}^{r}\right),\] where \(k_{r}\) are non-negative integers satisfying \(\sum_{r}k_{r}\leq p^{m}-1\), \(\epsilon_{r}\in\{1,-1\}\) and \(\delta\in\{1,i\}\) depending on \(p\). Since \(n\) is odd, we know that \(p^{n/2-m}=\sqrt{p}\cdot p^{(n-1)/2-m}\) is not rational. Then, either all \(k_{r}=0\) (leading to \(|F^{-1}(0)|=p^{n-m}\)) or, using Lemma 4.1, we have \(\epsilon_{r}k_{r}=\epsilon_{1}k\left(\frac{r}{p}\right)+k_{0}\) for all \(r>0\) with \(1\leq k\leq\frac{p^{m}-1}{p-1}-(p-1)k_{0}\), leading to \[|\{x\in\mathbb{F}_{p^{n}}\colon F(x)=0\}|=p^{n-m}\pm p^{(n+1)/2-m}\sum_{r=1}^ {p-1}\left(k\left(\frac{r}{p}\right)+k_{0}\right).\] Again, shifting does not affect the preimage set sizes, so we get the same conditions also on the preimages of non-zero elements. Value distributions of bent functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) with small values of \(m\) While listing all possible preimage size distributions for vectorial bent functions in complete generality seems to be a very difficult and out-of-reach task, for small values of \(m\) such results are possible to obtain from our work in the previous sections. We want to remind the reader that the case \(m=1\) is well-known (see Theorem 1.3), while the situation for \(m>1\) has up until now not been determined. Considering Equations (2.1) and (2.2), it is clear that if \(X_{1},\ldots,X_{p^{m}}\) are a solution, the average value of \(X_{i}\) is \(p^{n-m}\). As the next proposition shows, handling these equations is made a lot easier when one considers the deviations from this mean instead of the \(X_{i}\) directly. **Proposition 5.1**.: _Define \(H_{i}=X_{i}-\frac{|G|}{|H|}\). Then Equations (2.1) and (2.2) are satisfied if and only if_ \[\sum_{i=1}^{|H|}H_{i}^{2} =|G|-\frac{|G|}{|H|} \tag{5.1}\] \[\sum_{i=1}^{|H|}H_{i} =0. \tag{5.2}\] Proof.: Equation (2.2) is clearly equivalent to Equation (5.2). For Equation (2.1), we have \[|G|+\frac{|G|}{|H|}(|G|-1) =\sum_{i=1}^{|H|}X_{i}^{2}=\sum_{i=1}^{|H|}\left(\frac{|G|}{|H|}+H _{i}\right)^{2}\] \[=\frac{|G|^{2}}{|H|}+2\frac{|G|}{|H|}\sum_{i=1}^{|H|}H_{i}+\sum_{i =1}^{|H|}H_{i}^{2}\] \[=\frac{|G|^{2}}{|H|}+\sum_{i=1}^{|H|}H_{i}^{2}.\] Rearranging yields \(\sum_{i=1}^{|H|}H_{i}^{2}=|G|-\frac{|G|}{|H|}\) as desired. Note that the two extremal distributions \((+)\) and \((-)\) belong to the solution \(H_{1}=\pm\sqrt{|G|}-\frac{\sqrt{|G|}}{|H|}\), \(H_{i}=\mp\frac{\sqrt{|G|}}{|H|}\) for all \(i>1\). Further, all solutions come in pairs since one can change the signs of all the \(H_{i}\). For bent functions from \(\mathbb{F}_{p}^{n}\) to \(\mathbb{F}_{p}^{m}\) one can use the results from the previous section to derive very strong conditions on the preimage distributions if certain spectral conditions are satisfied. 
Note that this again covers the important cases of Boolean vectorial bent functions, \(p\)-ary bent functions with regular component functions. **Theorem 5.2**.: _Let \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) be a bent function such that \(W_{F}(b,0)=\epsilon p^{n/2}\zeta_{p}^{n}\) with \(\epsilon\in\{1,i\}\) for all \(b\in\mathbb{F}_{p^{m}}^{*}\). Then, the preimage set sizes are \(X_{i}=p^{n-m}+p^{n/2-m}(pT_{i}-1)\) for all \(i\in\{1,\dots,p^{m}\}\) where the \(T_{i}\) are integers satisfying the two equations_ \[\sum_{i=1}^{p^{m}}T_{i}^{2} =p^{2m-2} \tag{5.3}\] \[\sum_{i=1}^{p^{m}}T_{i} =p^{m-1}. \tag{5.4}\] Proof.: Assume Equations (5.1) and (5.2) hold for \(H_{1},\dots,H_{p^{m}}\). By Theorem 4.6, we have \(H_{i}=p^{n/2}-p^{n/2-m}(pk_{i}+1)=p^{n/2-m}(p^{m}-pk_{i}-1)\) for \(0\leq k_{i}\leq\frac{p^{m}-1}{p-1}\) and thus \(p^{n/2-m}|H_{i}\). Write \(H_{i}^{\prime}=\frac{H_{i}}{p^{m/2-m}}\). Plugging this into Equations (5.1) and (5.2) yields \[\sum_{i=1}^{p^{m}}(H_{i}^{\prime})^{2} =p^{2m}-p^{m}\] \[\sum_{i=1}^{p^{m}}H_{i}^{\prime} =0.\] Observe that \(H_{i}^{\prime}\equiv-1\pmod{p}\) and set \(H_{i}^{\prime}=pT_{i}-1\). Plugging this into the equations above yields the desired equations on the \(T_{i}\). Retracing the substitutions yields \(X_{i}=p^{n-m}+H_{i}=p^{n-m}+p^{n/2-m}H_{i}^{\prime}=p^{n-m}+p^{n/2-m}(pT_{i}-1)\) The extremal distribution \((+)\) solves Equations (5.3) and (5.4) with \(T_{1}=p^{m-1}\) and \(T_{i}=0\) for all \(i>0\) (recall that the \((-)\) case cannot occur here if \(p\) is odd by Theorem 4.2). For \(p=2\), the solution \(T_{1}=-2^{m-1}+1\), \(T_{i}=1\) for \(i>0\) yields the \((-)\) case. **Theorem 5.3**.: _Let \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) be a bent function such that \(W_{F}(b,0)=\epsilon p^{n/2}\zeta_{p}^{r_{b}}\) with \(\epsilon\in\{-1,-i\}\) for all \(b\in\mathbb{F}_{p^{m}}^{*}\). Then, the preimage set sizes are \(X_{i}=p^{n-m}+p^{n/2-m}(1-pT_{i})\) for all \(i\in\{1,\dots,p^{m}\}\) where the \(T_{i}\) are integers satisfying the two equations_ \[\sum_{i=1}^{p^{m}}T_{i}^{2}=p^{2m-2} \tag{5.5}\] \[\sum_{i=1}^{p^{m}}T_{i}=p^{m-1}. \tag{5.6}\] Proof.: Assume Equations (5.1) and (5.2) hold for \(H_{1},\dots,H_{p^{m}}\). By Theorem 4.6 we have \(H_{i}=p^{n/2-m}(pk_{i}+1)-p^{n/2}=p^{n/2-m}(pk_{i}-p^{m}+1)\) for \(0\leq k_{i}\leq\frac{p^{m}-1}{p-1}\) and thus \(p^{n/2-m}|H_{i}\). Write \(H_{i}^{\prime}=\frac{H_{i}}{p^{n/2-m}}\). Plugging this into Equations (5.1) and (5.2) yields \[\sum_{i=1}^{p^{m}}(H_{i}^{\prime})^{2}=p^{2m}-p^{m}\] \[\sum_{i=1}^{p^{m}}H_{i}^{\prime}=0.\] Observe that \(H_{i}^{\prime}\equiv 1\pmod{p}\) and set \(H_{i}^{\prime}=-pT_{i}+1\). Plugging this into the equations above yields the desired equations on the \(T_{i}\). Retracing the substitutions yields \(X_{i}=p^{n-m}+H_{i}=p^{n-m}+p^{n/2-m}H_{i}^{\prime}=p^{n-m}+p^{n/2-m}(-pT_{i}+1)\). Here, the extremal distribution \((-)\) solves Equations (5.5) and (5.6) with \(T_{1}=p^{m-1}\) and \(T_{i}=0\) for all \(i>0\) (here the \((+)\) case cannot occur if \(p\) is odd). For \(p=2\), the solution \(T_{1}=-2^{m-1}+1\), \(T_{i}=1\) for \(i>0\) yields the \((+)\) case. For \(p=2\) and low values of \(m\) (and arbitrary \(n\)) we can use these results to determine all possible preimage distributions of vectorial Boolean bent functions. Since the conditions on the Walsh transform hold trivially in this case, we can use both Theorem 5.2 and Theorem 5.3 and get the same results. 
Note also that the symmetry (with respect to the signs) from Equations (5.1) and (5.2) is still visible in the binary case: Indeed, if \(\{T_{1},\dots,T_{2^{m}}\}\) is a valid solution then so is \(\{-T_{1}+1,-T_{2}+1,\dots,-T_{2^{m}}+1\}\). Note that all \(T_{i}\) have the same parity by Theorem 4.8. By the symmetry above, we can concentrate on even \(T_{i}\) since the solutions with odd \(T_{i}\) are covered by the symmetry above. While this direct connection does not exist anymore in the \(p\)-ary case, the solutions here still come in pairs since a solution of the Equations (5.3) and (5.4) yield different distributions for the two Theorems 5.2 and Theorem 5.3 (while the distributions in the Boolean case overlap). **Theorem 5.4**.: _Let \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{2}\) be a bent function. Then there are only two possible preimage distributions which are exactly the two extremal distributions \((+)\) and \((-)\). Both distributions occur for any even \(n\geq 4\)._ Proof.: From Theorem 5.2 we get the equations in the \(T_{i}\): \[\sum_{i=1}^{4}T_{i}^{2}=4,\ \ \sum_{i=1}^{4}T_{i}=2.\] It is easy to see that the only possible integer solutions are (up to permutation of the \(T_{i}\)): \(T_{1}=2\), \(T_{2}=T_{3}=T_{4}=0\) and \(T_{1}=-1\), \(T_{2}=T_{3}=T_{4}=1\). These distributions belong to the two extremal distributions \((+)\) and \((-)\). Conversely, Theorem 3.12 shows that vectorial bent functions \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{2}\) of types \((+)\) and \((-)\) exist for all \(n\geq 4\) This allows us to deduce a simple corollary. **Corollary 5.5**.: _Let \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{m}\) with \(m\in\{1,2\}\) be a plateaued function. Then \(F\) is bent if and only if \(F\) is of type \((-)\) or \((+)\)._ Proof.: Follows from Theorem 4.2 together with Remark 2.5 and Theorem 5.4, respectively. **Theorem 5.6**.: _Let \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{3}\) be a bent function. Then there are only four possible preimage distributions which are the distributions \((+)\) and \((-)\), and the distributions with preimage set sizes \(X_{i}=2^{n-3}+2^{n/2-3}(2T_{i}-1)\) where_ \[T_{1}=-2,T_{2}=T_{3}=T_{4}=2,T_{5}=\cdots=T_{8}=0\text{, or}\] \[T_{1}=3,T_{2}=T_{3}=T_{4}=-1,T_{5}=\cdots=T_{8}=1.\] Proof.: From Theorem 5.2 we get the equations in the \(T_{i}\): \[\sum_{i=1}^{8}T_{i}^{2}=16,\ \ \sum_{i=1}^{8}T_{i}=4.\] From Theorem 4.8 and the discussion above we can concentrate on even \(T_{i}\), getting the odd solutions via symmetry. This effectively makes the set of equations even easier. With little effort, one gets the solutions. Up to a permutation, we only get the two solutions belonging to \((+)\) and \((-)\): \(T_{1}=4\), \(T_{2}=T_{3}=\cdots=T_{8}=0\) and \(T_{1}=-3\), \(T_{2}=T_{3}=\cdots=T_{8}=1\), as well as the even solution \(T_{1}=-2\), \(T_{2}=T_{3}=T_{4}=2\), \(T_{5}=\cdots=T_{8}=0\) and its "symmetric" solution \(T_{1}=3\), \(T_{2}=T_{3}=T_{4}=-1\), \(T_{5}=\cdots=T_{8}=1\). **Remark 5.7**.: We have checked with a computer program that by adding linear functions to representatives of the equivalence classes of vectorial bent functions \(F\colon\mathbb{F}_{2}^{6}\to\mathbb{F}_{2}^{3}\) from [25], it is possible to obtain all four distributions in Theorem 5.6. We conjecture that all four preimage distributions from Theorem 5.6 occur for all vectorial bent functions \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{3}\) with even \(n\geq 6\). 
Of course, it was only left to prove the existence of the two non-extremal distributions since the two extremal distributions are covered by Theorem 3.12. In the following statement, we also analyse value distributions of vectorial Boolean bent functions \(F\colon\mathbb{F}_{2^{n}}\to\mathbb{F}_{2^{4}}\). **Theorem 5.8**.: _Let \(F\colon\mathbb{F}_{2^{n}}\to\mathbb{F}_{2^{4}}\) be a bent function. Then there are only 14 possible preimage distributions which are the distributions with preimage set sizes \(X_{i}=2^{n-4}+2^{n/2-4}(2T_{i}-1)\) where the \(T_{i}\) are one of the following:_ 1. \(T_{1}=-6,T_{2}=\cdots=T_{9}=0,T_{10}=\cdots=T_{16}=2,\)__ 2. \(T_{1}=-4,T_{2}=T_{3}=-2,T_{4}=\cdots=T_{9}=0,T_{10}=\cdots=T_{15}=2,T_{16}=4,\)__ 3. \(T_{1}=-4,T_{2}=\cdots=T_{13}=0,T_{14}=\cdots=T_{16}=4,\)__ 4. \(T_{1}=\cdots=T_{6}=-2,T_{7}=\cdots=T_{16}=2,\)__ 5. \(T_{1}=\cdots=T_{4}=-2,T_{5}=\cdots=T_{10}=0,T_{11}=\cdots=T_{14}=2,T_{15}=T_{16 }=4,\)__ 6. \(T_{1}=\cdots=T_{3}=-2,T_{4}=\cdots=T_{11}=0,T_{12}=\cdots=T_{15}=2,T_{16}=6,\)__ 7. \(T_{1}=\cdots=T_{15}=0,T_{16}=8,\)__ 8. _The symmetric solutions_ \[\{-T_{1}+1,-T_{2}+1,\ldots,-T_{16}+1\}\] _for all 7 solutions above._ Proof.: We again solve the equations on the \(T_{i}\), focusing only on even \(T_{i}\). The solutions for odd \(T_{i}\) follow again using the symmetry \(\{T_{1},\ldots,T_{16}\}\mapsto\{-T_{1}+1,-T_{2}+1,\ldots,-T_{16}+1\}\). Setting \(T_{i}=2T_{i}^{\prime}\), we get \[\sum_{i=1}^{16}T_{i}^{\prime 2}=16,\ \sum_{i=1}^{16}T_{i}^{\prime}=4.\] We find all possible solutions of this system with Wolfram Mathematica [31] as follows. Firstly, we construct the set of all possible last four coordinates \[LC=\{\{T^{\prime}_{13},T^{\prime}_{14},T^{\prime}_{15},T^{\prime}_{16}\}\colon-3 \leq T^{\prime}_{i}\leq 3,T^{\prime}_{i}\in\mathbb{Z}\mid(T^{\prime}_{13})^{2}+(T^{ \prime}_{14})^{2}+(T^{\prime}_{15})^{2}+(T^{\prime}_{16})^{2}\leq 16\},\] whose cardinality is \(114\). Note that we can consider the condition \(-3\leq T^{\prime}_{i}\leq 3,T^{\prime}_{i}\in\mathbb{Z}\) instead of \(-4\leq T^{\prime}_{i}\leq 4,T^{\prime}_{i}\in\mathbb{Z}\), since having one \(T^{\prime}_{i}=\pm 4\) implies immediately that all other \(T^{\prime}_{j}=0\) for \(j\neq i\), which corresponds precisely to the two extremal distributions. Secondly, for every fixed \(\{T^{\prime}_{13},T^{\prime}_{14},T^{\prime}_{15},T^{\prime}_{16}\}\in LC\), we find all integer solutions of the system of linear equations \[\sum_{i=1}^{12}T^{\prime 2}_{i}=16-(T^{\prime}_{13})^{2}-(T^{\prime}_{14})^{2}- (T^{\prime}_{15})^{2}-(T^{\prime}_{16})^{2},\ \sum_{i=1}^{12}T^{\prime}_{i}=4-T^{\prime}_{13}-T^{\prime}_{14}-T^{\prime}_{15 }-T^{\prime}_{16}.\] with the standard tools of Wolfram Mathematica [31]. Finally, we collect all the different solutions in \(T^{\prime}_{i}\) for all \(114\) cases (note that the ordering of variables is ignored), and reconstruct from them the solutions in \(T_{i}=2T^{\prime}_{i}\). In total, we get \(14\) solutions in \(T_{i}\), namely, \(7\) of them correspond to the ones given in the statement of the theorem, as well as the following \(7\) additional ones: 1. \(T_{1}=T_{2}=-4,T_{3}=\cdots=T_{8}=0,T_{9}=\cdots=T_{16}=2\), 2. \(T_{1}=-4,T_{2}=\cdots=T_{4}=-2,T_{5}=\cdots=T_{7}=0,T_{8}=\cdots=T_{16}=2\), 3. \(T_{1}=-4,T_{2}=-2,T_{3}=\cdots=T_{11}=0,T_{12}=\cdots=T_{14}=2,T_{15}=T_{16}=4\), 4. \(T_{1}=-4,T_{2}=\cdots=T_{12}=0,T_{13}=\cdots=T_{15}=2,T_{16}=6\), 5. \(T_{1}=\cdots=T_{5}=-2,T_{6}=\cdots=T_{8}=0,T_{9}=\cdots=T_{15}=2,T_{16}=4\), 6. 
\(T_{1}=\cdots=T_{3}=-2,T_{4}=\cdots=T_{12}=0,T_{13}=2,T_{14}=\cdots=T_{16}=4\), 7. \(T_{1}=T_{2}=-2,T_{3}=\cdots=T_{13}=0,T_{14}=2,T_{15}=4,T_{16}=6\). Now, we show that all \(7\) distributions from the list cannot occur for bent functions. By Theorem 4.6, we have \(k_{a}=|\{b\in(\mathbb{F}_{2^{4}})^{*}\colon W_{F}(b,0)=-2^{n/2}\cdot(-1)^{ \operatorname{Tr}(ab)}\}|=8-T_{i}\) for some \(a\in\mathbb{F}_{2}^{4}\) and some \(1\leq i\leq 16\). We can pick without loss of generality (by shifting the function by a constant) the \(i\) that corresponds to \(k_{0}\). Set \(K=\{b\in\mathbb{F}_{2^{4}}^{*}\colon W_{F}(b,0)=-2^{n/2}\}\). Let us first assume that \(k_{0}=12\), corresponding to a value of \(T_{i}=-4\), so \(W_{F}(b,0)=-2^{n/2}\) for \(12\) choices of \(b\) and \(W_{F}(b,0)=2^{n/2}\) for \(3\) choices of \(b\). Then \(k_{a}\in\{4,6,8,10\}\) for \(a\neq 0\), depending on \(\operatorname{Tr}(ab)\) for \(b\in K\). For instance, \(k_{a}=4\) iff all \(b\) with \(\operatorname{Tr}(ab)=1\) are contained in \(K\) and \(k_{a}=6\) iff precisely \(7\)\(b\) with \(\operatorname{Tr}(ab)=1\) are contained in \(K\). In particular, it is impossible that \(k_{a}=12\) for \(a\neq 0\). This means that there can only be at most one \(i\) such that \(T_{i}=-4\), excluding the case i). We can write \(\{x,y,z\}=\mathbb{F}_{2^{4}}^{*}\setminus K\) and let \(H\) be a hyperplane of \(\mathbb{F}_{2^{4}}\) containing \(x,y,z\). Then \(|H\cap K|=4\) and (denoting \(\overline{H}=\mathbb{F}_{2^{4}}\setminus H\)) clearly \(|\overline{H}\cap K|=8\). By the considerations above this means that there exists an \(a\) such that \(k_{a}=4\), corresponding to a \(T_{j}=4\). We conclude that if one \(T_{i}=-4\) there also has to exist a \(j\) with \(T_{j}=4\). This excludes the cases ii) and iv). Assume now that we have another value of \(j\) such that \(T_{j}=4\) (still \(T_{i}=-4\)), corresponding to a \(k_{a}=4\). Then (as outlined above) there is a hyperplane \(H\) with \(|H\cap K|=4\) and \(H\) must contain \(x,y,z\). If \(x,y,z\) are linearly independent, this \(H=\langle x,y,z\rangle\) is uniquely determined. If \(x+y=z\) then there are precisely \(3\) choices for \(H\). We conclude that if \(T_{i}=-4\) there are either one or three \(j\) such that \(T_{j}=4\). This excludes the case iii). Let us now deal with the last three cases v), vi) and vii). All have in common that there is no \(i\) with \(T_{i}=-4\) but there is an \(i\) with \(T_{i}=4\). Let us thus assume that \(k_{0}=4\), which corresponds to a \(T_{i}=4\). We can thus set \(K=\{x,y,z,w\}\). We have \(k_{a}=12\) (corresponding to \(T_{j}=-4\)) if and only if \(\operatorname{Tr}(ax)=\operatorname{Tr}(ay)=\operatorname{Tr}(az)=\operatorname{ Tr}(aw)=0\) for some \(a\neq 0\), i.e., \(K\) is contained in a hyperplane. Since \(T_{j}=-4\) does not occur, this means \(K\) is not contained in a hyperplane which means that \(x,y,z,w\) are linearly independent. Similarly, we have \(k_{a}=4\) (corresponding to a second \(T_{j}=4\)) for \(a\neq 0\) if and only if \(\operatorname{Tr}(ax)=\operatorname{Tr}(ay)=\operatorname{Tr}(az)= \operatorname{Tr}(aw)=1\) for some \(a\), i.e., \(K\) is contained in an affine hyperplane. This occurs if and only if \(x+y,x+z,x+w\) are contained in the hyperplane \(H_{a}=\{b\colon\operatorname{Tr}(ab)=0\}\) and \(x\notin H_{a}\). But \(x+y,x+z,x+w\) are linearly independent, so \(H_{a}=\langle x+y,x+z,x+w\rangle\) is uniquely determined. 
This means that there is at most one second \(j\) (next to \(i\)) such that \(T_{j}=4\), excluding case vi). If there is no \(j\neq i\) such that \(T_{j}=4\) then (by the argument above) \(K\) is not contained in an affine hyperplane, in other words, \(|H\cap K|\geq 1\) for all hyperplanes \(H\). But set again \(H=\langle x+y,x+z,x+w\rangle\) and since \(x,y,z,w\) are linearly independent, we have \(H\cap K=\emptyset\), yielding a contradiction. We conclude that there must be exactly one second \(j\neq i\) such that \(T_{j}=4\). This excludes the last remaining two cases v) and vii). It is easy to observe that the symmetric cases \(\{-T_{1}+1,-T_{2}+1,\ldots,-T_{16}+1\}\) to the \(7\) cases we just excluded also cannot occur. Indeed, we can repeat the arguments above, just replacing \[k_{a}= |\{b\in\mathbb{F}_{2^{4}}^{*}\colon W_{F}(b,0)=-2^{n/2}\cdot(-1)^ {\operatorname{Tr}(ab)}\}|\text{ with }\] \[k^{\prime}_{a}= |\{b\in\mathbb{F}_{2^{4}}^{*}\colon W_{F}(b,0)=2^{n/2}\cdot(-1)^ {\operatorname{Tr}(ab)}\}|,\] i.e., a change of a sign. We have checked by computer that all the \(14\) cases in Theorem 5.8 do in fact occur for vectorial Boolean bent functions in \(8\) variables. **Proposition 5.9**.: _The preimage distributions of vectorial bent functions \(F\colon\mathbb{F}_{2}^{8}\to\mathbb{F}_{2}^{4}\) are precisely the 14 distributions given in Theorem 5.8._ Verification.: Consider the following vectorial bent function \(F\colon\mathbb{F}_{2}^{8}\to\mathbb{F}_{2}^{4}\) (this is the first function in the list [23] obtained in [24]), which is given by its algebraic normal form as follows: \[F(x_{1},\ldots,x_{8})=\begin{pmatrix}x_{1}x_{5}+x_{2}x_{6}+x_{3}x_{7}+x_{4}x_ {8}\\ x_{1}x_{3}+x_{1}x_{4}+x_{3}x_{4}+x_{2}x_{5}+x_{4}x_{5}+x_{3}x_{6}+x_{4}x_{6}+x_ {1}x_{7}+x_{3}x_{7}+x_{4}x_{7}+x_{2}x_{8}\\ x_{1}x_{5}+x_{3}x_{5}+x_{4}x_{5}+x_{2}x_{6}+x_{3}x_{6}+x_{2}x_{7}+x_{1}x_{8}+ x_{2}x_{8}\\ x_{1}x_{3}+x_{1}x_{4}+x_{3}x_{5}+x_{2}x_{7}+x_{5}x_{7}+x_{1}x_{8}+x_{6}x_{8} \end{pmatrix}. \tag{5.7}\] With a help of a computer program, it is possible to check that by adding random linear functions to the bent function \(F\) defined in (5.7), one soon gets all possible distributions given in the statement of Theorem 5.8. **Remark 5.10**.: In view of Proposition 5.9 and Theorem 5.8, we conjecture that all 14 preimage distributions from Theorem 5.8 occur for all vectorial bent functions \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{4}\) with even \(n\geq 8\). Finally, we demonstrate that in some cases it is possible to determine possible value distributions of bent functions without the exact knowledge of spectral properties of the considered functions. This allows us to extend the proof of Theorem 5.4 to arbitrary groups of sizes \(|G|=2^{n}\) and \(|H|=4\). **Theorem 5.11**.: _Let \(G\) and \(H\) be two finite groups with \(|G|=2^{n}\) and \(|H|=4\). Let \(F\colon G\to H\) be a perfect nonlinear function. Then there are only two possible preimage distributions which are exactly the two extremal distributions from Theorem 2.3._ Proof.: Assume Equations (5.1) and (5.2) hold for \(H_{1},\ldots,H_{4}\) and \(n>4\). Then \[\sum_{i=1}^{4}H_{i}^{2}\equiv 0\pmod{8}.\] Observe that \(H_{i}^{2}\) is \(0,1,4\pmod{8}\) and \(H_{i}^{2}\equiv 1\pmod{8}\) if and only if \(H_{i}\) is odd. This implies that the \(H_{i}\) are all even, say \(H_{i}=2H_{i}^{\prime}\). 
Then Equations (5.1) and (5.2) become \[\sum_{i=1}^{4}(H_{i}^{\prime})^{2} =2^{n-2}-2^{(n-2)-2}\] \[\sum_{i=1}^{4}H_{i}^{\prime} =0.\] \(H_{1},\ldots,H_{4}\) is thus a solution of Equations (5.1) and (5.2) for \(n\) if and only if \(H_{1}^{\prime},\ldots,H_{4}^{\prime}\) is a solution of Equations (5.1), (5.2) for \(n-2\). We can continue this procedure until \(n=4\), since in this case \(2^{n-2}\) is no longer divisible by \(8\). We arrive at \[\sum_{i=1}^{4}(H^{\prime}_{i})^{2} =2^{4}-2^{4-2}=12\] \[\sum_{i=1}^{4}H^{\prime}_{i} =0.\] One can see that the only possible solutions (up to permutation of the \(H_{i}\)) are \(H_{1}=H_{2}=H_{3}=\pm 1\), \(H_{4}=\mp 3\). This means that the only possible solutions for the general case are \(H_{1}=H_{2}=H_{3}=\pm 2^{n/2-2}\), \(H_{4}=\mp 3\cdot 2^{n/2-2}\). These correspond to preimage set sizes \(X_{1}=X_{2}=X_{3}=2^{n-2}\pm 2^{n/2-2}\), \(X_{4}=2^{n-2}\mp 3\cdot 2^{n/2-2}\). ## 6 Value distributions of planar functions In this section, we discuss the particularly interesting case of planar functions, i.e., vectorial bent functions with \(p\) odd and \(n=m\). Planar functions have important applications both for difference sets (as they give rise to examples of skew Hadamard difference sets that are inequivalent to Paley difference sets [10]) and commutative semifields (see, e.g., [12]) which play an important role in finite geometry. The image sets of planar functions were considered in [16] and [8] where lower and upper bounds (respectively) of the image set sizes of planar functions were derived. Using the tools we developed in the previous sections we are able to unify these results and give an alternative proof of one of the main results in [16, Theorem 2] as well as [8], while giving (for the upper bound) more precise information on the preimage set distribution that occurs in the extremal cases. Recall that we call a function \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{n}}\) with \(p\) odd \(2\)-to-\(1\) on \(\mathbb{F}_{p^{n}}\) if one unique element has one preimage and \((p^{n}-1)/2\) elements have \(2\) preimages. **Proposition 6.1**.: _Let \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{n}}\) be a planar function. Then_ \[\frac{p^{n}+1}{2}\leq|\operatorname{Im}(F)|\leq p^{n}-\frac{1}{2}(\sqrt{4p^{n }-3}-1).\] _The lower bound is satisfied with equality if and only if \(F\) is \(2\)-to-\(1\) on \(\mathbb{F}_{p^{n}}\) and the upper bound is satisfied with equality if and only if all but one element in the image set have a unique preimage._ Proof.: We consider Equations (5.1) and (5.2). Since \(H_{i}=1+X_{i}\) we have \(H_{i}\geq-1\). Set \(k=|\{i\colon H_{i}=-1\}|\) and let \(H_{1}=\cdots=H_{k}=-1\), \(H_{k+1}=\cdots=H_{k+s}=0\) and \(H_{i}>0\) for \(i>k+s\). Clearly, \(|\operatorname{Im}(F)|=p^{n}-k\). We get from Equations (5.1) and (5.2) \[\sum_{i=k+s+1}^{p^{n}}H_{i}^{2} =p^{n}-1-k \tag{6.1}\] \[\sum_{i=k+s+1}^{p^{n}}H_{i} =k. \tag{6.2}\] By the Cauchy-Schwarz inequality, we get \(k^{2}\leq(p^{n}-1-k)(p^{n}-k-s)\) with equality if and only if all \(H_{i}\) with \(i>k+s\) are equal. From Proposition 2.7 we know the possible minimum value is \(k=\frac{p^{n}-1}{2}\). Plugging this in yields \(\frac{p^{n}-1}{2}\leq\frac{p^{n}+1}{2}-s\), only leaving \(s\in\{0,1\}\) as possibilities. If \(s=0\), then Equation (6.2) cannot be satisfied as all \(H_{i}\geq 1\) and the sum contains \(\frac{p^{n}+1}{2}=k+1\) terms. 
Thus \(s=1\) and it is easy to see that \(H_{i}=1\) for all \(i>k+s\), meaning that \(X_{i}=2\) for these \(i\) and \(F\) is \(2\)-to-\(1\). Let us consider the upper bound now. Define \(M_{i}\) as the number of elements \(y\in\operatorname{Im}(F)\) with precisely \(i\) preimages. Then (by Proposition 2.1) \(p^{n}+p^{n-m}(p^{n}-1)=\sum_{i=1}^{r}i^{2}M_{i}\), where \(r\) is the maximum preimage set size of \(F\). Note that \(\sum_{i=1}^{r}iM_{i}=p^{n}\), so \[p^{n-m}(p^{n}-1)=\sum_{i=1}^{r}i(i-1)M_{i}\leq r\sum_{i=1}^{r}(i-1)M_{i},\] with equality if and only if \(M_{i}=0\) for all \(2\leq i<r\). Then \[|\operatorname{Im}(F)| =\sum_{i=1}^{r}M_{i}=\sum_{i=1}^{r}iM_{i}-\sum_{i=1}^{r}(i-1)M_{i}\] \[=p^{n}-\sum_{i=1}^{r}(i-1)M_{i}\leq p^{n}-\frac{p^{n-m}(p^{n}-1)}{r},\] still with equality if and only if \(M_{i}=0\) for all \(2\leq i<r\). Clearly, the bound is best if \(r\) is maximal, i.e., if the maximum preimage set size is as high as possible, which means maximizing one \(H_{i}\) in Equation (6.1). This occurs if Equations (6.1) and (6.2) have only one term on the left-hand side, which yields \(H_{i}=k\), \(H_{i}^{2}=p^{n}-1-k\), i.e., \(k^{2}+k=p^{n}-1\), which has the positive solution \(k=\frac{1}{2}(\sqrt{4p^{n}-3}-1)\), leading to \(|\operatorname{Im}(F)|=p^{n}-\frac{1}{2}(\sqrt{4p^{n}-3}-1)\). Equality is achieved if and only if there is one element with \(\frac{1}{2}(\sqrt{4p^{n}-3}-1)+1\) preimages (since \(X_{i}=1+H_{i}=1+k\)) and all other elements in the image set have a unique preimage. **Remark 6.2**.: We are not aware of any planar functions attaining the upper bound. A necessary condition is that \(4p^{n}-3=4(p^{n}-1)+1=8(\frac{p^{n}-1}{2})+1\) is a square. As already observed by Coulter and Senger in a slightly different context [8], this is the case if and only if \(\frac{p^{n}-1}{2}\) is a triangular number, i.e., a number of the form \(u(u-1)/2\). This would mean \(p^{n}-1=u(u-1)\), i.e., this occurs if and only if \(p^{n}-1\) is the product of two consecutive numbers. This can clearly never occur if \(n\) is even: then \(p^{n}-1=(p^{n/2}-1)(p^{n/2}+1)\), and the products of two consecutive numbers nearest to \(p^{n}\) are \(p^{n/2}(p^{n/2}\pm 1)=p^{n}\pm p^{n/2}\neq p^{n}-1\). For \(n\) odd, this can however happen; simple examples include \(7-1=2\cdot 3\) and \(7^{3}-1=18\cdot 19\). Note that many examples of planar functions satisfying the lower bound are known; in fact, all planar Dembowski-Ostrom polynomials (i.e., polynomials of the form \(F(x)=\sum_{i,j=0}^{n-1}a_{i,j}x^{p^{i}+p^{j}}\), where \(a_{i,j}\in\mathbb{F}_{p^{n}}\) and \(x\in\mathbb{F}_{p^{n}}\)) are necessarily \(2\)-to-\(1\), which is well known [16, Corollary 1]. We add a short proof of this statement using our techniques here as well. **Proposition 6.3**.: _Let \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{n}}\) be a planar function such that \(F(x)=F(-x)\) for all \(x\in\mathbb{F}_{p^{n}}\). Then \(F\) is \(2\)-to-\(1\)._ Proof.: We again consider Equations (5.1) and (5.2). We have \(F(x)=F(-x)\) for all \(x\in\mathbb{F}_{p^{n}}\), so exactly one element of the image (namely \(F(0)\)) has an odd number of preimages. Then the \(H_{i}=X_{i}-1\) are all odd with exactly one exception. By Equation (5.1), the exceptional \(H_{i}\) has to be \(0\) and all others satisfy \(H_{i}^{2}=1\); then by Equation (5.2), necessarily \(\frac{p^{n}-1}{2}\) of the \(H_{i}\) are \(1\) and the same number are \(-1\). Keeping in mind that \(X_{i}=1+H_{i}\), we conclude that \(F\) is \(2\)-to-\(1\). 
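As a quick computational illustration of Proposition 6.3, the following minimal Python sketch (not part of the original verification; the prime \(p=11\) is an arbitrary illustrative choice) checks that the planar Dembowski-Ostrom monomial \(F(x)=x^{2}\) over \(\mathbb{F}_{p}\) is indeed \(2\)-to-\(1\):

```python
from collections import Counter

def preimage_distribution(F, domain):
    """Map each preimage-set size to the number of image elements
    having that many preimages."""
    fibres = Counter(F(x) for x in domain)
    return Counter(fibres.values())

# F(x) = x^2 is a planar DO polynomial over F_p (p odd) satisfying
# F(x) = F(-x), so Proposition 6.3 forces it to be 2-to-1: one value
# (namely 0) with a single preimage and (p-1)/2 values with exactly
# two preimages.
p = 11
dist = preimage_distribution(lambda x: x * x % p, range(p))
assert dist == Counter({2: (p - 1) // 2, 1: 1})
print(dist)  # Counter({2: 5, 1: 1})
```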
Again, the \(2\)-to-\(1\) property (i.e., information on the preimage size distribution) is enough to force planarity as long as we only consider plateaued functions. This is in many ways surprising since both \(2\)-to-\(1\) functions and plateaued functions seem to be much more prevalent than planar functions. The result and proof idea are analogues of a similar result for \(3\)-to-\(1\) almost perfect nonlinear functions achieved in [14]. **Theorem 6.4**.: _Let \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{n}}\) be a plateaued, \(2\)-to-\(1\) function. Then \(F\) is planar._ Proof.: Denote for simplicity by \(\chi(x)=\zeta_{p}^{\operatorname{Tr}(x)}\) for \(x\in\mathbb{F}_{p^{n}}\) the canonical additive character. Let us first show that \(F\) does not have any balanced component functions, i.e., \(W_{F}(b,0)\neq 0\) for \(b\in\mathbb{F}_{p^{n}}^{*}\). Since \(F\) is \(2\)-to-\(1\) we have \[W_{F}(b,0)=\sum_{x\in\mathbb{F}_{p^{n}}}\chi(bF(x))=\chi(bx_{0})+2\sum_{x\in M}\chi(bx),\] where \(x_{0}\) is the element that has \(1\) preimage and \(M\) is the set of elements with \(2\) preimages. We thus clearly have \(W_{F}(b,0)\equiv\chi(bx_{0})\not\equiv 0\pmod{2}\), in particular \(W_{F}(b,0)\neq 0\). Now set \(N_{k}=|\{b\in\mathbb{F}_{p^{n}}^{*}\colon|W_{F}(b,0)|=p^{(n+k)/2}\}|\). Since \(W_{F}(b,0)\neq 0\), we infer that \(N_{k}\) is the number of plateaued component functions with amplitude \(k\), so \[\sum_{k\geq 0}N_{k}=p^{n}-1. \tag{6.3}\] Since \(F\) is \(2\)-to-\(1\), we have \[\frac{1}{p^{n}}\sum_{b\in\mathbb{F}_{p^{n}}}\sum_{x_{1},x_{2}\in\mathbb{F}_{p^{n}}}\chi(b(F(x_{1})-F(x_{2})))=1+2(p^{n}-1)=2p^{n}-1.\] On the other hand, \[\frac{1}{p^{n}}\sum_{b\in\mathbb{F}_{p^{n}}}\sum_{x_{1},x_{2}\in\mathbb{F}_{p^{n}}}\chi(b(F(x_{1})-F(x_{2}))) =p^{n}+\frac{1}{p^{n}}\sum_{b\in\mathbb{F}_{p^{n}}^{*}}\sum_{x_{1},x_{2}\in\mathbb{F}_{p^{n}}}\chi(b(F(x_{1})-F(x_{2})))\] \[=p^{n}+\frac{1}{p^{n}}\sum_{b\in\mathbb{F}_{p^{n}}^{*}}\sum_{x_{1},x_{2}\in\mathbb{F}_{p^{n}}}\chi(bF(x_{1}))\overline{\chi(bF(x_{2}))}\] \[=p^{n}+\frac{1}{p^{n}}\sum_{b\in\mathbb{F}_{p^{n}}^{*}}|W_{F}(b,0)|^{2}\] \[=p^{n}+N_{0}+pN_{1}+p^{2}N_{2}+\ldots\] We thus infer \[p^{n}-1=N_{0}+pN_{1}+p^{2}N_{2}+\ldots\] and with Equation (6.3) \(N_{0}=p^{n}-1\) and \(N_{k}=0\) for all \(k>0\), so all component functions of \(F\) are bent and \(F\) is planar. This allows us to state the following corollary. Note that this is a strict generalization of one of the main results in [30, Theorem 2.3] which showed the statement only for DO polynomials, which are a subclass of plateaued functions satisfying \(F(x)=F(-x)\). **Corollary 6.5**.: _Let \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{n}}\) be a plateaued function such that \(F(x)=F(-x)\) for all \(x\in\mathbb{F}_{p^{n}}\). Then \(F\) is planar if and only if \(F\) is \(2\)-to-\(1\)._ Proof.: Follows from Theorem 6.4 and Proposition 6.3. Corollary 6.5 is indeed a generalization from the DO case since plateaued planar functions that are not DO polynomials do in fact exist; an example is the Coulter-Matthews planar monomial [7]. In particular, for monomials, proving planarity can then essentially be reduced to proving the plateaued condition (recall that planar functions cannot be bijective by Proposition 6.1). **Corollary 6.6**.: _Let \(F\colon\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{n}}\) be a plateaued monomial \(F=x^{d}\). Then \(F\) is planar if and only if \(\gcd(d,p^{n}-1)=2\)._ Proof.: Follows from Theorem 6.4 and Proposition 6.1. 
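The interplay between the \(2\)-to-\(1\) condition and the plateaued hypothesis in Corollary 6.6 can be probed numerically. The sketch below (an illustration under stated assumptions, not taken from the paper) tests planarity of monomials \(x^{d}\) over a prime field directly via bijectivity of the difference maps \(x\mapsto F(x+a)-F(x)\); by Proposition 6.1 a planar monomial necessarily has \(\gcd(d,p^{n}-1)=2\), but the converse needs the plateaued assumption:

```python
from math import gcd

def is_planar(F, p):
    # Planar over F_p: every difference map x -> F(x+a) - F(x)
    # with a != 0 must be a bijection of F_p.
    return all(
        len({(F((x + a) % p) - F(x)) % p for x in range(p)}) == p
        for a in range(1, p)
    )

p = 7
for d in range(1, p):
    print(d, gcd(d, p - 1), is_planar(lambda x, d=d: pow(x, d, p), p))
# Over F_7 only d = 2 is planar; d = 4 also has gcd(4, 6) = 2 but
# fails planarity, showing that the plateaued hypothesis cannot be
# dropped from Corollary 6.6.
```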
## 7 Conclusion and open problems In this paper, we systematically developed the theory of value distributions for perfect nonlinear functions. In particular, we provided a purely combinatorial framework for checking the equivalence of perfect nonlinear functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) in terms of their value distributions. Moreover, we were able to describe all possible value distributions for several large classes of perfect nonlinear functions. In general, however, it seems to be a very difficult problem to determine all possible and impossible value distributions completely, since the techniques we used rely on precise spectral information and on solving systems of quadratic Diophantine equations, whose number of solutions grows with the order of the output group. To conclude, we believe that answering the following questions (in addition to the already mentioned open problems and conjectures in the previous sections) will help to provide a better understanding of perfect nonlinear functions, and, more generally, of cryptographically significant classes of functions. 1. The theory of value distributions of perfect nonlinear functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) (and, in general, on arbitrary groups) developed in this article actually only hinges on the fact that \(F(x)=F(y)\) has precisely \(p^{n}+p^{n-m}(p^{n}-1)\) solutions. This property is, however, not exclusive to bent functions. Are there other functions of interest with this property? It would also be interesting to investigate in a similar manner other classes of cryptographically significant functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\), for instance, plateaued and differentially uniform functions. 2. So far, the known constructions of almost balanced perfect nonlinear functions \(F\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{m}\) are mostly dominated by the \((+)\) type constructions. It would be interesting to provide more primary constructions of almost balanced bent functions of the type \((-)\), especially in the \(p\) odd case. 3. Besides the "direct sum" construction, there exist many other secondary constructions of bent functions, which can be described without loss of generality in the following form \(F(F_{1}(x_{1}),\ldots,F_{k}(x_{k}))\) where \(F_{i}\colon\mathbb{F}_{p}^{n_{i}}\to\mathbb{F}_{p}^{m}\) are perfect nonlinear functions, and \(F\) is some function. It would be interesting to provide initial conditions on bent functions \(F_{1},\ldots,F_{k}\), which guarantee that the obtained perfect nonlinear functions of the form \((x_{1},\ldots,x_{k})\in\mathbb{F}_{p}^{n_{1}}\times\cdots\times\mathbb{F}_{p}^{n_{k}}\mapsto F(F_{1}(x_{1}),\ldots,F_{k}(x_{k}))\) are almost balanced of \((+)\) and \((-)\) types, and hence are inequivalent (in the \(p\) odd case). 4. Many of the known constructions of vectorial bent functions, whose preimage sets can be used to construct partial difference sets, are, in fact, almost balanced, see [6, 29]. In this regard, it is natural to find other constructions of almost balanced bent functions which give rise to partial difference sets. 5. Our computer experiments show that in the Boolean case, it is possible to obtain both extremal value distributions by adding suitable linear functions to a single given vectorial Boolean bent function, in contrast to the \(p\)-ary case. 
It would be interesting to investigate whether for every vectorial Boolean bent function one can add linear functions to obtain both \((+)\) and \((-)\) extremal value distributions, or whether there exist vectorial Boolean bent functions which cannot reach both extremal value distributions by the addition of suitable linear functions. 6. We showed that many primary and secondary constructions of perfect nonlinear functions appear to have extremal value distributions, which implies that the "almost balanced" property unifies many algebraically different constructions. With this observation in mind, it is essential to construct more perfect nonlinear functions in a purely combinatorial manner, using the "almost balanced" property. ## Acknowledgements The ideas in this article were partially developed while both authors visited Gohar Kyureghyan at the University of Rostock in late September 2022. We are grateful to her for the invitation, fruitful discussions and excellent working conditions. We would also like to thank Jan-Christoph Schlage-Puchta, who kindly suggested the idea of considering "deviations from the mean" as well as the proof of Theorem 5.11, which were later developed into the results in Section 5. The first author is supported by the National Science Foundation under grant number 2127742.
2301.05000
A sample iterated small cancellation theory for groups of Burnside type
We develop yet another technique to present the free Burnside group $B(m,n)$ of odd exponent $n$ with $m\ge2$ generators as a group satisfying a certain iterated small cancellation condition. Using the approach, we provide a reasonably accessible proof that $B(m,n)$ is infinite with a moderate bound $n > 2000$ on the odd exponent $n$.
Igor Lysenok
2023-01-12T13:27:08Z
http://arxiv.org/abs/2301.05000v1
# A sample iterated small cancellation theory for groups of Burnside type ###### Abstract. We develop yet another technique to present the free Burnside group \(B(m,n)\) of odd exponent \(n\) with \(m\geq 2\) generators as a group satisfying a certain iterated small cancellation condition. Using the approach, we provide a reasonably accessible proof that \(B(m,n)\) is infinite with a moderate bound \(n>2000\) on the odd exponent \(n\). This research was supported by the Russian Science Foundation (project No. 21-11-00318). ## 1. Introduction The free \(m\)-generated Burnside group \(B(m,n)\) of exponent \(n\) is, by definition, the relatively free group in the variety of groups satisfying the identity \(x^{n}=1\), i.e. \(B(m,n)\simeq F_{m}/F_{m}^{n}\) where \(F_{m}\) is the free group of rank \(m\) and \(F_{m}^{n}\) is the subgroup of \(F_{m}\) generated by all \(n\)-th powers. Obtaining structural information about the groups \(B(m,n)\) is known to be a difficult problem. The primary question of this sort is whether \(B(m,n)\) is finite for given \(m,n\geq 2\). The question is known as the _Burnside problem_ [1] and it is still not completely answered. The group is known to be finite for exponents \(n=2,3\) [1], \(n=4\) [14] and \(n=6\) [7]. A negative solution to the Burnside problem is given by the Novikov-Adian theorem [11, 8] stating that the Burnside group \(B(m,n)\) of odd exponent \(n\geq 665\) with \(m\geq 2\) generators is infinite. As of now, infiniteness of \(B(m,n)\) is established for exponents of the form \(n=665r\) or \(n\geq 8000\) and any number \(m\geq 2\) of generators. Note that \(B(m,r)\) is a homomorphic image of \(B(m,n)\) if \(n\) is a multiple of \(r\), so in this case infiniteness of \(B(m,r)\) implies infiniteness of \(B(m,n)\). The case when the exponent \(n\) does not have a large odd divisor was treated in [4, 9]. Although it is plausible that free Burnside groups \(B(m,n)\) are infinite for considerably lower values of \(n\) (and there are several announcements of results of this sort), the lowest published and carefully checked bound is still \(665\), obtained by Adian [8] for the case of odd exponent \(n\). A principal step in understanding the structure of the group \(B(m,n)\) in the infinite case was made in the fundamental work by Novikov and Adian [11] and its improved version [8]. One of the ingredients of the proof was a tightly interwoven version of small cancellation theory similar to the one developed by Tartakovskii [15]. It was also shown in [8] that for \(m\geq 2\) and odd \(n\geq 665\) the group \(B(m,n)\) has several properties similar to key properties of small cancellation groups. A basic one is the _layered Dehn property:_ a freely reduced nonempty word representing the identity in the group contains a large part of a defining relator modulo relations of the previous layer. This easily implies that any such word should contain a subword of the form \(X^{t}\) for sufficiently large \(t\), which in turn implies that \(B(m,n)\) is infinite. Unfortunately, the approach due to Novikov-Adian, even in its polished and improved form in [8], is extremely technical and has a complicated logical structure. Several later works [12, 13, 3, 2] pursued the goal of finding a more conceptually explicit and technically simpler approach to infinite Burnside groups, and more generally, to "infinite quotient of bounded exponent" phenomena in wider classes of groups as in [5, 3, 2]. 
As an underlying basic idea, all these approaches utilize small cancellation theory in a more or less explicit form, though based on different implementation techniques. It was eventually realized that iterated small cancellation theory is indeed a relevant framework to present Burnside groups of large exponents as well as many other examples of infinitely presented groups of a "monster" nature. In an explicit form, a relevant version of the theory was formulated by Gromov and Delzant [3] and Coulon [2]. However, both approaches need extremely large exponents to be applied to Burnside groups. (In fact, both incorporate "non-constructive" tools so that the proof does not provide any explicit lower bound on the exponent \(n\).) Two questions naturally arise. What is the lower bound on the exponent \(n\) for which the iterated small cancellation approach can be applied to Burnside groups \(B(m,n)\)? Do we need a sophisticated technical framework to use the approach for reasonably small values of the exponent; for example, for values of about several hundred or less? The main goal of the present paper is to develop a sample version of the iterated small cancellation theory specially designed for free Burnside groups \(B(m,n)\) with a "moderate" lower bound on the exponent \(n\). More precisely, our technique works for odd exponents \(n>2000\). We consider our approach as a first approximation and an introduction to a considerably more technical result on infiniteness of Burnside groups with substantially smaller bounds on the exponent. ## 2. The iterated small cancellation condition We fix a group \(G\) given by a graded presentation (2-1) \[\big{\langle}\mathcal{A}\ \big{|}\ \ R=1\ (R\in\bigcup_{\alpha\geq 1}\mathcal{X}_{\alpha})\big{\rangle}.\] Here we assume that the set of defining relators is partitioned into the union of subsets \(\mathcal{X}_{\alpha}\) indexed by a positive integer \(\alpha\). We call cyclic shifts of words \(R\in\mathcal{X}_{\alpha}^{\pm 1}\) _relators of rank \(\alpha\)_. Thus, the set of all relators of rank \(\alpha\) is symmetrized, i.e. closed under cyclic shifts and taking inverses. With the presentation of \(G\), there are naturally associated _level groups_ \(G_{\alpha}\) defined by all relations of rank up to \(\alpha\), i.e. (2-2) \[G_{\alpha}=\big{\langle}\mathcal{A}\ \big{|}\ \ R=1\ (R\in\bigcup_{\beta\leq\alpha}\mathcal{X}_{\beta})\big{\rangle}\] Our small cancellation condition depends on two positive real-valued parameters \(\lambda\) and \(\Omega\) satisfying (2-3) \[\lambda\leq\frac{1}{24},\quad\lambda\Omega\geq 20.\] We introduce also two other parameters with fixed values: \[\rho=1-9\lambda,\quad\zeta=\frac{1}{20}.\] The role of \(\lambda\), \(\Omega\), \(\rho\) and \(\zeta\) can be described as follows: * \(\lambda\) is an analog of the small cancellation parameter in the classical condition \(C^{\prime}(\lambda)\); * \(\Omega\) is the lower bound on the size of a relator \(R\) of rank \(\alpha\) in terms of the length function \(|\cdot|_{\alpha-1}\) associated with \(G_{\alpha-1}\) (defined below in 2.7); see condition (S1) in 2.8. * \(\rho\) is the reduction threshold used in the definition of a word reduced in \(G_{\alpha}\). Informally, a word reduced in \(G_{\alpha}\) cannot contain more than the \(\rho\)-th part of a relator of rank \(\alpha\), up to closeness in \(G_{\alpha-1}\). * \(\zeta\) is the rank scaling factor; it determines how the function \(|\cdot|_{\alpha}\) rescales when incrementing the rank. 
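As a quick numeric sanity check (not part of the paper), one can verify in a few lines of Python that the parameter values \(\lambda=80/n\), \(\Omega=0.25n\) used for the Burnside application in Theorem 3 below satisfy the constraints (2-3) for every odd \(n>2000\):

```python
from fractions import Fraction

def check_constraints(n):
    """Verify the inequalities (2-3) for lambda = 80/n, Omega = n/4."""
    lam = Fraction(80, n)      # small cancellation parameter
    omega = Fraction(n, 4)     # lower bound on relator size
    assert lam <= Fraction(1, 24)  # holds exactly when n >= 1920
    assert lam * omega >= 20       # lambda * Omega = 20 identically
    return lam, 1 - 9 * lam        # lambda and the threshold rho

for n in (2001, 2003, 9999):
    print(n, check_constraints(n))
```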
For any \(\alpha\geq 0\), we introduce the set \(\mathcal{H}_{\alpha}\) of _bridge words of rank \(\alpha\)_ recursively by setting \[\mathcal{H}_{0}=\{\text{the empty word}\},\] \[\mathcal{H}_{\alpha}=\{uSv\mid u,v\in\mathcal{H}_{\alpha-1},\ S\text{ is a subword of a relator of rank }\alpha\}.\] The definition immediately implies that \(\mathcal{H}_{\alpha-1}\subseteq\mathcal{H}_{\alpha}\). Note also that all sets \(\mathcal{H}_{\alpha}\) are closed under taking inverses. We call two elements \(x,y\in G_{\alpha}\) _close_ if \(x=uyv\) for some \(u,v\in\mathcal{H}_{\alpha}\). This relation will often be used in the case when \(x\) and \(y\) are represented by words in the generators \(\mathcal{A}\). In that case we say that words \(X\) and \(Y\) are _close in rank \(\alpha\)_ if they represent close elements of \(G_{\alpha}\), or, equivalently, \(X=uYv\) in \(G_{\alpha}\) for some \(u,v\in\mathcal{H}_{\alpha}\). For \(\alpha\geq 0\), the set \(\mathcal{R}_{\alpha}\) of words _reduced in \(G_{\alpha}\)_, the set of _fragments of rank \(\alpha\)_ and the length function \(|\cdot|_{\alpha}\) are defined by joint recursion. A word \(X\) in the generators \(\mathcal{A}\) is _reduced in \(G_{0}\)_ if \(X\) is freely reduced. A word \(X\) is _reduced in \(G_{\alpha}\)_ for \(\alpha\geq 1\) if it is reduced in \(G_{\alpha-1}\) and the following is true: if a subword \(S\) of a relator \(R\) of rank \(\alpha\) is close in rank \(\alpha-1\) to a subword of \(X\) then \[|S|_{\alpha-1}\leq\rho|R|_{\alpha-1}.\] A word \(X\) is _cyclically reduced in \(G_{\alpha}\)_ if any cyclic shift of \(X\) is reduced in \(G_{\alpha}\). A nonempty word \(F\) is a _fragment of rank \(\alpha\geq 1\)_ if \(F\) is reduced in \(G_{\alpha-1}\) and is close in rank \(\alpha-1\) to a subword \(P\) of a word of the form \(R^{k}\) where \(R\) is a relator of rank \(\alpha\). (In almost all situations \(P\) will be a subword of a cyclic shift of \(R\).) A _fragment of rank 0_ is a word of length 1, i.e. a single letter of the alphabet \(\mathcal{A}^{\pm 1}\). It is convenient to assume that each fragment \(F\) of rank \(\alpha\geq 1\) is considered with fixed associated words \(P\), \(u\), \(v\) and a relator \(R\) of rank \(\alpha\) such that \(F=uPv\) in \(G_{\alpha-1}\), \(u,v\in\mathcal{H}_{\alpha-1}\) and \(P\) is a subword of \(R^{k}\) for some \(k>0\), i.e. a fragment is formally a quintuple \((F,P,u,v,R)\). A _fragmentation of rank \(\alpha\)_ of a (linear or cyclic) word \(X\) is a partition of \(X\) into nonempty subwords of fragments of ranks \(\beta\leq\alpha\). If \(\mathcal{F}\) is a fragmentation of rank \(\alpha\) of \(X\) then, by definition, the _weight of \(\mathcal{F}\) in rank \(\alpha\)_ is given by \[\text{weight}_{\alpha}(\mathcal{F})=m_{\alpha}+\zeta m_{\alpha-1}+\zeta^{2}m_{\alpha-2}+\cdots+\zeta^{\alpha}m_{0}\] where \(m_{\beta}\) is the number of subwords of fragments of rank \(\beta\) in \(\mathcal{F}\). Here we assume that each subword in \(\mathcal{F}\) is assigned a unique rank \(\beta\). We now define a semi-additive length function \(|\cdot|_{\alpha}\) on words in the generators \(\mathcal{A}\): \[|X|_{\alpha}=\min\{\text{weight}_{\alpha}(\mathcal{F})\mid\mathcal{F}\text{ is a fragmentation of rank }\alpha\text{ of }X\}.\] Note that \(|X|_{0}\) is the usual length \(|X|\) of \(X\). 2.8. The iterated small cancellation condition consists of the following three conditions (S0)-(S2), where the quantifier 'for all \(\alpha\geq 1\)' is assumed. (S0) 
("Relators are reduced") Any relator of rank \(\alpha\) is cyclically reduced in \(G_{\alpha-1}\). 2. ("Relators are large") Any relator \(R\) of rank \(\alpha\) satisfies \[|R|_{\alpha-1}\geq\Omega.\] 3. ("Small overlapping") For \(i=1,2\), let \(S_{i}\) be a starting segment of a relator \(R_{i}\) of rank \(\alpha\). Assume that \(S_{1}=uS_{2}v\) in \(G_{\alpha-1}\) for some \(u,v\in\mathcal{H}_{\alpha-1}\) and \(|S_{1}|_{\alpha-1}\geq\lambda|R_{1}|_{\alpha-1}\). Then \(R_{1}=uR_{2}u^{-1}\) in \(G_{\alpha-1}\). 2.9. It can be proved that a group \(G\) satisfying conditions (S0)-(S2) possesses core properties of small cancellation groups, in particular, a version of Dehn's property. We will impose, however, an extra condition on the graded presentation of \(G\) which implies cyclicity of all finite subgroups of groups \(G_{\alpha}\) and avoids difficulties caused by existence of non-cyclic finite subgroups in the case of Burnside groups \(B(m,n)\) of even exponent \(n\). 1. ("No inverse conjugate relators") No relator of rank \(\alpha\) is conjugate in \(G_{\alpha-1}\) to its inverse. As we see below, this condition is satisfied if each relator \(R\) of rank \(\alpha\) has the form \(R_{0}^{n}\) where the exponent \(n\) (which can vary for different \(R\)) is odd and \(R_{0}\) is a non-power in \(G_{\alpha-1}\). See Corollary 13.11. Starting from Section 8, we will use a mild extra assumption on the graded presentation (2-1) by requiring it to be normalized in the following sense. The assumption is not essential and just makes arguments simpler (mainly due to Lemma 8.1) slightly improving bounds on the constants. 2.10. **Definition.** We call a graded presentation (2-1) _normalized_ if the following assertions hold: 1. Every relator \(R\in\mathcal{X}_{\alpha}\) has the form \(R=R_{0}^{t}\) where \(R_{0}\) represents a non-power element of \(G_{\alpha-1}\) (i.e. \(R_{0}\) does not represent in \(G_{\alpha-1}\) an element of the form \(g^{k}\) for \(k\geq 2\)); we call \(R_{0}\) the _root_ of a relator \(R\). 2. If \(R,S\in\mathcal{X}_{\alpha}\) and \(R\neq S\) then \(R\) and \(S\) are not conjugate in \(G_{\alpha-1}\). Note that the condition to be normalized is not restrictive: every graded presentation can be replaced with a normalized one (although formally speaking, this replacement could affect the iterated small cancellation condition; however, in real applications this would hardly be the case). _Remark_.: Checking conditions (S0)-(S3) requires knowledge about groups \(G_{\alpha-1}\). Thus presenting a group by relations satisfying the iterated small cancellation condition already requires a proof of properties of groups \(G_{\alpha}\) by induction on the rank. ## 3. Main results As in the case of classical small cancellation, the iterated small cancellation condition has strong consequences on the presented group \(G\). A basic one is an analog of the Dehn property: every non-empty freely reduced word representing the trivial element of the group "contains a large part" of a relator. In what follows, we assume that a group \(G\) is given by a normalized graded presentation satisfying conditions (S0)-(S3) above and for any \(\alpha\geq 0\), \(G_{\alpha}\) denotes the group defined by all relations of ranks up to \(\alpha\). We say that a word \(X\) is _reduced in \(G\)_ if it is reduced in \(G_{\alpha}\) for all \(\alpha\geq 0\). The following theorem is an immediate consequence of Proposition 7.6. **Theorem 1**.: _Let \(X\) be a non-empty word in the generators \(\mathcal{A}\). 
If \(X\) is reduced in \(G_{\alpha}\) then \(X\neq 1\) in \(G_{\alpha}\). If \(X\) is reduced in \(G\) then \(X\neq 1\) in \(G\)._ By expanding the definition of a reduced word in \(G\) we get an equivalent formulation which is more in the spirit of the small cancellation theory. **Corollary**.: _Let \(X\) be a freely reduced non-empty word. If \(X=1\) in \(G\) then for some \(\alpha\geq 1\), \(X\) has a subword close in \(G_{\alpha-1}\) to a subword \(P\) of a relator \(R\) of rank \(\alpha\) with \(|P|_{\alpha-1}\geq\rho|R|_{\alpha-1}\)._ In the classical small cancellation theory, existence of Dehn reduced representatives for group elements is a simple consequence of the fact that a word containing more than half of a relator can be shortened by applying the corresponding relation. This approach does not work in our version of iterated small cancellation, and existence of reduced representatives is a nontrivial fact proved below and formulated in Proposition 11.1 and Corollary 14.8. **Theorem 2**.: _Every element of \(G_{\alpha}\) can be represented by a word reduced in \(G_{\alpha}\). Every element of \(G\) can be represented by a word reduced in \(G\)._ Many other properties of groups \(G_{\alpha}\) and \(G\) are established in Sections 5-14. Our principal result shows that our version of the iterated small cancellation theory can be applied to free Burnside groups of odd exponent \(n\) with a moderate lower bound on \(n\). The following theorem is a consequence of Proposition 16.8 and Corollary 16.10 (see also Remark 15.4). **Theorem 3**.: _For odd \(n>2000\) and \(m\geq 2\), the free Burnside group \(B(m,n)\) has a normalized graded presentation_ \[\big{\langle}\mathcal{A}\ \big{|}\ \ C^{n}=1\ (C\in\bigcup_{\alpha\geq 1}\mathcal{E}_{\alpha})\big{\rangle}\] _satisfying conditions (S0)-(S3) with \(\lambda=\frac{80}{n}\), \(\Omega=0.25n\)._ The following theorem is a well known property of Burnside groups of sufficiently large odd exponent. It is a direct consequence of Propositions 9.14 and 16.6 (the definition of \(\omega\) is given in 4.19). **Theorem 4**.: _Let \(n>2000\) be odd. Let \(X\) be a non-empty freely reduced word that is equal to 1 in \(B(m,n)\). Then \(X\) has a subword of the form \(C^{480}\) where \(C\in\bigcup_{\alpha\geq 1}\mathcal{E}_{\alpha}\)._ Note that, with the existence of infinite aperiodic words in a 2-letter alphabet (see for example [8, §I.3]), this implies infiniteness of \(B(m,n)\) for odd \(n>2000\) and \(m\geq 2\). _Some remarks._ The present approach has much in common with paper [9]. However, the approach in [9] was based on the assumption that the defining relations of the group under consideration are of the form \(x^{n}=1\) for sufficiently large \(n\). Although the general scheme of a large portion of our proofs is the same as in [9], our arguments are in a different technical environment. We tried to make the iterated small cancellation condition as simple as possible. In particular, we use a simple version of closeness in groups \(G_{\alpha}\) (see 2.3 and 2.4). However, when presenting the free Burnside group as an iterated small cancellation group, this version is not optimal for the bound on the exponent. A more refined version would significantly lower the bound. Nevertheless, we consider the bound \(n>2000\) on the exponent as a reasonable balance between its optimality and the complexity of definitions and proofs. The whole approach relies essentially on the simultaneous induction on the rank \(\alpha\). 
Since the proof of the required statements about groups \(G_{\alpha}\) needs a comprehensive analysis of certain types of relations in groups of previous ranks, the number of inductive hypotheses is quite large (several tens). We think that a large number of inductive hypotheses is an unavoidable feature of any "small cancellation" approach to infinite Burnside groups with a reasonably small lower bound on the exponent. Note that in the "basic" small cancellation theory in Sections 5-7 we use Proposition 7.8 (with its immediate consequence Proposition 7.9) as the only inductive hypothesis. We briefly mention essential ingredients of our approach. Sections 5-7 are devoted to the analysis of van Kampen diagrams over the presentation (2-2) of the group \(G_{\alpha}\). In 5.1 we introduce diagrams with a special marking of the boundary so that the boundary loops of a diagram are divided into sides and bridges. The label of a side is a word reduced in \(G_{\alpha}\) and bridges are "small" sections between sides labeled by bridge words of rank \(\alpha\). According to the marking, there are diagrams of bigon, trigon, etc. type. We then analyze the global structure of a diagram with marked boundary using the notion of contiguity subdiagram (see 6.5). For the quantitative analysis, we use a version of discrete connection in the spirit of [10] and the corresponding discrete analog of the Gauss-Bonnet formula (Proposition 7.3). The main outcomes are the bound on the total size of sides of a diagram with no bonds (Propositions 7.9 and 7.12) and the "single layered" structure of diagrams of small complexity (Propositions 7.11 and 7.13). The results of Sections 5-7 serve as a background for further analysis of relations in \(G_{\alpha}\). The most important type of relations under consideration are "closeness" relations in \(G_{\alpha}\) of the form \(X=uYv\) where \(X,Y\in\mathcal{R}_{\alpha}\) and \(u,v\in\mathcal{H}_{\alpha}\). The structural description of diagrams over the presentation of \(G_{\alpha}\) transfers naturally to the language of the Cayley graph \(\Gamma_{\alpha}\) of \(G_{\alpha}\), see 9.4. In \(\Gamma_{\alpha}\), words in the generators of the group are represented by paths and relations in \(G_{\alpha}\) are represented by loops. The relation \(X=uYv\) becomes a loop \(\mathsf{X}^{-1}\mathsf{uYv}\) in \(\Gamma_{\alpha}\) which can be viewed as a coarse bigon; we say also that paths \(\mathsf{X}\) and \(\mathsf{Y}\) are close. The single layered structure of the filling diagram implies a one-to-one correspondence between fragments of rank \(\alpha\) in \(\mathsf{X}\) and in \(\mathsf{Y}\) that come from the 2-cells of the diagram, called _active_ fragments of rank \(\alpha\) with respect to the coarse bigon \(\mathsf{X}^{-1}\mathsf{uYv}\). To express the correspondence, we use the _compatibility_ relation, defined in 8.6, on the set of fragments of rank \(\alpha\) in \(\Gamma_{\alpha}\) (i.e. paths in \(\Gamma_{\alpha}\) labeled by fragments of rank \(\alpha\)): if \(\mathsf{K}\) and \(\mathsf{M}\) are the corresponding active fragments of rank \(\alpha\) in \(\mathsf{X}\) and \(\mathsf{Y}\), respectively, then \(\mathsf{K}\) and \(\mathsf{M}^{-1}\) are compatible (Proposition 9.7). In Section 9 we perform this passage from diagrams over the presentation of \(G_{\alpha}\) to the Cayley graph \(\Gamma_{\alpha}\). We establish several properties of coarse bigons, trigons and, more generally, coarse polygons in \(\Gamma_{\alpha}\). 
We consider also conjugacy relations in \(G_{\alpha}\) which are represented by parallel infinite lines in \(\Gamma_{\alpha}\) (see 4.3). A fundamental property of close paths \(\mathsf{X}\) and \(\mathsf{Y}\) in \(\Gamma_{\alpha}\) with \(\mathit{label}(\mathsf{X}),\mathit{label}(\mathsf{Y})\in\mathcal{R}_{\alpha}\) is that the correspondence between fragments of rank \(\alpha\) in \(\mathsf{X}\) and \(\mathsf{Y}\) extends to non-active ones. If \(\mathsf{K}\) is a fragment in \(\mathsf{X}\) of sufficiently large size then there exists a fragment \(\mathsf{M}\) of rank \(\alpha\) in \(\mathsf{Y}\) such that \(\mathsf{K}\) is compatible with either \(\mathsf{M}\) or \(\mathsf{M}^{-1}\), with possible exceptions for extreme positions of \(\mathsf{K}\) in \(\mathsf{X}\) (Proposition 10.6). Speaking informally, fragments of rank \(\alpha\) play the role of letters when coincidence of words is replaced by closeness in \(G_{\alpha}\). This property of close paths \(\mathsf{X}\) and \(\mathsf{Y}\) in \(\Gamma_{\alpha}\) and its analogs for coarse trigons in \(G_{\alpha}\) (Proposition 10.7) and for conjugacy relations in \(G_{\alpha}\) (Propositions 10.10 and 10.12) provide a technical base to analyze further properties of groups \(G_{\alpha}\) and \(G\). In particular, the correspondence between fragments of rank \(\alpha\) in coarse bigons, under an appropriate adaptation, is crucial when we consider in Section 13 periodic words that are close in \(G_{\alpha}\). In Section 11 we prove that any element of \(G_{\alpha}\) can be represented by a reduced word (Proposition 11.1) and is conjugate to an element represented by a cyclically reduced word and, moreover, by a strongly cyclically reduced word if it has infinite order (Definition 4.15, Proposition 11.5). Sections 12 and 13 are preparatory for the analysis of periodic relations over \(G_{\alpha}\). In Section 12 we introduce the set of _coarsely periodic words_ over \(G_{\alpha}\) which are close (in a stronger sense than the one defined in 2.4) to periodic words with a period strongly reduced in \(G_{\alpha}\) (Definition 12.4). The main result of Section 13, Proposition 13.4, is an analog of a well known property of periodic words stating that if two periodic words have a sufficiently large overlap (for example, if the overlap contains at least two occurrences of each of the periods) then they have a common period. In the last two Sections 15 and 16 we define a set of defining relations of the form \(C^{n}=1\) (\(C\in\bigcup_{\alpha\geq 1}\mathcal{E}_{\alpha}\)) for the Burnside group \(B(m,n)\) and prove that this set satisfies the iterated small cancellation condition (S0)-(S3). More precisely, in Definitions 15.1-15.3 we describe the recursive step to define \(\mathcal{E}_{\alpha+1}\) given \(\mathcal{E}_{\beta}\) for \(\beta\leq\alpha\), i.e. given the presentation of \(G_{\alpha}\). The principal idea to build the sets \(\mathcal{E}_{\alpha}\) can be roughly described as "classification of periodic words by depth of periodicity" and is similar to the one used in [11, 8]. Note that other approaches [12, 13, 4, 5, 3, 2] to groups of "Burnside type" use construction of periodic relations \(C^{n}=1\) where for the next rank, \(C\) are chosen to be "short in size" with respect to the current group. We believe that the "depth of periodicity" approach, although more technical in several aspects, gives a better lower bound on the exponent \(n\). ## 4. Preliminaries
Starting from Section 5 we assume a fixed value of the rank \(\alpha\geq 0\) and a presentation (2-2) of a group \(G_{\alpha}\) with relators \(R\in\mathcal{X}_{\beta}\) defined for all ranks \(\beta\leq\alpha\). We assume that the presentation of \(G_{\alpha}\) is normalized and satisfies conditions (S0)-(S3) and inequalities (2-3) for all ranks up to the fixed value \(\alpha\). In the proofs we will use forward references to statements for smaller values of the rank, as already established. We will use references like "Proposition 2.3\({}_{\alpha-1}\)" or "Lemma 3.4\({}_{<\alpha}\)" etc. which mean "statement of Proposition 2.3 for rank \(\alpha-1\)" or "statement of Lemma 3.4 for all ranks \(\beta<\alpha\)" respectively. With a few exceptions, statements whose formulation includes the case \(\alpha=0\) are trivial or follow directly from definitions in that case. ### Words We fix a set \(\mathcal{A}\) of generators for a group \(G\). By a word we always mean a group word over the alphabet \(\mathcal{A}^{\pm 1}=\mathcal{A}\cup\{a^{-1}\mid a\in\mathcal{A}\}\). We use the notation \(X=Y\) for identical equality of words \(X\) and \(Y\). By \(X^{\circ}\) we denote the cyclic word represented by a plain word \(X\). A _subword_ \(Y\) of a word \(X\) is always considered with an associated occurrence of \(Y\) in \(X\) that is clear from the context. To make it formal, we associate with a subword \(Y\) of \(X\) a pair of words \((U,V)\) such that \(UYV=X\). If \(Y\) is a subword of \(X\) with an associated pair \((U,V)\) then writing \(Y=WZ\) we mean that \(W\) and \(Z\) are viewed as subwords of \(X\) with associated pairs \((U,ZV)\) and \((UW,V)\) respectively. Note that 'subword \(Y\) of \(X_{1}\)' and 'subword \(Y\) of \(X_{2}\)' are formally two distinct objects if \(X_{1}\neq X_{2}\). It will always be clear from the context which ambient word is assumed for \(Y\). A _periodic word with period \(A\)_, or an \(A\)_-periodic word_ for short, is any subword of \(A^{t}\) for \(t>0\). According to the convention about subwords, an \(A\)-periodic word \(P\) is always considered with an associated occurrence of \(P\) in a word \(A^{t}\). A _partition_ of a word \(X\) is a representation of \(X\) as a concatenation \(X=X_{1}\cdot X_{2}\cdot\ldots\cdot X_{k}\) of some subwords \(X_{i}\). A word \(X\) is _covered_ by a collection of words \((Y_{i})_{i}\) if \(X\) admits a partition \(X=X_{1}\cdot X_{2}\cdot\ldots\cdot X_{k}\) such that \(X_{i}\) is a subword of some \(Y_{t_{i}}\) and \(t_{i}\neq t_{j}\) for \(i\neq j\). ### Graphs We use the term 'graph' as a synonym for 'combinatorial 1-complex'. Edges of a graph are considered as having one of the two possible directions, so formally all our graphs are directed. By \(\iota(\mathsf{e})\) and \(\tau(\mathsf{e})\) we denote the starting and the ending vertices of an edge \(\mathsf{e}\), respectively, and \(\mathsf{e}^{-1}\) denotes the inverse edge. An \(\mathcal{A}\)_-labeling_ on a graph \(\Gamma\) is a function from the set of edges of \(\Gamma\) with values in \(\mathcal{A}^{\pm 1}\cup\{1\}\) such that \(\mathit{label}(\mathsf{e}^{-1})=\mathit{label}(\mathsf{e})^{-1}\) for any \(\mathsf{e}\); here 1 denotes the empty word. An \(\mathcal{A}\)-labeling naturally transfers to paths in \(\Gamma\), so the label of a path \(\mathsf{P}\) is a word in \(\mathcal{A}^{\pm 1}\). 
For any vertex \(\mathsf{a}\) of \(\Gamma\), there is the unique _empty path at \(\mathsf{a}\)_. We identify this empty path with the vertex \(\mathsf{a}\) itself, so \(\iota(\mathsf{a})=\tau(\mathsf{a})=\mathsf{a}\) and \(\mathit{label}(\mathsf{a})=1\). A path is _simple_ if it visits no vertex twice. Two paths are _disjoint_ if they have no common and no mutually inverse edges. A _line_ in \(\Gamma\) is a bi-infinite path (we do not assume that lines have no loops). If \(\mathsf{X}\) and \(\mathsf{Y}\) are subpaths of a simple path \(\mathsf{Z}\) then we write \(\mathsf{X}\ll\mathsf{Y}\) if \(\mathsf{Z}=\mathsf{Z}_{1}\mathsf{X}\mathsf{Z}_{2}\mathsf{Y}\mathsf{Z}_{3}\) for some \(\mathsf{Z}_{i}\) and \(\mathsf{X}<\mathsf{Y}\) if \(\mathsf{Z}=\mathsf{Z}_{1}\mathsf{X}\mathsf{u}\mathsf{Z}_{2}=\mathsf{Z}_{1}\mathsf{v}\mathsf{Y}\mathsf{Z}_{2}\) for some \(\mathsf{Z}_{i}\) and non-empty \(\mathsf{u}\) and \(\mathsf{v}\). Although both relations depend on \(\mathsf{Z}\), it will always be clear from the context which \(\mathsf{Z}\) is assumed. Clearly, if neither of \(\mathsf{X}\) and \(\mathsf{Y}\) is contained in the other then either \(\mathsf{X}<\mathsf{Y}\) or \(\mathsf{Y}<\mathsf{X}\). The _union_ \(\mathsf{X}\cup\mathsf{Y}\) of subpaths \(\mathsf{X}\) and \(\mathsf{Y}\) of \(\mathsf{Z}\) is the shortest subpath of \(\mathsf{Z}\) containing both \(\mathsf{X}\) and \(\mathsf{Y}\). The Cayley graph \(\Gamma(G,\mathcal{A})\) of a group \(G\) with a generating set \(\mathcal{A}\) is naturally viewed as an \(\mathcal{A}\)-labeled graph. We identify vertices of \(\Gamma(G,\mathcal{A})\) with elements of \(G\), so if \(\iota(\mathsf{P})=\mathsf{a}\) and \(\tau(\mathsf{P})=\mathsf{b}\) then \(\mathit{label}(\mathsf{P})\) is a word representing \(\mathsf{a}^{-1}\mathsf{b}\). The group \(G\) acts on \(\Gamma(G,\mathcal{A})\) by left multiplication. A path \(\mathsf{P}\) in \(\Gamma(G,\mathcal{A})\) labeled by an \(A\)-periodic word is an \(A\)_-periodic segment_. An _\(A\)-periodic line_ is a bi-infinite path labeled by \(A^{\infty}\). Since an \(A\)-periodic word is assumed to have an associated occurrence in some \(A^{t}\), an \(A\)-periodic segment \(\mathsf{P}\) can be uniquely extended to an \(A\)-periodic line called the _infinite periodic extension_ of \(\mathsf{P}\). If \(\mathsf{P}\) and \(\mathsf{Q}\) are \(A\)-periodic segments, \(\mathsf{P}\) is a subpath of \(\mathsf{Q}\) and both have the same infinite periodic extension then \(\mathsf{Q}\) is a _periodic extension_ of \(\mathsf{P}\). We define also the _translation element_ \(s_{A,\mathsf{P}}\in G\) that shifts the infinite periodic extension \(\mathsf{L}\) of \(\mathsf{P}\) forward by a period \(A\). By definition, \(s_{A,\mathsf{P}}\) can be computed as follows. Take any vertex \(\mathsf{a}\) on \(\mathsf{L}\) such that the label of \(\mathsf{L}\) at \(\mathsf{a}\) starts with \(A\). Then \(s_{A,\mathsf{P}}=\mathsf{a}A\mathsf{a}^{-1}\). If \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) are two periodic lines with periods \(A_{1}\) and \(A_{2}\) respectively then \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) are _parallel_ if \(s_{A_{1},\mathsf{L}_{1}}=s_{A_{2},\mathsf{L}_{2}}\). ### Mapping relations in the Cayley graph It follows from the definition of the Cayley graph that a word \(X\) in the generators \(\mathcal{A}\) represents the identity of \(G\) if and only if some (and therefore, any) path \(\mathsf{X}\) in \(\Gamma(G,\mathcal{A})\) with \(\mathit{label}(\mathsf{X})=X\) is a loop. 
Thus relations in \(G\) are represented by loops in \(\Gamma(G,\mathcal{A})\). This representation will be our basic tool to analyze relations in a group using geometric properties of its Cayley graph. We will often use the following notational convention. If \(X_{1}X_{2}\dots X_{n}=1\) is a relation in a group \(G\) then we represent it by a loop \(\mathsf{X}_{1}\mathsf{X}_{2}\dots\mathsf{X}_{n}\) in the Cayley graph of \(G\) typed with the same letters in sans serif where, by default, \(\mathit{label}(\mathsf{X}_{i})=X_{i}\) for all \(i\). We represent also conjugacy relations in \(G\) by parallel periodic lines in \(\Gamma(G,\mathcal{A})\) as follows. Let \(X=Z^{-1}YZ\) in \(G\). Consider a loop \(\mathsf{X}^{-1}\mathsf{Z}^{-1}\mathsf{YZ}^{\prime}\) in \(\Gamma(G,\mathcal{A})\) with \(\mathit{label}(\mathsf{X})=X\), \(\mathit{label}(\mathsf{Y})=Y\) and \(\mathit{label}(\mathsf{Z})=\mathit{label}(\mathsf{Z}^{\prime})=Z\). We extend \(\mathsf{X}\) to an \(X\)-periodic line \(\mathsf{L}_{1}=\dots\mathsf{X}_{-1}\mathsf{X}_{0}\mathsf{X}_{1}\dots\) with \(\mathit{label}(\mathsf{X}_{i})=X\) and \(\mathsf{X}_{0}=\mathsf{X}\) and, in a similar way, extend \(\mathsf{Y}\) to a \(Y\)-periodic line \(\mathsf{L}_{2}=\dots\mathsf{Y}_{-1}\mathsf{Y}_{0}\mathsf{Y}_{1}\dots\) with \(\mathit{label}(\mathsf{Y}_{i})=Y\) and \(\mathsf{Y}_{0}=\mathsf{Y}\). Then we get a pair of parallel lines \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) that represents conjugacy of \(X\) and \(Y\) in \(G\). We will freely switch between the language of paths in Cayley graphs and word relations. ### Van Kampen diagrams Let \(G\) be a group with a presentation \(\mathcal{P}=\langle\mathcal{A}\,|\,\mathcal{R}\rangle\). A _diagram \(\Delta\) over \(\mathcal{P}\)_ is a finite \(2\)-complex \(\Delta\) embedded in \(\mathbb{R}^{2}\) with a given \(\mathcal{A}\)-labeling of the \(1\)-skeleton \(\Delta^{(1)}\) such that the label of the boundary loop of every \(2\)-cell of \(\Delta\) is either empty, has the form \(a^{\pm 1}a^{\mp 1}\) for \(a\in\mathcal{A}\) or is a relator in \(\mathcal{R}^{\pm 1}\). Note that here we use an extended version of the widely used definition by allowing boundary loops of \(2\)-cells labeled with the empty word or a freely cancellable pair of letters. This allows us to avoid technical issues related to singularities (see [13, §11.5] or [9, §4]). By default, all diagrams are assumed to be connected. We refer to \(2\)-cells of a diagram \(\Delta\) simply as _cells_; \(1\)-cells and \(0\)-cells are _edges_ and _vertices_ as usual. By \(\delta\mathsf{D}\) we denote the boundary loop of a cell \(\mathsf{D}\) and by \(\delta\Delta\) we denote the unique boundary loop of \(\Delta\) in the case when \(\Delta\) is simply connected. We fix an orientation of \(\mathbb{R}^{2}\) and assume that boundary loops of cells of \(\Delta\) and boundary loops of \(\Delta\) are positively oriented with respect to the cell or to the diagram, respectively. This means, for example, that \((\delta\mathsf{D})^{-1}\) is a boundary loop of the diagram \(\Delta-\mathsf{D}\) obtained by removal of a cell \(\mathsf{D}\) from \(\Delta\). Note that boundary loops of \(\Delta\) and of its cells are defined up to cyclic shift. According to the van Kampen lemma ([6, Theorem V.1.1] and [13, Theorem 11.1]) a word \(X\) in the generators \(\mathcal{A}\) represents the identity in \(G\) if and only if there exists a simply connected diagram \(\Delta\) over \(\mathcal{P}\) with \(\mathit{label}(\delta\Delta)=X\). 
Words \(X\) and \(Y\) represent conjugate elements of \(G\) if and only if there exists an annular (i.e. homotopy equivalent to an annulus) diagram over \(\mathcal{P}\) with boundary loops \(\mathsf{X}\) and \(\mathsf{Z}\) such that \(\mathit{label}(\mathsf{X})=X\) and \(\mathit{label}(\mathsf{Z})=Y^{-1}\) ([6, Lemma V.5.2] and [13, Theorem 11.2]). If \(\Sigma\) is a subdiagram of \(\Delta\) then \(\Delta-\Sigma\) denotes the subdiagram of \(\Delta\) obtained as the topological closure of the complement \(\Delta\setminus\Sigma\). Let \(\Delta\) and \(\Delta^{\prime}\) be diagrams over \(\mathcal{P}\) such that \(\Delta^{\prime}\) is obtained from \(\Delta\) by either * contracting an edge \(\mathsf{e}\) with \(\mathit{label}(\mathsf{e})=1\) to a vertex, * contracting a cell \(\mathsf{D}\) with \(\mathit{label}(\delta\mathsf{D})=1\) to a vertex, or * contracting a cell \(\mathsf{D}\) with \(\mathit{label}(\delta\mathsf{D})=a^{\pm 1}a^{\mp 1}\), \(a\in\mathcal{A}\), to an edge labeled \(a^{\pm 1}\). We call the inverse transition from \(\Delta^{\prime}\) to \(\Delta\) an _elementary refinement_. A sequence of elementary refinements is a _refinement_. There are several common use cases for refinement: * Any diagram can be made by refinement _non-singular_, i.e. homeomorphic to a punctured disk. In particular, any simply connected diagram can be refined to a non-singular disk. * If \(\mathsf{C}\) is a boundary loop of \(\Delta\) represented as a product \(\mathsf{C}=\mathsf{X}_{1}\ldots\mathsf{X}_{k}\) of paths \(\mathsf{X}_{i}\) then, after refinement, the corresponding boundary loop of a new diagram \(\Delta^{\prime}\) becomes \(\mathsf{X}^{\prime}_{1}\ldots\mathsf{X}^{\prime}_{k}\) where each \(\mathsf{X}_{i}\) refines to a nonempty path \(\mathsf{X}^{\prime}_{i}\) (see the definition in 4.5). ### Combinatorially continuous maps of graphs We consider the class of maps between \(\mathcal{A}\)-labeled graphs which are label preserving and can be realized as continuous maps of topological spaces. More precisely, a map \(\phi:\Lambda\to\Lambda^{\prime}\) between \(\mathcal{A}\)-labeled graphs \(\Lambda\) and \(\Lambda^{\prime}\) is _combinatorially continuous_ if * \(\phi\) sends vertices to vertices and edges to edges or vertices; for any edge \(\mathsf{e}\) of \(\Lambda\), \(\phi(\mathsf{e})\) is a vertex only if \(\mathsf{e}\) has the empty label; if \(\phi(\mathsf{e})\) is an edge then \(\mathit{label}(\phi(\mathsf{e}))=\mathit{label}(\mathsf{e})\). * if \(\phi(\mathsf{e})\) is an edge then \(\phi\) preserves the starting and the ending vertices of \(\mathsf{e}\); if \(\phi(\mathsf{e})\) is a vertex then \(\phi(\mathsf{e})=\phi(\iota(\mathsf{e}))=\phi(\tau(\mathsf{e}))\). A combinatorially continuous map \(\phi:\Lambda\to\Lambda^{\prime}\) extends in a natural way to a map, denoted also by \(\phi\), from the set of paths in \(\Lambda\) to the set of paths in \(\Lambda^{\prime}\). Clearly, \(\phi\) preserves path labels. If a diagram \(\Delta^{\prime}\) is obtained from a diagram \(\Delta\) by refinement then we have a combinatorially continuous map \(\phi:\Delta^{\prime(1)}\to\Delta^{(1)}\) induced by the sequence of contractions \(\Delta^{\prime}\to\Delta\). If \(\mathsf{P}^{\prime}\) is a path in \(\Delta^{\prime}\) and \(\mathsf{P}=\phi(\mathsf{P}^{\prime})\) then \(\mathsf{P}\) _refines_ to \(\mathsf{P}^{\prime}\). ### Mapping diagrams in Cayley graphs It is well known that simply connected diagrams can be viewed as combinatorial surfaces in the Cayley complex of a group. 
Since we do not make use of the two-dimensional structure, we adapt this view to the case of Cayley graphs. If \(\Delta\) is a simply connected diagram over \(\mathcal{P}\) then there exists a combinatorially continuous map \(\phi:\Delta^{(1)}\to\Gamma(G,\mathcal{A})\). Any two such maps \(\phi_{1},\phi_{2}:\Delta^{(1)}\to\Gamma(G,\mathcal{A})\) differ by translation by some element \(g\in G\), i.e. \(\phi_{1}=t_{g}\phi_{2}\) where \(t_{g}:x\mapsto gx\) is the translation. In particular, if \(\mathsf{X}\) is a loop in \(\Gamma(G,\mathcal{A})\) and for the boundary loop \(\bar{\mathsf{X}}\) of \(\Delta\) we have \(\mathit{label}(\bar{\mathsf{X}})=\mathit{label}(\mathsf{X})\) then there is a map \(\phi:\Delta^{(1)}\to\Gamma(G,\mathcal{A})\) such that \(\phi(\bar{\mathsf{X}})=\mathsf{X}\). In this case we say that \(\Delta\) _fills_ \(\mathsf{X}\) via \(\phi\). If \(\Delta\) is not simply connected then we can consider a combinatorially continuous map \(\phi:\tilde{\Delta}^{(1)}\to\Gamma(G,\mathcal{A})\) where \(\tilde{\Delta}\) is the universal cover of \(\Delta\). Again, any two such maps \(\phi_{1},\phi_{2}:\tilde{\Delta}^{(1)}\to\Gamma(G,\mathcal{A})\) differ by translation by an element of \(G\). The set \(\{\mathsf{L}_{i}\}_{i}\) of boundary loops of \(\Delta\) lifts to a (possibly infinite) set of bi-infinite boundary lines \(\{\tilde{\mathsf{L}}_{i}^{j}\}_{i,j}\) of \(\tilde{\Delta}\) and thus produces a set of lines \(\{\phi(\tilde{\mathsf{L}}_{i}^{j})\}_{i,j}\) in \(\Gamma(G,\mathcal{A})\). Each \(\phi(\tilde{\mathsf{L}}_{i}^{j})\) can be viewed as a \(P_{i}\)-periodic line with period \(P_{i}=\mathit{label}(\mathsf{L}_{i})\). We will be interested mainly in the case when \(\Delta\) is an _annular_ diagram, i.e. homotopy equivalent to a circle. In this case, boundary loops \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) of \(\Delta\) produce two \(P_{i}\)-periodic lines \(\phi(\tilde{\mathsf{L}}_{i})\) (\(i=1,2\)) in \(\Gamma(G,\mathcal{A})\) such that \(\phi(\tilde{\mathsf{L}}_{1})\) and \(\phi(\tilde{\mathsf{L}}_{2})^{-1}\) are parallel. **4.7 Definition**.: Let \(\Delta\) and \(\Delta^{\prime}\) be diagrams of the same homotopy type over a presentation of a group \(G\). We assume that a label preserving bijection \(\mathsf{L}_{i}\mapsto\mathsf{L}_{i}^{\prime}\) is given between boundary loops of \(\Delta\) and \(\Delta^{\prime}\) (which is usually clear from the context). We say that \(\Delta\) and \(\Delta^{\prime}\) have the same _frame type_ if there exist combinatorially continuous maps \(\phi:\tilde{\Delta}^{(1)}\to\Gamma(G,\mathcal{A})\) and \(\psi:\tilde{\Delta}^{\prime(1)}\to\Gamma(G,\mathcal{A})\) such that for each \(i\) we have the same sets of lines (or loops if \(\Delta\) and \(\Delta^{\prime}\) are simply connected) \(\{\phi(\tilde{\mathsf{L}}_{i}^{j})\}_{j}=\{\psi(\tilde{\mathsf{L}}_{i}^{\prime j})\}_{j}\). The following two observations follow easily from the definition. **4.8 Lemma**.: _Two simply connected diagrams \(\Delta\) and \(\Delta^{\prime}\) have the same frame type if and only if the labels of their boundary loops are equal words._ _Let \(\Delta\) and \(\Delta^{\prime}\) be annular diagrams with boundary loops \(\{\mathsf{L}_{1},\mathsf{L}_{2}\}\) and \(\{\mathsf{L}_{1}^{\prime},\mathsf{L}_{2}^{\prime}\}\). Then \(\Delta\) and \(\Delta^{\prime}\) have the same frame type if and only if the following is true. 
Take any vertices \(\mathsf{a}_{i}\) on \(\mathsf{L}_{i}\) \((i=1,2)\) and let \(\mathsf{p}\) be a path from \(\mathsf{a}_{1}\) to \(\mathsf{a}_{2}\) in \(\Delta\). Then there exist vertices \(\mathsf{a}_{i}^{\prime}\) on \(\mathsf{L}_{i}^{\prime}\) \((i=1,2)\) and a path \(\mathsf{p}^{\prime}\) from \(\mathsf{a}_{1}^{\prime}\) to \(\mathsf{a}_{2}^{\prime}\) in \(\Delta^{\prime}\) such that the label of \(\mathsf{L}_{i}\) read at \(\mathsf{a}_{i}\) and the label of \(\mathsf{L}_{i}^{\prime}\) read at \(\mathsf{a}_{i}^{\prime}\) are equal words and label\((\mathsf{p})=\) label\((\mathsf{p}^{\prime})\) in \(G\)._ **4.9 Lemma**.: _Diagrams \(\Delta\) and \(\Delta^{\prime}\) have the same frame type in the following two cases:_ * \(\Delta^{\prime}\) _is obtained from_ \(\Delta\) _by refinement;_ * \(\Delta^{\prime}\) _is obtained from_ \(\Delta\) _by cutting off a simply connected subdiagram and replacing it with another simply connected subdiagram._ ### Groups \(G_{\alpha}\) Throughout the paper we will study a fixed family of groups \(G_{\alpha}\) given by a presentation (2-2). Consequently, most of the related terminology will involve the rank \(\alpha\) as a parameter (though in some cases, it is not mentioned explicitly; for example, the already introduced measure \(\mu_{\mathsf{f}}(F)\) of fragments of rank \(\alpha\) formally depends on \(\alpha\)). Diagrams over the presentation of \(G_{\alpha}\) are referred to simply as diagrams over \(G_{\alpha}\). For \(1\leq\beta\leq\alpha\), a cell \(\mathsf{D}\) of a diagram over \(G_{\alpha}\) with \(\mathit{label}(\delta\mathsf{D})\in\mathcal{X}_{\beta}\) is a _cell of rank \(\beta\)_. Cells with trivial boundary labels (i.e. empty or of the form \(aa^{-1}\)) are _cells of rank \(0\)_. The Cayley graph of \(G_{\alpha}\) is denoted \(\Gamma_{\alpha}\). Note that if \(\beta<\alpha\) then we have a natural covering map \(\Gamma_{\beta}\to\Gamma_{\alpha}\) of labeled graphs. A loop \(\mathsf{L}\) in \(\Gamma_{\alpha}\) lifts to \(\Gamma_{\beta}\) as a loop if and only if \(\mathit{label}(\mathsf{L})=1\) in \(G_{\beta}\). ### Pieces By a _piece of rank \(\alpha\)_ we call any (including empty) subword of a relator of rank \(\alpha\). If \(S\) is a subword of a cyclic shift of a relator \(R\) then we say also that \(S\) is a _piece of \(R\)_. We allow a piece of rank \(\alpha\) to be the empty word. Note that our definition differs from the traditional view of a piece in small cancellation theory as a common starting segment of two distinct relators. We assume that a piece \(S\) of rank \(\alpha\) always has an associated relator \(R\) of rank \(\alpha\) such that \(S\) is a start of \(R\); so formally a piece of rank \(\alpha\) should be viewed as a pair of the form \((S,R)\). Associated relators are naturally inherited under taking subwords and inversion: if \(S\) is a piece of rank \(\alpha\) with associated relator \(R=ST\) and \(S=S_{1}S_{2}\) then \(S_{1}\) and \(S_{2}\) are viewed as pieces of rank \(\alpha\) with associated relators \(R\) and \(S_{2}TS_{1}\) respectively, and \(S^{-1}\) is viewed as a piece of rank \(\alpha\) with associated relator \(S^{-1}T^{-1}\). For pieces of rank \(\alpha\) we use a "measure" \(\mu(S)\in[0,1]\) defined by \(\mu(S)=\frac{|S|_{\alpha-1}}{|R^{\circ}|_{\alpha-1}}\) as in (8-1) where \(R\) is the associated relator. (Recall that \(R^{\circ}\) denotes the cyclic word represented by \(R\).) 
If for some \(\beta\), \(\mathsf{S}\) is a path in \(\Gamma_{\beta}\) or in a diagram over the presentation of \(G_{\beta}\) and \(\mathsf{S}\) is labeled by a piece of a relator of rank \(\alpha\) (or by an \(R\)-periodic word where \(R\) is a relator of rank \(\alpha\)) then we abbreviate \(\mu(\mathit{label}(\mathsf{S}))\) simply as \(\mu(\mathsf{S})\). ### Reformulation of conditions (S2) and (S3) in terms of Cayley graph The following conditions on the presentation (2-1) are equivalent to (S2) and (S3), respectively. (S2-Cayley) Let \(\mathsf{L}_{i}\) (\(i=1,2\)) be an \(R_{i}\)-periodic line in \(\Gamma_{\alpha-1}\) where \(R_{i}\) is a relator of rank \(\alpha\). If \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) have close subpaths \(\mathsf{P}_{1}\) and \(\mathsf{P}_{2}\) with \(|\mathsf{P}_{i}|\leq|R_{i}|\) and \(\mu(\mathsf{P})\geq\gamma\) then \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) are parallel. (S3-Cayley) There are no parallel \(R\)-periodic and \(R^{-1}\)-periodic lines in \(\Gamma_{\alpha-1}\) where \(R\) is a relator of rank \(\alpha\). ### Bridge partition We define also a _bridge partition of rank \(\alpha\)_ of a word \(w\in\mathcal{H}_{\alpha}\) as follows. A bridge partition of rank \(0\) is empty. A bridge partition of rank \(\alpha\geq 1\) either * has the form \(w_{1}\cdot S\cdot w_{2}\) where \(w_{i}\in\mathcal{H}_{\alpha-1}\) and \(S\) is a piece of rank \(\alpha\) called the _central piece_ of \(w\); or * is a single factor \(w\) itself in the case \(w\in\mathcal{H}_{\alpha-1}\). If \(w\) is a bridge word of rank \(\alpha\) endowed with a bridge partition \(u\cdot S\cdot v\) and \(ST\) is the relator of rank \(\alpha\) associated with \(S\) then \(w^{\prime}=uT^{-1}v\) is a bridge word of rank \(\alpha\) equal to \(w\) in \(G_{\alpha}\). We say that \(w^{\prime}\) is obtained from \(w\) by _switching_. In this case we assume also that \(w^{\prime}\) is endowed with the bridge partition \(u\cdot T^{-1}\cdot v\). Thus, applying the switching operation twice results in the initial word \(w\). We will be considering paths in Cayley graphs \(\Gamma_{\beta}\) labeled by bridge words of rank \(\alpha\). We call them _bridges of rank \(\alpha\)_ (with a slight abuse of terminology, we will also use this term in Section 5 for boundary paths with appropriate label in diagrams over the presentation of \(G_{\alpha}\)). If \(\mathsf{w}\) is bridge of rank \(\alpha\) in \(\Gamma_{\beta}\) then a _bridge partition of rank \(\alpha\) of \(\mathsf{w}\)_ is either a factorization \(\mathsf{w}=\mathsf{u}\cdot\mathsf{S}\cdot\mathsf{v}\) where \(\mathsf{u}\) and \(\mathsf{v}\) are bridges of rank \(\alpha-1\) and \(\mathit{label}(\mathsf{S})\) is a piece of rank \(\alpha\) or a trivial factorization with the single factor \(\mathsf{w}\) if \(\mathsf{w}\) is bridge of rank \(\alpha-1\). In the former case, if also \(\beta\geq\alpha\), we define the _switching operation_ on \(\mathsf{w}\) in a similar way as in the case of words; namely, we take the word \(w^{\prime}\) obtained from \(w=\mathit{label}(\mathsf{w})\) by switching and consider the path \(\mathsf{w}^{\prime}\) with \(\mathit{label}(\mathsf{w}^{\prime})=w^{\prime}\) starting at the same vertex as \(\mathsf{w}\). Since \(w=w^{\prime}\) in \(\Gamma_{\beta}\), bridges \(\mathsf{w}\) and \(\mathsf{w}^{\prime}\) have the same endpoints. 
The following properties of the function \(|\cdot|_{\alpha}\) follow from the definition: * \(|X|_{\alpha}+|Y|_{\alpha}-1\leq|XY|_{\alpha}\leq|X|_{\alpha}+|Y|_{\alpha}\); in particular, if \(Y\) is a subword of \(X\) then \(|Y|_{\alpha}\leq|X|_{\alpha}\). * More generally, if a collection of words \((X_{i})_{i}\) covers a (plain or cyclic) word \(X\) then \[|X|_{\alpha}\leq\sum_{i}|X_{i}|_{\alpha}.\] If \((X_{i})_{1\leq i\leq k}\) is a collection \(k\) of disjoint subwords of \(X\) then \[\sum_{i}|X_{i}|_{\alpha}\leq|X|_{\alpha}+k.\] * \(|X|_{\alpha}\leq\zeta|X|_{\alpha-1}\). * \(|X^{\circ}|_{\alpha}=\min\{|Y|_{\alpha}\mid Y\text{ is a cyclic shift of }X\}\). If \(\mathsf{X}\) is a path in \(\Gamma_{\beta}\) or in a diagram over the presentation of \(G_{\beta}\) then we use abbreviation \(|\mathsf{X}|_{\alpha}=|\mathit{label}(\mathsf{X})|_{\alpha}\). ### Reduced words The set of words reduced in \(G_{\alpha}\) is denoted \(\mathcal{R}_{\alpha}\). The definition immediately implies that \(\mathcal{R}_{\alpha}\) is closed under taking subwords. A word \(X\) is _strongly cyclically reduced in \(G_{\alpha}\)_ if any power \(X^{t}\) is reduced in \(G_{\alpha}\). 4.16. _Coarse polygon relations._ A relation in \(G_{\alpha}\) of the form \(X_{1}u_{1}\ldots X_{m}u_{m}=1\) where words \(X_{i}\) are reduced in \(G_{\alpha}\) and \(u_{i}\) are bridge words of rank \(\alpha\), is called a _coarse \(m\)-gon relation_ in \(G_{\alpha}\). We can write coarse polygon relations in different forms. For example, a coarse bigon relation can be written as \(X=uYv\) where \(X\) and \(Y\) are reduced in \(G_{\alpha}\) and \(u,v\in\mathcal{H}_{\alpha}\). In this form, the relation represents closeness of words \(X\) and \(Y\) in \(G_{\alpha}\). 4.17. We transfer some terminology from words to paths in \(\Gamma_{\alpha}\). We call paths in \(\Gamma_{\alpha}\) with label reduced in \(G_{\alpha}\) simply _reduced_. Note that, according to Proposition 7.6, a reduced path \(\mathsf{X}\) in \(\Gamma_{\alpha}\) is simple. This implies that we can correctly treat the ordering of subpaths of \(\mathsf{X}\), intersections of subpaths, unions etc. Two vertices of \(\Gamma_{\alpha}\) are _close_ if they can be joined by a bridge of rank \(\alpha\) (see 4.13). Two paths \(\mathsf{X}\) and \(\mathsf{Y}\) in \(\Gamma_{\alpha}\) are _close_ if their starting vertices and their ending vertices are close. We say that a loop \(\mathsf{P}=\mathsf{X}_{1}\mathsf{u}_{1}\mathsf{X}_{2}\mathsf{u}_{2},\ldots, \mathsf{X}_{r}\mathsf{u}_{r}\) in \(\Gamma_{\alpha}\) is a _coarse \(r\)-gon_ if each \(\mathsf{X}_{i}\) is reduced and each \(\mathsf{u}_{i}\) is a bridge of rank \(\alpha\). Paths \(\mathsf{X}_{i}\) are _sides_ of \(\mathsf{P}\). Note that paths \(\mathsf{X}\) and \(\mathsf{Y}\) in \(\Gamma_{\alpha}\) are close if and only if \(\mathsf{X}^{-1}\mathsf{u}\mathsf{Y}\mathsf{v}\) is a coarse bigon for some \(\mathsf{u}\) and \(\mathsf{v}\). 4.18. _Symmetry._ All concepts (i.e. relations, functions etc.) and statements involving paths in the Cayley graphs \(\Gamma_{\alpha}\) are invariant under the action of \(G_{\alpha}\) in a natural way. For example, if paths \(\mathsf{X}\) and \(\mathsf{Y}\) in \(\Gamma_{\alpha}\) are close then paths \(g\mathsf{X}\) and \(g\mathsf{Y}\) are also close for any \(g\in G_{\alpha}\). 
We adopt a convention (which is essential for the invariance) that the action of \(G_{\alpha}\) is extended onto extra data associated with paths in \(\Gamma_{\alpha}\): for example, if \(\mathsf{F}\) is a fragment of rank \(\beta\) with base \(\mathsf{P}\) then then \(g\mathsf{F}\) is considered as a fragment of rank \(\beta\) with base \(g\mathsf{P}\) and so on. This implies, for example, that \(\mu_{\mathrm{f}}(\mathsf{F})=\mu_{\mathrm{f}}(g\mathsf{F})\) for any \(g\in G_{\alpha}\). We will implicitly use symmetry with respect to inversion. For example, if \(\mathsf{F}\) is a fragment of rank \(\beta\) with base \(\mathsf{P}\) then \(\mathsf{F}^{-1}\) is a fragment of rank \(\beta\) with base \(\mathsf{P}^{-1}\) and \(\mu_{\mathrm{f}}(\mathsf{F}^{-1})=\mu_{\mathrm{f}}(\mathsf{F})\). If a statement admits two symmetric forms then only one of them is formulated (as in case of Lemma 10.15, for instance). 4.19. _Numerical parameters._ In many cases, it will be notationally more convenient to use instead of \(\Omega\) its inverse: \[\omega=\frac{1}{\Omega}.\] Note that by (2-3), (4-1) \[\omega\leq\frac{1}{480}\quad\text{and}\quad\lambda\geq 20\omega.\] We will extensively use \(\omega\) as a unit to measure pieces and fragments of rank \(\alpha\). Condition (S1) in 2.8 will be often used in the following form: _if \(P\) is a piece of a relator \(R\) of rank \(\alpha\) then_ (4-2) \[\mu(P)\leq\omega|P|_{\alpha-1}.\] For reader's convenience, we list our other global numerical parameters indicating the places where they first appeared. \[\nu=\frac{\zeta}{1-2\zeta}=\frac{1}{18},\quad\theta=\frac{1}{6}(5-22\nu)= \frac{17}{27}\quad\text{(Proposition~{}\ref{prop:1})},\] \[\eta=\frac{1+2\nu}{\theta}=\frac{30}{17}\quad\text{(Proposition \ref{prop:1})},\] \[\xi_{0}=7\lambda-1.5\omega\quad\text{(Proposition \ref{prop:1})},\] \[\xi_{1}=\xi_{0}-2.6\omega\quad\text{(Definition \ref{prop:1})},\] \[\xi_{2}=\xi_{1}-2\lambda-3.4\omega\quad\text{(Definition \ref{prop:1})}.\] ## 5. Diagrams with marked boundary ### Boundary marking of rank \(\alpha\) We start with introducing a class of diagrams over the presentation (2-2) of \(G_{\alpha}\) with extra data which, in particular, represent coarse polygon relations in \(G_{\alpha}\). Let \(\Delta\) be a non-singular diagram over the presentation (2-2). We say that \(\Delta\) has a _boundary marking of rank \(\alpha\)_ if for each boundary loop \(\mathsf{L}\) of \(\Delta\), there is fixed a representation as a product \(\mathsf{L}=\mathsf{X}_{1}\mathsf{u}_{1}\ldots\mathsf{X}_{m}\mathsf{u}_{m}\) of nonempty paths \(\mathsf{X}_{i}\) and \(\mathsf{u}_{i}\) where labels of \(\mathsf{X}_{i}\) are reduced in \(G_{\alpha}\) and the label of each \(\mathsf{u}_{i}\) belongs to \(\mathcal{H}_{\alpha}\). Paths \(\mathsf{X}_{i}\) are called _sides_ and paths \(\mathsf{u}_{i}\) are called _bridges_ of \(\Delta\). We allow also that the whole boundary loop \(\mathsf{L}\) of \(\Delta\) is viewed a side called a _cyclic side_. In this case we require that the label of \(\mathsf{L}\) is cyclically reduced in \(G_{\alpha}\). If \(X_{1}u_{1}\ldots X_{m}u_{m}=1\) is a coarse polygon relation in \(G_{\alpha}\) then there exists a disk diagram with boundary label \(\mathsf{X}_{1}\mathsf{u}_{1}\ldots\mathsf{X}_{m}\mathsf{u}_{m}\) such that \(\mathit{label}(\mathsf{X}_{i})=X_{i}\) and \(\mathit{label}(\mathsf{u}_{i})=u_{i}\) for all \(i\). 
Refining \(\Delta\) if necessary (see 4.4) we can assume that \(\Delta\) is non-singular and all paths \(\mathsf{X}_{i}\) and \(\mathsf{u}_{i}\) are nonempty, i.e. \(\Delta\) satisfies the definition above. In a similar way, we can associate with a conjugacy relation in \(G_{\alpha}\) an annular diagram over the presentation of \(G_{\alpha}\) with an appropriate boundary marking. Unless otherwise stated, "a diagram of rank \(\alpha\)" will always mean "a non-singular diagram over the presentation (2-2) with a fixed boundary marking of rank \(\alpha\)". We use terms "diagrams of monogon, bigon, trigon type etc." to name disk diagrams of rank \(\alpha\) with the appropriate number of sides. ### Complexity If \(\Delta\) is a diagram of rank \(\alpha\) then by \(b(\Delta)\) we denote the number of bridges of \(\Delta\). We define the _complexity_\(c(\Delta)\) of \(\Delta\) by \[c(\Delta)=b(\Delta)-2\chi(\Delta).\] ### Decrementing the rank Let \(\Delta\) be a diagram of rank \(\alpha\geq 1\). By \(\Delta_{\alpha-1}\) we denote the diagram over the presentation of \(G_{\alpha-1}\) obtained by removal from \(\Delta\) of all cells of rank \(\alpha\). Up to refinement of \(\Delta\), we assume that \(\Delta_{\alpha-1}\) is non-singular. We assume that every bridge \(\mathsf{w}\) of \(\Delta\) is given a bridge partition of rank \(\alpha\) as defined in 4.13, i.e. for some bridges \(\mathsf{w}\) a factorization \(\mathsf{w}=\mathsf{u}\cdot\mathsf{S}\cdot\mathsf{v}\) is fixed where \(\mathit{label}(\mathsf{u}),\mathit{label}(\mathsf{v})\in\mathcal{H}_{\alpha-1}\) and \(\mathit{label}(\mathsf{S})\) is a piece of rank \(\alpha\), and for all other \(\mathsf{w}\) we have \(\mathit{label}(\mathsf{w})\in\mathcal{H}_{\alpha-1}\). In the case when \(\mathsf{w}\) has a nontrivial bridge partition \(\mathsf{u}\cdot\mathsf{S}\cdot\mathsf{v}\) we say that \(\mathsf{w}\) has _native rank_\(\alpha\) and call \(\mathsf{S}\) the _central arc_ of \(\mathsf{u}\). We will be always assuming that all factors \(\mathsf{u}\), \(\mathsf{v}\) and \(\mathsf{S}\) are nonempty paths (this can be achieved by refinement). We then define a naturally induced boundary marking of rank \(\alpha-1\) of \(\Delta_{\alpha-1}\) (see Figure 1): * Sides of \(\Delta\) become sides of \(\Delta_{\alpha-1}\); we have also extra sides of \(\Delta_{\alpha-1}\) defined as follows. * If \(\mathsf{D}\) is a cell of rank \(\alpha\) of \(\Delta\) then boundary loop \((\delta\mathsf{D})^{-1}\) of \(\Delta_{\alpha-1}\) becomes a cyclic side of \(\Delta_{\alpha-1}\). * For each bridge \(w\) of rank \(\alpha\) of \(\Delta\) we do the following. If the bridge partition of \(w\) is of the form \(u=v\cdot S\cdot w\) then we take \(v\) and \(w\) as bridges of \(\Delta_{\alpha-1}\) and the central arc \(S\) as a side of \(\Delta_{\alpha-1}\). Otherwise we have \(\mathit{label}(w)\in\mathcal{H}_{\alpha-1}\) and we take \(w\) as a bridge of \(\Delta_{\alpha-1}\). ### Cell cancellation We introduce two types of elementary reductions of a diagram \(\Delta\) of rank \(\alpha\geq 1\). In both cases, we reduce the number of cells of rank \(\alpha\). As in 5.3, we assume that a bridge partition is fixed for each bridge \(\Delta\). Let \(C\) and \(D\) be two cells of rank \(\alpha\) of \(\Delta\). 
We say that \(C\) and \(D\) form a _cell-cell cancellable pair_ if there exists a simple path \(p\) joining two vertices \(a\) and \(b\) in the boundaries of \(C\) and \(D\) respectively, so that the label of the path \(QpRp^{-1}\) is equal \(1\) in \(G_{\alpha-1}\) where \(Q\) and \(R\) are boundary loops of \(C\) and \(D\) starting at \(a\) and \(b\) respectively see Figure 2a). In this case, we can perform the procedure of _cell-cell cancellation_ as follows. We remove cells \(C\) and \(D\) from \(\Delta\), cut the remaining diagram along \(p\) and fill in the resulting region by a diagram \(\Theta\) over the presentation of \(G_{\alpha-1}\) (see Figure 2b). The boundary marking of the new diagram naturally inherits the boundary marking of \(\Delta\) and the labels of sides and bridges are not changed. Now let \(u\) be a bridge of native rank \(\alpha\) of \(\Delta\) with bridge partition \(u=v\cdot S\cdot w\). The label \(S\) of \(S\) has an associated relator \(R\) of rank \(\alpha\) such that \(R=ST\) for some \(T\) (according to the convention in 4.11). We attach a cell \(C\) of rank \(\alpha\) to \(\Delta\) along \(S\) so that \((ST)^{-1}\) becomes the label of the boundary loop \((ST)^{-1}\) of \(C\) (see Figure 2c). For the new diagram \(\Delta\cup C\) we Figure 2. define the boundary marking of rank \(\alpha\) with a new bridge \(\mathsf{v}\mathsf{T}^{-1}\mathsf{w}\) instead of \(\mathsf{u}\). We call this operation _switching of \(\mathsf{u}\)_. If \(\mathsf{C}\) and another cell \(\mathsf{D}\) of rank \(\alpha\) of \(\Delta\) form a cell-cell cancellation pair in \(\Delta\cup\mathsf{C}\) then we say that \(\mathsf{u}\) and \(\mathsf{D}\) form a _bridge-cell cancellable pair_. In this case, after performing a cell-cell cancellation in \(\Delta\cup\mathsf{C}\) we obtain a diagram \(\Delta^{\prime}\) having one cell of rank \(\alpha\) less than \(\Delta\). We will refer to this reduction step as _bridge-cell cancellation_. **5.5 Definition** (Reduced diagram).: Let \(\Delta\) be a diagram of rank \(\alpha\geq 1\) with fixed bridge partitions for all bridges of \(\Delta\). We say that \(\Delta\) is _reduced_ if it has no cancellable pairs after any refinement. _5.6 Remark_.: In what follows, we will be assuming that a diagram \(\Delta\) of rank \(\alpha\geq 1\) has fixed bridge partitions of all bridges of \(\Delta\) if it is required by context. In particular, this applies when we consider the subdiagram \(\Delta_{\alpha-1}\) and the property of \(\Delta\) to be reduced. _5.7 Reduction process_.: If a diagram \(\Delta\) of rank \(\alpha\) is not reduced then, after possible refinement, we obtain a cancellable pair which can be removed by performing the reduction procedure described above. Thus, any diagram of rank \(\alpha\geq 1\) can be transformed to a reduced one. Note that we use a sequence of transformations of the following two types in the reduction process: * transformations preserving the frame type (see Lemma 4.9); * bridge switching. Thus, after reduction the new diagram \(\bar{\Delta}\) has the same frame type as \(\Delta\) up to bridge switching. The following observation follows from definitions 5.4 and 5.5 and will be used without explicit reference. 
**5.8 Proposition**.: _Let \(\Sigma\) be a subdiagram of a reduced diagram \(\Delta\) of rank \(\alpha\geq 1\) such that the central arc of any bridge of \(\Sigma\) is either a subpath of the central arc of a bridge of \(\Delta\) or a subpath of \((\delta\mathsf{D})^{-1}\) where \(\mathsf{D}\) is a cell of rank \(\alpha\) of \(\Delta\). Then \(\Sigma\) is reduced as well._ ## 6. Reduction to the previous rank **6.1 Definition**.: Let \(\Delta\) be a diagram of rank \(\alpha\). A _bond_ in \(\Delta\) is a simple path \(\mathsf{u}\) satisfying the following conditions: * \(\mathsf{u}\) joins two vertices on sides of \(\Delta\) and intersects the boundary of \(\Delta\) only at the endpoints of \(\mathsf{u}\); * _label_(\(\mathsf{u}\)) is equal in \(G_{\alpha}\) to a word in \(\mathcal{H}_{\alpha}\). * \(\mathsf{u}\) is not homotopic in \(\Delta\) (rel endpoints) to a subpath of a side of \(\Delta\); * \(\mathsf{u}\) does not cut off from \(\Delta\) a simply connected subdiagram with boundary loop \(\mathsf{u}^{\pm 1}\mathsf{pvq}\) where \(\mathsf{p}\) is an end of a side of \(\Delta\), \(\mathsf{v}\) is a bridge of \(\Delta\), \(\mathsf{q}\) is a start of a side of \(\Delta\) and labels of \(\mathsf{p}\) and \(\mathsf{q}\) are empty words. See Figure 3. In most cases, we will assume that the label of a bond \(\mathsf{u}\) already belongs to \(\mathcal{H}_{\alpha}\). Note that this condition can always be achieved by cutting \(\Delta\) along \(\mathsf{u}\) and attaching a subdiagram with boundary loop \(\mathsf{u}^{\pm 1}\mathsf{v}\) where \(\textit{label}(\mathsf{v})\in\mathcal{H}_{\alpha}\) and its mirror copy, see Figure 4. **6.3 Definition**.: A diagram of rank \(\alpha\) is _small_ if it has no bonds after any refinement. **6.4 Proposition**.: __ 1. _The property of a diagram_ \(\Delta\) _of rank_ \(\alpha\) _to be small depends only on the frame type of_ \(\Delta\)_._ 2. _The property of a diagram of rank_ \(\alpha\) _to be small is preserved under switching of bridges._ 3. _If_ \(\Delta\) _is a small diagram of rank 0 with_ \(c(\Delta)>0\) _then labels of all sides of_ \(\Delta\) _are empty words._ ### Definition Let \(\Delta\) be a diagram of rank \(\alpha\geq 1\). A disk subdiagram \(\Pi\) of \(\Delta_{\alpha-1}\) is a _contiguity subdiagram_ of \(\Delta\) if the boundary loop of \(\Pi\) has the form \(\mathsf{Pu}_{1}\mathsf{Qu}_{2}\) where \(\mathsf{P}^{-1}\) and \(\mathsf{Q}^{-1}\) are nonempty subpaths of sides of \(\Delta_{\alpha-1}\) and each of the two paths \(\mathsf{u}_{i}\) is either a bond in \(\Delta_{\alpha-1}\) with \(\mathit{label}(\mathsf{u}_{i})\in\mathcal{H}_{\alpha-1}\) or a bridge of \(\Delta_{\alpha-1}\). Note that here we use Definition 6.1 with rank \(\alpha-1\) instead of \(\alpha\). The paths \(\mathsf{P}^{\pm 1}\) and \(\mathsf{Q}^{\pm 1}\) are _contiguity arcs_ of \(\Pi\). If \(\mathsf{P}^{-1}\) and \(\mathsf{Q}^{-1}\) occur, respectively, in sides \(\mathsf{S}\) and \(\mathsf{T}\) of \(\Delta_{\alpha-1}\) then we say that \(\Pi\) is a contiguity subdiagram _of \(\mathsf{S}\) to \(\mathsf{T}\)_ (or _between \(\mathsf{S}\) and \(\mathsf{T}\)_). According to definition 2.4, if \(\mathsf{P}\) and \(\mathsf{Q}\) are contiguity arcs of a contiguity subdiagram with boundary loop \(\mathsf{Pu}_{1}\mathsf{Qu}_{2}\) then labels of \(\mathsf{P}^{-1}\) and \(\mathsf{Q}\) are close in \(G_{\alpha-1}\). **6.6 Lemma** (small cancellation in reduced diagrams).: _Let \(\Delta\) be a reduced diagram of rank \(\alpha\). 
Let \(\Pi\) be a contiguity subdiagram of \(\Delta\) with boundary loop \(\delta\Pi=\mathsf{Pu}\mathsf{Q}\mathsf{v}\) where \(\mathsf{P}\) and \(\mathsf{Q}\) are the contiguity arcs of \(\Pi\). Assume that \(\mathsf{P}^{-1}\) occurs in the boundary loop of a cell \(\mathsf{D}\) of rank \(\alpha\) and \(\mathsf{Q}^{-1}\) occurs in a side \(\mathsf{S}\) of \(\Delta_{\alpha-1}\). Then:_ [MISSING_PAGE_POST] _._ 2. _If_ \(\mathsf{S}\) _is the boundary loop of a cell_ \(\mathsf{D}^{\prime}\) _distinct from_ \(\mathsf{D}\) _then_ \(\mu(\mathsf{P})<\lambda\)_;_ 3. _If_ \(\mathsf{S}\) _is the central arc of a bridge of_ \(\Delta\) _then_ \(\mu(\mathsf{P})<\lambda\)_;_ Proof.: If \(\mathsf{S}\) is a side of \(\Delta\) then the label of \(\mathsf{S}\) is reduced in \(G_{\alpha}\) (or cyclically reduced in \(G_{\alpha}\) if \(\mathsf{S}\) is a cyclic side), as defined in 5.1. Then \(\mu(\mathsf{P})<\rho\) by the definition of a reduced word in 2.5. Assume that \(\mu(\mathsf{P})\geq\gamma\) and \(\mathsf{S}=\delta\mathsf{D}^{\prime}\) where \(\mathsf{D}^{\prime}\) is a cell distinct from \(\mathsf{D}\). Let \(\mathsf{R}\) and \(\mathsf{R}^{\prime}\) be boundary loops of \(\mathsf{D}\) and \(\mathsf{D}^{\prime}\) starting at the initial and terminal vertices of \(\mathsf{u}\), respectively. By the small cancellation condition (S2) we have \(\mathit{label}(\mathsf{R})=\mathit{label}(\mathsf{uR}^{\prime}\mathsf{u}^{-1})\) in \(G_{\alpha-1}\), hence \(\mathsf{D}\) and \(\mathsf{D}^{\prime}\) form a cell-cell cancellable pair contrary to the hypothesis that \(\Delta\) is reduced. If \(\mu(\mathit{label}(\mathsf{P}))\geq\lambda\) and \(\mathsf{S}\) is the central arc of a bridge of \(\Delta\) then in a similar way we see that \(\mathsf{D}\) and \(\mathsf{S}\) form a cell-bridge cancellable pair. Note that the lemma leaves uncovered a possibility when \(\mathsf{S}=\delta\mathsf{D}\), i.e. when \(\Pi\) is a contiguity subdiagram of \(\mathsf{D}\) to itself. This case needs a special consideration. **6.7 Definition**.: A cell \(\mathsf{D}\) of rank \(\alpha\) in a diagram \(\Delta\) of rank \(\alpha\geq 1\) is _folded_ if there exists a simple path \(\mathsf{u}\) joining two vertices \(\mathsf{a}\) and \(\mathsf{b}\) in the boundary of \(\mathsf{D}\) so that \(\mathit{label}(\mathsf{PQ}\mathsf{uQ}\mathsf{P}\mathsf{u}^{-1})=1\) in \(G_{\alpha-1}\) where \(\mathsf{P}\) and \(\mathsf{Q}\) are subpaths of \(\delta\mathsf{D}\) from \(\mathsf{a}\) to \(\mathsf{b}\) and from \(\mathsf{b}\) to \(\mathsf{a}\) respectively (Figure 5). **6.8 Lemma** (no folded cells).: _Assume that no relator of rank \(\alpha\) is conjugate in \(G_{\alpha-1}\) to its inverse. Then folded cells do not exist. Consequently, if \(\Pi\) is a contiguity subdiagram of a cell of rank \(\alpha\) to itself then for a contiguity arc \(\mathsf{P}\) of \(\Pi\) we have \(\mu(\mathit{label}(\mathsf{P}))<\lambda\)._ Proof.: The first statement is an immediate consequence of Definition 6.7. If \(\Pi\) is a contiguity subdiagram of a cell \(\mathsf{D}\) of rank \(\alpha\) to itself and \(\mathsf{P}\) is a contiguity arc of \(\Pi\) with \(\mu(\mathit{label}(\mathsf{P}))\geq\lambda\) then, as in the proof of Lemma 6.6, we conclude that \(\mathsf{D}\) is a folded cell. We will be considering finite sets of disjoint contiguity subdiagrams of a diagram \(\Delta\) of rank \(\alpha\geq 1\). Our goal is to produce a maximal, in an appropriate sense, such a set. Let \(\{\Pi_{i}\}\) be a finite set of pairwise disjoint contiguity subdiagrams of \(\Delta\). 
Each connected component \(\Theta\) of the complement \(\Delta_{\alpha-1}-\bigcup\Pi_{i}\) is a diagram of rank \(\alpha-1\) with a naturally induced boundary marking of rank \(\alpha-1\) defined as follows: Figure 5. * Bridges of \(\Delta_{\alpha-1}\) occurring in the boundary of \(\Theta\) become bridges of \(\Theta\); * If \(\mathfrak{u}\) is a bond of \(\Delta_{\alpha-1}\) occurring in the boundary of some contiguity subdiagram \(\Pi_{i}\) and \(\mathfrak{u}^{-1}\) occurs in the boundary of \(\Theta\) then \(\mathfrak{u}^{-1}\) becomes a bridge of \(\Theta\); * The rest of the boundary of \(\Theta\) consists of subpaths of sides of \(\Delta_{\alpha-1}\), or possibly cyclic sides of \(\Delta_{\alpha-1}\), which are viewed as sides of \(\Theta\). The following observation follows easily by induction on the number of contiguity subdiagrams in a set \(\{\Pi_{i}\}\). **6.10 Lemma**.: _Let \(\{\Pi_{i}\}\) be a set of \(r\) pairwise disjoint contiguity subdiagrams of a diagram \(\Delta\) of rank \(\alpha\geq 1\). Let \(\{\Theta_{j}\}\) be the set of all connected components of the complement \(\Delta_{\alpha-1}-\bigcup_{i}\Pi_{i}\). Then_ \[\sum_{j}c(\Theta_{j})=c(\Delta_{\alpha-1}),\] \[\sum_{j}\chi(\Theta_{j})=\chi(\Delta_{\alpha-1})+r.\] **6.11 Proposition**.: _Let \(\Delta\) be a diagram of rank \(\alpha\geq 1\). Then there exists another diagram \(\Delta^{\prime}\) of rank \(\alpha\) and a finite set \(\{\Pi_{i}\}\) of pairwise disjoint contiguity subdiagrams of \(\Delta^{\prime}\) such that:_ * \(\Delta^{\prime}\) _is obtained from_ \(\Delta\) _by replacing its subdiagram_ \(\Delta_{\alpha-1}\) _with another subdiagram over the presentation of_ \(G_{\alpha-1}\) _of the same frame type; in particular,_ \(\Delta\) _and_ \(\Delta^{\prime}\) _have the same boundary marking and the same frame type._ * _any connected component_ \(\Theta\) _of_ \(\Delta^{\prime}_{\alpha-1}-\bigcup_{i}\Pi_{i}\) _is a small diagram of rank_ \(\alpha-1\)_._ * _if_ \(c(\Delta_{\alpha-1})>0\) _then_ \(c(\Theta)>0\) _for each connected component_ \(\Theta\) _of_ \(\Delta^{\prime}_{\alpha-1}-\bigcup_{i}\Pi_{i}\)_._ Proof.: Let \(\Delta\) be a diagram of rank \(\alpha\) and let \(\{\Pi_{i}\}\) be a finite set of pairwise disjoint contiguity subdiagrams of \(\Delta\). Assume that a connected component \(\Theta\) of \(\Delta_{\alpha-1}-\bigcup_{i}\Pi_{i}\) has a bond, possibly after refinement. We describe how to obtain from \(\{\Pi_{i}\}\) a new set of disjoint contiguity subdiagrams by either increasing the set or increasing the part of \(\Delta\) covered by \(\{\Pi_{i}\}\). We track on two inductive parameters: the number \(N\) of connected components of \(\Delta_{\alpha-1}-\bigcup_{i}\Pi_{i}\) and the total length \(L\) of sides of these components. Refining \(\Theta\) inside \(\Delta\) we may assume that \(\Theta\) has a bond \(\mathfrak{u}\). An easy analysis shows that any bond in \(\Theta\) is also a bond in \(\Delta_{\alpha-1}\). Performing surgery as described in 6.2 we may assume that the label of \(\mathfrak{u}\) belongs to \(\mathcal{H}_{\alpha-1}\). Observe that \(\mathfrak{u}\) cuts \(\Theta\) into a subdiagram \(\Theta_{1}\) or two subdiagrams \(\Theta_{1}\) and \(\Theta_{2}\) which inherit the boundary marking of rank \(\alpha-1\). From the definition of complexity \(c(\ast)\) we immediately see that \(c(\Theta)=\sum_{i}c(\Theta_{i})\) in either of the two cases. Since \(\mathfrak{u}\) is not homotopic to a subpath of a side of \(\Theta\) we have \(c(\Theta_{i})\geq 0\) for each \(\Theta_{i}\). 
We change the set \(\{\Pi_{i}\}\) depending on the following two cases: _Case_ 1: \(\mathfrak{u}\) cuts \(\Theta\) into two subdiagrams \(\Theta_{1}\) and \(\Theta_{2}\) and at least one of them, say \(\Theta_{1}\), satisfies \(c(\Theta_{1})=0\). Then \(\Theta_{1}\) is a simply connected subdiagram with two bridges, and hence a contiguity subdiagram of \(\Delta\). Note that if for both \(\Theta_{1}\) and \(\Theta_{2}\) we have \(c(\Theta_{1})=c(\Theta_{2})=0\) then \(\Delta\) has no cells of rank \(\alpha\) and is itself a contiguity subdiagram. We then can take \(\{\Pi_{i}\}=\{\Delta\}\). We assume that this is not the case. Let \(\mathfrak{v}\) be the other bridge of \(\Theta_{1}\). If \(\mathfrak{u}\) is a bridge of \(\Delta_{\alpha-1}\) then we simply add \(\Theta_{1}\) to the set \(\{\Pi_{i}\}\). Otherwise \(\mathfrak{v}^{-1}\) is a bond of \(\Delta_{\alpha-1}\) occurring in the boundary loop of some \(\Pi_{i}\); then we attach \(\Theta_{1}\) to \(\Pi_{i}\) (see Figure 6. Note that the label of at least one side of \(\Theta_{1}\) is nonempty (by condition (iv) of Definition 6.1 applied to \(\Theta\) and \(\mathfrak{u}\)). Hence after performing this operation, \(L\) is strictly decreased and \(N\) is not changed. _Case_ 2: Case 1 does not hold. We refine \(\Delta\) so that \(\mathfrak{u}\) "bifurcates" into two paths \(\mathfrak{u}^{\prime}\) and \(\mathfrak{u}^{\prime\prime}\) (Figure 7) and obtain a "degenerate" contiguity subdiagram \(\Pi\) of \(\Delta\) between \(\mathfrak{u}^{\prime}\) and \(\mathfrak{u}^{\prime\prime}\). We then add \(\Pi\) to the set \(\{\Pi_{i}\}\). The operation strictly increases \(N\) not changing \(L\). Starting from the empty set of contiguity subdiagrams \(\Pi_{i}\), we perform recursively the procedure described above. Each step we either decrease \(L\) not changing \(N\) or increase \(N\) not changing \(L\). Furthermore, each time there is at most one connected component \(\Theta\) of \(\Delta_{\alpha-1}-\bigcup_{i}\Pi_{i}\) with \(c(\Theta)\leq 0\) and it exists only if \(c(\Delta_{\alpha-1})\leq 0\) for the initial diagram \(\Delta\). By Lemma 6.10, \(N\) is bounded from above, so the procedure terminates after finitely many steps. Upon termination, all connected components of \(\Delta_{\alpha-1}-\bigcup_{i}\Pi_{i}\) become small by construction. ### Definition We say that a set \(\{\Pi_{i}\}\) satisfying the conclusion of Proposition 6.11 is a _tight_ set of contiguity subdiagrams of \(\Delta^{\prime}\). ## 7. Global bounds on diagrams Let \(\Delta\) be a diagram of rank \(\alpha\geq 1\) and \(\{\Pi_{j}\}\) a set of disjoint contiguity subdiagrams of \(\Delta\). We have a tiling of \(\Delta\) by subdiagrams of three types: cells of rank \(\alpha\), contiguity subdiagrams \(\Pi_{i}\) and connected components of the complement \(\Delta_{\alpha-1}-\bigcup\Pi_{i}\). We name these subdiagrams _tiles of index 2, 1 and 0_ respectively and refer to them also as _internal_ tiles. We consider also external 2-cells of \(\Delta\) as tiles of index 2, so with these extra tiles we obtain a tiling of the 2-sphere. Boundary loops of all tiles carry naturally induced partitions into subpaths (allowed to be whole loops) called _tiling sides_, defined precisely as follows (see Figure 8): * The boundary loop \(\delta\Pi_{i}\) of each contiguity subdiagram \(\Pi_{i}\) is partitioned as \(\mathsf{P}\cdot\mathfrak{u}\cdot\mathsf{Q}\cdot\mathsf{v}\) where \(\mathsf{P}\) and \(\mathsf{Q}\) are the contiguity arcs; thus \(\delta\Pi_{i}\) consists of four tiling sides. 
Figure 6. Figure 7. * A component \(\Theta\) of \(\Delta_{\alpha-1}-\bigcup_{i}\Pi_{i}\) has the induced boundary marking of rank \(\alpha-1\) (in this case, a tiling side can be a cyclic side of \(\Theta\)). * The boundary loop of a cell of rank \(\alpha\) either has no nontrivial partition (in this case it is considered as a cyclic tiling side) or is partitioned as an alternating product of contiguity arcs of subdiagrams \(\Pi_{i}\) and paths \(\mathsf{S}\) where \(\mathsf{S}^{-1}\) is a side of a component of \(\Delta_{\alpha-1}-\bigcup_{i}\Pi_{i}\). * The partition of the boundary loop \(\mathsf{L}\) of an external cell is defined as follows: we take the partition of \(\mathsf{L}\) induced by the boundary marking of rank \(\alpha-1\) of \(\Delta_{\alpha-1}\) and additionally subdivide sides of rank \(\alpha-1\) into alternating products of contiguity arcs of subdiagrams \(\Pi_{i}\) and paths \(\mathsf{S}\) where \(\mathsf{S}^{-1}\) is a side of a component of \(\Delta_{\alpha-1}-\bigcup_{i}\Pi_{i}\). Note that we view on tiling sides as paths, i.e. they are considered with direction. By construction, the set of all tiling sides is closed under inversion, and each tiling side occurs in a unique way in a boundary loop of a tile. **7.2 Definition**.: Let \(\mathcal{S}\) be the set of tiling sides associated with \(\{\Pi_{i}\}\). For every tile \(T\), we denote \(\mathcal{S}(T)\) the set of tiling sides occurring in the boundary loops of \(T\). A _discrete connection_ on a pair \((\Delta,\{\Pi_{i}\})\) is a function \(w:\mathcal{S}\to\mathbb{R}\) such that \(w(\mathsf{s}^{-1})=-w(\mathsf{s})\) for any \(\mathsf{s}\). Given \(w\), we define the _curvature_\(\kappa(T)\) of each internal tile \(T\): \[\kappa(T)=(-1)^{\operatorname{index}(T)}\chi(T)+\sum_{\mathsf{s}\in\mathcal{S} (T)}w(\mathsf{s}).\] (Note that inequality \(\chi(T)\neq 1\) is possible only if \(T\) has index \(0\).) For an external tile \(T\), by definition, \[\kappa(T)=\sum_{\mathsf{s}\in\mathcal{S}(T)}w(\mathsf{s}).\] By definition, the total curvature \(\kappa(\Delta)\) of \(\Delta\) is the sum of curvatures of all internal tiles of \(\Delta\). The total curvature of external tiles of \(\Delta\) is the _curvature along the boundary of \(\Delta\)_, denoted \(\kappa(\partial\Delta)\). **7.3 Proposition** (A discrete version of the Gauss-Bonnet theorem).: _For any diagram \(\Delta\) of rank \(\alpha\geq 1\) and any set \(\{\Pi_{i}\}\) of disjoint contiguity subdiagrams of \(\Delta\),_ \[\kappa(\Delta)+\kappa(\partial\Delta)=\chi(\Delta).\] _In particular, if \(\kappa(T)\) is non-positive for any internal tile \(T\) then \(\kappa(\partial\Delta)\geq\chi(\Delta)\)._ Figure 8. Proof.: Let \(t\) be the number of cells of rank \(\alpha\) of \(\Delta\). It follows from the second equality of Lemma 6.10 that \[\sum_{T}(-1)^{\operatorname{index}(T)}\chi(T)=\chi(\Delta_{\alpha-1})+t=\chi(\Delta)\] where the sum is taken over all internal tiles \(T\) of \(\Delta\). In the expansion of \(\kappa(\Delta)+\kappa(\partial\Delta)\) all summands \(w(\mathsf{s})\) are canceled because of the assumption \(w(\mathsf{s}^{-1})=-w(\mathsf{s})\). **7.4 Proposition** (bounding the number of cells).: _Let \(\Delta\) be a reduced diagram of rank \(\alpha\geq 1\) with \(c(\Delta_{\alpha-1})>0\). Denote_ (7-1) \[\nu=\frac{\zeta}{1-2\zeta}=\frac{1}{18},\quad\theta=\frac{1}{6}(5-22\nu)= \frac{17}{27}.\] _Let \(\mathcal{T}\) be a tight set of contiguity subdiagrams of \(\Delta\). 
We assume that the following extra condition is satisfied:_ * _Each cell of rank_ \(\alpha\) _of_ \(\Delta\) _has at most one contiguity subdiagram_ \(\Pi\in\mathcal{T}\) _to sides of_ \(\Delta\)_._ _Let \(M\) be the number of cells of rank \(\alpha\) of \(\Delta\). Then_ (7-2) \[\theta M\leq\frac{2}{3}(1+\nu)b(\Delta)-\chi(\Delta).\] For the proof, we define a discrete connection \(w\) on the pair \((\Delta,\{\Pi_{i}\})\). Note that \(w(\mathsf{S}^{-1})=-w(\mathsf{S})\) by Definition 7.2 and thus defining \(w(\mathsf{S})\) automatically defines \(w(\mathsf{S}^{-1})\). Recall that sides of \(\Delta_{\alpha-1}\) are divided into three types: sides of \(\Delta\), central arcs of bridges of native rank \(\alpha\) and the boundary loops of cells of rank \(\alpha\). If \(\mathsf{S}\) is a side of \(\Delta_{\alpha-1}\) or a subpath of a side of \(\Delta_{\alpha-1}\) then we assign to \(\mathsf{S}\) type I, II or III respectively. Before defining \(w\), we perform on \(\Delta\) the following "cleaning" procedure: if a bridge of \(\Delta_{\alpha-1}\) occurs in the boundary of some contiguity subdiagram \(\Pi_{i}\) then we cut off \(\Pi_{i}\) from \(\Delta\) taking the bond in the boundary of \(\Pi_{i}\) as a new bridge of the resulting \(\Delta_{\alpha-1}\). Thus we may assume that * every bridge of \(\Delta_{\alpha-1}\) occurs in the boundary of a tile of index \(0\) (i.e. a connected component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\)). We define \(w\) as follows: * Let \(\Theta\) be a connected component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\). For each bond or bridge \(\mathsf{u}\) of rank \(\alpha-1\) occurring in the boundary of \(\Theta\), define \[w(\mathsf{u})=-\frac{1}{3}(1+\nu).\] For each side \(\mathsf{S}\) of \(\Theta\), \[w(\mathsf{S})=\zeta\theta|\mathsf{S}|_{\alpha-1}.\] * Let \(\Pi\in\mathcal{T}\) and let \(\delta\Pi=\mathsf{Pu}_{1}\mathsf{Qu}_{2}\) as in Definition 6.5. By (**), for each \(i=1,2\) the tiling side \(\mathsf{u}_{i}^{-1}\) occurs in the boundary of a connected component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\). By (i), we already have \[w(\mathsf{u}_{i})=-w(\mathsf{u}_{i}^{-1})=\frac{1}{3}(1+\nu).\] We define \(w(\mathsf{P})\) (the definition of \(w(\mathsf{Q})\) is similar): (7-3) \[w(\mathsf{P})=\begin{cases}0&\text{if $\mathsf{P}$ has type I or II}\\ \frac{1}{3}(1-2\nu)&\text{if $\mathsf{P}$ has type III and $\mathsf{Q}$ has type I}\\ \frac{1}{6}(1-2\nu)&\text{if $\mathsf{P}$ has type III and $\mathsf{Q}$ has type II or III}\end{cases}\] Let \(\mathsf{D}\) be a cell of rank \(\alpha\) of \(\Delta\) and \(\mathsf{S}\) be a tiling side occurring in \(\delta\mathsf{D}\). The value of \(w(\mathsf{S})\) is already defined by (i) and (ii). We have: * If \(\mathsf{S}^{-1}\) is the contiguity arc of a contiguity subdiagram \(\Pi\in\mathscr{T}\) of \(\mathsf{D}\) to a side of \(\Delta_{\alpha-1}\) of type I or II then \(w(\mathsf{S})=-\frac{1}{3}(1-2\nu)\). * If \(\mathsf{S}^{-1}\) is the contiguity arc of a contiguity subdiagram \(\Pi\in\mathscr{T}\) of \(\mathsf{D}\) to a side of \(\Delta_{\alpha-1}\) of type III then \(w(\mathsf{S})=-\frac{1}{6}(1-2\nu)\). * If \(\mathsf{S}^{-1}\) occurs in the boundary of a connected component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathscr{T}}\Pi\) then \(w(\mathsf{S})=-\zeta\theta|\mathsf{S}|_{\alpha-1}\). We provide an upper bound for the curvature of any internal tile. For contiguity subdiagrams \(\Pi\in\mathscr{T}\) we immediately have \(\kappa(\Pi)\leq 0\) by (ii). 
Let \(\Theta\) be a connected component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathscr{T}}\Pi\). We have \[\kappa(\Theta)=\chi(\Theta)-\frac{1}{3}(1+\nu)b(\Theta)+\zeta\theta\sum_{ \mathsf{S}}|\mathsf{S}|_{\alpha-1}\] where the sum is taken over the sides \(\mathsf{S}\) of \(\Theta\). If \(\alpha=1\) then \(\sum|\mathsf{S}|_{\alpha-1}=0\) (Proposition 6.4(iii)). If \(\alpha\geq 2\) then by Proposition 7.8\({}_{\alpha-1}\), \[\theta\sum|\mathsf{S}|_{\alpha-1}\leq\frac{2}{3}(1+\nu)b(\Theta)-\chi(\Theta)\] Using the fact that \(c(\Theta)>0\) it is easy to check that \(\kappa(\Theta)\leq 0\) in both cases \(\alpha=1\) and \(\alpha\geq 2\). (The critical case is when \(b(\Theta)=3\) and \(\chi(\Theta)=1\); in this case we have \(\kappa(\Theta)=-\nu\) if \(\alpha=1\) and \(\kappa(\Theta)=0\) if \(\alpha\geq 2\) by definition (7-1) of \(\nu\)). Finally, let \(\mathsf{D}\) be a cell of rank \(\alpha\) of \(\Delta\). We prove that \(\kappa(\mathsf{D})\leq-\theta\). By (*), \(\mathsf{D}\) has at most one contiguity subdiagram to sides of \(\Delta_{\alpha-1}\) of type I. We consider first the case when \(\mathsf{D}\) has one. Let \(r\) be the number of contiguity subdiagrams of \(\mathsf{D}\) to sides of types II and III. The remaining \(r+1\) subpaths \(\mathsf{S}_{1},\mathsf{S}_{2},\ldots\mathsf{S}_{r+1}\) of \(\delta\mathsf{D}\) are tiling sides such that \(\mathsf{S}_{i}^{-1}\) belong to boundary loops of connected components of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathscr{T}}\Pi\); so we have \[\kappa(\mathsf{D})\leq 1-\frac{1}{3}(1-2\nu)-r\left(\frac{1}{6}(1-2\nu)\right)- \zeta\theta\sum_{i=1}^{r+1}|\mathsf{S}_{i}|_{\alpha-1}.\] By condition (S1) in 2.8 and Lemmas 6.6, 6.8, \[\sum_{i=1}^{r+1}|\mathsf{S}_{i}|_{\alpha-1}\geq(1-\rho-r\lambda)\Omega=(9-r) \lambda\Omega.\] Hence (7-4) \[\kappa(\mathsf{D})\leq\frac{2}{3}(1+\nu)-r\left(\frac{1}{6}(1-2\nu)\right)- \zeta\theta\lambda\Omega\max(0,\ 9-r).\] If \(r\geq 9\) then the coefficient before \(r\) in the right-hand side of (7-4) is negative. If \(r\leq 9\) then the coefficient is \[-\frac{1}{6}(1-2\nu)+\zeta\theta\lambda\Omega\] which is positive since by the second inequality (2-3) we have \(\zeta\theta\lambda\Omega\geq 20\zeta\theta=\theta>\frac{1}{6}\). Hence the maximal value of the expression in (7-4) is when \(r=9\). Substituting \(r=9\) into the right-hand side of (7-4) we obtain the expression \[\frac{2}{3}(1+\nu)-\frac{9}{6}(1-2\nu)\] which is equal \(-\theta\) by (7-1). This shows that \(\kappa(\mathsf{D})\leq-\theta\). Assume that \(\mathsf{D}\) has no contiguity subdiagrams to sides of type I. Let, as above, \(r\) be the number of contiguity subdiagrams of \(\mathsf{D}\) to sides of types II and III and \(\mathsf{S}_{1},\mathsf{S}_{2},\ldots\mathsf{S}_{r}\) be the remaining \(r\) tiling sides occurring in \(\delta\mathsf{D}\) such that \(\mathsf{S}_{i}^{-1}\) belong to boundary loops of connected components of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\). Instead of (7-4) we have (7-5) \[\kappa(\mathsf{D})\leq 1-r\left(\frac{1}{6}(1-2\nu)\right)-\zeta\theta N\max( 0,\ 1-r\lambda).\] If we allow \(r\) to be a non-negative real then the maximal value of the right-hand side is when \[1-r\lambda=0.\] Substituting \(r=\frac{1}{\lambda}\) into the left-hand side of (7-5) we obtain the expression \[1-\frac{1-2\nu}{6\lambda}\] which is less then \(-\theta\) since \(\lambda\leq\frac{1}{24}\). Finally, we compute an upper bound for \(\kappa(\partial\Delta)\). 
For a tiling side \(\mathsf{S}\) occurring in the boundary loop of an external cell of \(\Delta\) (the loop has the form \(\mathsf{L}^{-1}\) where \(\mathsf{L}\) is a boundary loop of \(\Delta\)) we have three possibilities: either \(\mathsf{S}^{-1}\) is a contiguity arc of a subdiagram \(\Pi\in\mathcal{T}\), \(\mathsf{S}^{-1}\) is a side of a component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\), or \(\mathsf{S}^{-1}\) is a bridge of \(\Delta_{\alpha-1}\) In the first two cases we have \(w(\mathsf{S})\leq 0\) according to (ii) or (i) respectively. If \(\mathsf{S}^{-1}\) is a bridge of \(\Delta_{\alpha-1}\) then by (**), \(\mathsf{S}^{-1}\) is also a bridge of some component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\) and by (i), \[w(\mathsf{S})=\frac{1}{3}(1+\nu).\] Note that each bridge of \(\Delta\) produces at most two bridges of \(\Delta_{\alpha-1}\). Hence \(b(\Delta_{\alpha-1})\leq 2b(\Delta)\). We obtain (7-6) \[\kappa(\partial\Delta)\leq\frac{1}{3}(1+\nu)b(\Delta_{\alpha-1})\leq\frac{2}{ 3}(1+\nu)b(\Delta)\] Application of Proposition 7.3 gives \[\frac{2}{3}(1+\nu)b(\Delta)-\theta M\geq\chi(\Delta)\] as required. The proof of Proposition 7.4 is finished. **7.5 Lemma**.: _Let \(\Delta\) be a reduced disk diagram of rank \(\alpha\geq 1\). If \(\Delta\) has a single (cyclic or non-cyclic) side then \(\Delta\) has no cells of rank \(\alpha\)._ Proof.: Let \(\Delta\) be a reduced disk diagram of rank \(\alpha\) with a single side, i.e. \(\Delta\) is of monogon or nullgon type. Assume that \(\Delta\) has a cell of rank \(\alpha\). We choose such \(\Delta\) with minimal possible non-zero number \(M\) of cells of rank \(\alpha\). We then have \(\chi(\Delta_{\alpha-1})\leq 0\) and hence \(c(\Delta_{\alpha-1})>0\). We can assume that \(\Delta\) is given a tight set \(\mathcal{T}\) of contiguity subdiagrams. If each cell of rank \(\alpha\) of \(\Delta\) has at most one contiguity subdiagram \(\Pi\in\mathcal{T}\) to the side of \(\Delta\) then application of Proposition 7.4 would give \[\theta M\leq\frac{2}{3}(1+\nu)-1<0.\] Therefore, \(\Delta\) has a cell \(\mathsf{D}\) of rank \(\alpha\) having two contiguity subdiagram \(\Pi_{1},\Pi_{2}\in\mathcal{T}\) to the side of \(\Delta\). The union \(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2}\) cuts off from \(\Delta\) a disk diagram \(\Delta^{\prime}\) of rank \(\alpha\) with a single side and a single bridge (Figure 9). The assumption that \(\Delta\) is reduced implies that \(\Delta^{\prime}\) is reduced as well. By the choice of \(\Delta\), \(\Delta^{\prime}\) has no cells of rank \(\alpha\). Then for some component \(\Theta\) of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\) we have \(c(\Theta)=0\) contrary to the choice of a tight set \(\mathcal{T}\) of contiguity subdiagrams of \(\Delta\) (Definition 6.12). ### Proposition _If a non-empty word \(X\) is reduced in \(G_{\alpha}\) then \(X\neq 1\) in \(G_{\alpha}\)._ Proof.: Let \(\alpha\geq 1\). Let \(X\) be reduced in \(G_{\alpha}\) and \(X=1\) in \(G_{\alpha}\). Consider a reduced disk diagram \(\Delta\) of rank \(\alpha\) with one side labeled \(X\) and one bridge labeled by the empty word. Lemma 7.5 says that \(\Delta\) has no cells of rank \(\alpha\) and hence we have \(X=1\) in \(G_{\alpha-1}\). Since \(\mathcal{R}_{\alpha}\subseteq\mathcal{R}_{\alpha-1}\), arguing by induction we conclude that \(X=1\) in the free group \(G_{0}\). Since \(X\) is freely reduced (definition 2.5) we conclude that \(X\) is empty. 
### Lemma _Let \(\Delta\) be a reduced diagram of rank \(\alpha\geq 1\) and let \(\mathsf{u}\) be a simple path in \(\Delta\) homotopic rel endpoints to a subpath \(\mathsf{S}\) of a side of \(\Delta\). Assume, moreover, that the label of \(\mathsf{u}\) is equal in \(G_{\alpha-1}\) to a word in \(\mathcal{H}_{\alpha-1}\). Then the subdiagram of \(\Delta\) with boundary loop \(\mathsf{S}\mathsf{u}^{-1}\) has no cells of rank \(\alpha\)._ Proof.: Let \(\Delta^{\prime}\) be the subdiagram of \(\Delta\) with boundary loop \(\mathsf{S}\mathsf{u}^{-1}\) and let \(w\in\mathcal{H}_{\alpha-1}\) be a word such that \(\mathit{label}(\mathsf{u})=w\) in \(G_{\alpha-1}\). We attach to \(\Delta^{\prime}\) a diagram \(\Theta\) over the presentation of \(G_{\alpha-1}\) with boundary loop \(\mathsf{uw}^{-1}\) where \(\mathit{label}(\mathsf{w})=w\). We consider \(\Delta^{\prime}\cup\Theta\) as a diagram of rank \(\alpha\) with one side \(\mathsf{S}\) and one bridge \(\mathsf{w}^{-1}\). Note that any simple path in \(\Delta^{\prime}\cup\Theta\) with endpoints in \(\Delta^{\prime}\) is homotopic rel endpoints to a simple path in \(\Delta^{\prime}\). Moreover, this holds also if \(\Delta^{\prime}\cup\Theta\) is refined to a diagram \(\Sigma\) and we take a refinement of \(\Delta^{\prime}\) in \(\Sigma\) instead of \(\Delta^{\prime}\). This implies that \(\Delta^{\prime}\cup\Theta\) is a reduced diagram of rank \(\alpha\). Then by Lemma 7.5, \(\Delta^{\prime}\cup\Theta\) has no cells of rank \(\alpha\). Figure 9. **7.8 Proposition** (bounding sides of a small diagram, raw form).: _Let \(\Delta\) be a small diagram of rank \(\alpha\geq 1\). Assume that \(\Delta\) is not of bigon type and \(c(\Delta_{\alpha-1})>0\). Then_ (7-7) \[\theta\sum_{\mathsf{S}}|\mathsf{S}|_{\alpha}\leq\frac{2}{3}(1+\nu)b(\Delta)- \chi(\Delta)\] _where the sum is taken over all sides \(\mathsf{S}\) of \(\Delta\)._ Proof.: We make \(\Delta\) reduced and endow it with a tight set \(\mathscr{T}\) of contiguity subdiagrams. We assign to subpaths of sides of \(\Delta_{\alpha-1}\) type I, II and III as in the proof of Proposition 7.4 and make several observations about \(\mathscr{T}\). _Claim 1: There are no contiguity subdiagrams \(\Pi\in\mathscr{T}\) between two (not necessarily distinct) sides of type I of \(\Delta_{\alpha-1}\)._ Assume \(\Pi\) is such a contiguity subdiagram. Let \(\delta\Pi=\mathsf{Pu}_{1}\mathsf{Qu}_{2}\) where \(\mathsf{P}\) and \(\mathsf{Q}\) are the contiguity arcs of \(\Pi\). According to Definition 6.5 at least one of \(\mathsf{u}_{i}\)'s, say \(\mathsf{u}_{1}\), is a bond in \(\Delta_{\alpha-1}\) (otherwise \(\Pi=\Delta_{\alpha-1}\) contrary to the assumption \(c(\Delta_{\alpha-1})>0\)). Checking with Definition 6.1 we see that \(\mathsf{u}_{1}\) is also a bond in \(\Delta\) (condition (iii) of Definition 6.1 holds due to Lemma 7.7). This contradicts the assumption that \(\Delta\) is small. _Claim 2: Up to inessential change of \(\Delta\) we may assume that condition (*) of Proposition 7.4 is satisfied, i.e. each cell of rank \(\alpha\) of \(\Delta\) has at most one contiguity subdiagram \(\Pi\in\mathscr{T}\) to sides of type I of \(\Delta_{\alpha-1}\)._ Assume that a cell \(\mathsf{D}\) of rank \(\alpha\) has two contiguity subdiagrams \(\Pi_{i}\in\mathscr{T}\) (\(i=1,2\)) to sides \(\mathsf{S}_{i}\) of type I. Let \(\mathsf{P}_{i}\) be the contiguity arc of \(\Pi_{i}\) that occurs in \(\mathsf{S}_{i}\). 
The boundary loop of \(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2}\) has the form \(\mathsf{P}_{1}\mathsf{u}_{1}\mathsf{P}_{2}\mathsf{u}_{2}\) where labels of \(\mathsf{u}_{i}\) are in \(\mathscr{H}_{\alpha}\). Since \(\Delta\) is small, at least one of the conditions (iii) or (iv) of Definition 6.1 should be violated for each of the paths \(\mathsf{u}_{i}\). If \(\mathsf{S}_{1}=\mathsf{S}_{2}\) and some \(\mathsf{u}_{i}\) (and hence both \(\mathsf{u}_{1}\) and \(\mathsf{u}_{2}\)) are homotopic rel endpoints to a subpath of \(\mathsf{S}_{1}\) then \(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2}\) cuts off a reduced disk subdiagram \(\Delta^{\prime}\) of \(\Delta\) with one bridge \(\mathsf{u}_{1}^{-1}\) or \(\mathsf{u}_{2}^{-1}\). By Lemma 7.5, \(\Delta^{\prime}\) has no cells of rank \(\alpha\). Then either \(\Delta^{\prime}\) is a component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathscr{T}}\Pi\) or \(\Delta^{\prime}\) contains a component \(\Theta\) of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathscr{T}}\Pi\) with \(c(\Theta)=0\). We come to a contradiction with the choice of a tight set \(\mathscr{T}\) of contiguity subdiagrams of \(\Delta\). Assume that condition (iv) of Definition 6.1 fails for both \(\mathsf{u}_{1}\) and \(\mathsf{u}_{2}\). Then, up to renumeration of \(\Pi_{1}\) and \(\Pi_{2}\), \(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2}\) cuts off a simply connected subdiagram \(\Delta^{\prime}\) with boundary loop \(\mathsf{u}_{1}^{-1}\mathsf{T}_{1}\mathsf{v}\mathsf{T}_{2}\) where \(\mathsf{P}_{1}\mathsf{T}_{1}\) is an ending subpath of \(\mathsf{S}_{1}\), \(\mathsf{v}\) is a bridge of \(\Delta\), \(\mathsf{T}_{2}\mathsf{P}_{2}\) is a starting subpath of \(\mathsf{S}_{2}\) and labels of \(\mathsf{P}_{1}\mathsf{T}_{1}\) and \(\mathsf{T}_{2}\mathsf{P}_{2}\) are empty, see Figure 9(a). In this case, we cut off the subdiagram \(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2}\cup\Delta^{\prime}\) from \(\Delta\). The operation does not change the values of \(\sum|\mathsf{S}|_{\alpha}\), \(b(\Delta)\) and \(\chi(\Delta)\) in (7-7) and preserves the assumption that \(\Delta\) is small. We have also \(c(\Delta_{\alpha-1})>0\) for the modified \(\Delta\) (otherwise \(\Delta\) would be a monogon type contradicting Lemma 7.5). _Claim 3: Up to inessential change of \(\Delta\) we may assume that there are no contiguity subdiagrams \(\Pi\in\mathscr{T}\) between sides of type I and II of \(\Delta_{\alpha-1}\)._ Assume that \(\Pi\in\mathscr{T}\) is a contiguity subdiagram between sides of type I and II. Let \(\delta\Pi=\mathsf{Pu}_{1}\mathsf{Qu}_{2}\) where \(\mathsf{P}\) occurs in a side \(\mathsf{S}\) of \(\Delta\) and \(\mathsf{Q}\) occurs in the central arc \(\mathsf{R}\) of a bridge \(\mathsf{v}=\mathsf{v}_{1}\mathsf{Rv}_{2}\). Observe that any of the endpoints of \(\mathsf{P}\) can be joined with any of the endpoints of \(\mathsf{v}\) by a path labeled with a word in \(\mathscr{H}_{\alpha}\) in a graph composed from paths \(\mathsf{u}_{1}\), \(\mathsf{u}_{2}\) and \(\mathsf{v}\), see Figure 10b. Since \(\Delta\) is small, this easily implies that \(\mathsf{v}\) and \(\mathsf{S}\) are adjacent in the boundary of \(\Delta\). Up to symmetry, assume that \(\mathsf{v}\mathsf{S}\) occurs in a boundary loop of \(\Delta\). so \(\mathsf{R}=\mathsf{R}_{1}\mathsf{QR}_{2}\) and \(\mathsf{S}=\mathsf{S}_{1}\mathsf{PS}_{2}\). 
Note that \(\mathit{label}(\mathsf{S}_{1}\mathsf{P})\) is empty (otherwise \(\mathsf{v}_{1}\mathsf{R}_{1}\mathsf{u}_{1}^{-1}\) would give a bond in \(\Delta\) after refinement) and \(\mathit{label}(\mathsf{QR}_{2})\) is nonempty (because \(\mathsf{u}_{1}\) is a bond in \(\Delta_{\alpha-1}\)). We cut off the subdiagram of \(\Delta\) bounded by \(\mathsf{QR}_{2}\mathsf{v}_{2}\mathsf{S}_{1}\mathsf{Pu}_{1}\). As in the proof of the previous claim, the operation does not change the values of terms in (7-7), the value of \(c(\Delta_{\alpha-1})\) and keeps the assumption that \(\Delta\) is small. On the other hand, we decrease the total length of labels of sides \(\Delta_{\alpha-1}\). The claim is proved. We now define a discrete connection \(w^{*}\) on \((\Delta,\mathcal{T})\) by changing the function \(w\) defined in the proof of Proposition 7.4. The new function \(w^{*}\) differs from \(w\) only on contiguity arcs of contiguity subdiagrams \(\Pi\in\mathcal{T}\) as follows. Let \(\delta\Pi=\mathsf{Pu}_{1}\mathsf{Qu}_{2}\) where \(\mathsf{P}\) and \(\mathsf{Q}\) are the contiguity arcs of \(\Pi\). By Claims 1 and 3, if \(\mathsf{P}\) has type I then \(\mathsf{Q}\) has necessarily type III. Instead of (7-3) we define \[w^{*}(\mathsf{P})=\begin{cases}\theta&\text{if $\mathsf{P}$ has type I}\\ \frac{1}{3}(1-2\nu)-\theta&\text{if $\mathsf{P}$ has type III and $\mathsf{Q}$ has type I}\\ \frac{1}{6}(1-2\nu)&\text{in all other cases}\end{cases}\] For contiguity subdiagrams \(\Pi\in\mathcal{T}\) we immediately have \(\kappa^{*}(\Pi)\leq 0\) where \(\kappa^{*}\) denotes the curvature function defined from \(w^{*}\). If \(\Theta\) is a connected component of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\) then \(\kappa^{*}(\Theta)=\kappa(\Theta)\leq 0\). Let \(\mathsf{D}\) be a cell of rank \(\alpha\) of \(\Delta\). In view of Claim 2 \[\kappa^{*}(\mathsf{D})\leq\kappa(\mathsf{D})+\theta\leq 0.\] We provide a bound for \(\kappa^{*}(\partial\Delta)\). Let \(t\) be the number of all contiguity subdiagrams \(\Pi\in\mathcal{T}\) between sides of type I and sides of type III. Then \[\kappa^{*}(\partial\Delta) \leq\frac{1}{3}(1+\nu)b(\Delta_{\alpha-1})-\theta t-\zeta\theta \underset{\mathsf{S}\in\operatorname{sides}(\Theta)}{\sum}|\mathsf{S}|_{ \alpha-1}\] \[\leq\frac{2}{3}(1+\nu)b(\Delta)-\theta\underset{\mathsf{S}\in \operatorname{sides}(\Delta)}{\sum}|\mathsf{S}|_{\alpha}\] Figure 10. Since \(\Delta\) is small, this easily implies that \(\mathsf{v}\) and \(\mathsf{S}\) are adjacent in the boundary of \(\Delta\). Up to symmetry, assume that \(\mathsf{v}\mathsf{S}\) occurs in a boundary loop of \(\Delta\). so \(\mathsf{R}=\mathsf{R}_{1}\mathsf{QR}_{2}\) and \(\mathsf{S}=\mathsf{S}_{1}\mathsf{PS}_{2}\). Note that \(\mathit{label}(\mathsf{S}_{1}\mathsf{P})\) is empty (otherwise \(\mathsf{v}_{1}\mathsf{R}_{1}\mathsf{u}_{1}^{-1}\) would give a bond in \(\Delta\) after refinement) and \(\mathit{label}(\mathsf{QR}_{2})\) is nonempty (because \(\mathsf{u}_{1}\) is a bond in \(\Delta_{\alpha-1}\)). We cut off the subdiagram of \(\Delta\) bounded by \(\mathsf{QR}_{2}\mathsf{v}_{2}\mathsf{S}_{1}\mathsf{Pu}_{1}\). As in the proof of the previous claim, the operation does not change the values of terms in (7-7), the value of \(c(\Delta_{\alpha-1})\) and keeps the assumption that \(\Delta\) is small. On the other hand, we decrease the total length of labels of sides \(\Delta_{\alpha-1}\). The claim is proved. where \(\Theta\) runs over all connected components of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\). 
Applying Proposition 7.3 we obtain \[\frac{2}{3}(1+\nu)b(\Delta)-\theta\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Proof.: (i) Assume that \(\Theta\) is simply connected. We consider \(\Theta\) as a diagram of rank \(\alpha\) with a single side that is a subpath of \(\mathsf{S}\). The assumption that \(\Delta\) is reduced implies that \(\Theta\) is reduced. By Lemma 7.5\(\Theta\) has no cells of rank \(\alpha\). Then we obtain a contradiction with the choice of a tight set \(\mathcal{T}\) of contiguity subdiagrams of \(\Delta\). (ii) Assume that \(\Theta^{\prime}\) is simply connected. Let \(\partial\Theta^{\prime}=\mathsf{Ru}\) where \(\mathsf{R}^{-1}\) occurs in the boundary loop of \(\mathsf{D}\) and \(\mathsf{u}^{-1}\) is the bond in \(\Delta_{\alpha-1}\) that occurs in \(\partial\Pi\). We consider \(\Theta^{\prime}\) as a a diagram of rank \(\alpha\) with one side \(\mathsf{S}\) labeled by the empty word and one bridge \(\mathsf{Ru}\) (formally, to fit the definition in 5.1 we have to take a copy of \(\Theta^{\prime}\) and perform a refinement to make \(\mathsf{S}\) a non-empty path). By Lemma 7.5\(\Theta^{\prime}\) has no cells of rank \(\alpha\) and we come to a contradiction since in this case \(\mathsf{u}^{-1}\) cannot be a bond in \(\Delta_{\alpha-1}\) due to condition (iii) of Definition 6.1. (iii) follows from (i) and (ii). **7.11 Proposition** (diagrams of small complexity are single layered).: _Let \(\Delta\) be a reduced diagram of rank \(\alpha\geq 1\) and let \(\mathcal{T}\) be a tight set of contiguity subdiagrams of \(\Delta\)._ 1. _If_ \(\Delta\) _is a disk diagram of bigon type then every cell of rank_ \(\alpha\) _of_ \(\Delta\) _has a contiguity subdiagram_ \(\Pi\in\mathcal{T}\) _to each of the two sides of_ \(\Delta\)_._ 2. _If_ \(\Delta\) _is a disk diagram of trigon or tetragon type then every cell of rank_ \(\alpha\) _of_ \(\Delta\) _has contiguity subdiagrams_ \(\Pi\in\mathcal{T}\) _to at least two sides of_ \(\Delta\)_._ 3. _If_ \(\Delta\) _is an annular diagram with two cyclic sides then every cell of rank_ \(\alpha\) _of_ \(\Delta\) _has a contiguity subdiagram_ \(\Pi\in\mathcal{T}\) _to each of the sides of_ \(\Delta\)_._ 4. _If_ \(\Delta\) _is an annular diagram with one cyclic side and one non-cyclic side then every cell_ \(\mathsf{D}\) _of rank_ \(\alpha\) _of_ \(\Delta\) _has at least two contiguity subdiagrams_ \(\Pi,\Pi^{\prime}\in\mathcal{T}\) _to sides of_ \(\Delta\)_. 
_Here we admit the possibility that both \(\Pi\) and \(\Pi^{\prime}\) are contiguity subdiagrams between \(\mathsf{D}\) and the non-cyclic side of \(\Delta\)._

Proof.: Let \(\Delta\) be a reduced diagram of rank \(\alpha\) of a type listed in (i)-(iv). We call a cell \(\mathsf{D}\) of rank \(\alpha\) of \(\Delta\) _regular_ if it satisfies the conclusion of the corresponding statement (i)-(iv) and _exceptional_ otherwise. We need to prove that \(\Delta\) has no exceptional cells. Observe that by Lemma 7.10, an exceptional cell has at most one contiguity subdiagram to sides of \(\Delta\), i.e. such a cell satisfies condition (*) of Proposition 7.4. We use induction on the number \(M\) of cells of rank \(\alpha\) of \(\Delta\).

(i) Let \(\Delta\) be of bigon type, i.e. a disk diagram with two sides. If \(\Delta\) has no regular cells of rank \(\alpha\) but has at least one exceptional cell then application of Proposition 7.4 gives a contradiction. Assume that \(\mathsf{D}\) is a regular cell of \(\Delta\). Let \(\Pi_{i}\) (\(i=1,2\)) be the contiguity subdiagram of \(\mathsf{D}\) to \(\mathsf{X}_{i}\). The complement of \(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2}\) in \(\Delta\) consists of two components \(\Delta_{1}\) and \(\Delta_{2}\) of bigon type with the induced boundary marking of rank \(\alpha\) (see Figure 12a). The set of subdiagrams \(\Pi\in\mathcal{T}\) contained in \(\Delta_{i}\) is a tight set of contiguity subdiagrams of \(\Delta_{i}\). Each of the subdiagrams \(\Delta_{i}\) has a smaller number of cells of rank \(\alpha\), so the statement follows by induction.

(ii) Let \(\Delta\) be of trigon or tetragon type. Assume that \(\Delta\) has a regular cell \(\mathsf{D}\). Let \(\Pi_{i}\) (\(i=1,2\)) be contiguity subdiagrams of \(\mathsf{D}\) to sides of \(\Delta\). The complement \(\Delta-(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2})\) consists of two components \(\Delta_{1}\) and \(\Delta_{2}\) with the induced boundary marking of rank \(\alpha\) (Figure 12b) making them diagrams of rank \(\alpha\). If \(\Delta\) is of trigon type then \(\Delta_{1}\) and \(\Delta_{2}\) are of trigon and bigon types. If \(\Delta\) is of tetragon type then either \(\Delta_{1}\) and \(\Delta_{2}\) are of tetragon and bigon types, or both \(\Delta_{i}\) are of trigon type. Then we can refer to (i) and the inductive hypothesis. Assume that all cells of rank \(\alpha\) of \(\Delta\) are exceptional. Then by Proposition 7.4 (7-9) \[\theta M\leq\frac{8}{3}(1+\nu)-1\] which implies \(M\leq 2\). Following the proof of Proposition 7.4 we compute a better bound for \(M\) and conclude that \(M=0\). Assume that \(M\geq 1\) and let \(\mathsf{D}\) be a cell of rank \(\alpha\) of \(\Delta\). Consider the discrete connection \(w\) on \((\Delta,\mathcal{T})\) defined in the proof of Proposition 7.4. An upper bound for \(\kappa(\mathsf{D})\) is given by (7-4). The right-hand side of (7-4) is a linear expression in \(r\) and, as we have seen in the proof of Proposition 7.4, in the case \(r\leq 9\) the coefficient of \(r\) is positive. To get a value for the upper bound, we compute the maximal possible value of \(r\). Observe that by Lemma 7.10, \(\mathsf{D}\) has no contiguity subdiagrams to itself, has at most one contiguity subdiagram to another cell of rank \(\alpha\) of \(\Delta\) (if that cell exists) and the number of contiguity subdiagrams of \(\mathsf{D}\) to sides of type II is at most \(4\); so \(r\leq 5\). Then the maximal value of the right-hand side of (7-4) is achieved when \(r=5\).
Substituting \(r=5\) into (7-4) and using (2-3) we obtain \[\kappa(\mathsf{D})\leq\frac{2}{3}(1+\nu)-\frac{5}{6}(1-2\nu)-4\zeta\theta\lambda\Omega\leq-\frac{1}{6}+\frac{7}{3}\nu-4\theta=-\frac{138}{54}.\] By (7-6) \[\kappa(\partial\Delta)\leq\frac{8}{3}(1+\nu)=\frac{152}{54}.\] Proposition 7.3 gives \[1=\kappa(\Delta)+\kappa(\partial\Delta)\leq\frac{14}{54}.\] The contradiction shows that the assumption \(M\geq 1\) is impossible.

(iii): Similarly to the proof of (ii), assume first that \(\Delta\) has a regular cell \(\mathsf{D}\) of rank \(\alpha\) with two contiguity subdiagrams \(\Pi_{1}\) and \(\Pi_{2}\) to sides of \(\Delta\). By Lemma 7.10(i) these are contiguity subdiagrams to distinct sides of \(\Delta\). Then the complement \(\Delta-(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2})\) is a diagram of bigon type and the statement follows directly from (i). If all cells of rank \(\alpha\) of \(\Delta\) are exceptional and there is at least one cell of rank \(\alpha\) then application of Proposition 7.4 gives an immediate contradiction.

(iv): Assume that \(\Delta\) has a regular cell \(\mathsf{D}\) of rank \(\alpha\) with two contiguity subdiagrams \(\Pi_{i}\) (\(i=1,2\)) to sides of \(\Delta\). There are two cases depending on whether or not \(\Pi_{1}\) and \(\Pi_{2}\) are contiguity subdiagrams to distinct sides of \(\Delta\) (see Figure 13). In the first case, the complement \(\Delta-(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2})\) is a diagram of trigon type and the statement follows from the already proved part (ii). In the second case, \(\Delta-(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2})\) consists of a simply connected component \(\Delta_{1}\) and an annular component \(\Delta_{2}\) with one non-cyclic side. For cells of rank \(\alpha\) in \(\Delta_{1}\) the statement follows by (i) and for cells of rank \(\alpha\) in \(\Delta_{2}\) we can apply induction since \(\Delta_{2}\) has a strictly smaller number of cells of rank \(\alpha\) than \(\Delta\). If all cells of rank \(\alpha\) of \(\Delta\) are exceptional then application of Proposition 7.4 gives \(M=0\).

Figure 12.

**7.12 Proposition** (small diagrams of trigon or tetragon type).: _Let \(\Delta\) be a small diagram of rank \(\alpha\) of trigon or tetragon type with sides \(\mathsf{S}_{i}\) (\(1\leq i\leq k\), \(k=3\) or \(k=4\)). Then_ \[\sum_{i=1}^{3}|\mathsf{S}_{i}|_{\alpha}\leq 4\zeta\eta\quad\text{or}\quad\sum_{i=1}^{4}|\mathsf{S}_{i}|_{\alpha}\leq 6\zeta\eta\] _in the trigon and tetragon cases, respectively._

Proof.: By Proposition 6.4(iii) we may assume that \(\alpha\geq 1\). We assume that \(\Delta\) is reduced and is given a tight set \(\mathcal{T}\) of contiguity subdiagrams. Following arguments from the proof of Proposition 7.8 we can assume that Claims 1-3 from that proof hold in our case. By Claim 2 and Proposition 7.11(ii), \(\Delta\) has no cells of rank \(\alpha\). By Claims 1 and 3, \(\mathcal{T}\) has only contiguity subdiagrams between sides of \(\Delta_{\alpha-1}\) of type II. Hence any side of \(\Delta\) occurs entirely in a boundary loop of a connected component \(\Theta\) of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\). By Lemma 6.10, \(\sum_{\Theta}c(\Theta)=c(\Delta_{\alpha-1})\). Applying Proposition 7.9\({}_{\alpha-1}\) to the components \(\Theta\) of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\) we obtain \[\sum_{i}|\mathsf{S}_{i}|_{\alpha-1}\leq\eta c(\Delta_{\alpha-1})\leq(b(\Delta_{\alpha-1})-2)\eta\] which gives the required inequality by 4.14(iii).
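The curvature computations in 7.4-7.12 all instantiate the same combinatorial Gauss-Bonnet pattern; we record it schematically (this recap only rearranges estimates already established above and introduces no new constants). One chooses a discrete connection \(w\) on \((\Delta,\mathcal{T})\) for which every cell of rank \(\alpha\), every contiguity subdiagram \(\Pi\in\mathcal{T}\) and every component \(\Theta\) of \(\Delta_{\alpha-1}-\bigcup_{\Pi\in\mathcal{T}}\Pi\) has non-positive curvature, so that Proposition 7.3 collapses to a boundary estimate: \[1=\kappa(\Delta)+\kappa(\partial\Delta)\leq\kappa(\partial\Delta).\] Any upper bound on \(\kappa(\partial\Delta)\) in terms of \(b(\Delta)\) and the side lengths \(|\mathsf{S}|_{\alpha}\) then yields a length inequality, while a numerical violation of the estimate (as in the computation above) rules out the presence of cells of rank \(\alpha\).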
**7.13 Proposition** (cell in a diagram of small complexity).: _Let \(\Delta\) be a reduced diagram of rank \(\alpha\geq 1\) of one of the types listed in Proposition 7.11. Let \(\mathcal{T}\) be a tight set of contiguity subdiagrams on \(\Delta\) and let \(\mathsf{D}\) be a cell of rank \(\alpha\) of \(\Delta\). Let \(\mathsf{P}_{i}\), \(i=1,2,\ldots,r\), be the contiguity arcs of contiguity subdiagrams of \(\mathsf{D}\) to sides of \(\Delta\) that occur in \(\delta\mathsf{D}\). Then:_

1. _If \(\Delta\) has bigon type or is an annular diagram with two cyclic sides then \(r=2\) and_ \[\mu(\mathsf{P}_{1})+\mu(\mathsf{P}_{2})\geq 1-2\lambda-16\zeta\eta\omega.\]
2. _If \(\Delta\) has trigon type then \(2\leq r\leq 3\) and_ \[\sum_{i=1}^{r}\mu(\mathsf{P}_{i})\geq 1-3\lambda-24\zeta\eta\omega.\]
3. _If \(\Delta\) is an annular diagram with one cyclic side and one non-cyclic side then \(2\leq r\leq 3\) and_ \[\sum_{i=1}^{r}\mu(\mathsf{P}_{i})\geq 1-4\lambda-24\zeta\eta\omega.\]

Figure 13.

Proof.: Assume that \(\mathsf{C}\) is another cell of rank \(\alpha\) of \(\Delta\). By Proposition 7.11, \(\mathsf{C}\) has at least two contiguity subdiagrams \(\Pi_{1}\), \(\Pi_{2}\) to sides of \(\Delta\). Let \(\Delta^{\prime}\) be the connected component of \(\Delta-\mathsf{C}-\Pi_{1}-\Pi_{2}\) containing \(\mathsf{D}\). Then \(\Delta^{\prime}\) inherits from \(\Delta\) the boundary marking of rank \(\alpha\) and the tight set of contiguity subdiagrams. Observe that \(\Delta^{\prime}\) is again a diagram of rank \(\alpha\) of one of the types in cases (i)-(iii); moreover, it is of the same type (i)-(iii) or has a smaller complexity. In this case the statement is reduced by induction to the case of a diagram with a smaller number of cells of rank \(\alpha\). It remains to consider the case when \(\mathsf{D}\) is the only cell of rank \(\alpha\) of \(\Delta\). The equality \(r=2\) in (i) and the bound \(2\leq r\leq 3\) in (ii) and (iii) follow from Lemma 7.10. With bounds from Lemmas 6.6, 6.8, Propositions 7.9, 7.12 for \(\alpha:=\alpha-1\) and inequality (4-2), an easy analysis shows that the worst cases for the lower bound on \(\sum_{i}\mu(\mathsf{P}_{i})\) are as shown in Figure 14. We then get the corresponding inequality in (i)-(iii).

Figure 14.

## 8. Fragments

In this section we establish several properties of fragments of rank \(\alpha\geq 1\). Most of them are proved using facts about relations in \(G_{\alpha-1}\). Starting from this point we use extensively statements from the subsequent Sections 9-13 for values of rank \(\beta<\alpha\). We also switch our main action scene to the Cayley graphs \(\Gamma_{\alpha-1}\) and \(\Gamma_{\alpha}\). All statements in this section are formulated and proved under the assumption \(\alpha\geq 1\). The following observation is a consequence of the assumption that the graded presentation of \(G_{\alpha}\) is normalized, condition (S3) and the fact that centralizers of non-torsion elements of \(G_{\alpha-1}\) are cyclic (Proposition 13.8\({}_{\alpha-1}\)). Recall that two periodic lines \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) in \(\Gamma_{\alpha-1}\) are called parallel if \(s_{P_{1},\mathsf{L}_{1}}=s_{P_{2},\mathsf{L}_{2}}\) where \(P_{i}\) is the period of \(\mathsf{L}_{i}\) (see 4.2).
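In the notation of 4.2 (an orienting remark, not a new definition: we use only that \(s_{P,\mathsf{L}}\) denotes the shift of the periodic line \(\mathsf{L}\) determined by its period \(P\)), parallelism of \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) means that the two lines are invariant under one and the same translation: \[g=s_{P_{1},\mathsf{L}_{1}}=s_{P_{2},\mathsf{L}_{2}},\qquad g\mathsf{L}_{i}=\mathsf{L}_{i}\quad(i=1,2).\] The next lemma shows that for periods which are relators of rank \(\alpha\) this already forces the two lines to coincide.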
**8.1 Lemma**.: _If \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) are two parallel periodic lines in \(\Gamma_{\alpha-1}\) whose periods are relators of rank \(\alpha\) then \(\mathsf{L}_{1}=\mathsf{L}_{2}\)._

Proof.: Let \(\mathsf{L}_{i}\) (\(i=1,2\)) be two parallel periodic lines in \(\Gamma_{\alpha-1}\) whose periods \(R_{i}\) are relators of rank \(\alpha\). Up to a cyclic shift of \(R_{i}\) we can assume that \(R_{i}\in\mathcal{X}_{\alpha}^{\pm 1}\) where \(\mathcal{X}_{\alpha}\) is the set of defining relators of rank \(\alpha\) in the presentation (2-1). Let \(\mathsf{v}_{i}\) be a vertex on \(\mathsf{L}_{i}\) such that the label of \(\mathsf{L}_{i}\) starts at \(\mathsf{v}_{i}\) with \(R_{i}\). Let \(g=\mathsf{v}_{1}^{-1}\mathsf{v}_{2}\in G_{\alpha-1}\) (recall that we identify vertices of \(\Gamma_{\alpha-1}\) with elements of \(G_{\alpha-1}\)). Since \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) are parallel we have \(gR_{2}g^{-1}=R_{1}\). By (S3) we have either \(R_{1},R_{2}\in\mathcal{X}_{\alpha}\) or \(R_{1}^{-1},R_{2}^{-1}\in\mathcal{X}_{\alpha}\), so according to Definition 2.10, we get \(R_{1}=R_{2}\) and \(R_{1}=R_{0}^{t}\) where \(R_{0}\) is the root of \(R_{1}\). Since the centralizer of \(R_{1}\) is cyclic, we have \(g=R_{0}^{k}\) for some integer \(k\). This implies \(\mathsf{L}_{1}=\mathsf{L}_{2}\).

**8.2 Corollary** (small cancellation in the Cayley graph).: _Let \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) be periodic lines in \(\Gamma_{\alpha-1}\) with periods \(R_{1}\) and \(R_{2}\), respectively, where both \(R_{i}\) are relators of rank \(\alpha\). Assume that \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) have close subpaths \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) such that \(|\mathsf{S}_{1}|_{\alpha-1}\geq\lambda|R_{1}|_{\alpha-1}\). Then \(\mathsf{L}_{1}=\mathsf{L}_{2}\)._

Proof.: If \(|\mathsf{S}_{i}|\leq|R_{i}|\) for \(i=1,2\) then the statement follows directly from condition (S2-Cayley) in 4.12. Let \(|\mathsf{S}_{1}|>|R_{1}|\) or \(|\mathsf{S}_{2}|>|R_{2}|\). Using Proposition 9.21\({}_{\alpha-1}\) and condition (S1) we find close subpaths \(\mathsf{S}_{1}^{\prime}\) and \(\mathsf{S}_{2}^{\prime}\) of \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) with \(|\mathsf{S}_{i}^{\prime}|\leq|R_{i}|\), \(i=1,2\), and \(|\mathsf{S}_{j}^{\prime}|_{\alpha-1}\geq\lambda|R_{j}|_{\alpha-1}\) for \(j=1\) or \(j=2\). This reduces the statement to the previous case.

**8.3 Proposition**.: _A relator of rank \(\alpha\) is strongly cyclically reduced in \(G_{\alpha-1}\)._

Proof.: Let \(R\) be a relator of rank \(\alpha\). Assume that some power \(R^{t}\) is not reduced in \(G_{\alpha-1}\). According to the definition in 2.5, for some \(1\leq\beta\leq\alpha-1\) there exists a subword \(S\) of \(R^{t}\) which is close in \(G_{\beta-1}\) to a piece \(P\) of rank \(\beta\) with \(\mu(P)>\rho\). Since \(R\) is cyclically reduced in \(G_{\alpha-1}\) we have \(|S|>|R|\). Then according to the definition in 2.6 we have \(|R^{\circ}|_{\beta}\leq 1\) and hence \[|R^{\circ}|_{\alpha-1}\leq\zeta^{\alpha-\beta-1}|R^{\circ}|_{\beta}\leq 1\] contradicting (S1) and (2-3).

8.4. A _fragment path of rank \(\alpha\)_ in \(\Gamma_{\alpha-1}\) is a path \(\mathsf{F}\) labeled by a fragment of rank \(\alpha\). We assume that \(\mathsf{F}\) has an associated \(R\)-periodic segment \(\mathsf{P}\) with \(R\in\mathcal{X}_{\alpha}\) which is close to \(\mathsf{F}\). We call \(\mathsf{P}\) the _base_ for \(\mathsf{F}\). Note that this agrees with the definition in 2.6.
If \(F\) is a fragment of rank \(\alpha\) with associated triple \((P,u,v)\) and \(\mathsf{F}\) is a path in \(\Gamma_{\alpha-1}\) with \(\mathit{label}(\mathsf{F})=F\) then the loop \(\mathsf{F}^{-1}\mathsf{u}\mathsf{P}\mathsf{v}\) with \(\mathit{label}(\mathsf{u}\mathsf{P}\mathsf{v})=uPv\) gives a base \(\mathsf{P}\) for \(\mathsf{F}\). Conversely, if \(\mathsf{F}\) is a fragment of rank \(\alpha\) in \(\Gamma_{\alpha-1}\) with base \(\mathsf{P}\) then choosing a loop \(\mathsf{F}^{-1}\mathsf{u}\mathsf{P}\mathsf{v}\) with \(\mathit{label}(\mathsf{u}),\mathit{label}(\mathsf{v})\in\mathcal{H}_{\alpha-1}\) and denoting by \(F\), \(P\), \(u\) and \(v\) the corresponding labels we obtain a fragment \(F\) of rank \(\alpha\) with associated triple \((P,u,v)\). If \(\beta\geq\alpha\) and paths \(\mathsf{F}\) and \(\mathsf{P}\) in \(\Gamma_{\beta}\) are obtained by mapping a fragment \(\bar{\mathsf{F}}\) of rank \(\alpha\) with base \(\bar{\mathsf{P}}\) in \(\Gamma_{\alpha-1}\) then, by definition, we consider \(\mathsf{F}\) as a fragment of rank \(\alpha\) with base \(\mathsf{P}\) in \(\Gamma_{\beta}\). Abusing the language, we will use the term 'fragment' for both fragment words and fragment paths in \(\Gamma_{\beta}\). Recall that by a convention in 4.2, a base \(\mathsf{P}\) for a fragment \(\mathsf{F}\) of rank \(\alpha\) in \(\Gamma_{\beta}\) has an associated relator \(R\) of rank \(\alpha\) and the unique infinite \(R\)-periodic extension \(\mathsf{L}\). If \(\beta=\alpha-1\) then \(\mathsf{L}\) is a bi-infinite path (which is simple by Proposition 8.3) that we call the _base axis_ for \(\mathsf{F}\). If \(\beta\geq\alpha\) then \(\mathsf{L}\) is winding over a relator loop labeled \(R\) that we call the _base relator loop_ for \(\mathsf{F}\).

8.5. We describe a way to measure fragments of rank \(\alpha\). If \(P\) is a subword of a word \(R^{k}\) where \(R\) is a relator of rank \(\alpha\) then we define (8-1) \[\mu(P)=\frac{|P|_{\alpha-1}}{|R^{\circ}|_{\alpha-1}}.\] Note that this agrees with the definition in 4.11 of the function \(\mu(S)\) on the set of pieces \(S\) of rank \(\alpha\). If \(F\) is a fragment of rank \(\alpha\geq 1\) then the size \(\mu_{\mathrm{f}}(F)\) of \(F\) is defined to be equal to \(\mu(P)\) where \(P\) is the associated subword of \(R^{k}\) and \(R\) is the associated relator of rank \(\alpha\). Thus, for example, \(\mu_{\mathrm{f}}(F)=\frac{1}{2}\) means approximately that \(F\) is close in rank \(\alpha-1\) to a "half" of its associated relator of rank \(\alpha\). If \(\mathsf{F}\) is a fragment of rank \(\alpha\) in \(\Gamma_{\beta}\) then we set \(\mu_{\mathrm{f}}(\mathsf{F})=\mu_{\mathrm{f}}(\mathit{label}(\mathsf{F}))\). This means that \(\mu_{\mathrm{f}}(\mathsf{F})\) is given by the formula \[\mu_{\mathrm{f}}(\mathsf{F})=\frac{|\mathsf{P}|_{\alpha-1}}{|R^{\circ}|_{\alpha-1}},\] where \(\mathsf{P}\) is the base for \(\mathsf{F}\) and \(R\) is the relator associated with \(\mathsf{P}\). Using Proposition 9.21\({}_{<\alpha}\) we can easily reformulate the definition in 2.5 of a word reduced in \(G_{\alpha}\) in the following way: a word \(X\) is reduced in \(G_{\alpha}\) if and only if \(X\) is freely reduced and contains no fragments \(F\) of rank \(1\leq\beta\leq\alpha\) with \(\mu_{\mathrm{f}}(F)>\rho\).

**8.6 Definition**.: Two fragments \(\mathsf{F}\) and \(\mathsf{G}\) of rank \(\alpha\) in \(\Gamma_{\alpha-1}\) are _compatible_ if their base axes are parallel.
Note that by Lemma 8.1, the base axes of fragments of rank \(\alpha\) are parallel if and only if they coincide. In the case \(\beta\geq\alpha\), two fragments \(\mathsf{F}\) and \(\mathsf{G}\) of rank \(\alpha\) in \(\Gamma_{\beta}\) are defined to be compatible if they have compatible lifts in \(\Gamma_{\alpha-1}\), or, equivalently, \(\mathsf{F}\) and \(\mathsf{G}\) have the same base relator loop. It will be convenient to extend the compatibility relation to fragments of rank \(0\). Recall that according to the definition in 2.6, fragments of rank \(0\) are letters in \(\mathcal{A}^{\pm 1}\). Thus, fragments of rank \(0\) in \(\Gamma_{\beta}\) are paths of length \(1\). By definition, fragments \(\mathsf{F}\) and \(\mathsf{G}\) of rank \(0\) in \(\Gamma_{\beta}\) are compatible if and only if \(\mathsf{F}=\mathsf{G}\). We write compatibility of fragments as \(\mathsf{F}\sim\mathsf{G}\). Note that we have in fact a family of relations with two parameters \(\alpha\geq 0\) and \(\beta\geq\max(0,\alpha-1)\): compatibility of fragments of rank \(\alpha\) in \(\Gamma_{\beta}\). The values of \(\beta\) and \(\alpha\) will always be clear from the context. Below we will also use the "compatibility up to inversion" relation on the set of fragments of rank \(\alpha\) in \(\Gamma_{\beta}\), denoted \(\mathsf{F}\sim\mathsf{G}^{\pm 1}\) and meaning that \(\mathsf{F}\sim\mathsf{G}\) or \(\mathsf{F}\sim\mathsf{G}^{-1}\). Both are obviously equivalence relations.

**8.7 Proposition** (fragment stability in bigon of the previous rank).: _Let \(\alpha\geq 1\). Let \(\mathsf{X}\) and \(\mathsf{Y}\) be reduced close paths in \(\Gamma_{\alpha-1}\). Let \(\mathsf{K}\) be a fragment of rank \(\alpha\) in \(\mathsf{X}\) with \(\mu_{\mathrm{f}}(\mathsf{K})\geq 2.3\omega\). Then there exists a fragment \(\mathsf{M}\) of rank \(\alpha\) in \(\mathsf{Y}\) such that \(\mathsf{M}\sim\mathsf{K}\) and_ \[\mu_{\mathrm{f}}(\mathsf{M})>\mu_{\mathrm{f}}(\mathsf{K})-2.6\omega.\]

Proof.: Let \(\mathsf{P}\) be the base for \(\mathsf{K}\). By (4-2) and Proposition 10.16\({}_{\alpha-1}\) we have \(\mathsf{P}=\mathsf{z}_{1}\mathsf{P}^{\prime}\mathsf{z}_{2}\) where \(\mathsf{P}^{\prime}\) is close to a subpath \(\mathsf{M}\) of \(\mathsf{Y}\) and \(|\mathsf{z}_{i}|_{\alpha-1}<1.3\) (\(i=1,2\)). Then \(\mathsf{M}\) is a fragment of rank \(\alpha\) with base \(\mathsf{P}^{\prime}\), so \(\mu_{\mathrm{f}}(\mathsf{M})=\mu(\mathsf{P}^{\prime})\). By (4-2) \[\mu(\mathsf{z}_{1})+\mu(\mathsf{z}_{2})<2.6\omega\] and hence \[\mu(\mathsf{P}^{\prime})>\mu(\mathsf{P})-2.6\omega=\mu_{\mathrm{f}}(\mathsf{K})-2.6\omega.\]

**8.8 Proposition** (fragment stability in trigon of the previous rank).: _Let \(\mathsf{X}^{-1}*\mathsf{Y}_{1}*\mathsf{Y}_{2}*\) be a coarse trigon in \(\Gamma_{\alpha-1}\). Let \(\mathsf{K}\) be a fragment of rank \(\alpha\) in \(\mathsf{X}\) such that \(\mu_{\mathrm{f}}(\mathsf{K})\geq 2.5\omega\)._
_Then at least one of the following statements holds:_

* _For \(i=1\) or \(i=2\) there is a fragment \(\mathsf{M}_{i}\) of rank \(\alpha\) in \(\mathsf{Y}_{i}\) such that \(\mathsf{M}_{i}\sim\mathsf{K}\) and_ \[\mu_{\mathrm{f}}(\mathsf{M}_{i})>\mu_{\mathrm{f}}(\mathsf{K})-2.8\omega.\]
* _For each \(i=1,2\) there is a fragment \(\mathsf{M}_{i}\) of rank \(\alpha\) in \(\mathsf{Y}_{i}\) such that \(\mathsf{M}_{i}\sim\mathsf{K}\) and_ \[\mu_{\mathrm{f}}(\mathsf{M}_{1})+\mu_{\mathrm{f}}(\mathsf{M}_{2})>\mu_{\mathrm{f}}(\mathsf{K})-3\omega.\]

Proof.: This follows from Proposition 10.18\({}_{\alpha-1}\) in the same way as in the proof of Proposition 8.7.

**8.9 Proposition** (fragment stability in conjugacy relations of the previous rank).: _Let \(X\) be a word cyclically reduced in \(G_{\alpha-1}\). Let \(Y\) be a word reduced in \(G_{\alpha-1}\), \(u\in\mathcal{H}_{\alpha-1}\) and \(Yu=z^{-1}Xz\) in \(G_{\alpha-1}\) for some \(z\). We represent the conjugacy relation by two lines \(\ldots\mathsf{Y}_{-1}\mathsf{u}_{-1}\mathsf{Y}_{0}\mathsf{u}_{0}\mathsf{Y}_{1}\mathsf{u}_{1}\ldots\) and \(\mathsf{X}=\ldots\mathsf{X}_{-1}\mathsf{X}_{0}\mathsf{X}_{1}\ldots\) in \(\Gamma_{\alpha-1}\) where \(\mathit{label}(\mathsf{X}_{i})=X\), \(\mathit{label}(\mathsf{Y}_{i})=Y\) and \(\mathit{label}(\mathsf{u}_{i})=u\) (see 4.3). Let \(\mathsf{K}\) be a fragment of rank \(\alpha\) in \(\mathsf{X}\) with \(|\mathsf{K}|\leq|X|\) and \(\mu_{\mathrm{f}}(\mathsf{K})\geq 2.5\omega\). Then at least one of the following statements is true:_

* _For some \(i\), there is a fragment \(\mathsf{M}\) of rank \(\alpha\) in \(\mathsf{Y}_{i}\) such that \(\mathsf{M}\sim\mathsf{K}\) and_ \[\mu_{\mathrm{f}}(\mathsf{M})>\mu_{\mathrm{f}}(\mathsf{K})-2.9\omega.\]
* _For some \(i\), there are fragments \(\mathsf{M}_{1}\) and \(\mathsf{M}_{2}\) of rank \(\alpha\) in \(\mathsf{Y}_{i}\) and \(\mathsf{Y}_{i+1}\) respectively such that \(\mathsf{M}_{i}\sim\mathsf{K}\) \((i=1,2)\) and_ \[\mu_{\mathrm{f}}(\mathsf{M}_{1})+\mu_{\mathrm{f}}(\mathsf{M}_{2})>\mu_{\mathrm{f}}(\mathsf{K})-3\omega.\]

Proof.: Follows from Proposition 10.19\({}_{\alpha-1}\).

**8.10 Proposition** (inclusion implies compatibility).: _Let \(\mathsf{K}\) and \(\mathsf{M}\) be fragments of rank \(\alpha\) in \(\Gamma_{\beta}\), \(\beta\geq\alpha-1\). Assume that \(\mathsf{K}\) is contained in \(\mathsf{M}\) and \(\mu_{\mathrm{f}}(\mathsf{K})\geq\lambda+2.6\omega\). Then \(\mathsf{K}\sim\mathsf{M}\)._

Proof.: First consider the case \(\beta=\alpha-1\). Let \(\mathsf{P}\) and \(\mathsf{Q}\) be bases for \(\mathsf{K}\) and \(\mathsf{M}\), respectively. By Proposition 10.16\({}_{\alpha-1}\), there are close subpaths \(\mathsf{P}^{\prime}\) of \(\mathsf{P}\) and \(\mathsf{Q}^{\prime}\) of \(\mathsf{Q}\) such that \(\mu(\mathsf{P}^{\prime})\geq\lambda\). Then by Corollary 8.2, \(\mathsf{P}\) and \(\mathsf{Q}\) have the same infinite periodic extension and we conclude that \(\mathsf{K}\) and \(\mathsf{M}\) are compatible. If \(\beta\geq\alpha\) then we consider lifts \(\tilde{\mathsf{K}}\) and \(\tilde{\mathsf{M}}\) of \(\mathsf{K}\) and \(\mathsf{M}\) in \(\Gamma_{\alpha-1}\) such that \(\tilde{\mathsf{K}}\) is contained in \(\tilde{\mathsf{M}}\) and apply the already proved part.

**8.11 Proposition** (dividing a fragment).: _Let \(\mathsf{K}\) be a fragment of rank \(\alpha\) in \(\Gamma_{\beta}\), \(\beta\geq\alpha-1\)._
_If \(\mathsf{K}=\mathsf{K}_{1}\mathsf{K}_{2}\) then either \(\mathsf{K}_{1}\) or \(\mathsf{K}_{2}\) contains a fragment \(\mathsf{F}\) of rank \(\alpha\) with \(\mathsf{F}\sim\mathsf{K}\) and \(\mu_{\mathrm{f}}(\mathsf{F})>\mu_{\mathrm{f}}(\mathsf{K})-\zeta\omega\), or \(\mathsf{K}\) can be represented as \(\mathsf{K}=\mathsf{F}_{1}\mathsf{u}\mathsf{F}_{2}\) where \(\mathsf{F}_{i}\) are fragments of rank \(\alpha\), \(\mathsf{F}_{1}\) is a start of \(\mathsf{K}_{1}\), \(\mathsf{F}_{2}\) is an end of \(\mathsf{K}_{2}\), \(\mathsf{F}_{1}\sim\mathsf{F}_{2}\sim\mathsf{K}\) and_ \[\mu_{\mathrm{f}}(\mathsf{F}_{1})+\mu_{\mathrm{f}}(\mathsf{F}_{2})>\mu_{\mathrm{f}}(\mathsf{K})-\zeta\omega.\]

Proof.: If \(\alpha=1\) then \(\mathsf{u}\) can be taken empty and the statement is trivial. If \(\beta=\alpha-1\geq 1\) then the statement follows from Proposition 9.21\({}_{\alpha-1}\). The case \(\beta>\alpha-1\) follows from the case \(\beta=\alpha-1\).

As an immediate consequence of Propositions 8.10 and 8.11 we get:

**8.12 Proposition** (overlapping fragments).: _Let \(\mathsf{X}\) be a reduced path in \(\Gamma_{\beta}\), \(\beta\geq\alpha-1\). Let \(\mathsf{K}\) and \(\mathsf{M}\) be non-compatible fragments of rank \(\alpha\) in \(\mathsf{X}\). Assume that \(\mathsf{K}\leq\mathsf{M}\) and \(\mu_{\mathrm{f}}(\mathsf{K}),\mu_{\mathrm{f}}(\mathsf{M})\geq\lambda+2.7\omega\). Then there are a start \(\mathsf{K}_{1}\) of \(\mathsf{K}\) disjoint from \(\mathsf{M}\) and an end \(\mathsf{M}_{1}\) of \(\mathsf{M}\) disjoint from \(\mathsf{K}\) such that \(\mathsf{K}_{1}\) and \(\mathsf{M}_{1}\) are fragments of rank \(\alpha\), \(\mathsf{K}_{1}\sim\mathsf{K}\), \(\mathsf{M}_{1}\sim\mathsf{M}\), \(\mu_{\mathrm{f}}(\mathsf{K})-\mu_{\mathrm{f}}(\mathsf{K}_{1})<\lambda+2.7\omega\) and \(\mu_{\mathrm{f}}(\mathsf{M})-\mu_{\mathrm{f}}(\mathsf{M}_{1})<\lambda+2.7\omega\)._

**8.13 Proposition** (union of fragments).: _Let \(\mathsf{X}\) be a reduced path in \(\Gamma_{\alpha-1}\) and let \(\mathsf{K}_{i}\) \((i=1,2)\) be compatible fragments of rank \(\alpha\) in \(\mathsf{X}\). Assume that \(\mu_{\mathrm{f}}(\mathsf{K}_{i})\geq 5.7\omega\) for \(i=1\) or \(i=2\). Then the union of \(\mathsf{K}_{1}\) and \(\mathsf{K}_{2}\) is a fragment of rank \(\alpha\) with the same base axis. Moreover, if \(\mathsf{K}_{1}\) and \(\mathsf{K}_{2}\) are disjoint then \(\mu_{\mathrm{f}}(\mathsf{K}_{1}\cup\mathsf{K}_{2})\geq\mu_{\mathrm{f}}(\mathsf{K}_{1})+\mu_{\mathrm{f}}(\mathsf{K}_{2})-5.7\omega\)._

Proof.: By Lemma 8.1, \(\mathsf{K}_{1}\) and \(\mathsf{K}_{2}\) have a common base axis. If one of the \(\mathsf{K}_{i}\)'s is contained in the other then there is nothing to prove. Otherwise the statement easily follows from Proposition 10.21\({}_{\alpha-1}\).

**8.14 Corollary** (compatibility preserves order).: _Let \(\mathsf{X}\) be a reduced path in \(\Gamma_{\alpha-1}\), let \(\mathsf{K}_{i},\mathsf{M}_{i}\) (\(i=1,2\)) be fragments of rank \(\alpha\) in \(\mathsf{X}\) and let \(\mu_{\mathrm{f}}(\mathsf{K}_{i}),\mu_{\mathrm{f}}(\mathsf{M}_{i})\geq\lambda+2.6\omega\). Assume that \(\mathsf{K}_{1}\sim\mathsf{K}_{2}\), \(\mathsf{M}_{1}\sim\mathsf{M}_{2}\) and \(\mathsf{K}_{1}\not\sim\mathsf{M}_{1}\). Then \(\mathsf{K}_{1}<\mathsf{M}_{1}\) if and only if \(\mathsf{K}_{2}<\mathsf{M}_{2}\)._

Proof.: By Proposition 8.10, for each \(i=1,2\) neither of \(\mathsf{K}_{i}\) and \(\mathsf{M}_{i}\) can be contained in the other, so we have either \(\mathsf{K}_{i}<\mathsf{M}_{i}\) or \(\mathsf{M}_{i}<\mathsf{K}_{i}\).
It is enough to prove the statement in the case \(\mathsf{K}_{1}=\mathsf{K}_{2}\). Assume, for example, that \(\mathsf{M}_{1}<\mathsf{K}_{1}<\mathsf{M}_{2}\). Then by Proposition 8.13, \(\mathsf{M}_{1}\cup\mathsf{M}_{2}\) is a fragment of rank \(\alpha\) with \(\mathsf{M}_{1}\cup\mathsf{M}_{2}\not\sim\mathsf{K}_{1}\) and we get a contradiction with Proposition 8.10.

**8.15 Proposition** (no inverse compatibility).: _Let \(\mathsf{K}\) and \(\mathsf{M}\) be fragments of rank \(\alpha\) in a reduced path \(\mathsf{X}\) in \(\Gamma_{\alpha-1}\). Let \(\mu_{\mathrm{f}}(\mathsf{K}),\mu_{\mathrm{f}}(\mathsf{M})\geq 5.7\omega\). Then \(\mathsf{K}\not\sim\mathsf{M}^{-1}\)._

Proof.: Follows from Lemma 8.1 and Proposition 10.21\({}_{\alpha-1}\).

**8.16 Proposition**.: _Let \(\mathsf{K}\) be a fragment of rank \(\beta\) in \(\Gamma_{\alpha}\) where \(1\leq\beta\leq\alpha\)._

1. _Let \(\mathsf{R}\) be the base loop for \(\mathsf{K}\) labeled by a relator \(R\) of rank \(\beta\) and let \(R_{0}\) be the root of \(R\). Then the subgroup \(\{g\in G_{\alpha}\mid g\mathsf{K}\sim\mathsf{K}\}\) is finite cyclic and conjugate to \(\langle R_{0}\rangle\)._
2. _Let \(X\) be a word representing an element of \(G_{\alpha}\) which is not conjugate to a power of \(R_{0}\). Let \(\bar{\mathsf{X}}\) be an \(X\)-periodic line in \(\Gamma_{\alpha}\) labeled \(X^{\infty}\). Then \(s_{X,\bar{\mathsf{X}}}\mathsf{K}\not\sim\mathsf{K}\)._
3. _Under the hypothesis of (ii), if \(\mathsf{K}\) is a subpath of \(\bar{\mathsf{X}}\) and \(\mu_{\mathrm{f}}(\mathsf{K})\geq 2\lambda+5.3\omega\) then \(|\mathsf{K}|<2|X|\)._

Proof.: (i) It follows from Lemma 8.1\({}_{\beta}\) that \(g\mathsf{K}\sim\mathsf{K}\) if and only if \(g\mathsf{R}=\mathsf{R}\). Since \(\mathit{label}(\mathsf{R})=R_{0}^{t}\) and \(R_{0}\) is a non-power, the stabilizer of \(\mathsf{K}\) in \(G_{\alpha}\) is a subgroup conjugate to \(\langle R_{0}\rangle\). (ii) follows immediately from (i). (iii) If \(\mathsf{K}\) is a subpath of \(\bar{\mathsf{X}}\), \(\mu_{\mathrm{f}}(\mathsf{K})\geq 2\lambda+5.3\omega\) and \(|\mathsf{K}|\geq 2|X|\) then using Propositions 8.11\({}_{\beta}\) and 8.10\({}_{\beta}\) we conclude that either \(s_{X,\bar{\mathsf{X}}}^{-1}\mathsf{K}\sim\mathsf{K}\) or \(s_{X,\bar{\mathsf{X}}}\mathsf{K}\sim\mathsf{K}\), a contradiction with (ii).

## 9. Consequences of diagram analysis

Following the terminology introduced in 4.16, a _coarse \(r\)-gon_ in \(\Gamma_{\alpha}\) is a loop of the form \[\mathsf{P}=\mathsf{X}_{1}\mathsf{u}_{1}\mathsf{X}_{2}\mathsf{u}_{2}\ldots\mathsf{X}_{r}\mathsf{u}_{r}\] where the paths \(\mathsf{X}_{i}\) are reduced and the \(\mathsf{u}_{i}\) are bridges of rank \(\alpha\). Let us assume that each bridge \(\mathsf{u}_{i}\) of \(\mathsf{P}\) is given an associated bridge partition of rank \(\alpha\) (see 4.13) and consider a filling \(\phi:\Delta^{(1)}\to\Gamma_{\alpha}\) of \(\mathsf{P}\) by a disk diagram \(\Delta\) over the presentation of \(G_{\alpha}\), i.e. \(\Delta\) has boundary loop \(\tilde{\mathsf{X}}_{1}\tilde{\mathsf{u}}_{1}\tilde{\mathsf{X}}_{2}\tilde{\mathsf{u}}_{2}\ldots\tilde{\mathsf{X}}_{r}\tilde{\mathsf{u}}_{r}\) where \(\phi(\tilde{\mathsf{X}}_{i})=\mathsf{X}_{i}\) and \(\phi(\tilde{\mathsf{u}}_{i})=\mathsf{u}_{i}\). We can assume that \(\Delta\) has a boundary marking of rank \(\alpha\) with sides \(\tilde{\mathsf{X}}_{i}\) and bridges \(\tilde{\mathsf{u}}_{i}\) (see 5.1) and that each \(\tilde{\mathsf{u}}_{i}\) has an induced bridge partition of rank \(\alpha\). Applying to \(\Delta\) the reduction process described in 5.4 we get a reduced diagram.
Note that during the process, the bridges \(\tilde{\mathsf{u}}_{i}\) of \(\Delta\) can be changed by switching. To keep the equality \(\phi(\tilde{\mathsf{u}}_{i})=\mathsf{u}_{i}\) we have to perform the appropriate switching of the bridges \(\mathsf{u}_{i}\) (see 4.13). As a consequence we obtain:

**9.1 Proposition** (filling coarse polygons by diagrams).: _Let \(\alpha\geq 1\) and \(\mathsf{P}=\mathsf{X}_{1}\mathsf{u}_{1}\mathsf{X}_{2}\mathsf{u}_{2}\ldots\mathsf{X}_{r}\mathsf{u}_{r}\) be a coarse \(r\)-gon in \(\Gamma_{\alpha}\) with fixed bridge partitions of all bridges \(\mathsf{u}_{i}\). Then, after possible switching of the bridges \(\mathsf{u}_{i}\), there exists a reduced disk diagram \(\Delta\) of rank \(\alpha\) which fills \(\mathsf{P}\)._

**9.2 Definition**.: The _\(\alpha\)-area_ of \(\mathsf{P}\), denoted \(\operatorname{Area}_{\alpha}(\mathsf{P})\), is the number of cells of rank \(\alpha\) of a filling diagram \(\Delta\) as in Proposition 9.1. To avoid issues of well-definedness, we assume formally that \(\operatorname{Area}_{\alpha}(\mathsf{P})\) is defined with respect to a particular choice of \(\Delta\). The image \(\phi(\delta\mathsf{D})\) in \(\Gamma_{\alpha}\) of the boundary loop of a cell of rank \(\alpha\) of \(\Delta\) is an _active relator loop_ for \(\mathsf{P}\) (for the particular choice of \(\Delta\)). Thus \(\operatorname{Area}_{\alpha}(\mathsf{P})\) is the number of active relator loops for \(\mathsf{P}\). Abusing the language, we call the inverse loop \(\phi(\delta\mathsf{D})^{-1}\) an active relator loop for \(\mathsf{P}\) as well.

_9.3 Remark_.: The equality \(\operatorname{Area}_{\alpha}(\mathsf{X}_{1}\mathsf{u}_{1}\mathsf{X}_{2}\mathsf{u}_{2}\ldots\mathsf{X}_{r}\mathsf{u}_{r})=0\) is equivalent to the assertion that \(\mathsf{X}_{1}\mathsf{u}_{1}\mathsf{X}_{2}\mathsf{u}_{2}\ldots\mathsf{X}_{r}\mathsf{u}_{r}\) lifts to \(\Gamma_{\alpha-1}\) after possible switching of the bridges \(\mathsf{u}_{i}\).

9.4. As a special case of a coarse polygon, consider a coarse bigon \(\mathsf{X}^{-1}\mathsf{u}\mathsf{Y}\mathsf{v}\) in \(\Gamma_{\alpha}\), \(\alpha\geq 1\). Up to switching of the bridges \(\mathsf{u}\) and \(\mathsf{v}\) we can assume that there is a reduced diagram \(\Delta\) of rank \(\alpha\) which fills \(\mathsf{X}^{-1}\mathsf{u}\mathsf{Y}\mathsf{v}\) via a map \(\phi:\Delta^{(1)}\to\Gamma_{\alpha}\). We can assume also that \(\Delta\) is given a tight set \(\mathcal{T}\) of contiguity subdiagrams. The boundary loop of \(\Delta\) has the form \(\tilde{\mathsf{X}}^{-1}\tilde{\mathsf{u}}\tilde{\mathsf{Y}}\tilde{\mathsf{v}}\) with sides \(\tilde{\mathsf{X}}^{-1}\) and \(\tilde{\mathsf{Y}}\) which are mapped onto \(\mathsf{X}^{-1}\) and \(\mathsf{Y}\) respectively. By Proposition 7.11(i) each cell of rank \(\alpha\) of \(\Delta\) has a contiguity subdiagram to each of the sides \(\tilde{\mathsf{X}}^{-1}\) and \(\tilde{\mathsf{Y}}\). The boundary loops of cells of rank \(\alpha\) and the bridges of these contiguity subdiagrams form a graph mapped in \(\Gamma_{\alpha}\) as in Figure 15. Let \(\mathsf{R}_{i}\) be the images in \(\Gamma_{\alpha}\) of the boundary loops of cells of rank \(\alpha\) of \(\Delta\) and let \(\mathsf{K}_{i}\), \(\mathsf{M}_{i}\), \(\mathsf{Q}_{i}\) and \(\mathsf{S}_{i}\) be the subpaths of \(\mathsf{X}\), \(\mathsf{Y}\) and \(\mathsf{R}_{i}\), respectively, that are images of the corresponding contiguity arcs of contiguity subdiagrams of cells of rank \(\alpha\) to \(\tilde{\mathsf{X}}^{-1}\) and \(\tilde{\mathsf{Y}}\), as shown in the figure. According to the definition in 8.4, \(\mathsf{K}_{i}\) and \(\mathsf{M}_{i}\) are fragments of rank \(\alpha\) with bases \(\mathsf{Q}_{i}^{-1}\) and \(\mathsf{S}_{i}\) and base relator loops \(\mathsf{R}_{i}^{-1}\) and \(\mathsf{R}_{i}\) respectively.
We call \(\mathsf{K}_{i}\) and \(\mathsf{M}_{i}\) _active fragments_ of rank \(\alpha\) of the coarse bigon \(\mathsf{X}^{-1}\mathsf{u}\mathsf{Y}\mathsf{v}\). Thus, if \(\operatorname{Area}_{\alpha}(\mathsf{X}^{-1}\mathsf{u}\mathsf{Y}\mathsf{v})=t\) then there are precisely \(t\) disjoint active fragments of rank \(\alpha\) in each of the paths \(\mathsf{X}\) and \(\mathsf{Y}\). Note again that the set of active relator loops and the set of active fragments formally depend on the particular choice of \(\Delta\) and \(\mathcal{T}\).

9.5. Let, as above, \(\mathsf{P}=\mathsf{X}^{-1}\mathsf{u}\mathsf{Y}\mathsf{v}\) be a coarse bigon in \(\Gamma_{\alpha}\) and \(\Delta\) a reduced diagram of rank \(\alpha\) with \(\delta\Delta=\tilde{\mathsf{X}}^{-1}\tilde{\mathsf{u}}\tilde{\mathsf{Y}}\tilde{\mathsf{v}}\) filling \(\mathsf{P}\) via a map \(\phi:\Delta^{(1)}\to\Gamma_{\alpha}\) (we assume that the switching operation is already applied to \(\mathsf{u}\) and \(\mathsf{v}\) if needed). We assume that \(\Delta\) has a tight set \(\mathcal{T}\) of contiguity subdiagrams. Let \(\mathsf{R}=\phi(\delta\mathsf{D})\) be an active relator loop of \(\mathsf{P}\) and let \(\mathsf{Q}^{-1}\mathsf{w}_{1}\mathsf{K}^{-1}\mathsf{w}_{2}\) and \(\mathsf{S}^{-1}\mathsf{w}_{3}\mathsf{Mw}_{4}\) be the images of the boundary loops of the contiguity subdiagrams in \(\mathcal{T}\) of the cell \(\mathsf{D}\) to the sides \(\tilde{\mathsf{X}}^{-1}\) and \(\tilde{\mathsf{Y}}\) respectively, as in Figure 16. Then the two loops \(\mathsf{P}_{1}\) and \(\mathsf{P}_{2}\) shown in the figure can be considered as coarse bigons in \(\Gamma_{\alpha}\) with sides that are subpaths of \(\mathsf{X}\) and \(\mathsf{Y}\). They are filled by reduced subdiagrams of \(\Delta\), so we have \(\operatorname{Area}_{\alpha}(\mathsf{P}_{1})+\operatorname{Area}_{\alpha}(\mathsf{P}_{2})=\operatorname{Area}_{\alpha}(\mathsf{P})-1\). We will use this simple observation in inductive arguments.

Figure 15.

Figure 16.

9.6. In a similar way, let \(\mathsf{P}=\mathsf{X}_{1}\mathsf{u}_{1}\mathsf{X}_{2}\mathsf{u}_{2}\mathsf{X}_{3}\mathsf{u}_{3}\) be a coarse trigon in \(\Gamma_{\alpha}\). After possible switching of the bridges \(\mathsf{u}_{i}\), we can find a reduced diagram \(\Delta\) of rank \(\alpha\) with boundary loop \(\tilde{\mathsf{X}}_{1}\tilde{\mathsf{u}}_{1}\tilde{\mathsf{X}}_{2}\tilde{\mathsf{u}}_{2}\tilde{\mathsf{X}}_{3}\tilde{\mathsf{u}}_{3}\) which fills \(\mathsf{P}\) via a map \(\phi:\Delta^{(1)}\to\Gamma_{\alpha}\) where \(\phi(\tilde{\mathsf{X}}_{i})=\mathsf{X}_{i}\) and \(\phi(\tilde{\mathsf{u}}_{i})=\mathsf{u}_{i}\). We can also assume that \(\Delta\) has a tight set \(\mathcal{T}\) of contiguity subdiagrams. By Proposition 7.11(ii) each cell of rank \(\alpha\) of \(\Delta\) has contiguity subdiagrams in \(\mathcal{T}\) to at least two sides \(\tilde{\mathsf{X}}_{i}\). This implies that for any active relator loop \(\mathsf{R}\) of \(\mathsf{P}\) there are two or three fragments \(\mathsf{K}_{i}\) (\(i=1,2\) or \(i=1,2,3\)) of rank \(\alpha\) with base loop \(\mathsf{R}\) that occur in distinct paths \(\mathsf{X}_{j}\). Similarly to the bigon case, we call them _active fragments_ of rank \(\alpha\) of \(\mathsf{P}\). As in the bigon case, for any active relator loop \(\mathsf{R}\) of \(\mathsf{P}\) we can consider a coarse bigon \(\mathsf{P}_{1}\) and a coarse trigon \(\mathsf{P}_{2}\) respectively, as shown in Figure 17, with \(\operatorname{Area}_{\alpha}(\mathsf{P}_{1})+\operatorname{Area}_{\alpha}(\mathsf{P}_{2})=\operatorname{Area}_{\alpha}(\mathsf{P})-1\).

**9.7 Proposition** (active fragments in bigon).: _Let \(P=X^{-1}uYv\) be a coarse bigon in \(\Gamma_{\alpha}\), \(\alpha\geq 1\)._ 1.
_Let \(K\) and \(M\) be active fragments of rank \(\alpha\) of \(P\) in \(X\) and \(Y\), respectively, with mutually inverse base active relator loops. Then \(K\sim M^{-1}\),_ \[\mu_{f}(K)+\mu_{f}(M)>1-2\lambda-1.5\omega\] _and_ \[\mu_{f}(K),\mu_{f}(M)>7\lambda-1.5\omega.\] 2. _Let \(K\) and \(K^{\prime}\) be two distinct active fragments of rank \(\alpha\) in \(X\). Then \(K\not\sim K^{\prime}\)._

Proof.: (i): It follows directly from the construction that \(K\sim M^{-1}\). The first inequality follows from Proposition 7.13(i). Since \(X\) and \(Y\) are reduced we have \(\mu_{f}(K)\leq\rho\) and \(\mu_{f}(M)\leq\rho\), which implies the lower bound on \(\mu_{f}(K)\) and \(\mu_{f}(M)\).

(ii): Assume that \(K\sim K^{\prime}\). Let \(M\) and \(M^{\prime}\) be the corresponding active fragments of rank \(\alpha\) in \(Y\). By (i), we have \(M\sim M^{\prime}\). Then by Proposition 8.13 and the first inequality of (i), \[\mu_{f}(K\cup K^{\prime})+\mu_{f}(M\cup M^{\prime})\geq 2-4\lambda-17.4\omega>2\rho\] which contradicts the hypothesis that \(X\) and \(Y\) are reduced.

We introduce notation for the lower bound on the size of active fragments in (i): \[\xi_{0}=7\lambda-1.5\omega.\]

Figure 17.

**9.8 Definition**.: We say that paths \(X\) and \(Y\) in \(\Gamma_{\alpha}\) are _close in rank_ \(\beta\leq\alpha\) if there exist bridges \(u\) and \(v\) of rank \(\beta\) such that \(X^{-1}uYv\) is a loop that can be lifted to \(\Gamma_{\beta}\). (So 'being close' for paths in \(\Gamma_{\alpha}\) means the same as 'being close in rank \(\alpha\)'.)

_9.9 Remark_.: If \(X\) and \(Y\) are labeled with freely reduced words then \(X\) and \(Y\) are close in rank \(0\) if and only if \(X=Y\).

**9.10 Proposition** (lifting bigon).: _Let \(0\leq\beta<\alpha\) and \(X^{-1}uYv\) be a coarse bigon in \(\Gamma_{\alpha}\) where \(u\) and \(v\) are bridges of rank \(\beta\). Assume that for all \(\gamma\) in the interval \(\beta+1\leq\gamma\leq\alpha\) either \(X\) or \(Y\) has no fragments \(K\) of rank \(\gamma\) with \(\mu_{f}(K)\geq\xi_{0}\). Then \(X^{-1}uYv\) can be lifted to \(\Gamma_{\beta}\) and, consequently, \(X\) and \(Y\) are close in rank \(\beta\)._

Proof.: This is a consequence of Proposition 9.7 and Remark 9.3.

**9.11 Proposition** (no active relators).: _Let \(\alpha\geq 1\), \(X^{-1}uYv\) be a coarse bigon in \(\Gamma_{\alpha}\) and \(\operatorname{Area}_{\alpha}(X^{-1}uYv)=0\). Assume that \(|X|_{\alpha}>2+6\zeta^{2}\eta\). Then \(X\) and \(Y\) can be represented as \(X=w_{1}X_{1}w_{2}\) and \(Y=z_{1}Y_{1}z_{2}\) where \(X_{1}\) and \(Y_{1}\) are close in rank \(\alpha-1\) and \(|w_{i}|_{\alpha},|z_{i}|_{\alpha}\leq 1+4\zeta^{2}\eta\) \((i=1,2)\)._

Proof.: By Remark 9.3 we can assume that \(X^{-1}uYv\) lifts to \(\Gamma_{\alpha-1}\). To simplify notation, we assume that \(X^{-1}uYv\) is already in \(\Gamma_{\alpha-1}\). Let \(u=u_{1}Pu_{2}\) and \(v=v_{1}Qv_{2}\) where \(u_{i}\), \(v_{i}\) are bridges of rank \(\alpha-1\) and \(P\), \(Q\) are paths labeled by pieces of rank \(\alpha\). We apply Proposition 9.19(ii)\({}_{\alpha-1}\) to the coarse tetragon \(X^{-1}u_{1}Pu_{2}Yv_{1}Qv_{2}\). Observe that if a subpath of \(P\) or \(Q\) is close (in \(\Gamma_{\alpha-1}\)) to a subpath \(S\) of \(X\) then \(|S|_{\alpha}\leq 1\). Since \(|X|_{\alpha}>2+6\zeta^{2}\eta\) we cannot get the first case of the conclusion of Proposition 9.19(ii)\({}_{\alpha-1}\).
Therefore, the second case holds: we have \(X=X_{1}z_{1}X_{2}z_{2}X_{3}\) where \(X_{1}\) is close to a start of \(P\), \(X_{2}\) is close to a subpath of \(Y\), \(X_{3}\) is close to an end of \(Q\) and \(|z_{i}|_{\alpha-1}\leq 4\zeta\eta\) \((i=1,2)\). Then \(|X_{1}z_{1}|_{\alpha}\leq 1+4\zeta^{2}\eta\), \(|z_{2}X_{3}|_{\alpha}\leq 1+4\zeta^{2}\eta\) and we get the required bound.

**9.12 Corollary** (no active fragments).: _Let \(X\) and \(Y\) be close reduced paths in \(\Gamma_{\alpha}\), \(\alpha\geq 1\). Assume that either \(X\) or \(Y\) has no fragments \(K\) of rank \(\alpha\) with \(\mu_{f}(K)\geq\xi_{0}\). Assume also that \(|X|_{\alpha}>2+6\zeta^{2}\eta\). Then \(X\) and \(Y\) can be represented as \(X=w_{1}X_{1}w_{2}\) and \(Y=z_{1}Y_{1}z_{2}\) where \(X_{1}\) and \(Y_{1}\) are close in rank \(\alpha-1\) and \(|w_{i}|_{\alpha},|z_{i}|_{\alpha}\leq 1+4\zeta^{2}\eta\) \((i=1,2)\)._

**9.13 Corollary** (no active fragments, iterated).: _Let \(X\) and \(Y\) be close reduced paths in \(\Gamma_{\alpha}\). Let \(0\leq\beta<\alpha\) and assume that for all \(\gamma\) in the interval \(\beta+1\leq\gamma\leq\alpha\) either \(X\) or \(Y\) has no fragments \(K\) of rank \(\gamma\) with \(\mu_{f}(K)\geq\xi_{0}\). Let \(|X|_{\alpha}\geq 2+3\zeta\). Then \(X\) and \(Y\) can be represented as \(X=w_{1}X_{1}w_{2}\) and \(Y=z_{1}Y_{1}z_{2}\) where \(X_{1}\) and \(Y_{1}\) are close in rank \(\beta\) and \(|w_{i}|_{\alpha}<1+5\zeta^{2}\eta\) \((i=1,2)\)._

**9.14 Proposition**.: _Let \(X\) be a nonempty freely reduced word equal to \(1\) in \(G_{\alpha}\). Then \(X\) has a subword \(P\) which is a piece of rank \(\beta\) where \(1\leq\beta\leq\alpha\) and \(\mu(P)>136\omega\)._

Proof.: By Proposition 7.6, \(X\) is not reduced in \(G_{\alpha}\) and therefore contains a fragment \(K\) of rank \(\beta\) where \(1\leq\beta\leq\alpha\) and \(\mu_{f}(K)\geq\rho\). Let \(\beta\geq 1\) be the minimal rank such that \(X\) contains a fragment \(K\) of rank \(\beta\) with \(\mu_{f}(K)\geq\xi_{0}\). If \(\beta=1\) then \(K\) is already a piece of rank \(1\) with \(\mu(K)\geq\xi_{0}>138\omega\) by (4-1). Assume now that \(\beta>1\). Let \(\mathsf{K}\) be a fragment in \(\Gamma_{\beta-1}\) with \(\mathit{label}(\mathsf{K})=K\) and \(\mathsf{S}\) a base for \(\mathsf{K}\). By Corollary 9.13\({}_{\beta-1}\) we have \(\mathsf{S}=\mathsf{w}_{1}\mathsf{P}\mathsf{w}_{2}\) where \(|\mathsf{w}_{i}|_{\beta-1}<1.03\) \((i=1,2)\) and \(P=\mathit{label}(\mathsf{P})\) occurs in \(K\). By (4-1), \(\mu(P)\geq\xi_{0}-2.06\omega=7\lambda-3.56\omega>136\omega\).

**9.15 Proposition** (active fragments in trigon).: _Let \(\mathsf{P}=\mathsf{X}_{1}\mathsf{u}_{1}\mathsf{X}_{2}\mathsf{u}_{2}\mathsf{X}_{3}\mathsf{u}_{3}\) be a coarse trigon in \(\Gamma_{\alpha}\), let \(\mathsf{R}\) be an active relator loop for \(\mathsf{P}\) and let \(\mathsf{K}_{i}\) (\(i=1,2\) or \(i=1,2,3\)) be active fragments of rank \(\alpha\) with base loop \(\mathsf{R}\). Then \(\mathsf{K}_{i}\sim\mathsf{K}_{j}\) for all \(i,j\),_ \[\sum_{i}\mu_{\mathrm{f}}(\mathsf{K}_{i})>1-3\lambda-2.2\omega\] _and_ \[\mu_{\mathrm{f}}(\mathsf{K}_{i})>3\lambda-1.1\omega\quad\text{for at least two indices $i$}.\]

Proof.: We have \(\mathsf{K}_{i}\sim\mathsf{K}_{j}\) by construction. The first inequality follows from Proposition 7.13(ii). Since \(\mathsf{X}_{i}\) is reduced in \(G_{\alpha}\) we have \(\mu_{\mathrm{f}}(\mathsf{K}_{i})\leq\rho=1-9\lambda\). This implies the second inequality.

**9.16 Proposition** (no active fragments in conjugacy relations).: _Let \(X\) and \(Y\) be words cyclically reduced in \(G_{\alpha}\), \(\alpha\geq 1\)._
Let \(X=Z^{-1}YZ\) in \(G_{\alpha}\) for some \(Z\). Assume that no cyclic shift of \(X\) contains a fragment \(K\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K)\geq\xi_{0}\). Then there exists a word \(Z_{1}\) such that \(Z_{1}=Z\) in \(G_{\alpha}\) and \(X=Z_{1}^{-1}YZ_{1}\) in \(G_{\alpha-1}\)._

Proof.: Let \(\Delta_{0}\) be a disk diagram of rank \(\alpha\) with boundary label \(X^{-1}Z^{-1}YZ\). We produce an annular diagram \(\Delta_{1}\) by gluing the two boundary segments of \(\Delta_{0}\) labeled \(Z^{-1}\) and \(Z\). The diagram \(\Delta_{1}\) can be assigned a boundary marking of rank \(\alpha\) with two cyclic sides \(\mathsf{X}^{-1}\) and \(\mathsf{Y}\). We denote by \(\mathsf{Z}\) the path in \(\Delta_{1}\) with \(\mathit{label}(\mathsf{Z})=Z\) that joins the starting vertices of \(\mathsf{Y}\) and \(\mathsf{X}\). Let \(\Delta_{2}\) be a reduced diagram of rank \(\alpha\) obtained from \(\Delta_{1}\) by the reduction process. According to the remark in 5.7, \(\Delta_{1}\) and \(\Delta_{2}\) have the same frame type. It follows from Lemma 4.8 that there exists a path \(\mathsf{Z}_{1}\) in \(\Delta_{2}\) joining the starting vertices of boundary loops \(\mathsf{Y}_{1}\) and \(\mathsf{X}_{1}^{-1}\) such that \(\mathit{label}(\mathsf{X}_{1})=X\), \(\mathit{label}(\mathsf{Y}_{1})=Y\) and \(Z_{1}=Z\) in \(G_{\alpha}\) where \(Z_{1}=\mathit{label}(\mathsf{Z}_{1})\). By Proposition 7.13(i) and the hypothesis on \(X\), \(\Delta_{2}\) has no cells of rank \(\alpha\). Then \(X=Z_{1}^{-1}YZ_{1}\) in \(G_{\alpha-1}\).

**9.17 Proposition** (no active fragments in conjugacy relations, iterated).: _Let \(X\) and \(Y\) be words cyclically reduced in \(G_{\alpha}\) which represent conjugate elements of \(G_{\alpha}\), \(\alpha\geq 1\). Let \(\beta\leq\alpha\). Assume that at least one of the words \(X\) or \(Y\) has the property that none of its cyclic shifts contains a fragment \(K\) of rank \(\gamma\) with \(\mu_{\mathrm{f}}(K)\geq\xi_{0}\) and \(\beta<\gamma\leq\alpha\). Let \(\bar{\mathsf{X}}=\ldots\mathsf{X}_{-1}\mathsf{X}_{0}\mathsf{X}_{1}\ldots\) and \(\bar{\mathsf{Y}}=\ldots\mathsf{Y}_{-1}\mathsf{Y}_{0}\mathsf{Y}_{1}\ldots\) be parallel periodic lines in \(\Gamma_{\alpha}\) with \(\mathit{label}(\mathsf{X}_{i})=X\) and \(\mathit{label}(\mathsf{Y}_{i})=Y\) representing the conjugacy relation. Then some vertices on \(\bar{\mathsf{X}}\) and \(\bar{\mathsf{Y}}\) are joined by a bridge of rank \(\beta\)._

_Moreover, for any subpath \(\mathsf{Z}\) of \(\bar{\mathsf{X}}\) there exists a loop \(\mathsf{S}^{-1}\mathsf{u}\mathsf{T}\mathsf{v}\) which can be lifted to \(\Gamma_{\beta}\) such that \(\mathsf{S}\) and \(\mathsf{T}\) are subpaths of \(\bar{\mathsf{X}}\) and \(\bar{\mathsf{Y}}\) respectively, \(\mathsf{u}\) and \(\mathsf{v}\) are bridges of rank \(\beta\) and \(\mathsf{Z}\) is contained in \(\mathsf{S}\)._

Proof.: Since \(\bar{\mathsf{X}}\) and \(\bar{\mathsf{Y}}\) are parallel, if vertices \(\mathsf{a}\) on \(\bar{\mathsf{X}}\) and \(\mathsf{b}\) on \(\bar{\mathsf{Y}}\) are joined by a path labeled \(Z\) then the same is true for all their translates \(s^{k}_{X,\bar{\mathsf{X}}}\mathsf{a}\) and \(s^{k}_{Y,\bar{\mathsf{Y}}}\mathsf{b}\). Then the second statement follows from the first.
Let \(\Delta\) be an annular diagram of rank \(\alpha\) with boundary loops \(\hat{\mathsf{X}}^{-1}\) and \(\hat{\mathsf{Y}}\) and let \(\phi:\tilde{\Delta}^{(1)}\to\Gamma_{\alpha}\) be a combinatorially continuous map of the \(1\)-skeleton of the universal cover \(\tilde{\Delta}\) of \(\Delta\) to \(\Gamma_{\alpha}\) sending lifts of \(\hat{\mathsf{X}}\) and of \(\hat{\mathsf{Y}}\) to \(\bar{\mathsf{X}}\) and \(\bar{\mathsf{Y}}\) respectively. We can assume that \(\Delta\) is reduced and has a tight set of contiguity subdiagrams. If \(\beta=\alpha\) and \(\Delta\) has a cell of rank \(\alpha\) then the statement follows from Proposition 7.11(iii). If \(\Delta\) has no cells of rank \(\alpha\) then we can lift \(\bar{\mathsf{X}}\) and \(\bar{\mathsf{Y}}\) to \(\Gamma_{\alpha-1}\) and use induction on \(\alpha\). If \(\beta<\alpha\) and at least one of the words \(X\) or \(Y\) has no cyclic shift containing a fragment \(K\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K)>\xi_{0}\) then by Proposition 7.13(i), \(\Delta\) has no cells of rank \(\alpha\) and, again, the statement follows by induction.

**9.18 Proposition** (small coarse polygons).: _Let \(\mathsf{P}=\mathsf{X}_{1}{*}\mathsf{X}_{2}{*}\ldots\mathsf{X}_{r}{*}\) be a coarse \(r\)-gon in \(\Gamma_{\alpha}\) where \(r\geq 3\) and \(\mathsf{X}_{i}\) are the sides of \(\mathsf{P}\). Assume that there are no pairs of close vertices lying on distinct paths \(\mathsf{X}_{i}\) and \(\mathsf{X}_{j}\) except the pairs \(\{\tau(\mathsf{X}_{i}),\iota(\mathsf{X}_{i+1})\}\) and \(\{\tau(\mathsf{X}_{r}),\iota(\mathsf{X}_{1})\}\). Then_ \[\sum_{i}|\mathsf{X}_{i}|_{\alpha}\leq(r-2)\eta.\] _If \(r=3\) or \(r=4\) then we have a stronger bound_ \[\sum_{i}|\mathsf{X}_{i}|_{\alpha}\leq 2(r-1)\zeta\eta.\]

Proof.: Consider a filling \(\phi:\Delta^{(1)}\to\Gamma_{\alpha}\) of \(\mathsf{P}\) by a reduced disk diagram \(\Delta\) of rank \(\alpha\). Let \(\delta\Delta=\bar{\mathsf{X}}_{1}\mathsf{u}_{1}\bar{\mathsf{X}}_{2}\mathsf{u}_{2}\ldots\bar{\mathsf{X}}_{r}\mathsf{u}_{r}\) where \(\mathsf{u}_{i}\) are bridges and \(\bar{\mathsf{X}}_{i}\) are sides of \(\Delta\) with \(\phi(\bar{\mathsf{X}}_{i})=\mathsf{X}_{i}\). The hypothesis of the proposition implies that \(\Delta\) is small. Then the statement follows from Propositions 7.9 and 7.12.

**9.19 Proposition** (trigons and tetragons are thin).:

1. _Let \(\mathsf{X}^{-1}{*}\mathsf{Y}_{1}{*}\mathsf{Y}_{2}{*}\) be a coarse trigon in \(\Gamma_{\alpha}\). Then \(\mathsf{X}\) can be represented as \(\mathsf{X}=\mathsf{X}_{1}\mathsf{z}\mathsf{X}_{2}\) where \(\mathsf{X}_{1}\) is close to a start of \(\mathsf{Y}_{1}\), \(\mathsf{X}_{2}\) is close to an end of \(\mathsf{Y}_{2}\) and \(|\mathsf{z}|_{\alpha}\leq 4\zeta\eta\)._
2. _Let \(\mathsf{X}^{-1}{*}\mathsf{Y}_{1}{*}\mathsf{Y}_{2}{*}\mathsf{Y}_{3}{*}\) be a coarse tetragon in \(\Gamma_{\alpha}\)._
_Then at least one of the following possibilities holds:_

* _\(\mathsf{X}\) can be represented as \(\mathsf{X}=\mathsf{X}_{1}\mathsf{z}\mathsf{X}_{2}\) where \(\mathsf{X}_{1}\) is close to a start of \(\mathsf{Y}_{1}\), \(\mathsf{X}_{2}\) is close to an end of \(\mathsf{Y}_{3}\) and \(|\mathsf{z}|_{\alpha}\leq 6\zeta\eta\); or_
* _\(\mathsf{X}\) can be represented as \(\mathsf{X}=\mathsf{X}_{1}\mathsf{z}_{1}\mathsf{X}_{2}\mathsf{z}_{2}\mathsf{X}_{3}\) where \(\mathsf{X}_{1}\) is close to a start of \(\mathsf{Y}_{1}\), \(\mathsf{X}_{2}\) is close to a subpath of \(\mathsf{Y}_{2}\), \(\mathsf{X}_{3}\) is close to an end of \(\mathsf{Y}_{3}\) and \(|\mathsf{z}_{i}|_{\alpha}\leq 4\zeta\eta\) \((i=1,2)\)._

Proof.: (i) We can represent \(\mathsf{X}=\mathsf{X}_{1}\mathsf{z}\mathsf{X}_{2}\), \(\mathsf{Y}_{i}=\mathsf{Y}_{i1}\mathsf{w}_{i}\mathsf{Y}_{i2}\) (\(i=1,2\)) with close pairs \((\mathsf{X}_{1},\mathsf{Y}_{11})\), \((\mathsf{Y}_{12},\mathsf{Y}_{21}^{-1})\) and \((\mathsf{Y}_{22},\mathsf{X}_{2})\) where no vertices lying on distinct paths \(\mathsf{z}\), \(\mathsf{w}_{1}\) and \(\mathsf{w}_{2}\) are close except appropriate endpoints (Figure 18a). Then the statement follows by application of Proposition 9.18 to \(\mathsf{z}^{-1}{*}\mathsf{w}_{1}{*}\mathsf{w}_{2}{*}\).

(ii) If there is a pair of close vertices on \(\mathsf{Y}_{1}\) and \(\mathsf{Y}_{3}\) then the statement follows from (i), giving the first alternative. If there is a pair of close vertices on \(\mathsf{X}\) and on \(\mathsf{Y}_{2}\) then we represent \(\mathsf{X}\) and \(\mathsf{Y}_{2}\) as \(\mathsf{X}=\mathsf{X}_{1}\mathsf{X}_{2}\), \(\mathsf{Y}_{2}=\mathsf{Y}_{21}\mathsf{Y}_{22}\) where \(\tau(\mathsf{X}_{1})\) and \(\tau(\mathsf{Y}_{21})\) are close, and apply (i) to
According to Definition 6.1, at least one of the words \(\mathit{label}(\mathsf{Z}_{i})\) (\(i=1,2\)) is nonempty. Removing \(\Sigma\) from \(\Delta\) we obtain a diagram \(\Delta^{\prime}\) with a shorter total label of its two sides. Hence, by induction, we can assume that \(\Delta^{\prime}\) is small. Then \(|X|_{\alpha}=|\hat{X}|_{\alpha}\leq\eta\) by Proposition 7.9. ### Proposition (closeness fellow traveling) _Let \(X\) and \(Y\) be close reduced paths in \(\Gamma_{\alpha}\), \(\alpha\geq 1\). Then \(X\) and \(Y\) can be represented as \(X=\mathsf{U}_{1}\mathsf{U}_{2}\ldots\mathsf{U}_{k}\) and \(Y=\mathsf{V}_{1}\mathsf{V}_{2}\ldots\mathsf{V}_{k}\) (\(\mathsf{U}_{i}\) and \(\mathsf{V}_{i}\) can be empty) where the starting vertex of each \(\mathsf{U}_{i}\) is close to the starting vertex of \(\mathsf{V}_{i}\) and \(|\mathsf{U}_{i}|_{\alpha},|\mathsf{V}_{i}|_{\alpha}\leq\zeta\) for all \(i\)._ Proof.: Observe that the statement of the lemma holds in the case \(\alpha=0\) with \(|\mathsf{U}_{i}|_{0},|\mathsf{V}_{i}|_{0}=1\). Thus we may refer to the statement of the lemma in rank \(\alpha-1\) with bounds \(|\mathsf{U}_{i}|_{\alpha-1},|\mathsf{V}_{i}|_{\alpha-1}\leq 1\) which imply \(|\mathsf{U}_{i}|_{\alpha},|\mathsf{V}_{i}|_{\alpha}\leq\zeta\). Observe also that if \(X=\mathsf{X}_{1}\mathsf{X}_{2}\ldots\mathsf{X}_{r}\) and \(Y=\mathsf{Y}_{1}\mathsf{Y}_{2}\ldots\mathsf{Y}_{r}\) where for each \(i\), \(X_{i}\) and \(Y_{i}\) are close then the statement of the lemma for each pair \((X_{i},Y_{i})\) implies the statement of the lemma for \(X\) and \(Y\). By 9.5 we represent \(X\) and \(Y\) as \(X=\mathsf{X}_{1}\mathsf{X}_{2}\ldots\mathsf{X}_{r}\) and \(Y=\mathsf{Y}_{1}\mathsf{Y}_{2}\ldots\mathsf{Y}_{r}\) where pairs \((X_{i},Y_{i})\) satisfy the following conditions (1) or (2) in the alternate way: (1) for some bridges \(\mathfrak{u}_{i}\) and \(\mathsf{v}_{i}\) of rank \(\alpha\) the loop \(X_{i}^{-1}\mathfrak{u}_{i}\mathsf{Y}_{i}\mathsf{v}_{i}\) lifts to \(\Gamma_{\alpha-1}\) or (2) there are loops \(X_{i}^{-1}\mathfrak{w}_{i1}\mathsf{R}_{i}\mathsf{w}_{i2}\) and \(Y_{i}\mathsf{w}_{i3}\mathsf{S}_{i}\mathsf{w}_{i4}\) which can be lifted to \(\Gamma_{\alpha-1}\) such that \(\mathsf{S}_{i}\) and \(\mathsf{R}_{i}\) occur in one relation loop of rank \(\alpha\) and \(\mathfrak{w}_{ij}\) are bridges of rank \(\alpha-1\) (see Figure 19). We can assume that pairs \((X_{1},Y_{1})\) and \((X_{r},Y_{r})\) satisfy (2) and that in the case Figure 19. of (2), subpaths \(X_{i}\), \(Y_{i}\) of \(X\), \(Y\) and \(S_{i}\), \(R_{i}\) of the appropriate relation loop cannot be extended. We prove the statement for each of the pair \((X_{i},Y_{i})\). _Case of_ (1): Omitting the index \(i\) for \(X_{i}\) and \(Y_{i}\), assume that a loop \(X^{-1}w_{1}Pw_{2}Yw_{3}Qw_{4}\) lifts to \(\Gamma_{\alpha-1}\) where \(w_{i}\) are bridges of rank \(\alpha-1\) and \(P\) and \(Q\) are labeled by pieces of rank \(\alpha\). Without changing notations, we assume that \(X^{-1}w_{1}Pw_{2}Yw_{3}Qw_{4}\) is already in \(\Gamma_{\alpha-1}\). By the maximal choice of \(X_{i}\), \(Y_{i}\), \(S_{i}\) and \(R_{i}\) in the case of (2), there are no close vertices on pairs \((X,P)\), \((X,Q)\), \((Y,P)\) and \((Y,Q)\) except appropriate endpoints (i.e. except \(\iota(X)\) and \(\iota(P)\) for \((X,P)\) etc.). Depending on existence of close vertices on pairs \((P,Q)\) and \((X,Y)\) we consider three cases (a)-(c) as in Figure 20. 
In case (a) we have \(|X|_{\alpha},|Y|_{\alpha}\leq 6\zeta^{2}\eta<\zeta\) by Proposition 9.18\({}_{\alpha-1}\). In case (b) taking the maximal pair of close subpaths of \(P\) and \(Q\) we get \(|X|_{\alpha},|Y|_{\alpha}\leq 4\zeta^{2}\eta<\zeta\) again by Proposition 9.18\({}_{\alpha-1}\). In case (c) we have \(X=X_{1}X_{2}X_{3}\) and \(Y=Y_{1}Y_{2}Y_{3}\) where \(X_{2}\) and \(Y_{2}\) are close. Taking \(X_{2}\) and \(Y_{2}\) maximal possible we get \(|X_{i}|_{\alpha},|Y_{i}|_{\alpha}\leq 4\zeta^{2}\eta\) for \(i=1,3\) by Proposition 9.18\({}_{\alpha-1}\). For \(X_{2}\) and \(Y_{2}\) we can apply the statement for \(\alpha:=\alpha-1\). _Case of_ (2): In the second case by the statement of the lemma for \(\alpha:=\alpha-1\) we have \(X=U_{1}U_{2}\ldots U_{k}\) and \(Y=W_{1}W_{2}\ldots W_{l}\) where \(|U_{i}|_{\alpha},|W_{i}|_{\alpha}\leq\zeta\), the starting vertex of each \(U_{i}\) can be joined by a bridge of rank \(\alpha-1\) with a vertex on \(R\) and the starting vertex of each \(W_{i}\) can be joined by a bridge of rank \(\alpha-1\) with a vertex on \(S\). Then each \(\iota(U_{i})\) is close to \(\iota(Y)\) and each \(\iota(W_{i})\) is close to \(\tau(X)\). We take \(X=U_{1}U_{2}\ldots U_{k+l}\) and \(Y=V_{1}V_{2}\ldots V_{k+l}\) where \(U_{k+1}\),..., \(U_{k+l}\), \(V_{1}\),..., \(V_{k}\) are empty and \(V_{j}=W_{j-k}\) for \(k+1\leq j\leq k+l\). **9.22 Lemma**.: _Let \(X\) be a reduced path and \(R\) a relation loop of rank \(\alpha\) in \(\Gamma_{\alpha}\), \(\alpha\geq 1\). Let \(u_{i}\) \((i=1,2)\) be paths labeled by words in \(\mathcal{H}_{\alpha-1}\), with \(u_{i}\) joining a vertex \(a_{i}\) on \(X\) and a vertex \(b_{i}\) on \(R\). Let \(Y\) be the subpath of \(X^{\pm 1}\) that starts at \(a_{1}\) and ends at \(a_{2}\), and let \(R=R_{1}R_{2}\) where \(R_{i}\) starts at \(b_{i}\) (Figure 21). Then one of the two loops \(Yu_{2}R_{1}^{-1}u_{1}^{-1}\) or \(Yu_{2}R_{2}u_{1}^{-1}\) lifts to \(\Gamma_{\alpha-1}\)._ Proof.: We fill the loop \(Yu_{2}R_{1}^{-1}u_{1}^{-1}\) by a disk diagram \(\Delta\) of rank \(\alpha\) with boundary loop \(\bar{Y}\bar{u}_{2}S\bar{u}_{1}^{-1}\) where \(label(S)=label(R_{1}^{-1})\). We take \(\bar{Y}\) as a side and \(\bar{u}_{2}S\bar{u}_{1}^{-1}\) as a bridge of \(\Delta\) with bridge partition \(\bar{u}_{2}\cdot S\cdot\bar{u}_{1}^{-1}\). Then we apply the reduction process making \(\Delta\) reduced. After reduction, we get either \(label(S)=label(R_{1}^{-1})\) or \(label(S)=label(R_{2})\). By Lemma 7.5, \(\Delta\) has no cells of rank \(\alpha\). Depending on the case, this implies that either \(Yu_{2}R_{1}^{-1}u_{1}^{-1}\) or \(Yu_{2}R_{2}u_{1}^{-1}\) lifts to \(\Gamma_{\alpha-1}\). **9.23 Proposition** (compatibility lifting).: _Let \(1\leq\beta\leq\alpha\). Let \(\mathsf{K}\) and \(\mathsf{M}\) be fragments of rank \(\beta\) which occur in a reduced path \(\mathsf{X}\) in \(\Gamma_{\alpha}\). Let \(\hat{\mathsf{X}}\) be a lift of \(\mathsf{X}\) in \(\Gamma_{\beta-1}\) and \(\hat{\mathsf{K}}\) and \(\hat{\mathsf{M}}\) be the subpaths of \(\hat{\mathsf{X}}\) which are projected onto \(\mathsf{K}\) and \(\mathsf{M}\) respectively. Then \(\mathsf{K}\sim\mathsf{M}\) implies \(\hat{\mathsf{K}}\sim\hat{\mathsf{M}}\) and \(\mathsf{K}\sim\mathsf{M}^{-1}\) implies \(\hat{\mathsf{K}}\sim\hat{\mathsf{M}}^{-1}\)._ Proof.: Assume that \(\mathsf{K}\sim\mathsf{M}^{\varepsilon}\) where \(\varepsilon=\pm 1\). Let \(\mathsf{R}\) be the common base loop for \(\mathsf{K}\) and \(\mathsf{M}^{\varepsilon}\).
Lemma 9.22 implies that \(\mathsf{R}\) can be lifted to a line \(\hat{\mathsf{R}}\) which is the common base axis for both \(\hat{\mathsf{K}}\) and \(\hat{\mathsf{M}}^{\varepsilon}\). This implies \(\hat{\mathsf{K}}\sim\hat{\mathsf{M}}^{\varepsilon}\). **9.24 Corollary**.: _Let \(1\leq\beta\leq\alpha\). Then the statements of Proposition 8.13, Corollary 8.14 and Proposition 8.15 hold for fragments of rank \(\beta\) in a reduced path \(\mathsf{X}\) in \(\Gamma_{\alpha}\)._ _More precisely, let \(\mathsf{X}\) be a reduced path in \(\Gamma_{\alpha}\). Then the following is true._ * _Let_ \(\mathsf{K}_{i}\) \((i=1,2)\) _be fragments of rank_ \(\beta\) _in_ \(\mathsf{X}\)_,_ \(\mathsf{K}_{1}\sim\mathsf{K}_{2}\) _and_ \(\mu_{\mathsf{f}}(\mathsf{K}_{i})\geq 5.7\omega\) _for_ \(i=1\) _or_ \(i=2\)_. Then_ \(\mathsf{K}_{1}\cup\mathsf{K}_{2}\) _is a fragment of rank_ \(\beta\) _with_ \(\mathsf{K}_{1}\cup\mathsf{K}_{2}\sim\mathsf{K}_{1}\)_. If_ \(\mathsf{K}_{1}\) _and_ \(\mathsf{K}_{2}\) _are disjoint then_ \(\mu_{\mathsf{f}}(\mathsf{K}_{1}\cup\mathsf{K}_{2})\geq\mu_{\mathsf{f}}(\mathsf{K}_{1})+\mu_{\mathsf{f}}(\mathsf{K}_{2})-5.7\omega\)_._ * _Let_ \(\mathsf{K}_{i},\mathsf{M}_{i}\) _(_\(i=1,2\)_) be fragments of rank_ \(\beta\) _in_ \(\mathsf{X}\) _with_ \(\mu_{\mathsf{f}}(\mathsf{K}_{i}),\mu_{\mathsf{f}}(\mathsf{M}_{i})\geq\gamma+2.6\omega\)_. Assume that_ \(\mathsf{K}_{1}\sim\mathsf{K}_{2}\)_,_ \(\mathsf{M}_{1}\sim\mathsf{M}_{2}\) _and_ \(\mathsf{K}_{1}\not\sim\mathsf{M}_{1}\)_. Then_ \(\mathsf{K}_{1}<\mathsf{M}_{1}\) _if and only if_ \(\mathsf{K}_{2}<\mathsf{M}_{2}\)_._ * _If_ \(\mathsf{K}\) _and_ \(\mathsf{M}\) _are fragments of rank_ \(\beta\) _in_ \(\mathsf{X}\) _and_ \(\mu_{\mathsf{f}}(\mathsf{K}),\mu_{\mathsf{f}}(\mathsf{M})\geq 5.7\omega\) _then_ \(\mathsf{K}\not\sim\mathsf{M}^{-1}\)_._ ## 10. Stability Let \(F_{A}\) be a free group with basis \(A\) and let \(X^{-1}Y_{1}Y_{2}\ldots Y_{k}=1\) be a relation in \(F_{A}\) where \(X\), \(Y_{1}\),..., \(Y_{k}\) are freely reduced words in the generators \(A\). Then for any occurrence of a letter \(a^{\varepsilon}\in A^{\pm 1}\) in \(X\) there is a unique occurrence of the same letter \(a^{\varepsilon}\) in some \(Y_{i}\) which cancels with \(a^{-\varepsilon}\) in \(X^{-1}Y_{1}Y_{2}\ldots Y_{k}\) (a concrete instance is recorded at the end of this introduction). The main goal of this section is to establish an analog of this statement for relations in \(G_{\alpha}\). The role of the letters \(a^{\varepsilon}\) will be played by fragments of rank \(\alpha\), and instead of the relation \(X^{-1}Y_{1}Y_{2}\ldots Y_{k}=1\) we consider coarse polygons \(\mathsf{X}^{-1}*\mathsf{Y}_{1}*\ldots\mathsf{Y}_{k}*\) in \(\Gamma_{\alpha}\) (for our considerations, it is enough to consider the cases \(k=1,2,3\)). The role of the correspondence of canceled letters will be played by the equivalence relation '\(\mathsf{K}\sim\mathsf{L}^{\pm 1}\)'. There are two essential differences between the case of the groups \(G_{\alpha}\) and the case of a free group \(F_{A}\). One is a "fading effect": a fragment in \(\mathsf{Y}_{i}\) can be of a "smaller size" than an initial fragment in \(\mathsf{X}\). Another difference is that bridges of the coarse polygon can produce exceptions for stability (to describe such situations we introduce a special relation between fragments and bridges of the same rank \(\beta\), see Definition 10.4). We start with a statement which shows how closeness is propagated in coarse tetragons in \(\Gamma_{\alpha-1}\). This is essentially a consequence of the inductive hypotheses.
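To fix ideas, here is a concrete instance of the free-group prototype above (the basis and the words are chosen purely for illustration). Take \(A=\{a,b,c\}\), \(k=2\), \(X=ab\), \(Y_{1}=ac\) and \(Y_{2}=c^{-1}b\). Then

\[X^{-1}Y_{1}Y_{2}=b^{-1}a^{-1}\cdot ac\cdot c^{-1}b=1,\]

and each letter of \(X\) has a unique canceling partner: the letter \(a\) of \(X\) cancels against the occurrence of \(a\) in \(Y_{1}\), the letter \(b\) of \(X\) cancels against the occurrence of \(b\) in \(Y_{2}\), while the remaining cancellation \(cc^{-1}\) takes place between \(Y_{1}\) and \(Y_{2}\).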
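The cancellation pairing in the free-group prototype can also be exhibited mechanically. The following short script is an illustrative sketch only (the function and variable names are ours, not the paper's): it freely reduces \(X^{-1}Y_{1}\ldots Y_{k}\) with a stack and reports, for every letter of \(X\), the side \(Y_{i}\) that absorbs it.

```python
# Illustrative sketch: exhibit the cancellation pairing in X^{-1} Y_1 ... Y_k = 1
# in a free group. A letter is a pair (symbol, exponent); a word is a list of
# letters. Each letter is tagged with its source; positions in the 'X' tags
# refer to positions inside X^{-1}.

def inverse(word):
    """Formal inverse of a word: reverse the order, negate the exponents."""
    return [(s, -e) for (s, e) in reversed(word)]

def cancellation_partners(X, Ys):
    """Stack-based free reduction of X^{-1} Y_1 ... Y_k, returning the
    matching of canceled letters as pairs of (source, position) tags."""
    tagged = [(l, ('X', i)) for i, l in enumerate(inverse(X))]
    for j, Y in enumerate(Ys, start=1):
        tagged += [(l, ('Y%d' % j, i)) for i, l in enumerate(Y)]
    stack, pairs = [], []
    for letter, tag in tagged:
        if stack and stack[-1][0] == (letter[0], -letter[1]):
            pairs.append((stack.pop()[1], tag))   # a cancellation a^e a^{-e}
        else:
            stack.append((letter, tag))
    assert not stack, "X^{-1} Y_1 ... Y_k must freely reduce to 1"
    return pairs

# The toy instance above: X = ab, Y1 = ac, Y2 = c^{-1}b.
X = [('a', 1), ('b', 1)]
Y1 = [('a', 1), ('c', 1)]
Y2 = [('c', -1), ('b', 1)]
for p, q in cancellation_partners(X, [Y1, Y2]):
    print(p, '<->', q)
```

Running it prints the pairing ('X', 1) <-> ('Y1', 0), ('Y1', 1) <-> ('Y2', 0) and ('X', 0) <-> ('Y2', 1): each letter of \(X\) is absorbed by exactly one \(Y_{i}\), which is the phenomenon whose graded analog (with fragments of rank \(\alpha\) in place of letters) occupies the rest of this section.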
**10.1 Definition** (uniformly close).: For \(\alpha\geq 1\), we say that vertices \(\mathsf{a}_{1}\), \(\mathsf{a}_{2}\),..., \(\mathsf{a}_{r}\) of \(\Gamma_{\alpha}\) are _uniformly close_ if at least one of the following is true: * they are pairwise close in rank \(\alpha-1\); or * there exists a relator loop \(\mathsf{R}\) of rank \(\alpha\) such that each \(\mathsf{a}_{i}\) is close in rank \(\alpha-1\) to a vertex on \(\mathsf{R}\). We cover also the case \(\alpha=0\): vertices \(\mathsf{a}_{1}\), \(\mathsf{a}_{2}\),..., \(\mathsf{a}_{r}\) of \(\Gamma_{0}\) are said to be uniformly close if \(\mathsf{a}_{1}=\mathsf{a}_{2}=\cdots=\mathsf{a}_{r}\). Note that uniformly close vertices are pairwise close. If \(r=2\) then being uniformly close and being close is equivalent. **10.2 Lemma**.: _Let \(\alpha\geq 1\), \(\mathsf{X}\) and \(\mathsf{Y}\) be close reduced paths in \(\Gamma_{\alpha-1}\), and let \(\mathsf{S}^{-1}\ast\mathsf{T}_{1}\ast\mathsf{T}_{2}\ast\mathsf{T}_{3}\ast\) be a coarse tetragon in \(\Gamma_{\alpha-1}\) such that \(\mathsf{Y}\) is a subpath of \(\mathsf{S}\). Assume that \(|\mathsf{X}|_{\alpha-1}\geq 5.2\). Then \(\mathsf{X}\) can be represented as \(\mathsf{z}_{0}\mathsf{X}_{1}\mathsf{z}_{1}\ldots\mathsf{X}_{r}\mathsf{z}_{r}\) \((1\leq r\leq 3)\) where \(\mathsf{X}_{i}\) is close to a subpath \(\mathsf{W}_{i}\) of some \(\mathsf{T}_{j_{i}}\), \(j_{1}<\cdots<j_{r}\) and_ (10-1) \[\sum_{i}|\mathsf{X}_{i}|_{\alpha-1}>|\mathsf{X}|_{\alpha-1}-5.8.\] _Moreover:_ * _if_ \(r=3\) _then we have a stronger bound_ \[\sum_{i}|\mathsf{X}_{i}|_{\alpha-1}>|\mathsf{X}|_{\alpha-1}-3.4.\] * _There is a subpath_ \(\mathsf{Y}_{1}\) _of_ \(\mathsf{Y}\) _such that the starting vertices_ \(\iota(\mathsf{X}_{1})\)_,_ \(\iota(\mathsf{Y}_{1})\) _and_ \(\iota(\mathsf{W}_{1})\) _are uniformly close and the same is true for the ending vertices_ \(\tau(\mathsf{X}_{r})\)_,_ \(\tau(\mathsf{Y}_{1})\) _and_ \(\tau(\mathsf{W}_{r})\)_._ Proof.: If \(\alpha=1\) the statement is obvious (see Remark 10.3 below). Let \(\alpha>1\). Let \(\mathsf{Z}\) be a reduced path joining \(\iota(\mathsf{S})\) and \(\tau(\mathsf{T}_{2})\) which exists by Proposition 11.1\({}_{\alpha-1}\) (see Figure 22). We apply Proposition 10.18\({}_{\alpha-1}\) first to the coarse trigon \(\mathsf{S}^{-1}\ast\mathsf{Z}\ast\mathsf{T}_{3}\ast\) and then, possibly, to the coarse trigon \(\mathsf{Z}^{-1}\ast\mathsf{T}_{1}\ast\mathsf{T}_{2}\ast\). Since \(|\mathsf{X}|_{\alpha-1}\geq 5.2\), after the first application of Proposition 10.18\({}_{\alpha-1}\), we find either a subpath \(\mathsf{X}_{3}\) of \(\mathsf{X}\) that is close to a subpath of \(\mathsf{T}_{3}\) or a subpath \(\mathsf{X}^{\prime}\) of \(\mathsf{X}\) that is close to a subpath of \(\mathsf{Z}\) with \(|\mathsf{X}^{\prime}|_{\alpha-1}>|\mathsf{X}|_{\alpha-1}-2.75>2.45\). In the latter case, the second application of \(10.18_{\alpha-1}\) gives the remaining \(\mathsf{X}_{1}\) and/or \(\mathsf{X}_{2}\). If \(r<3\) then for the bound (10-1) the worst cases are those where we get two \(\mathsf{X}_{i}\)'s after a double application of \(10.18_{\alpha-1}\); in those cases one of the two applications falls under case (iii) of \(10.18_{\alpha-1}\) and the other under case (i) or (ii). Hence \(\sum_{i}|\mathsf{X}_{i}|_{\alpha-1}>|\mathsf{X}|_{\alpha-1}-3-2.75\). Statement (ii) follows from the appropriate part of Proposition 10.18\({}_{\alpha-1}\).
Assume that \(r=3\) and therefore \(\mathsf{X}=\mathsf{z}_{0}\mathsf{X}_{1}\mathsf{z}_{1}\mathsf{X}_{2}\mathsf{z}_{2}\mathsf{X}_{3}\mathsf{z}_{3}\) where each \(\mathsf{X}_{i}\) is close to a subpath of \(\mathsf{T}_{i}\). From the application of Proposition 10.18\({}_{\alpha-1}\) we have \(|\mathsf{z}_{0}|_{\alpha-1},|\mathsf{z}_{3}|_{\alpha-1}<1.3\). Then using Proposition 9.19(i)\({}_{\alpha-1}\) we extend all \(\mathsf{X}_{i}\) to get \(|\mathsf{z}_{1}|_{\alpha-1},|\mathsf{z}_{2}|_{\alpha-1}\leq 4\zeta\eta<0.4\). This proves (i). **10.3 Remark**.: If \(\alpha=1\) then the hypotheses of Lemma 10.2 say that \(\mathsf{X}=\mathsf{Y}\) and \(\mathsf{S}^{-1}\mathsf{T}_{1}\mathsf{T}_{2}\mathsf{T}_{3}\) is a loop in the Cayley graph \(\Gamma_{0}\) of the free group \(G_{0}\). Then the statement of the lemma holds without the assumption \(|\mathsf{X}|_{\alpha-1}\geq 5.2\). Furthermore, in the conclusion we have \(\sum_{i}|\mathsf{X}_{i}|_{\alpha-1}=|\mathsf{X}|_{\alpha-1}\). **10.4 Definition** (independence).: Let \(1\leq\beta\leq\alpha\), \(\mathsf{K}\) be a fragment of rank \(\beta\) in \(\Gamma_{\alpha}\) and \(\mathsf{u}\) be a bridge of rank \(\beta\) in \(\Gamma_{\alpha}\). Recall that \(\mathsf{K}\) is considered with the associated base loop \(\mathsf{R}\) of rank \(\beta\). We say that \(\mathsf{K}\) is _independent of \(\mathsf{u}\)_ if either \(\mathit{label}(\mathsf{u})\in\mathcal{H}_{\beta-1}\) or \(\mathsf{u}\) possesses a bridge partition \(\mathsf{u}=\mathsf{v}\cdot\mathsf{S}\cdot\mathsf{w}\) of rank \(\beta\) where \(\mathsf{S}\) occurs in a relator loop \(\mathsf{L}\) of rank \(\beta\) such that \(\mathsf{L}\neq\mathsf{R}^{\pm 1}\). It follows from the definition that if \(\mathsf{K}\) is independent of \(\mathsf{u}\) and \(\mathsf{M}\sim\mathsf{K}^{\pm 1}\) then \(\mathsf{M}\) is also independent of \(\mathsf{u}\). **10.5 Proposition** (non-active fragment in bigon).: _Let \(\alpha\geq 1\), \(\mathsf{X}^{-1}\mathsf{u}\mathsf{Y}\mathsf{v}\) be a coarse bigon in \(\Gamma_{\alpha}\) and let \(\mathsf{X}=\mathsf{F}_{0}\mathsf{K}_{1}\mathsf{F}_{1}\ldots\mathsf{K}_{r}\mathsf{F}_{r}\) where \(\mathsf{K}_{i}\) are the associated active fragments of rank \(\alpha\). Let \(\mathsf{K}\) be a fragment of rank \(\alpha\) in \(\mathsf{X}\) with \(\mu_{\mathsf{f}}(\mathsf{K})\geq 2\lambda+5.8\omega\). Assume that \(\mathsf{K}\not\sim\mathsf{K}_{i}\) for all \(i\) and that \(\mathsf{K}\) is independent of \(\mathsf{u}\) and \(\mathsf{v}\). Then there exists a fragment \(\mathsf{M}\) of rank \(\alpha\) in \(\mathsf{Y}\) such that \(\mathsf{M}\sim\mathsf{K}\) and_ \[\mu_{\mathsf{f}}(\mathsf{M})\geq\mu_{\mathsf{f}}(\mathsf{K})-2\lambda-3.4\omega.\] Proof.: By Proposition 8.10, \(\mathsf{K}\) is a subpath of one of the paths \(\mathsf{F}_{0}\mathsf{K}_{1}\), \(\mathsf{K}_{1}\mathsf{F}_{1}\mathsf{K}_{2}\),..., \(\mathsf{K}_{r}\mathsf{F}_{r}\). We consider the case when \(\mathsf{K}\) is a subpath of some \(\mathsf{K}_{i}\mathsf{F}_{i}\mathsf{K}_{i+1}\) (the cases when \(\mathsf{K}\) is a subpath of \(\mathsf{F}_{0}\mathsf{K}_{1}\) or \(\mathsf{K}_{r}\mathsf{F}_{r}\) are similar; see also the remark at the end of the proof). Let \(\mathsf{Y}=\mathsf{H}_{0}\mathsf{M}_{1}\mathsf{H}_{1}\ldots\mathsf{M}_{r}\mathsf{H}_{r}\) where \(\mathsf{M}_{i}\) are the corresponding active fragments of rank \(\alpha\) in \(\mathsf{Y}\).
As we can see from 9.4, there is a loop \(\mathsf{T}=(\mathsf{K}_{i}\mathsf{F}_{i}\mathsf{K}_{i+1})^{-1}\mathsf{w}_{1}\mathsf{S}_{1}\mathsf{w}_{2}\mathsf{H}_{i}\mathsf{w}_{3}\mathsf{S}_{2}\mathsf{w}_{4}\) which can be lifted to \(\Gamma_{\alpha-1}\) and where \(\mathsf{w}_{j}\) are bridges of rank \(\alpha-1\) and \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) occur in base loops for \(\mathsf{K}_{i}\) and \(\mathsf{K}_{i+1}\) respectively (see Figure 23). Abusing notation we assume that \(\mathsf{T}\) is already in \(\Gamma_{\alpha-1}\). Then, instead of base loops, \(S_{1}\) and \(S_{2}\) occur in base axes \(L_{1}\) and \(L_{2}\) for \(K_{i}\) and \(K_{i+1}\) respectively. Let \(L\) be the base axis for \(K\) and \(S\) the base for \(K\) (which is contained in \(L\) by definition). The assumptions \(K\not\sim K_{i}\) and \(K\not\sim K_{i+1}\) imply \(L\neq L_{i}\) (\(i=1,2\)). By Corollary 8.2, if a subpath \(P\) of \(S\) is close to a subpath of \(S_{i}\) then \(\mu(P)<\lambda\). Then by Lemma 10.2 we find a subpath \(Q\) of \(S\) which is close to a subpath \(M\) of \(H_{i}\) and satisfies \[\mu(Q)>\mu(S)-2\lambda-3.4\omega.\] Then \(M\) is a fragment of rank \(\alpha\) with base \(Q\). Clearly, \(M\) satisfies the conclusion of the proposition. If \(K\) is a subpath of \(F_{0}K_{1}\) or \(K_{r}F_{r}\), a similar argument applies. For example, assume that \(K\) is a subpath of \(F_{0}K_{1}\). As above, we assume that all paths are in \(\Gamma_{\alpha-1}\) not changing their notations. Let \(L\) be a base axis for \(K\). By hypothesis, either \(\mathit{label}(u)\in\mathcal{H}_{\alpha-1}\) or \(u=u_{1}Vu_{2}\) where \(V\) occurs in a line \(L_{1}\) labeled by the infinite power \(R^{\infty}\) of a relator \(R\) of rank \(\alpha\) and \(L_{1}\) is distinct from \(L\). In the case \(\mathit{label}(u)\in\mathcal{H}_{\alpha-1}\) we apply Proposition 10.18\({}_{\alpha-1}\). Otherwise the argument is the same as in the case when \(K\) is a subpath of \(K_{i}F_{i}K_{i+1}\). The case when \(K\) is a subpath of \(K_{r}F_{r}\) is similar. Finally, there is a "degenerate" case when \(\mathrm{Area}_{\alpha}(X^{-1}uYv)=0\) and both \(u\) and \(v\) are bridges of rank \(\alpha-1\). In this case, the statement follows directly from Proposition 8.7. **10.6 Proposition** (fragment stability in bigon).: _Let \(\alpha\geq 1\), \(X^{-1}uYv\) be a coarse bigon in \(\Gamma_{\alpha}\) and let \(K\) be a fragment of rank \(\alpha\) in \(X\) with \(\mu_{\mathrm{f}}(K)\geq 2\lambda+5.8\omega\). Assume that \(K\) is independent of \(u\) and \(v\). Then there exists a fragment \(M\) of rank \(\alpha\) in \(Y\) such that \(M\sim K^{\pm 1}\) and_ \[\mu_{\mathrm{f}}(M)\geq\min\{\mu_{\mathrm{f}}(K)-2\lambda-3.4\omega,\ \xi_{0}\}\] Proof.: Let \(X=F_{0}K_{1}F_{1}\dots K_{r}F_{r}\) and \(Y=H_{0}M_{1}H_{1}\dots M_{r}H_{r}\) where \(K_{i}\) and \(M_{i}\) are the associated active fragments of rank \(\alpha\). If \(K\sim K_{i}\) for some \(i\) then we can take \(M=M_{i}\) due to Proposition 9.7. Otherwise we apply Proposition 10.5. **10.7 Proposition** (fragment stability in trigon).: _Let \(\alpha\geq 1\), \(X^{-1}u_{1}Y_{1}u_{2}Y_{2}u_{3}\) be a coarse trigon in \(\Gamma_{\alpha}\) and let \(K\) be a fragment of rank \(\alpha\) in \(X\) with \(\mu_{\mathrm{f}}(K)\geq 3\lambda+10\omega\). Assume that \(K\) is independent of each of the \(u_{i}\).
Then there is a fragment \(M\) of rank \(\alpha\) in \(Y_{1}\) or \(Y_{2}\) such that \(M\sim K^{\pm 1}\) and_ \[\mu_{\mathrm{f}}(M)>\min\left\{3\lambda-1.1\omega,\ \frac{1}{2}(\mu_{\mathrm{f}}(K)-3\lambda-6.8\omega)\right\}.\] Proof.: The idea of the proof is the same as in the proof of Proposition 10.5. To avoid complicated notations, we proceed by induction on the \(\alpha\)-area of \(P=X^{-1}u_{1}Y_{1}u_{2}Y_{2}u_{3}\) as described in 9.6. Assume that \(R\) is an active relator loop of rank \(\alpha\) of \(P\). As observed in 9.6, there are two or three fragments \(N_{i}\) (\(i=1,2\) or \(i=1,2,3\)) of rank \(\alpha\) with base loop \(R\) that occur in distinct paths \(X^{-1}\), \(Y_{1}\) or \(Y_{2}\). By Proposition 9.15 we can assume that \(\mu_{\mathrm{f}}(N_{i})\geq 3\lambda-1.1\omega\) for \(i=1,2\). If \(K\sim N_{1}^{\pm 1}\) then for the required \(M\) we take that \(N_{i}\) which occurs in \(Y_{1}\) or \(Y_{2}\). Let \(K\not\sim N_{1}^{\pm 1}\). If \(N_{1}\) and \(N_{2}\) occur in \(Y_{1}\) and \(Y_{2}\) then we can replace \(P\) by a coarse trigon with smaller \(\alpha\)-area and use induction (see Figure 24a). (In this case \(u_{2}\) is replaced by a new bridge \(u_{2}^{\prime}\) and the assumption \(K\not\sim N_{1}^{\pm 1}\) implies that \(K\) is independent of \(u_{2}^{\prime}\).) Otherwise, assume that \(N_{1}\) occurs in \(X^{-1}\) and \(N_{2}\) occurs in \(Y_{1}\) (the case when \(N_{2}\) occurs in \(Y_{2}\) is symmetric). Since \(K\not\sim N_{1}^{-1}\) we have either \(K<N_{1}^{-1}\) or \(K>N_{1}^{-1}\). In the first case, we reduce the statement to the case of a coarse bigon as in Figure 24b and apply Proposition 10.5. In the second case, the statement follows by the inductive hypothesis. It remains to consider the case \(\mathrm{Area}_{\alpha}(P)=0\). Then the loop \(P\) can be lifted to \(\Gamma_{\alpha-1}\) and we assume that \(P\) is already in \(\Gamma_{\alpha-1}\). Let \(L\) be the base axis for \(K\) and \(S\) the base for \(K\). Since \(K\) is independent of \(u_{i}\) (when viewed in \(\Gamma_{\alpha}\)), we have either \(\mathit{label}(u_{i})\in\mathcal{H}_{\alpha-1}\) or \(u_{i}=v_{i}Q_{i}w_{i}\) where \(\mathit{label}(v_{i}),\mathit{label}(w_{i})\in\mathcal{H}_{\alpha-1}\) and \(Q_{i}\) occurs in a line \(L_{i}\) labeled by the infinite power \(R_{i}^{\infty}\) of a relator \(R_{i}\) of rank \(\alpha\) such that \(L_{i}\neq L\). We obtain a coarse \(r\)-gon with sides \(X^{-1}\), \(Y_{1}\), \(Y_{2}\) and \(Q_{i}\) where \(3\leq r\leq 6\) (see Figure 25). We consider the "worst" case \(r=6\) (the other cases are similar, with application of Propositions 10.18\({}_{\alpha-1}\) or 8.7\({}_{\alpha-1}\) where needed). Let \(Z\) be a reduced path joining \(\tau(u_{1})\) and \(\iota(u_{3})\), which exists by Proposition 11.1\({}_{\alpha-1}\). By Corollary 8.2, if a subpath \(P\) of \(S\) is close to a subpath of \(Q_{i}\) then \(\mu(P)<\lambda\). Then the statement easily follows by applying Lemma 10.2 twice to the coarse tetragons \(X^{-1}v_{1}Q_{1}w_{1}Zv_{3}Q_{3}w_{3}\) and \(Z^{-1}Y_{1}v_{2}Q_{2}w_{2}Y_{2}\). **10.8 Lemma**.: _Let \(\alpha\geq 1\), \(X\) be a piece of rank \(1\leq\beta<\alpha\) or a fragment of rank \(\beta<\alpha\).
Then \(X\) contains no fragment \(K\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K)\geq 3.2\omega\)._ _In particular, any fragment \(K\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K)\geq 3.2\omega\) is a nonempty word (since otherwise it would occur in a fragment of rank 0)._ Proof.: We consider the case when \(X\) is a fragment of rank \(\beta<\alpha\). We represent \(X\) by a path \(X\) in \(\Gamma_{\alpha-1}\). Assume that \(X\) contains a fragment \(K\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K)\geq 3.2\omega\). Let \(S\) be a base for \(K\) with \(|S|_{\alpha-1}\geq 3.2\). By Lemma 10.8\({}_{\leq\alpha-1}\) and Corollary 9.13 we have \(S=w_{1}S_{1}w_{2}\) and \(K=z_{1}K_{1}z_{2}\) where \(S_{1}\) and \(K_{1}\) are close in rank \(\max(0,\beta-1)\) and \(|S_{1}|_{\alpha-1}>|S|_{\alpha-1}-2-10\zeta^{2}\eta>1.15\). If \(\beta=0\) we already get a contradiction since in this case \(|K_{1}|\leq 1\) but \(|S_{1}|\geq|S_{1}|_{\alpha-1}>1\). Let \(\beta\geq 1\). Up to change of notation, we assume that \(X\), \(K_{1}\) and \(S_{1}\) are lifted to \(\Gamma_{\beta-1}\). Let \(T\) be a base for \(X\). By Proposition 10.16\({}_{\beta-1}\) a subpath \(T_{1}\) of \(T\) is close to a subpath \(S_{2}\) of \(S\) with \(|S_{2}|_{\alpha-1}>|S_{1}|_{\alpha-1}-2.6\zeta>1\). Then \(S_{2}\) is a fragment of rank \(\beta\) with base \(T_{1}\) and we should have \(|S_{2}|_{\alpha-1}\leq 1\), a contradiction. In the case when \(X\) is a piece of rank \(\beta\) a similar argument works, skipping the application of Proposition 10.16\({}_{\beta-1}\). **10.9 Lemma**.: _Let \(\alpha\geq 1\) and \(X\) be a word cyclically reduced in \(G_{\alpha-1}\). Assume that a cyclic shift of \(X\) contains a fragment \(K\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K)\geq 6.5\omega\). Then \(X\) is strongly cyclically reduced in \(G_{\alpha-1}\)._ Proof.: Let \(F\) be a fragment of rank \(1\leq\beta\leq\alpha-1\) in a word \(X^{t}\). Assume that \(|F|>|X|\). Using Proposition 8.11 we represent \(K\) as \(K=K_{1}uK_{2}\) where \(\mu_{\mathrm{f}}(K_{1}),\mu_{\mathrm{f}}(K_{2})>3.2\omega\). Since \(|K|\leq|X|\), \(F\) should contain a translate of \(K_{1}\) or \(K_{2}\). But this is impossible by Lemma 10.8. Hence \(|F|\leq|X|\) and then \(\mu_{\mathrm{f}}(F)\leq\rho\) since \(X\) is cyclically reduced in \(G_{\alpha-1}\). This shows that any power \(X^{t}\) is reduced in \(G_{\alpha-1}\), i.e. \(X\) is strongly cyclically reduced in \(G_{\alpha-1}\). **10.10 Proposition** (fragment stability in conjugacy relations with cyclic sides).: _Let \(\alpha\geq 1\) and \(X\) and \(Y\) be words which are cyclically reduced in \(G_{\alpha}\) and represent conjugate elements of \(G_{\alpha}\). Let \(\bar{X}=\prod_{i\in\mathbb{Z}}X_{i}\) and \(\bar{Y}=\prod_{i\in\mathbb{Z}}Y_{i}\) be parallel lines in \(\Gamma_{\alpha}\) representing the conjugacy relation. Let \(K\) be a fragment of rank \(\alpha\) in \(\bar{X}\) with \(\mu_{\mathrm{f}}(K)\geq 2\lambda+5.8\omega\) and \(|K|\leq|X|\). Then there is a fragment \(M\) of rank \(\alpha\) in \(\bar{Y}\) such that \(M\sim K^{\pm 1}\) and_ \[\mu_{\mathrm{f}}(M)\geq\min\{\mu_{\mathrm{f}}(K)-2\lambda-3.4\omega,\ \xi_{0}\}\] Proof.: By Lemma 10.9, \(X\) is strongly cyclically reduced in \(G_{\alpha-1}\). We claim that a cyclic shift of \(Y\) also contains a fragment \(F\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(F)\geq 6.5\omega\) and thus \(Y\) is strongly cyclically reduced in \(G_{\alpha-1}\) as well.
Indeed, by Proposition 9.17 with \(\beta:=\alpha-1\) we may assume that for some cyclic shifts \(X^{\prime}\) and \(Y^{\prime}\) of \(X\) and \(Y\) we have \(Y^{\prime}=w^{-1}X^{\prime}w\) in \(G_{\alpha-1}\) where \(w\in\mathcal{H}_{\alpha-1}\). Then the existence of \(F\) easily follows by Propositions 8.11 and 8.7. Consider a reduced annular diagram \(\Delta\) of rank \(\alpha\) with boundary loops \(\hat{X}\) and \(\hat{Y}^{-1}\) representing the conjugacy relation given in the proposition. Let \(\tilde{\Delta}\) be the universal cover of \(\Delta\) and let \(\phi:\tilde{\Delta}^{(1)}\to\Gamma_{\alpha}\) be a combinatorially continuous map which sends lifts of \(\hat{X}\) and \(\hat{Y}\) to \(\bar{X}\) and \(\bar{Y}\) respectively. Assume that \(\Delta\) has a cell of rank \(\alpha\). Let \(D\) be some lift of this cell in \(\tilde{\Delta}\). By Proposition 7.13(i), \(\phi(\delta D)\) and \(\phi(\delta D)^{-1}\) are base loops for fragments \(N_{i}\) (\(i=1,2\)) of rank \(\alpha\) in \(\bar{X}\) and \(\bar{Y}\) respectively, such that \(\mu_{\mathrm{f}}(N_{1})+\mu_{\mathrm{f}}(N_{2})\geq 1-2\lambda-1.5\omega\). Since \(X\) and \(Y\) are cyclically reduced in \(G_{\alpha}\) we have \(\mu_{\mathrm{f}}(N_{i})\leq\rho\) and hence \(\mu_{\mathrm{f}}(N_{i})\geq 1-\rho-2\lambda-1.5\omega=\xi_{0}\). By construction, we have \(N_{1}\sim N_{2}^{-1}\). Since \(\bar{X}\) and \(\bar{Y}\) are parallel, we have \(s_{X,\bar{X}}^{k}N_{1}\sim s_{Y,\bar{Y}}^{k}N_{2}^{-1}\) for any \(k\in\mathbb{Z}\). If \(K\sim s_{X,\bar{X}}^{k}N_{1}\) for some \(k\) then we can take \(s_{Y,\bar{Y}}^{k}N_{2}\) for \(M\). Otherwise we have \(s_{X,\bar{X}}^{k}N_{1}<K<s_{X,\bar{X}}^{k+1}N_{1}\) for some \(k\) and the rest of the argument is the same as in the proof of Proposition 10.5. Now assume that \(\Delta\) has no cells of rank \(\alpha\). We can assume that \(\Delta\) is a reduced diagram of rank \(\beta\) for some \(\beta\leq\alpha-1\) and in case \(\beta\geq 1\), \(\Delta\) has at least one cell of rank \(\beta\). If \(\beta=0\) then \(\bar{X}=\bar{Y}\) and there is nothing to prove. Let \(\beta\geq 1\). Up to change of notations, we assume that \(K\), \(\bar{X}\) and \(\bar{Y}\) are lifted to \(\Gamma_{\alpha-1}\). Proposition 7.13(i)\({}_{\beta}\) implies that some vertices \(\mathsf{a}\) on \(\bar{\mathsf{X}}\) and \(\mathsf{b}\) on \(\bar{\mathsf{Y}}\) are joined by a bridge of rank \(\beta\). This is true also for any translates \(s^{i}_{X,\bar{X}}\mathsf{a}\) and \(s^{i}_{Y,\bar{Y}}\mathsf{b}\). Then the statement follows by Proposition 8.7 (here we use that \(X\) and \(Y\) are strongly cyclically reduced in \(G_{\alpha-1}\)). **10.11 Lemma**.: _Let \(\alpha\geq 1\) and \(S\) be a word cyclically reduced in \(G_{\alpha-1}\). Assume that \(S\) is conjugate in \(G_{\alpha-1}\) to a word \(T_{1}v_{1}T_{2}v_{2}\) where \(T_{i}\) are reduced in \(G_{\alpha-1}\) and \(v_{i}\) are bridges of rank \(\alpha\). Let \(\bar{\mathsf{S}}=\prod_{i\in\mathbb{Z}}\mathsf{S}_{i}\) and \(\prod_{i\in\mathbb{Z}}\mathsf{T}_{1}^{(i)}\mathsf{v}_{1}^{(i)}\mathsf{T}_{2}^{(i)}\mathsf{v}_{2}^{(i)}\) be parallel lines in \(\Gamma_{\alpha-1}\) representing the conjugacy relation. Denote \(\mathsf{U}_{2i}=\mathsf{T}_{1}^{(i)}\) and \(\mathsf{U}_{2i+1}=\mathsf{T}_{2}^{(i)}\)._ _Assume that a reduced path \(\mathsf{X}\) in \(\Gamma_{\alpha-1}\) is close to a subpath \(\mathsf{Y}\) of \(\bar{\mathsf{S}}\) with \(|\mathsf{Y}|\leq|S|\). Let \(|\mathsf{X}|_{\alpha-1}\geq 8\).
Then \(\mathsf{X}\) can be represented as \(\mathsf{z}_{0}\mathsf{X}_{1}\mathsf{z}_{1}\ldots\mathsf{X}_{r}\mathsf{z}_{r}\) \((1\leq r\leq 4)\) where each \(\mathsf{X}_{i}\) is close to a subpath of some \(\mathsf{U}_{j_{i}}\), \(j_{1}<\cdots<j_{r}\), \(j_{r}-j_{1}\leq 3\) and_ \[\sum_{i}|\mathsf{X}_{i}|_{\alpha-1}\geq|\mathsf{X}|_{\alpha-1}-9.\] Proof.: Let \(Z\) be a word reduced in \(G_{\alpha-1}\) such that \(Z=T_{1}v_{1}T_{2}\) in \(G_{\alpha-1}\). We join \(\iota(\mathsf{T}_{1}^{(i)})\) and \(\tau(\mathsf{T}_{2}^{(i)})\) with the path \(\mathsf{Z}_{i}\) labeled \(Z\). Since \(|\mathsf{X}|_{\alpha-1}\geq 8\), application of Proposition 10.19\({}_{\alpha-1}\) gives \(\mathsf{X}=\mathsf{w}_{1}\mathsf{X}^{\prime}\mathsf{w}_{2}\) or \(\mathsf{X}=\mathsf{w}_{1}\mathsf{X}^{\prime}\mathsf{w}_{2}\mathsf{X}^{\prime\prime}\mathsf{w}_{3}\) where, respectively, \(\mathsf{X}^{\prime}\) is close to a subpath of some \(\mathsf{Z}_{i}\) and \(|\mathsf{X}^{\prime}|_{\alpha-1}\geq|\mathsf{X}|_{\alpha-1}-2.9\) or for some \(i\), \(\mathsf{X}^{\prime}\) is close to a subpath of \(\mathsf{Z}_{i}\), \(\mathsf{X}^{\prime\prime}\) is close to a subpath of \(\mathsf{Z}_{i+1}\) and \(|\mathsf{X}^{\prime}|_{\alpha-1}+|\mathsf{X}^{\prime\prime}|_{\alpha-1}\geq|\mathsf{X}|_{\alpha-1}-3\). Then a single or double application of Proposition 10.18\({}_{\alpha-1}\) gives the required \(\mathsf{X}_{i}\)'s. **10.12 Proposition** (fragment stability in conjugacy relations with non-cyclic side).: _Let \(\alpha\geq 1\) and \(X\) be a word cyclically reduced in \(G_{\alpha}\). Assume that \(X\) is conjugate in \(G_{\alpha}\) to a word \(Yu\) where \(Y\) is reduced in \(G_{\alpha}\) and \(u\) is a bridge of rank \(\alpha\). Let \(\bar{\mathsf{X}}=\prod_{i\in\mathbb{Z}}\mathsf{X}_{i}\) and \(\prod_{i\in\mathbb{Z}}\mathsf{Y}_{i}\mathsf{u}_{i}\) be parallel lines in \(\Gamma_{\alpha}\) representing the conjugacy relation. Let \(\mathsf{K}\) be a fragment of rank \(\alpha\) in \(\bar{\mathsf{X}}\) with \(\mu_{\mathrm{f}}(\mathsf{K})\geq 3\lambda+9\omega\) and \(|\mathsf{K}|\leq|X|\). Assume that \(\mathsf{K}\) is independent of any of the bridges \(\mathsf{u}_{i}\). Then there is a fragment \(\mathsf{M}\) of rank \(\alpha\) in some \(\mathsf{Y}_{k}\) such that \(\mathsf{M}\sim\mathsf{K}^{\pm 1}\) and_ \[\mu_{\mathrm{f}}(\mathsf{M})>\min\left\{\frac{5}{2}\lambda-1.1\omega,\ \frac{1}{2}(\mu_{\mathrm{f}}(\mathsf{K})-3\lambda-6.8\omega)\right\}.\] Proof.: Let \(\Delta\) be an annular diagram of rank \(\alpha\) with boundary loops \(\hat{\mathsf{X}}^{-1}\) and \(\hat{\mathsf{Y}}\hat{\mathsf{u}}\) representing the conjugacy relation. Let \(\tilde{\Delta}\) be the universal cover of \(\Delta\) and \(\phi:\tilde{\Delta}^{(1)}\to\Gamma_{\alpha}\) a combinatorially continuous map sending lifts \(\tilde{\mathsf{X}}_{i}\), \(\tilde{\mathsf{Y}}_{i}\) and \(\tilde{\mathsf{u}}_{i}\) of \(\hat{\mathsf{X}}\), \(\hat{\mathsf{Y}}\) and \(\hat{\mathsf{u}}\) to \(\mathsf{X}_{i}\), \(\mathsf{Y}_{i}\) and \(\mathsf{u}_{i}\) respectively. Up to switching of \(\hat{\mathsf{u}}\), we assume that \(\Delta\) is reduced and has a tight set \(\mathcal{T}\) of contiguity subdiagrams. _Case_ 1: \(\Delta\) has no cells of rank \(\alpha\). Then the parallel lines \(\bar{\mathsf{X}}=\prod_{i\in\mathbb{Z}}\mathsf{X}_{i}\) and \(\prod_{i\in\mathbb{Z}}\mathsf{Y}_{i}\mathsf{u}_{i}\) can be lifted to \(\Gamma_{\alpha-1}\); we assume that they and the subpath \(\mathsf{K}\) of \(\bar{\mathsf{X}}\) are already lifted to \(\Gamma_{\alpha-1}\).
If \(u\in\mathcal{H}_{\alpha-1}\) then the statement follows by Proposition 10.19\({}_{\alpha-1}\), so we assume that \(u\notin\mathcal{H}_{\alpha-1}\). Let \(\mathsf{L}\) be the base axis for \(\mathsf{K}\) and \(\mathsf{S}\) the base for \(\mathsf{K}\). Since \(\mathsf{K}\) is independent of \(\mathsf{u}_{i}\) (when viewed in \(\Gamma_{\alpha}\)) we have \(\mathsf{u}_{i}=\mathsf{w}_{1}^{(i)}\mathsf{Q}_{i}\mathsf{w}_{2}^{(i)}\) where \(\mathit{label}(\mathsf{w}_{j}^{(i)})\in\mathcal{H}_{\alpha-1}\) and \(\mathsf{Q}_{i}\) occurs in a line \(\mathsf{L}_{i}\) labeled by the infinite power \(R_{i}^{\infty}\) of a relator \(R_{i}\) of rank \(\alpha\) such that \(\mathsf{L}_{i}\neq\mathsf{L}\). By Corollary 8.2, if a subpath \(\mathsf{P}\) of \(\mathsf{S}\) is close to a subpath of \(\mathsf{Q}_{i}\) then \(\mu(\mathsf{P})<\lambda\). Applying Lemma 10.11 we conclude that either there exists a fragment \(\mathsf{M}\) of rank \(\alpha\) in some \(\mathsf{Y}_{k}\) such that \(\mathsf{M}\sim\mathsf{K}\) and \(\mu_{\mathrm{f}}(\mathsf{M})>\mu_{\mathrm{f}}(\mathsf{K})-2\lambda-9\omega\) or there exist fragments \(\mathsf{M}_{1}\) and \(\mathsf{M}_{2}\) of rank \(\alpha\) in some \(\mathsf{Y}_{k}\) and \(\mathsf{Y}_{k+1}\) respectively such that \(\mathsf{M}_{1}\sim\mathsf{M}_{2}\sim\mathsf{K}\) and \[\mu_{\mathrm{f}}(\mathsf{M}_{1})+\mu_{\mathrm{f}}(\mathsf{M}_{2})>\mu_{\mathrm{f}}(\mathsf{K})-2\lambda-9\omega.\] In the latter case, for at least one \(\mathsf{M}_{i}\) we have \(\mu_{\mathrm{f}}(\mathsf{M}_{i})>\frac{1}{2}(\mu_{\mathrm{f}}(\mathsf{K})-2\lambda-9\omega)\) and we can take its image in \(\Gamma_{\alpha}\) for the required \(\mathsf{M}\). _Case \(2\)_: \(\Delta\) has at least one cell of rank \(\alpha\). Let \(\mathsf{D}\) be such a cell and let \(\tilde{\mathsf{D}}\) be a lift of \(\mathsf{D}\) in \(\tilde{\Delta}\). By Proposition 7.11(iv) and Lemma 7.10(i), \(\mathsf{D}\) has two or three contiguity subdiagrams \(\Pi_{i}\in\mathcal{T}\) to sides of \(\Delta\), at most two to \(\hat{\mathsf{Y}}\) and at most one to \(\hat{\mathsf{X}}^{-1}\). By Proposition 7.13(iii), \(\phi(\delta\tilde{\mathsf{D}})\) is the base loop for two or three fragments \(\mathsf{N}_{i}\) (\(i=1,2\) or \(i=1,2,3\)) of rank \(\alpha\) in two or three of the paths \(\tilde{\mathsf{X}}^{-1}\), \(\mathsf{Y}_{j}\) and \(\mathsf{Y}_{j+1}\) for some \(j\), respectively, with (10-2) \[\sum_{i}\mu_{\mathrm{f}}(\mathsf{N}_{i})>1-4\lambda-2.2\omega.\] Since \(\mu_{\mathrm{f}}(\mathsf{N}_{i})\leq\rho\) for each \(i\), for at least two indices \(i\) we have \[\mu_{\mathrm{f}}(\mathsf{N}_{i})>\frac{1}{2}(1-4\lambda-2.2\omega-\rho)=\frac{5}{2}\lambda-1.1\omega.\] Note that all \(\mathsf{N}_{i}\) are pairwise compatible. If \(\mathsf{K}\sim\mathsf{N}_{1}^{\pm 1}\) then for the required \(\mathsf{M}\) we can take that \(\mathsf{N}_{i}\) which occurs in \(\mathsf{Y}_{j}\) or in \(\mathsf{Y}_{j+1}\) and has the larger \(\mu_{\mathrm{f}}(\mathsf{N}_{i})\). Hence we can assume that \(\mathsf{K}\not\sim\mathsf{N}_{i}^{\pm 1}\) for all \(\mathsf{N}_{i}\) produced by all lifts \(\tilde{\mathsf{D}}\) of all cells \(\mathsf{D}\) of rank \(\alpha\) of \(\Delta\). Assume that \(\mathsf{D}\) has two contiguity subdiagrams \(\Pi_{i}\in\mathcal{T}\) (\(i=1,2\)) to \(\hat{\mathsf{Y}}\), i.e. the corresponding fragments \(\mathsf{N}_{1}\) and \(\mathsf{N}_{2}\) of rank \(\alpha\) occur in \(\mathsf{Y}_{k}\) and \(\mathsf{Y}_{k+1}\) respectively.
Then we cut off from \(\Delta\) the subdiagram \(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2}\) and the remaining simply connected component. This replaces \(\Delta\) with a new diagram \(\Delta^{\prime}\) with a smaller number of cells of rank \(\alpha\), \(\mathsf{Y}_{i}\) with a subpath of \(\mathsf{Y}_{i}\), and the bridges \(\mathsf{u}_{i}\) with other bridges \(\mathsf{u}_{i}^{\prime}\); the assumption that \(\mathsf{K}\not\sim\mathsf{N}_{i}^{\pm 1}\) for the \(\mathsf{N}_{i}\) produced by all lifts \(\tilde{\mathsf{D}}\) of \(\mathsf{D}\) implies that \(\mathsf{K}\) is independent of all new bridges \(\mathsf{u}_{i}^{\prime}\). In this case we can apply induction on the number of the cells of rank \(\alpha\) of \(\Delta\). We may assume now that each cell \(\mathsf{D}\) of rank \(\alpha\) of \(\Delta\) has precisely two contiguity subdiagrams \(\Pi_{i}\in\mathcal{T}\) to sides of \(\Delta\), one to \(\hat{\mathsf{X}}^{-1}\) and another one to \(\hat{\mathsf{Y}}\). This implies that each lift of \(\mathsf{D}\) produces two fragments \(\mathsf{N}_{i}\), one in \(\tilde{\mathsf{X}}^{-1}\) and one in some \(\mathsf{Y}_{j}\). Let \(\{\mathsf{D}_{1},\mathsf{D}_{2},\ldots,\mathsf{D}_{k}\}\) be the set of all cells of rank \(\alpha\) of \(\Delta\). For each lift \(\tilde{\mathsf{D}}_{i}^{(j)}\) (\(j\in\mathbb{Z}\)) of \(\mathsf{D}_{i}\), denote by \(\mathsf{N}_{i,1}^{(j)}\) and \(\mathsf{N}_{i,2}^{(j)}\) the corresponding fragments of rank \(\alpha\) that occur in \(\tilde{\mathsf{X}}^{-1}\) and \(\mathsf{Y}_{j}\) respectively (the requirement that \(\mathsf{N}_{i,2}^{(j)}\) occurs in \(\mathsf{Y}_{j}\) determines uniquely the lift \(\tilde{\mathsf{D}}_{i}^{(j)}\) and the fragment \(\mathsf{N}_{i,1}^{(j)}\)). Note that (10-2) implies \[\mu_{\mathrm{f}}(\mathsf{N}_{i,k}^{(j)})>1-4\lambda-2.2\omega-\rho=5\lambda-2.2\omega.\] We order the cells \(\mathsf{D}_{i}\) to get \(\mathsf{N}_{i,2}^{(j)}\) ordered in \(\mathsf{Y}_{j}\) as \(\mathsf{N}_{1,2}^{(j)}\ll\cdots\ll\mathsf{N}_{k,2}^{(j)}\). Consequently, in \(\tilde{\mathsf{X}}\) we have \(\cdots\ll\mathsf{N}_{1,1}^{(j)}{}^{-1}\ll\cdots\ll\mathsf{N}_{k,1}^{(j)}{}^{-1}\ll\mathsf{N}_{1,1}^{(j+1)}{}^{-1}\ll\cdots\ll\mathsf{N}_{k,1}^{(j+1)}{}^{-1}\ll\cdots\) (Figure 26). By the assumption above, we have \(\mathsf{K}\not\sim\mathsf{N}_{i,1}^{(j)}{}^{-1}\) for all \(i,j\). Then by Proposition 8.10 we have either \({\sf N}_{i,1}^{(j)}{}^{-1}<{\sf K}<{\sf N}_{i+1,1}^{(j)}{}^{-1}\) for some \(i,j\) or \({\sf N}_{k,1}^{(j)}{}^{-1}<{\sf K}<{\sf N}_{1,1}^{(j+1)}{}^{-1}\) for some \(j\). In each of these cases, we find the required \({\sf M}\) by applying an appropriate part of the proof of Proposition 10.5 or Proposition 10.7. We will use the following observation. **10.13 Lemma**.: (i) _Let \({\sf K}\) be a fragment of rank \(1\leq\beta\leq\alpha\) in \(\Gamma_{\alpha}\). Let \({\sf M}\) be either another fragment of rank \(\beta\) in \(\Gamma_{\alpha}\) such that \({\sf K}\sim{\sf M}^{\pm 1}\) or a bridge of rank \(\beta\) such that \({\sf K}\) is not independent of \({\sf M}\). Then any of the endpoints of \({\sf K}\) can be joined with any of the endpoints of \({\sf M}\) by a bridge \({\sf w}\) of rank \(\beta\)._ _Moreover, \({\sf w}\) can be chosen with the following property. If \({\sf N}\) is any other fragment of rank \(\beta\) such that \({\sf N}\not\sim{\sf M}^{\pm 1}\) then \({\sf N}\) is independent of \({\sf w}\)._ (ii)
_Let \({\sf K}_{1}\), \({\sf K}_{2}\), \(\ldots\), \({\sf K}_{r}\) be fragments of rank \(\beta\leq\alpha\) in \(\Gamma_{\alpha}\) such that \({\sf K}_{1}\sim{\sf K}_{i}^{\pm 1}\) for all \(i\). Then all endpoints of all \({\sf K}_{i}\) are uniformly close._ Proof.: Follows from the definitions in 8.4 and Definition 10.4. **10.14 Lemma**.: _Let \(({\sf X}_{i},{\sf Y}_{i})\) \((i=1,2)\) be two pairs of close reduced paths in \(\Gamma_{\alpha}\) where \({\sf X}_{1}\) and \({\sf X}_{2}\) are subpaths of a reduced path \(\bar{{\sf X}}\). Assume that for the common subpath \({\sf Z}\) of \({\sf X}_{1}\) and \({\sf X}_{2}\) we have \(|{\sf Z}|_{\alpha}\geq 2.2\). Then there exists a triple \({\sf a}_{i}\) \((i=1,2,3)\) of uniformly close vertices on \({\sf Z}\), \({\sf Y}_{1}\) and \({\sf Y}_{2}\) respectively._ Proof.: If \(\alpha=0\) there is nothing to prove. Let \(\alpha\geq 1\). Let \({\sf X}_{i}^{-1}{\sf u}_{i}{\sf Y}_{i}{\sf v}_{i}\) \((i=1,2)\) be coarse bigons where \({\sf u}_{i}\) and \({\sf v}_{i}\) are bridges of rank \(\alpha\). _Case_ 1: \({\rm Area}_{\alpha}({\sf X}_{i}^{-1}{\sf u}_{i}{\sf Y}_{i}{\sf v}_{i})=0\) for both \(i=1,2\). We apply Proposition 9.11 and find loops \({\sf X}_{i}^{\prime-1}{\sf u}_{i}^{\prime}{\sf Y}_{i}^{\prime}{\sf v}_{i}^{\prime}\) that can be lifted to \(\Gamma_{\alpha-1}\) where \({\sf X}_{i}^{\prime}\) and \({\sf Y}_{i}^{\prime}\) are subpaths of \({\sf X}_{i}\) and \({\sf Y}_{i}\) respectively. For the common part \({\sf Z}^{\prime}\) of \({\sf X}_{1}^{\prime}\) and \({\sf X}_{2}^{\prime}\) we have \(|{\sf Z}^{\prime}|_{\alpha}\geq|{\sf Z}|_{\alpha}-2.04\geq 0.16\) and hence \(|{\sf Z}^{\prime}|_{\alpha-1}\geq 3.2\). Then the statement follows by induction. _Case_ 2: \({\rm Area}_{\alpha}({\sf X}_{i}^{-1}{\sf u}_{i}{\sf Y}_{i}{\sf v}_{i})>0\) for \(i=1\) or \(i=2\). Without loss of generality, assume that \({\sf K}\) and \({\sf M}\) are active fragments of rank \(\alpha\) in \({\sf X}_{1}\) and in \({\sf Y}_{1}\), respectively, such that \({\sf K}\sim{\sf M}^{-1}\). Let \({\sf X}_{1}={\sf S}_{1}{\sf K}{\sf S}_{2}\) and \({\sf Y}_{1}={\sf T}_{1}{\sf M}{\sf T}_{2}\). If \({\sf S}_{1}{\sf K}\) contains \({\sf Z}\) then we shorten \({\sf X}_{1}\) and \({\sf Y}_{1}\) replacing them with \({\sf S}_{1}{\sf K}\) and \({\sf T}_{1}\) thereby decreasing \({\rm Area}_{\alpha}({\sf X}_{1}^{-1}{\sf u}_{1}{\sf Y}_{1}{\sf v}_{1})\) as described in 9.5. Similarly, if \({\sf K}{\sf S}_{2}\) contains \({\sf Z}\) then we can replace \({\sf X}_{1}\) and \({\sf Y}_{1}\) with \({\sf K}{\sf S}_{2}\) and \({\sf T}_{2}\). Therefore, we can assume that \({\sf K}\) is contained in \({\sf Z}\). We take \({\sf a}_{1}=\iota({\sf K})\) and \({\sf a}_{2}=\iota({\sf M})\). If \({\sf K}\) is not independent of \({\sf u}_{2}\) or of \({\sf v}_{2}\) then for \({\sf a}_{3}\) we can take \(\iota({\sf Y}_{2})\) or \(\tau({\sf Y}_{2})\) respectively. Otherwise by Proposition 10.6 there exists a fragment \({\sf N}\) of rank \(\alpha\) in \({\sf Y}_{2}\) such that \({\sf N}\sim{\sf K}^{\pm 1}\) and we can take \({\sf a}_{3}=\iota({\sf N})\). **10.15 Lemma**.: _Let \(({\sf S},{\sf T})\) and \(({\sf X},{\sf Y})\) be pairs of close reduced paths in \(\Gamma_{\alpha}\) where \({\sf Y}\) is an end of \({\sf S}\) and the ending vertices \(\tau({\sf X})\), \(\tau({\sf Y})=\tau({\sf S})\) and \(\tau({\sf T})\) are uniformly close.
Then there exists a triple \({\sf a}_{i}\) \((i=1,2,3)\) of uniformly close vertices on \({\sf X}\), \({\sf Y}\) and \({\sf T}\) respectively, such that \({\sf a}_{1}\) cuts off a start \({\sf X}_{1}\) of \({\sf X}\) with \(|{\sf X}_{1}|_{\alpha}<1.3\) and \({\sf a}_{2}\) cuts off a start \({\sf Y}_{1}\) of \({\sf Y}\) with \(|{\sf Y}_{1}|_{\alpha}<1.15\)._ Proof.: We can assume \(\alpha\geq 1\). We use induction on \(|{\sf X}|+|{\sf Y}|+|{\sf T}|\). If \(|{\sf X}|_{\alpha}<1.3\) and \(|{\sf Y}|_{\alpha}<1.15\) there is nothing to prove. We assume that \(|{\sf X}|_{\alpha}\geq 1.3\) or \(|{\sf Y}|_{\alpha}\geq 1.15\). It is enough to find a triple \({\sf a}_{i}\) \((i=1,2,3)\) of uniformly close vertices on \({\sf X}\), \({\sf Y}\) and \({\sf T}\) respectively, such that at least one \({\sf a}_{i}\) cuts off a proper start of the appropriate path \({\sf X}\), \({\sf Y}\) or \({\sf T}\). Let \({\sf X}^{-1}{\sf u}_{1}{\sf Y}{\sf u}_{2}\) and \({\sf S}^{-1}{\sf v}_{1}{\sf T}{\sf v}_{2}\) be coarse bigons in \(\Gamma_{\alpha}\) where \({\sf u}_{i}\) and \({\sf v}_{i}\) are bridges of rank \(\alpha\). _Case_ 1: \({\rm Area}_{\alpha}({\sf X}^{-1}{\sf u}_{1}{\sf Y}{\sf u}_{2})={\rm Area}_{\alpha}({\sf S}^{-1}{\sf v}_{1}{\sf T}{\sf v}_{2})=0\). We assume that \({\sf u}_{2}\) and \({\sf v}_{2}\) are defined from the condition that \(\tau({\sf X})\), \(\tau({\sf Y})\) and \(\tau({\sf T})\) are uniformly close; that is, either \({\sf u}_{2}\) and \({\sf v}_{2}\) are bridges of rank \(\alpha-1\) or have the form
Then shortening \(\mathsf{X}^{\prime}\) from the end by Proposition 9.21\({}_{\alpha-1}\) we can assume that \(\mathsf{z}_{1}\mathsf{X}^{\prime}\) is a proper start of \(\mathsf{X}\) (and that \(\mathsf{X}^{\prime}\) is still close to a subpath \(\mathsf{Y}^{\prime}\) of \(\mathsf{Y}\)). For the shortened \(\mathsf{X}^{\prime}\), we have \(|\mathsf{X}^{\prime}|_{\alpha}>0.3-8\zeta^{2}\eta-\zeta^{2}>0.26\) which implies \(|\mathsf{X}^{\prime}|_{\alpha-1}\geq\frac{1}{\zeta}|\mathsf{X}^{\prime}|_{ \alpha}>5.2\). Let \(\mathsf{v}_{1}=\mathsf{w}_{5}\mathsf{Qw}_{6}\) where \(\mathsf{w}_{5},\mathsf{w}_{6}\) are bridges of rank \(\alpha-1\) and \(\mathsf{Q}\) is labeled by a piece of rank \(\alpha\). Application of Lemma 10.2 gives a triple of uniformly close vertices \(\mathsf{a}_{i}\) (\(i=1,2,3\)) where \(\mathsf{a}_{1}\) lies on \(\mathsf{X}^{\prime}\), \(\mathsf{a}_{2}\) lies on \(\mathsf{Y}^{\prime}\) and \(\mathsf{a}_{3}\) lies either on \(\mathsf{Q}\) or \(\mathsf{T}\). If \(\mathsf{a}_{3}\) lies on \(\mathsf{Q}\) then we replace it with \(\iota(\mathsf{T})\). In the case \(\alpha=1\) we shorten \(\mathsf{X}^{\prime}\) by one edge and for the new \(\mathsf{X}^{\prime}\) we have \(|\mathsf{X}^{\prime}|_{\alpha}>0.3-8\zeta^{2}\eta-\zeta>0\). We can still apply Lemma 10.2 due to Remark 10.3, so the argument remains the same. _Case_ 1b: \(|\mathsf{Y}|_{\alpha}\geq 1.15\). Similarly to Case 1, we can assume that there is no vertex \(\mathsf{b}\neq\tau(\mathsf{Y})\) on \(\mathsf{Y}\) (and hence on \(\mathsf{S}\) since \(|\mathsf{Y}|_{\alpha-1}\geq\frac{1.15}{\zeta}=23\)) close in rank \(\alpha-1\) to a vertex on \(\mathsf{P}_{1}\) or on \(\mathsf{P}_{2}\). Applying Proposition 9.19(ii)\({}_{\alpha-1}\) we represent \(\mathsf{Y}\) and \(\mathsf{S}\) as \(\mathsf{Y}=\mathsf{z}_{1}\mathsf{Y}^{\prime}\mathsf{z}_{2}\), \(\mathsf{S}=\mathsf{z}_{3}\mathsf{S}^{\prime}\mathsf{z}_{4}\) where \(\mathsf{Y}^{\prime}\) is close (in rank \(\alpha-1\)) to a subpath \(\mathsf{X}^{\prime}\) of \(\mathsf{X}\), \(\mathsf{S}^{\prime}\) is close to a subpath \(\mathsf{T}^{\prime}\) of \(\mathsf{T}\) and \(|\mathsf{z}_{1}|_{\alpha},|\mathsf{z}_{3}|_{\alpha}<1+4\zeta^{2}\eta\), \(|\mathsf{z}_{2}|_{\alpha},|\mathsf{z}_{4}|_{\alpha}<4\zeta^{2}\eta\). In the case \(\alpha=1\) there is a common subpath \(\mathsf{Z}\) of \(\mathsf{X}^{\prime}\), \(\mathsf{Y}^{\prime}\), \(\mathsf{S}^{\prime}\) and \(\mathsf{T}^{\prime}\) of size \(|\mathsf{Z}|_{\alpha}\geq|\mathsf{Y}|_{\alpha}-1-8\zeta^{2}\eta>0\) and we can take \(\iota(\mathsf{Z})\) for all \(\mathsf{a}_{i}\). In the case \(\alpha\geq 2\), shortening \(\mathsf{Y}^{\prime}\) from the end by Proposition 9.21\({}_{\alpha-1}\) we can assume that \(\mathsf{z}_{1}\mathsf{Y}^{\prime}\) is a proper start of \(\mathsf{Y}\). Let \(\mathsf{Z}\) be the common subpath of \(\mathsf{Y}^{\prime}\) and \(\mathsf{S}^{\prime}\). We have \(|\mathsf{Z}|_{\alpha}>|\mathsf{Y}|_{\alpha}-1-8\zeta^{2}\eta-\zeta^{2}>0.11\) and hence \(|\mathsf{Z}|_{\alpha-1}>2.2\). Then the statement follows by Lemma 10.14\({}_{\alpha-1}\). _Case_ 2: \(\mathrm{Area}_{\alpha}(\mathsf{S}^{-1}\mathsf{v}_{1}\mathsf{T}\mathsf{v}_{2})>0\). Let \(\mathsf{K}\) and \(\mathsf{M}\) be active fragments of rank \(\alpha\) in \(\mathsf{S}\) and in \(\mathsf{T}\), respectively, such that \(\mathsf{K}\sim\mathsf{M}^{-1}\). Let \(\mathsf{S}=\mathsf{G}_{1}\mathsf{K}\mathsf{G}_{2}\) and \(\mathsf{T}=\mathsf{H}_{1}\mathsf{M}\mathsf{H}_{2}\). Note that \(|\mathsf{K}|,|\mathsf{M}|>0\) by Lemma 10.8. 
If \(\mathsf{K}\) is not contained in \(\mathsf{Y}\) then we replace \(\mathsf{S}\) and \(\mathsf{T}\) with \(\mathsf{K}\mathsf{G}_{2}\) and \(\mathsf{H}_{2}\) respectively and use induction. Assume that \(\mathsf{K}\) is contained in \(\mathsf{Y}\). We first take \(\mathsf{a}_{2}:=\iota(\mathsf{K})\), \(\mathsf{a}_{3}:=\iota(\mathsf{M})\). If \(\mathsf{M}\) is not independent of \(\mathsf{u}_{1}\) or of \(\mathsf{u}_{2}\) then we take \(\mathsf{a}_{1}:=\iota(\mathsf{X})\) or \(\mathsf{a}_{1}:=\tau(\mathsf{X})\) respectively. Otherwise by Proposition 10.6 there exists a fragment \(\mathsf{N}\) of rank \(\alpha\) in \(\mathsf{X}\) such that \(\mathsf{N}\sim\mathsf{M}^{\pm 1}\). In this case we take \(\mathsf{a}_{1}:=\iota(\mathsf{N})\) by Lemma 10.13(ii). _Case_ 3: \(\mathrm{Area}_{\alpha}(\mathsf{X}^{-1}\mathsf{u}_{1}\mathsf{Y}\mathsf{u}_{2})>0\). Let \(\mathsf{K}\) and \(\mathsf{M}\) be active fragments of rank \(\alpha\) in \(\mathsf{X}\) and \(\mathsf{Y}\) respectively such that \(\mathsf{K}\sim\mathsf{M}^{-1}\). Then take \(\mathsf{a}_{1}:=\iota(\mathsf{K})\), \(\mathsf{a}_{2}:=\iota(\mathsf{M})\). Depending on whether \(\mathsf{M}\) is not independent of \(\mathsf{v}_{1}\) or \(\mathsf{v}_{2}\) we find \(\mathsf{a}_{3}\) similarly to Case 2 using Proposition 10.6 and Lemma 10.13(ii). **10.16 Proposition** (closeness transition in bigon).: _Let \((\mathsf{X},\mathsf{Y})\) and \((\mathsf{S},\mathsf{T})\) be pairs of close reduced paths in \(\Gamma_{\alpha}\) where \(\mathsf{Y}\) is a subpath of \(\mathsf{S}\). Assume that \(|\mathsf{X}|_{\alpha}\geq 2.3\). Then \(\mathsf{X}\) can be represented as \(\mathsf{X}=\mathsf{z}_{1}\mathsf{X}^{\prime}\mathsf{z}_{2}\) where \(\mathsf{X}^{\prime}\) is close to a subpath \(\mathsf{W}\) of \(\mathsf{T}\) and \(|\mathsf{z}_{1}|_{\alpha},|\mathsf{z}_{2}|_{\alpha}<1.3\)._
Depending on the location of \(b_{3}\) we take for \(a_{3}\) the image of either \(\iota(T)\), \(b_{3}\) or \(\tau(T)\) as shown in Figure 27. **Lemma**.: _Let \((X,Y)\) be a pair of close reduced paths in \(\Gamma_{\alpha}\), and let \(S^{-1}*T_{1}*T_{2}*\) be a coarse trigon in \(\Gamma_{\alpha}\) where \(Y\) is an end of \(S\) and ending vertices \(\tau(X)\), \(\tau(Y)\) and \(\tau(T_{2})\) are uniformly close. Then either_ 1. _there exists a triple_ \(a_{i}\)__\((i=1,2,3)\) _of uniformly close vertices on_ \(X\)_,_ \(Y\) _and_ \(T_{1}\) _respectively, such that_ \(a_{1}\) _cuts off a start_ \(X_{1}\) _of_ \(X\) _with_ \(|X_{1}|_{\alpha}<1.3\)_;_ 2. _there exists a triple_ \(a_{i}\)__\((i=1,2,3)\) _of uniformly close vertices on_ \(X\)_,_ \(Y\) _and_ \(T_{2}\) _respectively, such that_ \(a_{1}\) _cuts off a start_ \(X_{1}\) _of_ \(X\) _with_ \(|X_{1}|_{\alpha}\leq 1.45\)_._ Proof.: We can assume \(\alpha\geq 1\). We use the same strategy as in the proof of Lemma 10.15 and proceed by induction on \(|X|+|Y|+|T_{2}|\). In view of Lemma 10.15, it is enough to prove that if \(|X|\geq 1.45\) then there exists a triple \(a_{i}\) of uniformly close vertices on \(X\), \(Y\) and some \(T_{i}\) respectively such that \(a_{1}\) or \(a_{2}\) cuts off a proper start of the appropriate path \(X\) or \(Y\). Let \(u_{i}\)\((i=1,2)\) and \(v_{j}\)\((j=1,2,3)\) be bridges of rank \(\alpha\) in \(\Gamma_{\alpha}\) such that \(u_{1}Xu_{2}Y^{-1}\) is a coarse bigon and \(S^{-1}v_{1}T_{1}v_{2}T_{2}v_{3}\) is a coarse trigon. Figure 27. _Case \(1\)_: \(\mathrm{Area}_{\alpha}(\mathsf{X}^{-1}\mathsf{u}_{1}\mathsf{Y}\mathsf{u}_{2})= \mathrm{Area}_{\alpha}(\mathsf{S}^{-1}\mathsf{v}_{1}\mathsf{T}_{1}\mathsf{v}_{2} \mathsf{T}_{2}\mathsf{v}_{3})=0\). We assume that \(\mathsf{u}_{2}\) and \(\mathsf{v}_{3}\) are defined from the condition that \(\tau(\mathsf{X})\), \(\tau(\mathsf{Y})\) and \(\tau(\mathsf{T}_{2})\) are uniformly close; that is, either \(\mathsf{u}_{2}\) and \(\mathsf{v}_{3}\) are bridges of rank \(\alpha-1\) or have the form \(\mathsf{u}_{2}=\mathsf{u}_{21}\mathsf{Q}\mathsf{u}_{22}\) and \(\mathsf{v}_{3}=\mathsf{v}_{31}\mathsf{P}_{3}\mathsf{v}_{32}\) where \(\mathsf{u}_{2i},\mathsf{v}_{3i}\) are bridges of rank \(\alpha-1\) and \(\mathsf{Q}^{\pm 1},\mathsf{P}_{3}^{\pm 1}\) are subpaths of a relator loop \(\mathsf{R}\) of rank \(\alpha\). We consider the second case (in the first case the argument is similar). Let \(\mathsf{v}_{i}=\mathsf{v}_{i1}\mathsf{P}_{i}\mathsf{v}_{i2}\) (\(i=1,2\)) where \(\mathsf{v}_{ij}\) is a bridge of rank \(\alpha-1\) and _label_(\(\mathsf{P}_{i}\)) is a piece of rank \(\alpha\). We can assume that there is no vertex on \(\mathsf{X}\) other than \(\tau(\mathsf{X})\) which is close in rank \(\alpha-1\) to a vertex on \(\mathsf{R}\) (otherwise we can take those for \(\mathsf{a}_{1}\) and \(\mathsf{a}_{2}\) as in the proof of Lemma 10.15). By Remark 9.3, we can assume that loops \(\mathsf{X}^{-1}\mathsf{u}_{1}\mathsf{Y}\mathsf{u}_{2}\) and \(\mathsf{S}^{-1}\mathsf{v}_{1}\mathsf{T}_{1}\mathsf{v}_{2}\mathsf{T}_{2} \mathsf{v}_{3}\) can be lifted to \(\Gamma_{\alpha-1}\). Abusing notations, we assume that they are already in \(\Gamma_{\alpha-1}\). 
Application of Proposition 9.19(ii)\({}_{\alpha-1}\) shows that \(\mathsf{X}=\mathsf{w}_{1}\mathsf{X}^{\prime}\mathsf{w}_{2}\) where \(\mathsf{X}^{\prime}\) is close to a subpath \(\mathsf{Y}^{\prime}\) of \(\mathsf{Y}\), \(|\mathsf{w}_{1}|_{\alpha}\leq 1+4\eta\zeta^{2}\), \(|\mathsf{w}_{2}|_{\alpha}\leq 4\eta\zeta^{2}\) and hence \(|\mathsf{X}^{\prime}|_{\alpha}\geq 0.45-8\eta\zeta^{2}\). As in the proof of Lemma 10.15 the proof slightly differs in cases \(\alpha\geq 2\) and \(\alpha\geq 1\). In the case \(\alpha\geq 2\), shortening \(\mathsf{X}^{\prime}\) from the end by Proposition 9.21\({}_{\alpha-1}\) we can assume that \(\mathsf{w}_{1}\mathsf{X}^{\prime}\) is a proper start of \(\mathsf{X}\), with a new bound \(|\mathsf{X}^{\prime}|_{\alpha}>0.45-8\eta\zeta^{2}-\zeta^{2}>0.41\) which implies \(|\mathsf{X}^{\prime}|_{\alpha-1}>8.2\). If there is a triple of uniformly close vertices on \(\mathsf{X}^{\prime}\), \(\mathsf{Y}^{\prime}\) and some \(\mathsf{P}_{i}\) then we are done. We assume that no such triple exists. Let \(\mathsf{S}_{1}\) be a reduced path joining \(\iota(\mathsf{T}_{1})\) and \(\tau(\mathsf{T}_{2})\) (see Figure 28). By Lemma 10.2 we have \(\mathsf{X}^{\prime}=\mathsf{z}_{1}\mathsf{X}^{\prime\prime}\mathsf{z}_{2}\) where \(\mathsf{X}^{\prime\prime}\) is close to a subpath of \(\mathsf{S}_{1}\). Moreover, the lemma says that there exists a triple of uniformly close vertices on \(\mathsf{X}^{\prime}\), \(\mathsf{Y}^{\prime}\) and \(\mathsf{S}_{1}\) and then applying Lemma 10.17\({}_{\alpha-1}\) we may assume that \(|z_{i}|_{\alpha-1}<1.45\). Then \[|\mathsf{X}^{\prime\prime}|_{\alpha-1}\geq|\mathsf{X}^{\prime}|_{\alpha-1}-| \mathsf{z}_{1}|_{\alpha-1}-|\mathsf{z}_{2}|_{\alpha-1}>5.3.\] Another application of Lemma 10.2 gives a triple of uniformly close vertices \(\mathsf{b}_{i}\) (\(i=1,2,3\)) where \(\mathsf{b}_{1}\) lies on \(\mathsf{X}^{\prime}\), \(\mathsf{b}_{2}\) lies on \(\mathsf{Y}^{\prime}\) and \(\mathsf{b}_{3}\) lies either on \(\mathsf{T}_{1}\) or on \(\mathsf{T}_{2}\). For \(\mathsf{a}_{i}\) we take the images of \(\mathsf{b}_{i}\) in \(\Gamma_{\alpha}\). In the case \(\alpha=1\) the argument is similar (see Case 1a in the proof of Lemma 10.15) with no need for a lower bound on \(|\mathsf{X}^{\prime\prime}|_{\alpha-1}\) for application of Lemma 10.2. _Case \(2\)_: \(r=\mathrm{Area}_{\alpha}(\mathsf{S}^{-1}\mathsf{v}_{1}\mathsf{T}_{1}\mathsf{v}_{2 }\mathsf{T}_{2}\mathsf{v}_{3})>0\). Let \(\mathsf{L}\) be an active relator loop for \(\mathsf{S}^{-1}\mathsf{v}_{1}\mathsf{T}_{1}\mathsf{v}_{2}\mathsf{T}_{2} \mathsf{v}_{3}\) and \(\mathsf{K}_{i}\) (\(i=1,2\) or \(i=1,2,3\)) be the associated active fragments of rank \(\alpha\) occurring in \(\mathsf{S}\), \(\mathsf{T}_{1}\) or \(\mathsf{T}_{2}\). If some \(\mathsf{K}_{i}\) occurs in \(\mathsf{T}_{1}\) and some \(\mathsf{K}_{j}\) occur in \(\mathsf{T}_{2}\) then we can shorten \(\mathsf{T}_{1}\) and \(\mathsf{K}_{j}\). Figure 28. and \(T_{2}\) decreasing \(r\) as described in 9.6. A similar inductive argument works in the case when some \(K_{i}\) occurs in \(S\) and is not contained in \(Y\). Thus we may assume that there are only \(K_{1}\) and \(K_{2}\), \(K_{1}\) is contained in \(Y\) and \(K_{2}\) occurs in \(T_{1}\) or \(T_{2}\). By Proposition 9.15, \(\mu_{\mathsf{f}}(K_{i})\geq 3\lambda-1.1\omega\). The rest of the argument is the same as in the Case 2 of the proof of Lemma 10.15. _Case \(3\)_: \(\operatorname{Area}_{\alpha}(X^{-1}u_{1}Yu_{2})>0\). 
Let \(K\) and \(M\) be active fragments of rank \(\alpha\) in \(X\) and in \(Y\) respectively such that \(K\sim M^{-1}\). We take \(a_{1}:=\iota(K)\), \(a_{2}:=\iota(M)\) and define \(a_{3}\) according to the following cases: * If \(M\) is not independent of \(v_{1}\) then \(a_{3}:=\iota(T_{1})\); * If \(M\) is not independent of \(v_{2}\) then \(a_{3}:=\tau(T_{1})\); * If \(M\) is not independent of \(v_{3}\) then \(a_{3}:=\tau(T_{2})\); * Otherwise by Proposition 10.7 applied to \(M\) there exists a fragment \(N\) or rank \(\alpha\) in \(T_{1}\) or \(T_{2}\) such that \(M\sim N^{\pm 1}\). Then \(a_{3}:=\iota(N)\). **Proposition** (closeness transition in trigon).: _Let \((X,Y)\) be a pair of close reduced paths in \(\Gamma_{\alpha}\), and let \(S^{-1}*T_{1}*T_{2}*\) be a coarse trigon in \(\Gamma_{\alpha}\) where \(Y\) is a subpath of \(S\). Assume that \(|X|_{\alpha}\geq 2.45\). Then \(X\) can be represented as in one of the following three cases:_ * \(X=z_{1}X_{1}z_{2}\) _where_ \(X_{1}\) _is close to a subpath_ \(W_{1}\) _of_ \(T_{1}\) _and_ \(|z_{1}|_{\alpha}<1.3\)_,_ \(|z_{2}|_{\alpha}<1.45\)_._ * \(X=z_{1}X_{2}z_{2}\) _where_ \(X_{2}\) _is close to a subpath_ \(W_{2}\) _of_ \(T_{2}\) _and_ \(|z_{1}|_{\alpha}<1.45\)_,_ \(|z_{2}|_{\alpha}<1.3\)_._ * \(X=z_{1}X_{1}z_{3}X_{2}z_{2}\) _where_ \(X_{i}\) _is close to a subpath_ \(W_{i}\) _of_ \(T_{i}\) _(_\(i=1,2\)_),_ \(|z_{1}|_{\alpha},|z_{2}|_{\alpha}<1.3\) _and_ \(|z_{3}|_{\alpha}<0.4\)_._ _Moreover, we can assume that there exists a subpath \(Y^{\prime}\) of \(Y\) such that triples \((\iota(X_{p}),\iota(Y^{\prime}),\iota(W_{p}))\) and \((\tau(X_{q}),\tau(Y^{\prime}),\tau(W_{q}))\) are uniformly close where \(p\) and \(q\) are the smallest and the greatest indices of \(X_{i}\) in (i)-(iii), i.e. \(p=q=1\) in (i), \(p=q=2\) in (ii) and \(p=1\), \(q=2\) in (iii)._ Proof.: Let \(u_{i}\) (\(i=1,2\)) and \(v_{j}\) (\(j=1,2,3\)) be bridges of rank \(\alpha\) such that \(u_{1}Xu_{2}Y^{-1}\) is a coarse bigon and \(S^{-1}v_{1}T_{1}v_{2}T_{2}v_{3}\) is a coarse trigon. In view of Lemmas 10.15 and 10.17, finding a triple \(a_{i}\) (\(i=1,2,3\)) of uniformly close vertices on \(X\), \(Y\) and some \(T_{i}\) implies the conclusion of the proposition except the bound \(|z_{3}|_{\alpha}<0.4\) in (iii). The latter follows from Proposition 9.19(i). An easy analysis as in Cases 2 and 3 of the proof of Lemma 10.17 shows how to find the vertices \(a_{i}\) in the case when \(\operatorname{Area}_{\alpha}(X^{-1}u_{1}Yu_{2})>0\) or \(\operatorname{Area}_{\alpha}(S^{-1}v_{1}Tv_{2}T_{2}v_{3})>0\). It remains to consider the case when \(\operatorname{Area}_{\alpha}(X^{-1}u_{1}Yu_{2})=\operatorname{Area}_{\alpha}(S ^{-1}v_{1}Tv_{2}T_{2}v_{3})=0\). Let \(v_{i}=w_{i1}R_{i}w_{i2}\) (\(i=1,2,3\)) where \(\mathit{label}(w_{ij})\in\mathcal{H}_{\alpha-1}\) and the label of \(R_{i}\) is a piece of rank \(\alpha\). By Proposition 9.11 we have \(X=w_{1}X_{1}w_{2}\) where endpoints of \(X_{1}\) and a subpath \(Y_{1}\) of \(Y\) can be joined by bridges \(u_{1}^{\prime}\) and \(u_{2}^{\prime}\) of rank \(\alpha-1\) and the loop \(X_{1}u_{1}^{\prime}Y_{1}^{-1}u_{2}^{\prime-1}\) can be lifted to \(\Gamma_{\alpha-1}\) and \(|w_{i}|_{\alpha}\leq 1+4\zeta^{2}\eta\) (\(i=1,2\)). Without changing notations, we assume that loops \(X_{1}^{-1}u_{1}^{\prime}Y_{1}u_{2}^{\prime}\) and \(S^{-1}v_{1}Tv_{2}\) are already in \(\Gamma_{\alpha-1}\) (and \(Y_{1}\) is still a subpath of \(S\)). 
We have \[|X_{1}|_{\alpha}\geq|X|_{\alpha}-|w_{1}|_{\alpha}-|w_{2}|_{\alpha}>0.41\] and, consequently, \(|X_{1}|_{\alpha-1}>8.2\). Then we find \(a_{i}\) applying Lemmas 10.17\({}_{\alpha-1}\) and 10.2 as in the proof of Lemma 10.17. **Proposition** (closeness transition in conjugacy relations).: _Let \(S\) be a word cyclically reduced in \(G_{\alpha}\). Assume that \(S\) is conjugate in \(G_{\alpha}\) to a word \(Tv\) where \(T\in\mathcal{R}_{\alpha}\) and \(v\in\mathcal{H}_{\alpha}\). Let \(\bar{S}=\prod_{i\in\mathbb{Z}}S_{i}\) and \(\prod_{i\in\mathbb{Z}}T_{i}v_{i}\) be lines in \(\Gamma_{\alpha}\) representing the conjugacy relation._ _Assume that a reduced path \(\mathsf{X}\) in \(\Gamma_{\alpha}\) is close to a subpath \(\mathsf{Y}\) of \(\mathsf{\tilde{S}}\) with \(|\mathsf{Y}|\leq|S|\). Let \(|\mathsf{X}|_{\alpha}\geq 2.45\). Then either:_ * \(\mathsf{X}\) _can be represented as_ \(\mathsf{X}=\mathsf{z}_{1}\mathsf{X}_{1}\mathsf{z}_{2}\) _where_ \(\mathsf{X}_{1}\) _is close to a subpath_ \(\mathsf{W}_{1}\) _of_ \(\mathsf{T}_{i}\) _for some_ \(i\) _and_ \(|\mathsf{z}_{1}|_{\alpha},|\mathsf{z}_{2}|_{\alpha}<1.45\)_._ * \(\mathsf{X}\) _can be represented as_ \(\mathsf{X}=\mathsf{z}_{1}\mathsf{X}_{1}\mathsf{z}_{3}\mathsf{X}_{2}\mathsf{z}_ {2}\) _where for some_ \(i\)_,_ \(\mathsf{X}_{1}\) _is close to a subpath_ \(\mathsf{W}_{1}\) _of_ \(\mathsf{T}_{i}\)_,_ \(\mathsf{X}_{2}\) _is close to a subpath_ \(\mathsf{W}_{2}\) _of_ \(\mathsf{T}_{i+1}\)_,_ \(|\mathsf{z}_{1}|_{\alpha},|\mathsf{z}_{2}|_{\alpha}<1.3\) _and_ \(|\mathsf{z}_{3}|_{\alpha}\leq 0.4\)_._ _Moreover, we can assume that there exists a subpath \(\mathsf{Y}^{\prime}\) of \(\mathsf{Y}\) such that triples \((\iota(\mathsf{X}_{1}),\iota(\mathsf{Y}^{\prime}),\iota(\mathsf{W}_{1}))\) and \((\tau(\mathsf{X}_{q}),\tau(\mathsf{Y}^{\prime}),\tau(\mathsf{W}_{q}))\) are uniformly close where \(q=1\) in (i) and \(q=2\) in (ii)._ Proof.: It is enough to find a uniformly close triple of vertices \(\mathsf{a}_{i}\) (\(i=1,2,3\)) on \(\mathsf{X}\), \(\mathsf{Y}\) and some \(\mathsf{T}_{i}\) and then use Lemmas 10.17 or 10.15. Let \(\mathsf{X}^{-1}\mathsf{u}_{1}\mathsf{Yu}_{2}\) be a coarse bigon where \(\mathsf{u}_{1}\) and \(\mathsf{u}_{2}\) are bridges of rank \(\alpha\). If \(\operatorname{Area}_{\alpha}(\mathsf{X}^{-1}\mathsf{u}_{1}\mathsf{Yu}_{2})>0\) then we reach the goal using Proposition 10.12 and Lemma 10.13(ii). Assume that \(\operatorname{Area}_{\alpha}(\mathsf{X}^{-1}\mathsf{u}_{1}\mathsf{Yu}_{2})=0\). Let \(\Delta\) be an annular diagram of rank \(\alpha\) with boundary loops \(\mathsf{\hat{S}}^{-1}\) and \(\mathsf{\hat{T}}\hat{\mathsf{v}}\) representing the conjugacy relation. Let \(\tilde{\Delta}\) be the universal cover of \(\Delta\) and \(\phi:\tilde{\Delta}^{(1)}\to\Gamma_{\alpha}\) the combinatorially continuous map sending lifts \(\mathsf{\hat{S}}_{i}\), \(\mathsf{\tilde{T}}_{i}\) and \(\tilde{\mathsf{v}}_{i}\) to \(\mathsf{S}_{i}\), \(\mathsf{T}_{i}\) and \(\mathsf{v}_{i}\) respectively. We assume that \(\Delta\) is reduced and has a tight set of contiguity subdiagrams. Let \(r\) be the number of cells of rank \(\alpha\) of \(\Delta\). Assume that \(r>0\) and let \(\mathsf{D}\) be a cell of rank \(\alpha\) of \(\Delta\). By Proposition 7.11(iv) and Lemma 7.10(i), \(\mathsf{D}\) has two or three contiguity subdiagrams \(\Pi_{i}\in\mathsf{T}\) to sides of \(\Delta\), at most two to \(\mathsf{\hat{T}}\) and at most one to \(\mathsf{\hat{S}}^{-1}\). 
If there are two contiguity subdiagrams \(\Pi_{i}\) (\(i=1,2\)) of \(\mathsf{D}\) to \(\mathsf{\hat{T}}\) then we consider a new annular diagram \(\Delta^{\prime}\) obtained by cutting off \(\mathsf{D}\cup\Pi_{1}\cup\Pi_{2}\) and the remaining simply connected component from \(\Delta\), and new words \(T^{\prime}\) and \(v^{\prime}\) where \(T^{\prime}\) is a subword of \(T\). In this case, the statement follows by induction on \(r\). We can assume now that \(\mathsf{D}\) has one contiguity subdiagram to \(\mathsf{\hat{S}}^{-1}\) and one to \(\mathsf{\hat{T}}\). Let \(\mathsf{\tilde{D}}_{i}\) (\(i\in\mathbb{Z}\)) be the lifts of \(\mathsf{D}\) in \(\mathsf{\tilde{\Delta}}\). With an appropriate numeration of \(\mathsf{\tilde{D}}_{i}\)'s, each relation loop \(\phi(\delta\mathsf{\tilde{D}}_{i})\) is a base loop for a fragment \(\mathsf{K}_{i}\) in \(\mathsf{\tilde{S}}^{-1}\) and a fragment \(\mathsf{M}_{i}\) in \(\mathsf{T}_{i}\). By Proposition 7.13(iii), \[\mu_{\mathsf{f}}(\mathsf{K}_{i}^{-1})+\mu_{\mathsf{f}}(\mathsf{M}_{i})>1-4 \lambda-2.2\omega.\] Since \(T\) is reduced in \(G_{\alpha}\), we have \(\mu_{\mathsf{f}}(\mathsf{M}_{i})\leq\rho\) and hence \[\mu_{\mathsf{f}}(\mathsf{K}_{i}^{-1})>5\lambda-2.2\omega.\] If none of \(\mathsf{K}_{i}^{-1}\)'s is contained in \(\mathsf{Y}\) then we can apply Proposition 10.18. Otherwise we use an argument similar to one in Case 2 of the proof of Lemma 10.15. Now assume that \(\Delta\) has no cells of rank \(\alpha\). Without changing notations, we assume that parallel lines \(\mathsf{\tilde{S}}=\prod_{i\in\mathbb{Z}}\mathsf{S}_{i}\), \(\prod_{i\in\mathbb{Z}}\mathsf{T}_{i}\mathsf{v}_{i}\) and paths \(\mathsf{X}\) and \(\mathsf{Y}\) are lifted to \(\Gamma_{\alpha-1}\) so that \(\mathsf{Y}\) is still a subpath of \(\mathsf{\tilde{S}}\). Let \(v=w_{1}Rw_{2}\) where \(w_{i}\in\mathcal{H}_{\alpha-1}\) and \(R\) is a piece of rank \(\alpha\). We represent \(\mathsf{v}_{i}\) accordingly as \(\mathsf{v}_{i}=\mathsf{w}_{1}^{(i)}\mathsf{R}_{i}\mathsf{w}_{2}^{(i)}\). Let \(Z\) be a word reduced in \(G_{\alpha-1}\) such that \(Z=Tw_{1}R\) and let \(\mathsf{Z}_{i}\) (\(i\in\mathbb{Z}\)) be appropriate paths in \(\Gamma_{\alpha-1}\) with \(\mathit{label}(\mathsf{Z}_{i})=Z\) (Figure 29). Since \(|\mathsf{X}|_{\alpha}\geq 2.45\) we have \(|\mathsf{X}|_{\alpha-1}\geq\frac{1}{\zeta}|\mathsf{X}|_{\alpha}\geq 49\). By Proposition 10.19\({}_{\alpha-1}\), a subpath \(\mathsf{X}^{\prime}\) of \(\mathsf{X}\) with \(|\mathsf{X}^{\prime}|_{\alpha-1}>23\) is close to a subpath of some \(\mathsf{Z}_{i}\). Then using Proposition 10.18\({}_{\alpha-1}\) we find a triple \(\mathsf{b}_{i}\) of uniformly close vertices on \(\mathsf{X}^{\prime}\), \(\mathsf{Y}\) and \(\mathsf{T}_{i}\) or \(\mathsf{R}_{i}\) respectively. If \(\mathsf{b}_{3}\) lies on \(\mathsf{T}_{i}\) then for the desired \(\mathsf{a}_{i}\) we take images of \(\mathsf{b}_{i}\) in \(\Gamma_{\alpha}\). If \(\mathsf{b}_{3}\) lies on \(\mathsf{R}_{i}\) then for \(\mathsf{a}_{i}\) (\(i=1,2,3\)) we take images of \(\mathsf{b}_{1}\), \(\mathsf{b}_{2}\) and \(\tau(\mathsf{T}_{i})\), respectively. **10.20 Lemma**.: _Let \(1\leq\beta\leq\alpha\) and \(\mathsf{X}\) be a reduced path in \(\Gamma_{\alpha}\). Let \(\mathsf{K}_{1}\) and \(\mathsf{K}_{2}\) be fragments of rank \(\beta\) in \(\mathsf{X}\) such that \(\mu_{\mathrm{f}}(\mathsf{K}_{i})\geq\lambda+2.6\omega\ (i=1,2)\), \(\mathsf{K}_{1}<\mathsf{K}_{2}\) and \(\mathsf{K}_{1}\not\sim\mathsf{K}_{2}\). 
If a bridge of rank \(\beta\) starts or ends at \(\iota(\mathsf{X})\) then \(\mathsf{K}_{2}\) is independent of \(\mathsf{u}\). Similarly, if a bridge of rank \(\beta\) starts or ends at \(\tau(\mathsf{X})\) then \(\mathsf{K}_{1}\) is independent of \(\mathsf{u}\)._ Proof.: We consider the case when \(\iota(\mathsf{u})=\iota(\mathsf{X})\) (all other cases are similar). Assume that \(\mathsf{K}_{2}\) is not independent of \(\mathsf{u}\). By Definition 10.4, \(\mathsf{u}=\mathsf{vSw}\) where \(\mathsf{S}\) occurs in a relation loop \(\mathsf{R}\) of rank \(\beta\), \(\mathsf{v}\) and \(\mathsf{w}\) are bridges of rank \(\beta-1\) and \(\mathsf{R}^{\pm 1}\) is the base relation loop for \(\mathsf{K}\). Let \(\tilde{\mathsf{R}}\) and \(\tilde{\mathsf{X}}\) be lifts of \(\mathsf{R}\) and \(\mathsf{X}\) in \(\Gamma_{\beta-1}\) so that \(\tilde{\mathsf{R}}^{\pm 1}\) is the base axis for \(\tilde{\mathsf{K}}_{2}\). Lemma 9.22 implies that the starting vertex of \(\tilde{\mathsf{X}}\) is close to a vertex on \(\tilde{\mathsf{R}}\). Then using Proposition 10.21\({}_{\alpha-1}\) we conclude that the starting segment \(\tilde{\mathsf{X}}_{1}\tilde{\mathsf{K}}_{2}\) of \(\tilde{\mathsf{X}}\) is a fragment of rank \(\alpha\) with base axis \(\tilde{\mathsf{R}}\). Since \(\mathsf{K}_{1}\) is contained in \(\tilde{\mathsf{X}}_{1}\tilde{\mathsf{K}}_{2}\), Proposition 8.10 gives \(\mathsf{K}_{1}\sim\mathsf{K}_{2}\), a contradiction. **10.21 Proposition** (closeness preserves order).: _Let \(\mathsf{X}_{1}\mathsf{X}_{2}\) and \(\mathsf{Y}_{1}\mathsf{Y}_{2}\) be reduced paths in \(\Gamma_{\alpha}\) such that endpoints of \(\mathsf{X}_{i}\) and \(\mathsf{Y}_{i}\) are close in the order as in Figure 30. Then \(|\mathsf{X}_{1}|_{\alpha},|\mathsf{Y}_{2}|_{\alpha}<5.7\)._ Proof.: We can assume that \(\alpha\geq 1\). Due to symmetry, it is enough to show that \(|\mathsf{X}_{1}|_{\alpha}<5.7\). Denote \(\mathsf{u}_{i}\ (i=1,2,3)\) bridges of rank \(\alpha\) joining endpoints of \(\mathsf{X}_{i}\) and \(\mathsf{Y}_{i}\) as shown in Figure 30. _Claim 1:_\(\mathrm{Area}_{\alpha}(\mathsf{X}_{1}^{-1}\mathsf{u}_{1}\mathsf{Y}_{2}\mathsf{u }_{2}^{-1})\leq 1\). Proof of Claim 1.: Assume that \(\mathrm{Area}_{\alpha}(\mathsf{X}_{1}^{-1}\mathsf{u}_{1}\mathsf{Y}_{2}\mathsf{ u}_{2}^{-1})\geq 2\). Let \(\mathsf{K}_{i}\) and \(\mathsf{M}_{i}\ (i=1,2)\) be active fragments of rank \(\alpha\) in \(\mathsf{X}_{1}\) and \(\mathsf{Y}_{2}\), respectively, such that \(\mathsf{K}_{1}<\mathsf{K}_{2}\) and \(\mathsf{K}_{i}\sim\mathsf{M}_{i}^{-1}\). By Proposition 9.7(ii) and Lemma 10.20, \(\mathsf{K}_{2}\) is independent of \(\mathsf{u}_{1}\). Similarly, \(\mathsf{M}_{2}\) and hence \(\mathsf{K}_{2}\), are independent of \(\mathsf{u}_{3}\). By Propositions 9.7 and 10.5 applied to \((\mathsf{X}_{1}\mathsf{X}_{2})^{-1}\mathsf{u}_{1}\mathsf{Y}_{1}^{-1}\mathsf{u }_{3}^{-1}\), there is Figure 30. Figure 29. a fragment \(\mathsf{N}\) of rank \(\alpha\) in \(\mathsf{Y}_{1}\) such that \(\mathsf{N}\sim\mathsf{K}_{2}^{\pm 1}\) and \(\mu_{\mathrm{f}}(\mathsf{N})\geq 5\lambda-4.9\omega\). We obtain a contradiction with Corollary 9.24(ii),(iii). 
_Claim 2: If \(\mathrm{Area}_{\alpha}(\mathsf{X}_{1}^{-1}\mathsf{u}_{1}\mathsf{Y}_{2}\mathsf{ u}_{2}^{-1})=0\) and \(\mathit{label}(\mathsf{u}_{1}),\mathit{label}(\mathsf{u}_{2})\in\mathcal{H}_{ \alpha-1}\) then \(|\mathsf{X}_{1}|_{\alpha}<1+6.1\zeta\)._ Proof of Claim 2.: If \(r=\mathrm{Area}_{\alpha}(\mathsf{X}_{2}\mathsf{u}_{3}\mathsf{Y}_{1}\mathsf{Y}_ {2}\mathsf{u}_{2}^{-1})>0\) then we can reduce the statement to the case of a smaller \(r\) as explained in 9.4. So we can assume that \(\mathrm{Area}_{\alpha}(\mathsf{X}_{2}\mathsf{u}_{3}\mathsf{Y}_{1}\mathsf{Y}_ {2}\mathsf{u}_{2}^{-1})=0\). Then loops \(\mathsf{X}_{1}^{-1}\mathsf{u}_{1}\mathsf{Y}_{2}\mathsf{u}_{2}^{-1}\) and \(\mathsf{X}_{2}\mathsf{u}_{3}\mathsf{Y}_{1}\mathsf{Y}_{2}\mathsf{u}_{2}^{-1}\) can be lifted to \(\Gamma_{\alpha-1}\) (up to possible switching of \(\mathsf{u}_{3}\)). To simplify notations, we assume that these loops are already in \(\Gamma_{\alpha-1}\). Let \(\mathsf{u}_{3}=\mathsf{v}_{1}\mathsf{Q}\mathsf{v}_{2}\) where \(\mathit{label}(\mathsf{v}_{i})\in\mathcal{H}_{\alpha-1}\) and \(\mathit{label}(\mathsf{Q})\) is a piece of rank \(\alpha\). We obtain a coarse trigon in \(\Gamma_{\alpha-1}\) with sides \(\mathsf{X}_{1}\mathsf{X}_{2}\), \(\mathsf{Q}\) and \(\mathsf{Y}_{1}\), see Figure 31. Applying Propositions 9.19(i)\({}_{\alpha-1}\) and 10.21\({}_{\alpha-1}\) we obtain \[|\mathsf{X}_{1}\mathsf{X}_{2}|_{\alpha}<1+4\zeta^{2}\eta+5.7\zeta<1+6.1\zeta.\] _The rest of the proof:_ If \(\mathrm{Area}_{\alpha}(\mathsf{X}_{1}^{-1}\mathsf{u}_{1}\mathsf{Y}_{2}\mathsf{ u}_{2}^{-1})=0\) then the statement follows from Claim 2 and Proposition 9.11. By Claim 1, it remains to consider the case \(\mathrm{Area}_{\alpha}(\mathsf{X}_{1}^{-1}\mathsf{u}_{1}\mathsf{Y}_{2}\mathsf{ u}_{2}^{-1})=1\). Then \(\mathsf{X}_{1}\) can be represented as \(\mathsf{R}_{1}\mathsf{S}_{1}\mathsf{R}_{2}\mathsf{S}_{2}\mathsf{R}_{3}\) (see Figure 32) where each \(\mathsf{R}_{i}\) is a fragment of rank \(\alpha\) and by Claim 2 and Proposition 9.19(ii)\({}_{\alpha-1}\) each \(\mathsf{S}_{i}\) satisfies \(|\mathsf{S}_{i}|_{\alpha}<1+6.1\zeta+8\zeta^{2}\eta\). We obtain \[|\mathsf{X}_{1}|_{\alpha}<3+2(1+6.1\zeta+8\zeta^{2}\eta)<5.7.\] The proof is completed. In the end of the section we formulate several statements about stability of fragments in a more general setup when fragments have arbitrary rank \(\beta\) in the interval \(0\leq\beta\leq\alpha\). Figure 31. Figure 32. **10.22 Proposition**.: _Let \(S\) and \(T\) be close reduced paths in \(\Gamma_{\alpha}\). Let \(0\leq\beta<\alpha\) and let \(X\) and \(Y\) be close in rank \(\beta\) reduced paths in \(\Gamma_{\alpha}\) such that \(Y\) is a subpath of \(S\). Assume that \(|X|_{\alpha}\geq 2.3\) and \(Y\) contains no fragments \(K\) of rank \(\gamma\) with \(\beta<\gamma\leq\alpha\) and \(\mu_{f}(K)\geq\xi_{0}\). Then \(X\) can be represented as \(X=w_{1}X^{\prime}w_{2}\) where \(X^{\prime}\) is close in rank \(\beta\) to a subpath of \(T\) and \(|w_{i}|_{\alpha}<1.2\ (i=1,2)\)._ Proof.: Let \(S^{-1}u_{1}Tu_{2}\) and \(X^{-1}v_{1}Yv_{2}\) be corresponding coarse bigons. If \(Area_{\alpha}(S^{-1}u_{1}Tu_{2})>0\) then by the argument from 9.5 we reduce the statement to a new pair \((S,T)\) and a coarse bigon \(S^{-1}u_{1}Tu_{2}\) with a smaller value of \(Area_{\alpha}(S^{-1}u_{1}Tu_{2})\). Hence we can assume that \(Area_{\alpha}(S^{-1}u_{1}Tu_{2})=0\). Without changing notations, we assume that both loops \(S^{-1}u_{1}Tu_{2}\) and \(X^{-1}v_{1}Yv_{2}\) are in \(\Gamma_{\alpha-1}\). 
Let \(u_{i}=u_{i1}P_{i}u_{i2}\) where \(\mathit{label}(u_{ij})\in\mathcal{H}_{\alpha-1}\) and \(\mathit{label}(P_{i})\) is a piece of rank \(\alpha\). Observe that if a subpath \(X^{\prime}\) is close to a subpath of \(P_{1}\) or \(P_{2}\) then \(|X^{\prime}|_{\alpha}\leq 1\). Since \(|X|_{\alpha}\geq 2.3\) applying Lemma 10.2 we find a subpath of \(X\) close to a subpath of \(T\). We consider the case when \(X=z_{0}X_{1}z_{1}X_{2}z_{2}X_{3}z_{3}\) where \(X_{i}\ (i=1,2,3)\) are close to subpaths of \(P_{1}\), \(T\) and \(P_{2}\) respectively (the other cases from Lemma 10.2 give a better lower bound on \(|X_{2}|_{\alpha}\)). By Lemma 10.15 we can assume that \(|z_{0}|_{\alpha-1},|z_{3}|_{\alpha-1}<1.3\) and by Proposition 9.19(i)\({}_{\alpha-1}\) we can assume that \(|z_{1}|_{\alpha-1},|z_{2}|_{\alpha-1}<0.4\). We have \(|X_{1}|_{\alpha},|X_{3}|_{\alpha}\leq 1\), so \(|X_{2}|_{\alpha}>2.3-2-3\zeta=0.15\) and hence \(|X_{2}|_{\alpha-1}>3\). Then by Corollary 9.13\({}_{\alpha-1}\) we have \(X_{2}=t_{1}X^{\prime}t_{2}\) where \(X^{\prime}\) is close in rank \(\beta\) to a subpath of \(T\) and \(|t_{i}|_{\alpha-1}<1.03\). We have \(X=z_{1}X_{1}z_{2}t_{1}X^{\prime}t_{2}z_{3}X_{3}z_{4}\) where \(|z_{1}X_{1}z_{2}t_{1}|_{\alpha}<1+2.73\zeta<1.2\) and a similar bound holds for \(|t_{2}z_{3}X_{3}z_{4}|_{\alpha}\). **10.23 Proposition**.: _Let \(X\) and \(Y\) be reduced paths in \(\Gamma_{\alpha}\). Let \(1\leq\beta\leq\alpha\) and assume that either \(X\) or \(Y\) contains no fragments \(N\) of rank \(\gamma\) with \(\beta<\gamma\leq\alpha\) and \(\mu_{f}(N)\geq\xi_{0}\)._ _Let \(K_{i}\ (i=1,2)\) be fragments of rank \(\beta\) in \(X\) such that \(K_{1}\not\sim K_{2}\) and \(K_{1}<K_{2}\). Assume that at least one of the following conditions holds:_ * _there exist fragments_ \(M_{i}\ (i=1,2)\) _of rank_ \(\beta\) _in_ \(Y\) _such that_ \(\mu_{f}(M_{i})\geq\lambda+2.7\omega\)_,_ \(K_{i}\sim M_{i}^{\pm 1}\) _and_ \(M_{1}<M_{2}\)_; or_ * \(X\) _and_ \(Y\) _are close in rank_ \(\beta\)_._ _Then the following is true:_ * _Let_ \(N\) _be a fragment of rank_ \(\beta\) _in_ \(X\) _with_ \(\mu_{f}(N)\geq 2\lambda+9.1\omega\) _such that_ \(K_{1}<N<K_{2}\) _and_ \(N\not\sim K_{i}\) _for_ \(i=1,2\)_. Then there exists a fragment_ \(N^{\prime}\) _of rank_ \(\beta\) _in_ \(Y\) _such that_ \(N^{\prime}\sim N^{\pm 1}\)_,_ \(M_{1}<N^{\prime}<M_{2}\) _in case (*) and_ (10-3) \[\mu_{f}(N^{\prime})\geq\min\{\mu_{f}(N_{i})-2\lambda-3.4\omega,\ \xi_{0}\}\] _In case (*), if_ \(M_{1}\) _and_ \(M_{2}\) _are disjoint then we can assume that_ \(M_{1}\ll N^{\prime}\ll M_{2}\)_. This is the case (that is,_ \(M_{1}\) _and_ \(M_{2}\) _are necessarily disjoint) if_ \(\mu_{f}(N)\geq 4\lambda+9\omega\)_._ * _Assume that_ \(\mu_{f}(K_{i})\geq 2\lambda+9.1\omega\) _and in case (*),_ \(\mu_{f}(M_{i})\geq 2\lambda+9.1\omega\)_. Let_ \(K^{\prime}_{i}\ (i=1,2)\) _be a pair of another fragments of rank_ \(\beta\) _in_ \(X\) _and_ \(M^{\prime}_{i}\ (i=1,2)\) _a pair of another fragments of rank_ \(\beta\) _in_ \(Y\) _such that_ \(\mu_{f}(K^{\prime}_{i}),\mu_{f}(M^{\prime}_{i})\geq 2\lambda+9.1\omega\)_,_ \(K^{\prime}_{i}\sim M^{\prime\pm 1}_{i}\)__\((i=1,2)\) _and_ \(K^{\prime}_{1}\not\sim K^{\prime}_{2}\)_. 
Then_ \(K^{\prime}_{1}<K^{\prime}_{2}\) _if and only if_ \(M^{\prime}_{1}<M^{\prime}_{2}\)_._ _Furthermore, the statement of the proposition is true also in the case \(\beta=0\) if we drop all conditions of the form \(\mu_{f}(\cdot)\geq\dots\) for fragments of rank \(\beta\)._ Proof.: If \(\beta=0\) then by Proposition 9.10 we have \(M_{i}=K_{i}\ (i=1,2)\), \(M_{1}\cup M_{2}=K_{1}\cup K_{2}\) in case (*) and \(X=Y\) in case (**). Then the statement is trivial. We assume that \(\beta\geq 1\). (i): Assume that (*) holds. First assume that \(M_{1}\) and \(M_{2}\) are disjoint. Let \(X_{1}=K_{1}\cup K_{2}\) and \(Y_{1}\) be the subpath of \(Y\) between \(M_{1}\) and \(M_{2}\), i.e. \(Y=*M_{1}Y_{1}M_{2}*\). By Lemma 10.13(i) and Proposition 9.10 we have a loop \(X_{1}^{-1}uY_{1}v\) that can be lifted to \(\Gamma_{\beta}\) where \(u\) and \(v\) are bridges of rank \(\beta\). Up to change of notation, we assume that \(X_{1}^{-1}uY_{1}v\) is already in \(\Gamma_{\beta}\). Again by Lemma 10.13(i)\({}_{\beta}\), \(N\) is independent of \(u\) and \(v\). By Proposition 10.6\({}_{\beta}\), there exists \(N^{\prime}\) in \(Y_{1}\) satisfying (10-3) such that \(N^{\prime}\sim N^{\pm 1}\), i.e. we have \(M_{1}\ll N^{\prime}\ll M_{2}\) as required. Assume that \(M_{1}\) and \(M_{2}\) have a nonempty intersection. By Proposition 8.12\({}_{\beta}\) there exist fragments \(M_{1}^{\prime}\) and \(M_{2}^{\prime}\) of rank \(\beta\) such that \(M_{i}^{\prime}\sim M_{i}\), \(M_{1}^{\prime}\) is a start of \(M_{1}\) disjoint from \(M_{2}\) and \(M_{2}^{\prime}\) is an end of \(M_{2}\) disjoint from \(M_{1}\). Let \(Y_{2}=M_{1}\cup M_{2}\). Using the argument above with \(Y_{2}\) instead of \(Y_{1}\) and \(M_{1}^{\prime}\) instead of \(M_{1}\) we find \(N_{1}\) in \(Y_{2}\) disjoint from \(M_{2}\) such that \(\mu_{\rm f}(N_{1})>5.7\omega\) and \(N_{1}\sim N^{\pm 1}\). Similarly, using \(Y_{2}\) instead of \(Y_{1}\) and \(M_{2}^{\prime}\) instead of \(M_{2}\) we find \(N_{2}\) in \(Y_{2}\) disjoint from \(M_{1}\) such that \(\mu_{\rm f}(N_{2})>5.7\omega\) and \(N_{2}\sim N^{\pm 1}\). Then we can take \(N^{\prime}=N_{1}\cup N_{2}\) by Corollary 9.24(i), (iii). If \(\mu_{\rm f}(N)\geq 4\lambda+9\omega\) then \(\mu_{\rm f}(N^{\prime})>2\lambda+5.6\omega\) and using Propositions 8.11\({}_{\beta}\) and 8.10\({}_{\beta}\) we conclude that \(M_{1}\) and \(M_{2}\) cannot cover \(N^{\prime}\) together, i.e. \(M_{1}\ll M_{2}\). In case (**) we already have a loop \(X^{-1}uYv\) with bridges \(u\) and \(v\) of rank \(\beta\). We lift it to \(\Gamma_{\beta}\) and then apply Lemma 10.20\({}_{\beta}\) to see that the lift of \(N\) is independent of the lifts of \(u\) and \(v\). Then application of Proposition 10.6\({}_{\beta}\) gives the required \(N^{\prime}\). (ii): An easy analysis with a help of Propositions 9.24(ii) and 8.10\({}_{\beta}\) shows that it is enough to prove the following: _Let \(X\) and \(Y\) be reduced paths in \(\Gamma_{\alpha}\). Let \(K_{i}\)\((i=1,2,3)\) be fragments of rank \(\beta\) in \(X\), \(M_{i}\)\((i=1,2,3)\) be fragments of rank \(\beta\) in \(Y\), \(\mu_{\rm f}(K_{i}),\mu_{\rm f}(M_{i})\geq\lambda+9.1\omega\), \(K_{i}\sim M_{i}^{\pm 1}\) for all \(i\) and \(K_{i}\not\sim K_{j}\) for \(i\neq j\). If \(K_{1}<K_{2}<K_{3}\) and \(M_{1}<M_{3}\) then \(M_{1}<M_{2}<M_{3}\)._ Assume that this is not the case, that is, we have \(K_{1}<K_{2}<K_{3}\), \(M_{1}<M_{3}\) and either \(M_{2}<M_{1}\) or \(M_{3}<M_{2}\). 
By (i), there exists a fragment \(N\) of rank \(\alpha\) in \(Y\) such that \(K_{2}\sim N^{\pm 1}\) and \(M_{1}<N<M_{3}\). Then by Propositions 9.24(i) and 8.10\({}_{\beta}\) we obtain \(M_{1}\sim N\) or \(M_{3}\sim N\), a contradiction. **Proposition**.: _Let \(X\) and \(Y\) be words strongly cyclically reduced in \(G_{\alpha}\), representing conjugate elements of \(G_{\alpha}\). Let \(\bar{X}\) and \(\bar{Y}\) be lines in \(\Gamma_{\alpha}\) representing the conjugacy relation. Let \(1\leq\beta\leq\alpha\). Assume that at least one of the words \(X\) or \(Y\) has the property that no its cyclic shift contains a fragment \(K\) of rank \(\gamma\) with \(\mu_{\rm f}(K)>\xi_{0}\) and \(\beta<\gamma\leq\alpha\). Let \(\bar{X}=\ldots X_{-1}X_{0}X_{1}\dots\) and \(\bar{Y}=\ldots Y_{-1}Y_{0}Y_{1}\dots\) be lines in \(\Gamma_{\alpha}\) representing the conjugacy relation._ 1. _Then for any fragment_ \(K\) _of rank_ \(\beta\) _in_ \(\bar{X}\) _with_ \(\mu_{\rm f}(K)\geq 2\lambda+9.1\omega\) _there exists a fragment_ \(M\) _of rank_ \(\beta\) _in_ \(\bar{Y}\) _such that_ \(M\sim K^{\pm 1}\) _and_ \[\mu_{\rm f}(M)\geq\min\{\mu_{\rm f}(K)-2\lambda-3.4\omega,\ \xi_{0}\}\] 2. _If_ \(X\) _and_ \(Y\) _are strongly cyclically reduced in_ \(G_{\alpha}\) _then the correspondence between fragments of rank_ \(\beta\) _in_ \(\bar{X}\) _and in_ \(\bar{Y}\) _preserves the ordering in the following sense: if_ \(K_{i}\)__\((i=1,2)\) _are fragments of rank_ \(\beta\) _in_ \(\bar{X}\)_,_ \(M_{i}\)__\((i=1,2)\) _are fragments of rank_ \(\beta\) _in_ \(\bar{Y}\)_,_ \(\mu_{\rm f}(K_{i}),\mu_{\rm f}(M_{i})\geq 2\lambda+9.1\omega\)_,_ \(K_{i}\sim M_{i}^{\pm 1}\)__\((i=1,2)\) _and_ \(K_{1}\not\sim K_{2}\)_. Then_ \(K_{1}<K_{2}\) _if and only if_ \(M_{1}<M_{2}\)_._ _Furthermore, the statement of the proposition is true also in the case \(\beta=0\) if we drop all conditions of the form \(\mu_{\rm f}(\cdot)\geq\dots\) for fragments of rank \(\beta\)._ Proof.: By Proposition 9.17 every subpath of \(\bar{X}\) can be extended to be close in rank \(\beta\) to a subpath of \(\bar{Y}\). Then (i) follows from Proposition 8.16(ii) and Proposition 10.23(i) with \(K_{1}=s_{X,\bar{X}}^{-1}K\) and \(K_{2}=s_{X,\bar{X}}K\). Statement (ii) follows by Proposition 10.23(ii). In the case \(\beta=0\) the statement becomes trivial after application of Proposition 9.17. ## 11. Reduced representatives The main goal of this section is to prove that any element of \(G_{\alpha}\) can be represented by a reduced word and to prove a cyclic analog of this statement (Proposition 11.5). **11.1 Proposition** (reduced representative).: _Every element of \(G_{\alpha}\) can be represented by a reduced in \(G_{\alpha}\) word which contains no fragments \(F\) of rank \(1\leq\beta\leq\alpha\) with \(\mu_{\mathrm{f}}(\mathsf{F})\geq\frac{1}{2}+2\lambda+15\omega\)._ **11.2 Lemma**.: _Let \(m\geq 3\) and \(\mathsf{X}^{-1}\mathsf{x}\mathsf{Y}_{1}\mathsf{x}\mathsf{Y}_{2}\mathsf{*} \dots\mathsf{x}\mathsf{Y}_{m}\mathsf{*}\) be a coarse \((m+1)\)-gon in \(\Gamma_{\alpha-1}\). 
Assume that there are indices \(1\leq t_{1}<t_{2}<\dots<t_{k}\leq m\)\((k\geq 1)\) such that_ \[t_{1}\leq 3,\quad t_{k}\geq m-2,\quad t_{j}-t_{j-1}\leq 2\text{ for all }j\] _and_ \[|\mathsf{Y}_{t_{j}}|_{\alpha-1}>4\eta\quad\text{for all }j.\] _Assume further that there are no close vertices in each of the pairs \((\mathsf{Y}_{i},\mathsf{Y}_{i+1})\), \((\mathsf{Y}_{1},\mathsf{Y}_{t_{1}})\), \((\mathsf{Y}_{t_{j}},\mathsf{Y}_{t_{j}+1})\), \((\mathsf{Y}_{t_{k}},\mathsf{Y}_{m})\) except appropriate endpoints (i.e. except \(\tau(\mathsf{Y}_{i})\) and \(\iota(\mathsf{Y}_{i+1})\)). Then each of the paths \(\mathsf{Y}_{t_{j}}\) has a vertex close to a vertex \(\mathsf{a}_{j}\) on \(\mathsf{X}\) and these vertices \(\mathsf{a}_{j}\) are in \(\mathsf{X}\) in the (non-strict) order from start to end._ Proof.: We first claim that there are no close vertices in pairs \((\mathsf{Y}_{i},\mathsf{Y}_{j})\) for \(j-i>1\). Assume there are. We choose such a pair with minimal possible \(j-i\). Then an ending segment \(\mathsf{Y}_{i}^{\prime}\) of \(\mathsf{Y}_{i}\), paths \(\mathsf{Y}_{i+1}\),..., \(\mathsf{Y}_{j-1}\) and a starting segment \(\mathsf{Y}_{j}^{\prime}\) of \(\mathsf{Y}_{j}\) form a coarse \(r\)-gon with \(r=j-i+1\geq 3\). Applying Proposition 9.18\({}_{\alpha-1}\) we get \[\sum_{k=i+1}^{j-1}|\mathsf{Y}_{i}|_{\alpha-1}\leq(r-2)\eta.\] On the other hand, it follows from the hypothesis of the lemma that there are at least \(\min(1,\frac{1}{2}(r-3))\) paths \(\mathsf{Y}_{t_{k}}\) among \(\mathsf{Y}_{i+1}\),..., \(\mathsf{Y}_{j-1}\) and hence \[\sum_{k=i+1}^{j-1}|\mathsf{Y}_{i}|_{\alpha-1}>4\eta\min\left(1,\frac{1}{2}(r- 3)\right).\] We get a contradiction since the right hand side of the inequality is at least \((r-2)\eta\). This proves the claim. Shortening if necessary \(\mathsf{Y}_{1}\) and \(\mathsf{X}\) we can assume that there is no pair of close vertices on \(\mathsf{Y}_{1}\) and \(\mathsf{X}\) other that \((\iota(\mathsf{Y}_{1}),\iota(\mathsf{X}))\). Similarly, we can assume that there is no pair of close vertices on \(\mathsf{Y}_{m}\) and \(\mathsf{X}\) other than \((\tau(\mathsf{Y}_{m}),\tau(\mathsf{X}))\). Now we claim that there is a pair of close vertices on \(\mathsf{Y}_{i}\) and \(\mathsf{X}\) for some \(2\leq i\leq m-1\). Indeed, otherwise we can apply Proposition 9.18\({}_{\alpha-1}\) to the whole coarse \((m+1)\)-gon \(\mathsf{X}^{-1}\mathsf{x}\mathsf{Y}_{1}\mathsf{x}\mathsf{Y}_{2}\mathsf{*} \dots\mathsf{x}\mathsf{Y}_{m}\mathsf{*}\) and obtain a contradiction since \(4k\eta\geq(m-1)\eta\). Let \((\mathsf{b},\mathsf{c})\) be a pair of close vertices on \(\mathsf{X}\) and \(\mathsf{Y}_{i_{0}}\) where \(2\leq i_{0}\leq m-1\). Let \(\mathsf{b}\) divide \(\mathsf{X}\) as \(\mathsf{X}_{1}\mathsf{X}_{2}\) and \(\mathsf{c}\) divide \(\mathsf{Y}_{i_{0}}\) as \(\mathsf{Z}_{1}\mathsf{Z}_{2}\) If there is at least one index \(t_{j}\) in the interval \(2\leq t_{j}\leq i_{0}-1\) then the conditions of the lemma are satisfied for the coarse \((i_{0}+1)\)-gon \(\mathsf{X}_{1}^{-1}\mathsf{x}\mathsf{Y}_{1}\mathsf{*}\dots\mathsf{Y}_{i_{0}-1} \mathsf{*}\mathsf{Z}_{1}\mathsf{*}\) and we conclude by induction that every \(\mathsf{Y}_{t_{j}}\) with \(t_{j}<i_{0}\) has a vertex close to a vertex \(\mathsf{a}_{j}\) on \(\mathsf{X}\) and the vertices \(\mathsf{a}_{j}\) occur in \(\mathsf{X}\) in the appropriate order. Similarly, we conclude the same for every path \(\mathsf{Y}_{t_{j}}\) with \(t_{j}>i_{0}\). This implies the statement for all \(\mathsf{Y}_{t_{j}}\). 
**11.3 Lemma**.: _Let \(X\) be a word reduced in \(G_{\alpha-1}\). Assume that for any fragment \(K\) of rank \(\alpha\) in \(X\) we have_ \[\mu_{\rm f}(K)\leq 1-3\lambda-5\omega.\] _Then there exists a word \(Y\) equal to \(X\) in \(G_{\alpha}\) which is reduced in \(G_{\alpha-1}\) and such that for any fragment \(M\) of rank \(\alpha\) in \(Y\) we have_ \[\mu_{\rm f}(M)<\frac{1}{2}+2\lambda+15\omega.\] _In particular, \(Y\) is reduced in \(G_{\alpha}\) (note that \(\frac{1}{2}+2\lambda+15\omega<\rho=1-9\lambda\) by (2-3) and (4-1).)_ Proof.: We represent \(X\) by a reduced path \({\sf X}\) in \(\Gamma_{\alpha-1}\). Denote \[t=\frac{1}{2}+11\omega.\] Let \({\sf K}_{1}\),..., \({\sf K}_{r}\) be a maximal set of pairwise non-compatible fragments of rank \(\alpha\) in \({\sf X}\) with \(\mu_{\rm f}({\sf K}_{i})\geq t\). We assume that each \({\sf K}_{i}\) has maximal size \(\mu_{\rm f}({\sf K}_{i})\) in its equivalence class of compatible fragments of rank \(\alpha\) occurring in \({\sf X}\). Using Proposition 8.12 we shorten each \({\sf K}_{i}\) from the start obtaining a fragment \(\bar{\sf K}_{i}\) of rank \(\alpha\) so that \(\bar{\sf K}_{i}\) do not intersect pairwise; we have \(\mu_{\rm f}(\bar{\sf K}_{i})>\mu_{\rm f}({\sf K}_{i})-\lambda-2.7\omega\). Let \[{\sf X}={\sf S}_{0}\bar{\sf K}_{1}{\sf S}_{1}\ldots\bar{\sf K}_{r}{\sf S}_{r}.\] Let \({\sf P}_{i}\) be a base for \(\bar{\sf K}_{i}\); for each \(i\), we have a coarse bigon \(\bar{\sf K}_{i}^{-1}\mathsf{u}_{i}{\sf P}_{i}\mathsf{v}_{i}\) with bridges \(\mathsf{u}_{i}\) and \(\mathsf{v}_{i}\). Let \(P_{i}=\mathit{label}({\sf P}_{i})\) and \(P_{i}Q_{i}^{-1}\) be the associated relator of rank \(\alpha\). We consider a path in \(\Gamma_{\alpha-1}\) \[{\sf Z}={\sf S}_{0}^{*}\mathsf{u}_{1}^{*}{\sf Q}_{1}\mathsf{v}_{1}^{*}{\sf S }_{1}^{*}\ldots\mathsf{u}_{r}^{*}{\sf Q}_{r}\mathsf{v}_{r}^{*}{\sf S}_{r}^{*}\] where labels of \({\sf S}_{i}^{*}\), \(\mathsf{u}_{i}^{*}\) and \(\mathsf{v}_{i}^{*}\) are equal to corresponding labels of \({\sf S}_{i}\), \(\mathsf{u}_{i}\) and \(\mathsf{v}_{i}\) and \(\mathit{label}({\sf Q}_{i})=Q_{i}\). Note that \(\mathit{label}({\sf Z})=X\) in \(G_{\alpha}\). We perform the following procedure: 1. if a pair of vertices on \({\sf Q}_{i}\) and \({\sf S}_{i}^{*}\) are close and is distinct from \((\tau({\sf Q}_{i}),\iota({\sf S}_{i}^{*}))\) then we choose a bridge \(\mathsf{w}\) of rank \(\alpha-1\) joining these vertices, replace \(\mathsf{v}_{i}^{*}\) with \(\mathsf{w}\) and shorten \({\sf Q}_{i}\) from the end and \({\sf S}_{i}^{*}\) from the start; similarly, if a pair of vertices on \({\sf Q}_{i}\) and \({\sf S}_{i-1}^{*}\) are close and is distinct from \((\iota({\sf Q}_{i}),\tau({\sf S}_{i-1}^{*}))\) then we choose a bridge \(\mathsf{w}\) of rank \(\alpha-1\) joining them and replace \(\mathsf{u}_{i}^{*}\) with \(\mathsf{w}\) shortening \({\sf Q}_{i}\) from the start and \({\sf S}_{i-1}^{*}\) from the end; we apply recursively the operation until possible; 2. 
if a vertex on \({\sf Q}_{i}\) is close to a vertex on \({\sf Q}_{i+1}^{*}\) then we choose a bridge \(\mathsf{w}\) of rank \(\alpha-1\) joining these vertices, shorten \({\sf Q}_{i}\) from the end and \({\sf Q}_{i+1}\) from the end and join then by \(\mathsf{w}\) (so \({\sf S}_{i}^{*}\) is eliminated and \(\mathsf{v}_{i}^{*}{\sf S}_{i}^{*}\mathsf{u}_{i}^{*}\) is replaced with a bridge \(\mathsf{w}\) of rank \(\alpha-1\)); we apply recursively the operation until possible; After the procedure, we obtain a path \[{\sf Z}_{1}={\sf T}_{0}{\sf U}_{0}{\sf R}_{1}{\sf U}_{1}\ldots{\sf R}_{r}{\sf U }_{r}{\sf T}_{r}\] where for each \(i\), \({\sf R}_{i}\) is a subpath of \({\sf Q}_{i}\) and \({\sf U}_{i}\) either is a bridge of rank \(\alpha-1\) or has the form \(\mathsf{w}_{i}{\sf T}_{i}{\sf z}_{i}\) where \({\sf T}_{i}\) is a subpath of \({\sf S}_{i}^{*}\) and \(\mathsf{w}_{i}\) and \({\sf z}_{i}\) are bridges of rank \(\alpha-1\). Let \({\sf Y}\) be a reduced path with the same endpoints as \({\sf Z}_{1}\). Our goal is to prove that the label \(Y\) of \({\sf Y}\) satisfies the requirement of the lemma, that is, for any fragment \({\sf N}\) of rank \(\alpha\) in \({\sf Y}\) we have \(\mu_{\rm f}({\sf N})<\frac{1}{2}+2\lambda+15\omega\). We compute a lower bound for \(\mu({\sf R}_{i})\). Fix \(i\) and let \({\sf Q}_{i}={\sf Q}^{\prime}{\sf R}_{i}{\sf Q}^{\prime\prime}\). At step (i) of the procedure, we do not shorten \({\sf Q}_{i}\) more than this would give a fragment of rank \(\alpha\) in \({\sf X}\) with a base that is a proper extension of \({\sf P}_{i}\), so we get \(\mu({\sf Q}_{i})\geq 1-\mu_{\rm f}({\sf K}_{i})\geq 3\lambda+5\omega\). At step (ii) we shorten \(Q_{i}\) from each side by less than \(\lambda+0.4\omega\) (this follows from Proposition 9.19(i)\({}_{\alpha-1}\), Proposition 8.15 and Corollary 8.2). This implies \(\mu(R_{i})>\lambda+4\omega\) and, in particular, \(|R_{i}|_{\alpha-1}>4\eta\). We apply Lemma 11.2 with \(X:=Y\) where \(R_{i}\) and \(T_{i}\) play the role of \(Y_{i}\)'s and \(R_{i}\) are taken as \(Y_{t_{i}}\). The lemma says that each path \(R_{i}\) has a vertex close to a vertex on \(Y\) and these vertices on \(Y\) are appropriately ordered. We can write \[Y=V_{0}M_{1}V_{1}\dots M_{r}V_{r}\] where each \(M_{i}\) is close to a subpath of \(Q_{i}\) (at the moment each \(M_{i}\) is empty because it is represented by a vertex on \(Y\)). Extending \(M_{i}\)'s we make them maximal so that no vertex on \(W_{i}\) except \(\iota(V_{i})\) is close to a vertex on \(Q_{i}\) and no vertex on \(V_{i}\) except \(\tau(V_{i})\) is close to a vertex on \(Q_{i+1}\). Up to location of \(Z\) in \(\Gamma_{\alpha-1}\) we can assume that it starts at \(\iota(X)\). Combining the two graphs shown in Figure 33a and mapping them to \(\Gamma_{\alpha}\) we obtain a graph as shown in Figure 33b. This graph is similar to one obtained from a single-layer diagram (as in Figure 15). An easy analysis with use of Proposition 9.19\({}_{\alpha-1}\), Proposition 8.15 and Corollary 8.2 shows that \(M_{i}\) and some extension \(\tilde{K}_{i}\) of \(\tilde{K}_{i}\) satisfy the bound as in Proposition 9.7, i.e. \[\mu_{f}(M_{i})+\mu_{f}(\tilde{K}_{i})>1-2\lambda-1.5\omega.\] Since \(\mu_{f}(\tilde{K}_{i})\leq\mu_{f}(K_{i})\leq 1-3\lambda-5\omega\) we obtain that for all \(i\), \[\mu_{f}(M_{i})>\lambda+3.5\omega.\] Figure 33. Let \(\mathsf{N}\) be a fragment of rank \(\alpha\) in \(\mathsf{Y}\). 
By Proposition 8.10, we have either \(\mathsf{N}\sim\mathsf{M}_{i}\) or \(\mathsf{N}\subseteq\mathsf{M}_{i}\cup\mathsf{M}_{i+1}\) for some \(i\). In the case when \(\mathsf{N}\subseteq\mathsf{M}_{i}\cup\mathsf{M}_{i+1}\), \(\mathsf{N}\not\sim\mathsf{M}_{i}\) and \(\mathsf{N}\not\sim\mathsf{M}_{i+1}\) we can apply the argument from the proof of Proposition 10.5 and find a fragment \(\mathsf{N}^{\prime}\) in \(\mathsf{X}\) such that \[\mu_{\mathrm{f}}(\mathsf{N}^{\prime})>\mu_{\mathrm{f}}(\mathsf{N})-2\lambda-3.4\omega.\] We have also \(\mathsf{N}^{\prime}\not\sim\mathsf{K}_{i},\mathsf{K}_{i+1}\) and hence \(\mathsf{N}^{\prime}\not\sim\mathsf{K}_{j}\) for all \(j\). By the choice of the \(\mathsf{K}_{i}\)'s, we have \(\mu_{\mathrm{f}}(\mathsf{K}^{\prime})<t\) and hence \[\mu_{\mathrm{f}}(\mathsf{N})<t+2\lambda+3.4\omega<\frac{1}{2}+2\lambda+15\omega.\] Assume that \(\mathsf{N}\sim\mathsf{M}_{i}\) for some \(i\). Let \(\bar{\mathsf{Q}}\) and \(\bar{\mathsf{P}}\) be bases for \(\mathsf{N}\) and \(\mathsf{K}_{i}\) respectively. Images of \(\bar{\mathsf{Q}}^{-1}\) and \(\bar{\mathsf{P}}\) in \(\Gamma_{\alpha}\) are subpaths of a relator loop and have at most two overlapping parts. We give an upper bound for \(\mu(\bar{\mathsf{Q}})+\mu(\bar{\mathsf{P}})\) by finding an upper bound for the size of each overlapping part. Assume, for example, that an end of the image of \(\bar{\mathsf{P}}\) in \(\Gamma_{\alpha}\) overlaps with a start of the image of \(\bar{\mathsf{Q}}^{-1}\). Changing the location of \(\mathsf{Z}\) in \(\Gamma_{\alpha-1}\) we can assume that \(\bar{\mathsf{P}}\) and \(\bar{\mathsf{Q}}^{-1}\) overlap on a subpath \(\mathsf{W}\) of the same size already in \(\Gamma_{\alpha-1}\). We consider the case \(i<r\) (see Figure 34; the case \(i=r\) is similar with a better upper bound on \(\mu(\mathsf{W})\)). We apply Proposition 9.19(ii)\({}_{\alpha-1}\) to a coarse tetragon with one side \(\mathsf{W}\) and other sides which are an end \(\mathsf{S}\) of \(\mathsf{S}_{i}\bar{\mathsf{K}}_{i+1}\), a start \(\mathsf{V}\) of \(\mathsf{M}_{i+1}^{-1}\mathsf{V}_{i}^{-1}\) and a subpath of a common base axis \(\mathsf{L}\) for \(\mathsf{K}_{i+1}^{-1}\) and \(\mathsf{N}_{i+1}\). In the worst case we have \(\mathsf{W}=\mathsf{W}_{1}\mathsf{z}_{1}\mathsf{W}_{2}\mathsf{z}_{2}\mathsf{ W}_{3}\) where \(\mathsf{W}_{1}\) is close to a subpath of \(\mathsf{V}^{-1}\), \(\mathsf{W}_{2}\) is close to a subpath of \(\mathsf{L}^{-1}\), \(\mathsf{W}_{3}\) is close to a subpath of \(\mathsf{S}^{-1}\) and \(|\mathsf{z}_{i}|_{\alpha-1}\leq 4\eta\zeta\). Proposition 10.21\({}_{\alpha-1}\) implies \(|\mathsf{W}_{1}|_{\alpha-1}<5.7\) and \(|\mathsf{W}_{3}|_{\alpha-1}<5.7\). Since \(\mathsf{K}_{i}\not\sim\mathsf{K}_{i+1}\) we obtain \(\mu(\mathsf{W}_{2})<\lambda\). Hence \[\mu(\mathsf{W})<\lambda+2\omega(5.7+4\eta\zeta)<\lambda+13\omega.\] We obtain \[\mu_{\mathrm{f}}(\mathsf{N})+\mu_{\mathrm{f}}(\mathsf{K}_{i})<1+2\lambda+26\omega.\] Since \(\mu_{\mathrm{f}}(\mathsf{K}_{i})\geq t\) this implies the required bound \(\mu_{\mathrm{f}}(\mathsf{N})<\frac{1}{2}+2\lambda+15\omega\). **11.4 Lemma**.: _Let \(\alpha\geq 1\) and \(X\) be a word reduced in \(G_{\alpha}\) and \(a\in\mathcal{A}^{\pm 1}\) a letter in the generators of \(G_{\alpha}\). Let \(Y\) be a word reduced in \(G_{\alpha-1}\) such that \(Y=Xa\) in \(G_{\alpha-1}\). Then \(Y\) has no fragments \(K\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K)\geq\rho+6.2\omega\)._ Proof.: Follows from Lemma 10.8 and Proposition 8.8. Figure 34. 
Proof of Proposition 11.1.: It is trivial if \(\alpha=0\). In the case \(\alpha\geq 1\) Proposition 11.1 follows by induction from Lemmas 11.3 and 11.4 since \(\rho+6.2\omega<1-3\lambda-5\omega\). We turn to the cyclic analogue of Proposition 11.1: **11.5 Proposition** (cyclically reduced representative).: _Every element of \(G_{\alpha}\) of finite order is conjugate to a cyclically reduced word of the form \(R_{0}^{k}\) where \(R_{0}\) is the root of a relator of rank \(\beta\), \(1\leq\beta\leq\alpha\)._ _Every element of \(G_{\alpha}\) of infinite order is conjugate to a strongly cyclically reduced word in \(G_{\alpha}\)._ **11.6 Lemma** (a cyclic version of Lemma 11.2).: _Let \(X\) be a word cyclically reduced in \(G_{\alpha-1}\) representing an element of \(G_{\alpha-1}\) of infinite order. Let \(m\geq 2\), \(Y_{1},\ldots,Y_{m}\) be words reduced in \(G_{\alpha-1}\), \(u_{1},\ldots,u_{m}\) be bridges of rank \(\alpha-1\) and let \(X\) be conjugate to \(Y_{1}u_{1}\ldots Y_{m}u_{m}\) in \(G_{\alpha-1}\). Let \(\prod_{i\in\mathbb{Z}}\mathsf{Y}_{1}^{(i)}\mathsf{u}_{1}^{(i)}\ldots\mathsf{ Y}_{m}^{(i)}\mathsf{u}_{m}^{(i)}\) and \(\bar{\mathsf{X}}=\prod_{i\in\mathbb{Z}}\mathsf{X}^{(i)}\) be lines in \(\Gamma_{\alpha-1}\) labeled \((Y_{1}u_{1}\ldots Y_{m}u_{m})^{\infty}\) and \(X^{\infty}\) respectively representing the conjugacy relation._ _Assume that there are indices \(1\leq t_{1}<t_{2}<\cdots<t_{k}\leq m\)\((k\geq 1)\) such that_ \[m+t_{1}-t_{m}\leq 2,\quad t_{j}-t_{j-1}\leq 2\quad\text{ for all }j,\] _and_ \[|Y_{t_{j}}|_{\alpha-1}>4\eta\quad\text{for all }j.\] _Assume that there are no close vertices in each of the pairs \((\mathsf{Y}_{i}^{(0)},\mathsf{Y}_{i+1}^{(0)})\), \((\mathsf{Y}_{m}^{(0)},\mathsf{Y}_{1}^{(1)})\), \((\mathsf{Y}_{t_{j}}^{(0)},\mathsf{Y}_{t_{j}+1}^{(0)})\), \((\mathsf{Y}_{t_{k}}^{(0)},\mathsf{Y}_{t_{1}}^{(1)})\) except appropriate endpoints (i.e. except pairs \((\tau(\mathsf{Y}_{i}^{(0)}),\iota(\mathsf{Y}_{i+1}^{(0)}))\) and \((\tau(\mathsf{Y}_{m}^{(0)}),\iota(\mathsf{Y}_{1}^{(1)}))\)). Then each of the paths \(\mathsf{Y}_{t_{j}}^{(0)}\), \(j=1,\ldots,k\) has a vertex close to a vertex \(\mathsf{a}_{j}\) on \(\bar{\mathsf{X}}\) and these vertices \(\mathsf{a}_{j}\) are in the (non-strict) order corresponding to the order of the \(\mathsf{Y}_{j}^{(0)}\)'s (and \(\mathsf{a}_{k}\) is located non-strictly before \(s_{X,\bar{\mathsf{X}}}\mathsf{a}_{0}\))._ Proof.: The proof follows the proof of Lemma 11.2 with appropriate changes. _Claim 1: There are no close vertices in pairs \((\mathsf{Y}_{i}^{(0)},\mathsf{Y}_{j}^{(0)})\) with \(j-i>1\) and \((\mathsf{Y}_{i}^{(0)},\mathsf{Y}_{j}^{(1)})\) with \(j+m-i>1\)._ The proof repeats the argument from the proof of Lemma 11.2. _Claim 2: For some \(i\), there are close vertices in the pair \((\mathsf{Y}_{i}^{(0)},\bar{\mathsf{X}})\)._ Assume this is not true. Consider an annular diagram \(\Delta\) of rank \(\alpha-1\) with boundary loops \(\bar{\mathsf{X}}^{-1}\) and \(\hat{\mathsf{Y}}_{1}\hat{\mathsf{u}}_{1}\ldots\hat{\mathsf{Y}}_{m}\hat{\mathsf{ u}}_{m}\) and a combinatorially continuous map \(\phi:\tilde{\Delta}\to\Gamma_{\alpha-1}\) such that \(\phi\) maps the boundary of \(\tilde{\Delta}\) to \(\bar{\mathsf{X}}^{-1}\) and \(\prod_{i}\mathsf{Y}_{1}^{(i)}\mathsf{u}_{1}^{(i)}\ldots\mathsf{Y}_{m}^{(i)} \mathsf{u}_{m}^{(i)}\). The assumption, Claim 1 and the hypothesis of the lemma imply that \(\Delta\) is small. 
Application of Proposition 7.9\({}_{\alpha-1}\) gives \[\sum_{i}|Y_{i}|_{\alpha-1}\leq\eta m.\] On the other hand, from the hypothesis of the lemma we have \(\sum_{i}|Y_{i}|_{\alpha-1}\geq 4k\eta>\eta m\), a contradiction. This proves the claim. By Claim 2, assume without loss of generality that there is a vertex \(\mathsf{b}\) on \(\mathsf{Y}_{1}^{(0)}\) which is close to a vertex \(\mathsf{c}\) on \(\bar{\mathsf{X}}\). Let \(\mathsf{b}\) divide \(\mathsf{Y}_{1}^{(0)}\) as \(\mathsf{Y}_{1}^{(0)}=\mathsf{Z}_{1}\mathsf{Z}_{2}\) and up to cyclic shift of \(X\), assume that \(\mathsf{X}^{(0)}\) starts at \(\mathsf{c}\). Now we can directly apply Lemma 11.2 to the coarse \((m+2)\)-gon \[(\mathsf{X}^{(0)})^{-1}*\mathsf{Z}_{2}\mathsf{u}_{1}^{(0)}\mathsf{Y}_{2}^{(0)} \ldots\mathsf{u}_{m-1}^{(0)}\mathsf{Y}_{m}^{(0)}\mathsf{u}_{m}^{(0)}\mathsf{Z} _{1}*\] and get the required conclusion. ### Lemma (a cyclic version of Lemma 11.3).: _Let \(X\) be a word strongly cyclically reduced in \(G_{\alpha-1}\). Assume that \(X\) is not conjugate in \(G_{\alpha}\) to a power of the root of a relator of rank \(\beta\leq\alpha\). Next, assume that for any fragment \(K\) of rank \(\alpha\) in a cyclic shift of \(X\) we have_ \[\mu_{\mathrm{f}}(K)\leq 1-4\lambda-8\omega.\] _Then there exists a word \(Z\) conjugate to \(X\) in \(G_{\alpha}\) which is strongly cyclically reduced in \(G_{\alpha-1}\) and such that no power \(Z^{k}\) contains a fragment \(L\) of rank \(\alpha\) with_ \[\mu_{\mathrm{f}}(L)<\frac{1}{2}+2\lambda+15\omega.\] _In particular, \(Z\) is strongly cyclically reduced in \(G_{\alpha}\)._ Proof.: The general scheme is the same as in the proof of Lemma 11.3. Let \(\bar{\mathsf{X}}=\prod_{i\in\mathbb{Z}}\mathsf{X}_{i}\) be a line in \(\Gamma_{\alpha-1}\) labeled \(X^{\infty}\). First we note that for any fragment \(\mathsf{K}\) of rank \(\alpha\) in \(\bar{\mathsf{X}}\) we have \(s_{X,\bar{\mathsf{X}}}\mathsf{K}\not\sim\mathsf{K}\) by Proposition 8.16(ii). By Propositions 8.10 and 8.11 there exists a starting segment \(\mathsf{K}^{\prime}\) of \(\mathsf{K}\) that is a fragment of rank \(\alpha\) with \(\mu_{\mathrm{f}}(\mathsf{K}^{\prime})>\mu_{\mathrm{f}}(\mathsf{K})-\lambda-3\omega\) and \(|\mathsf{K}^{\prime}|\leq|X|\), i.e. _label_(\(\mathsf{K}^{\prime}\)) occurs in a cyclic shift of \(X\). Then the hypothesis of the lemma implies that \(\bar{\mathsf{X}}\) contains no fragments \(\mathsf{K}\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(\mathsf{K})\geq 1-3\lambda-5\omega\). Denote \(t=\frac{1}{2}+11\omega\). We can assume that there is at least one fragment \(\mathsf{K}\) of rank \(\alpha\) in \(\bar{\mathsf{X}}\) with \(\mu_{\mathrm{f}}(\mathsf{K})\geq t\) (otherwise we can take \(Z:=X\)). We choose a maximal set \(\mathsf{K}_{1}\),..., \(\mathsf{K}_{r}\) of pairwise non-compatible fragments of rank \(\alpha\) in \(\bar{\mathsf{X}}\) with \(\mu_{\mathrm{f}}(\mathsf{K}_{i})\geq t\) such that \(\mathsf{K}_{1}<\cdots<\mathsf{K}_{r}<s_{X,\bar{\mathsf{X}}}\mathsf{K}_{1}\) and \(\mathsf{K}_{r}\not\sim s_{X,\bar{\mathsf{X}}}\mathsf{K}_{1}\) (after choosing \(\mathsf{K}_{1}\) we use Proposition 8.16(ii) to get \(s_{X,\bar{\mathsf{X}}}\mathsf{K}_{1}\not\sim\mathsf{K}_{1}\)). We assume that each \(\mathsf{K}_{i}\) has maximal size \(\mu_{\mathrm{f}}(\mathsf{K}_{i})\) in its class of compatible fragments of rank \(\alpha\) in \(\bar{\mathsf{X}}\). 
Using Proposition 8.12 we shorten each \(\mathsf{K}_{i}\) from its start obtaining a fragment \(\bar{\mathsf{K}}_{i}\) of rank \(\alpha\) so that all \(\bar{\mathsf{K}}_{i}\) do not intersect pairwise and \(|\mathsf{K}_{1}\cup\mathsf{K}_{r}|\leq|X|\); we have \(\mu_{\mathrm{f}}(\bar{\mathsf{K}}_{i})>\mu_{\mathrm{f}}(\mathsf{K}_{i})- \lambda-2.7\omega\). Passing to a cyclic shift of \(X\) (and changing all \(\mathsf{X}_{i}\) accordingly) we may assume also that \[\mathsf{X}_{0}=\bar{\mathsf{K}}_{1}\mathsf{S}_{1}\ldots\bar{\mathsf{K}}_{r} \mathsf{S}_{r}.\] Let \(\mathsf{P}_{i}\) be the base for \(\bar{\mathsf{K}}_{i}\) and \(\bar{\mathsf{K}}_{i}^{-1}\mathsf{u}_{i}\mathsf{P}_{i}\mathsf{v}_{i}\) a loop in \(\Gamma_{\alpha-1}\) with bridges \(\mathsf{u}_{i}\) and \(\mathsf{v}_{i}\). Denote \(S_{i}=\textit{label}(\mathsf{S}_{i})\), \(P_{i}=\textit{label}(\mathsf{P}_{i})\), \(u_{i}=\textit{label}(\mathsf{u}_{i})\), \(v_{i}=\textit{label}(\mathsf{v}_{i})\) and let \(P_{i}Q_{i}^{-1}\) be the associated relator of rank \(\alpha\). Let \[Z=u_{1}Q_{1}v_{1}S_{1}u_{2}Q_{1}v_{2}S_{2}\ldots u_{r}Q_{r}v_{r}S_{r}.\] Let \(Y\) be a word strongly cyclically reduced in \(G_{\alpha-1}\) that is conjugate to \(Z\) in \(G_{\alpha-1}\). We prove that \(Y\) satisfies the requirements of the lemma. Note that \(Y\) and hence \(Z\) are conjugate to \(X\) in \(G_{\alpha}\). We transform \(Z\) using a procedure analogous to the procedure described in the proof of Lemma 11.3. At any moment, we will have a word \(Z_{1}\) of the form \[Z_{1}=R_{1}U_{1}\ldots R_{r}U_{r},\] conjugate to \(Z\) in \(G_{\alpha-1}\) where each \(R_{i}\) is a subword of \(Q_{i}\) and each \(U_{i}\) either is a bridge of rank \(\alpha-1\) or has the form \(w_{i}T_{i}z_{i}\) where \(w_{i}\), \(z_{i}\) are bridges of rank \(\alpha-1\) and \(T_{i}\) is a subword of \(S_{i}\). At the start, we have \(R_{i}=Q_{i}\) and \(U_{i}=v_{i}S_{i}u_{i+1}\) (here and below \(i+1\) is taken modulo \(r\)). The transformation procedure consists of the following steps applied recursively until possible. 1. Suppose that \(U_{i}\) has the form \(w_{i}T_{i}z_{i}\) above. If \(R_{i}=R^{\prime}R^{\prime\prime}\), \(T_{i}=T^{\prime}T^{\prime\prime}\) where \(|R^{\prime\prime}|+|T^{\prime}|>0\) and \(R^{\prime\prime}w_{i}T^{\prime}\) is equal in \(G_{\alpha-1}\) to a bridge \(w\) of rank \(\alpha-1\) then replace \(R_{i}\), \(w_{i}\) and \(T_{i}\) with \(R^{\prime}\), \(w\) and \(T^{\prime\prime}\) respectively; similarly, if \(T_{i}=T^{\prime}T^{\prime\prime}\), \(R_{i+1}=R^{\prime}R^{\prime\prime}\) where \(|T^{\prime\prime}|+|R^{\prime}|>0\) and \(T^{\prime\prime}z_{i}R^{\prime}\) is equal in \(G_{\alpha-1}\) to a bridge \(w\) of rank \(\alpha-1\) then replace \(T_{i}\), \(z_{i}\) and \(R_{i+1}\) with \(T^{\prime}\), \(w\) and \(R^{\prime\prime}\) respectively. 2. If \(R_{i}=R^{\prime}R^{\prime\prime}\) and \(R_{i+1}=R^{*}R^{**}\) where \(|R^{\prime\prime}|+|R^{*}|>0\) and \(R^{\prime\prime}U_{i}R^{*}\) is equal in \(G_{\alpha-1}\) to a bridge \(w\) of rank \(\alpha-1\) then replace \(R_{i}\), \(U_{i}\) and \(R_{i+1}\) with \(R^{\prime}\), \(w\) and \(R^{**}\) respectively. Similar to the proof of Lemma 11.3, after performing the procedure we obtain \(|R_{i}|_{\alpha-1}>4\eta\) for all \(i\). Let \(\bar{\mathsf{Z}}=\prod_{i\in\mathbb{Z}}\mathsf{Z}^{(i)}\) be a line in \(G_{\alpha-1}\) labeled \(Z^{\infty}\) and let \(\mathsf{Q}^{(i)}_{j}\) denote the appropriate subpath of \(\mathsf{Z}^{(i)}\) labeled \(Q_{j}\). 
We can implement the procedure above on the line \(\bar{\mathsf{Z}}\) instead of a word \(Z\) by changing appropriate paths instead of words (to each change of words in (i) or (ii) there corresponds infinitely many changes of paths translated by \(s_{X,\bar{\mathsf{X}}}\)). As a result, we get a line \(\prod_{i\in\mathbb{Z}}\mathsf{Z}^{(i)}_{1}\) so that the corresponding subpath \(\mathsf{R}^{(i)}_{j}\) of \(\mathsf{Z}^{(i)}_{1}\) is also a subpath of \(\mathsf{Q}^{(i)}_{j}\). Denote also \(\mathsf{T}^{(i)}_{j}\) the appropriate subpath of \(\mathsf{Z}^{(i)}_{1}\) labeled \(T_{j}\). Let \(\bar{\mathsf{Y}}=\prod_{i\in\mathbb{Z}}\mathsf{Y}^{(i)}\) be the line in \(G_{\alpha-1}\) such that \(\bar{\mathsf{Z}}\) and \(\bar{\mathsf{Y}}\) are associated with conjugate words \(Z\) and \(Y\). We apply Lemma 11.6 with \(\bar{\mathsf{X}}:=\bar{\mathsf{Y}}\) where \(\mathsf{R}^{(i)}_{j}\) and \(\mathsf{T}^{(i)}_{j}\) play the role of \(\mathsf{Y}^{(i)}_{j}\)'s and \(\mathsf{R}^{(i)}_{j}\) are taken as \(\mathsf{Y}^{(i)}_{t_{j}}\). According to the lemma, each path \(\mathsf{R}^{(0)}_{j}\) has a vertex close to a vertex on \(\bar{\mathsf{Y}}\), these vertices on \(\bar{\mathsf{Y}}\) are ordered along \(\bar{\mathsf{Y}}\) in the increasing order of the index \(j\), and the length of the segment of \(\bar{\mathsf{Y}}\) between the first and the last one is not more that \(|Y|\). Up to cyclic shift of \(Y\), we can write \[\mathsf{Y}^{(0)}=\mathsf{W}_{0}\mathsf{M}_{1}\mathsf{W}_{1}\ldots\mathsf{M}_{ r}\mathsf{W}_{r}\] where each \(\mathsf{M}_{j}\) is close to a subpath of \(\mathsf{Q}^{(0)}_{j}\). Taking \(\mathsf{M}_{j}\) maximal with these properties we obtain, as in the proof of Lemma 11.3, \[\mu_{\mathrm{f}}(\mathsf{M}_{i})>\lambda+3.5\omega\quad\text{for all $j$}.\] The rest of the proof is similar to the proof of Lemma 11.3. **11.8 Lemma**.: _If \(\mathsf{X}\) is a reduced path in \(\Gamma_{\alpha}\) and the endpoints of \(\mathsf{X}\) are close then \(|\mathsf{X}|_{\alpha}\leq 1\)._ Proof.: For \(\alpha\geq 1\) this follows from Lemma 9.22. **11.9 Lemma**.: _If \(P\) is a piece of rank \(\alpha\) then for any fragment \(K\) of rank \(\alpha\) in \(P\) we have \(\mu_{\mathrm{f}}(K)\leq\max\{\lambda,\mu(P)+2\omega\}\)._ Proof.: Let \(\mathsf{P}\) be a path in \(\Gamma_{\alpha-1}\) with \(\mathit{label}(\mathsf{P})=P\), let \(R\) be the associated relator of rank \(\alpha\) and let \(\mathsf{L}\) be the line labeled \(R^{\infty}\) extending \(\mathsf{P}\). Assume that \(\mathsf{K}\) is a fragment of rank \(\alpha\) contained in \(\mathsf{P}\). If the base axis for \(\mathsf{K}\) is distinct from \(\mathsf{L}\) then \(\mu_{\mathrm{f}}(\mathsf{K})<\lambda\) by Corollary 8.2. Otherwise the base \(\mathsf{Q}\) for \(\mathsf{K}\) is contained in \(\mathsf{L}\) and Lemma 11.8\({}_{\alpha-1}\) implies \[\mu_{\mathrm{f}}(\mathsf{K})=\mu(\mathsf{Q})\leq\mu(\mathsf{K})+2\omega\leq\mu( \mathsf{P})+2\omega.\] **11.10 Proposition**.: _Let \(P\) be a piece of rank \(1\leq\beta\leq\alpha\) with \(\mu(P)\leq\rho-2\omega\). Then \(P\) is reduced in \(G_{\alpha}\). If \(R=QS\) where \(R\) is a relator of rank \(\beta\) then either \(Q\) or \(S\) is reduced in \(G_{\alpha}\)._ Proof.: The first statement follows from Lemmas 10.8 and 11.9. If \(R\) is a relator of rank \(\beta\) and \(R=QS\) then by 4.14(ii), we have either \(\mu(Q)\leq\frac{1}{2}+\omega\) or \(\mu(S)\leq\frac{1}{2}+\omega\). It remains to note that \(\frac{1}{2}+\omega<\rho-2\omega\). 
Proof of Proposition 11.5.: Let \(X\) be a word representing an element of \(G_{\alpha}\). We may assume that \(X\) is reduced in \(G_{\alpha}\) as a non-cyclic word. We perform a "coarse cyclic cancellation" in \(X\): represent \(X\) as \(UX_{1}V\) where \(VU\) is equal in \(G_{\alpha}\) to a bridge \(u\) of rank \(\alpha\) and \(X_{1}\) has the minimal possible length. Let \(u=v_{1}Pv_{2}\) where \(P\) is a piece of rank \(\alpha\). We can assume that \(\mu(P)\leq\frac{1}{2}+\omega\). Let \(Y\) be a word cyclically reduced in \(G_{\alpha-1}\) and conjugate to \(X_{1}u\) in \(G_{\alpha-1}\). Note that \(X_{1}u\) and hence \(Y\) are conjugate to \(X\) in \(G_{\alpha}\). We show that either \(Y\) is conjugate in \(G_{\alpha-1}\) to a power \(R_{0}^{t}\) of the root \(R_{0}\) of a relator of rank \(\beta\leq\alpha\) or no cyclic shift of \(Y\) contains a fragment \(K\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(\mathsf{K})\geq\rho+2\lambda+16\omega\). In the first case, by Proposition 11.10 we can assume that \(R_{0}^{k}\) is cyclically reduced in \(G_{\alpha}\) and we come to the first alternative of Proposition 11.5. Otherwise, according to Proposition 11.5\({}_{\alpha-1}\) we can assume that \(Y\) is strongly cyclically reduced in \(G_{\alpha-1}\). Then we apply Lemma 11.7 to find a strongly cyclically reduced in \(G_{\alpha}\) word \(Z\) conjugate to \(Y\) in \(G_{\alpha}\) (note that \(\rho+2\lambda+16\omega<1-4\lambda-8\omega\)), coming to the second alternative. Let \(\bar{\mathsf{Y}}=\prod_{i\in\mathbb{Z}}\mathsf{Y}_{i}\) and \(\prod_{i\in\mathbb{Z}}\mathsf{X}_{1}^{(i)}\mathsf{v}_{1}^{(i)}\mathsf{P}_{i} \mathsf{v}_{2}^{(i)}\) be lines in \(\Gamma_{\alpha-1}\) representing the conjugacy relation. We observe that 1. _The base axis of any fragment_ \(\mathsf{N}\) _of rank_ \(\alpha\) _in_ \(\mathsf{P}_{i}\) _with_ \(\mu_{\mathrm{f}}(\mathsf{N})\geq\lambda\) _is the infinite periodic extension of_ \(\mathsf{P}_{i}\)_. In particular, If_ \(\mathsf{N}_{1}\) _and_ \(\mathsf{N}_{2}\) _are fragments of rank_ \(\alpha\) _in_ \(\mathsf{P}_{i}\) _with_ \(\mu_{\mathrm{f}}(\mathsf{N}_{j})\geq\lambda\) _then_ \(\mathsf{N}_{1}\sim\mathsf{N}_{2}\)_. (This follows from Corollary_ 8.2_.)_ Now formulate some consequences of the choice of \(X_{1}\) of minimal possible length: 1. _There exist no fragments_ \(\mathsf{N}_{1}\) _and_ \(\mathsf{N}_{2}\) _of rank_ \(\alpha\) _in_ \(\mathsf{X}_{1}^{(i)}\) _and in_ \(\mathsf{X}_{1}^{(i+1)}\)_, respectively, such that_ \(\mathsf{N}_{1}\sim\mathsf{N}_{2}\) _and_ \(\mu_{\mathrm{f}}(\mathsf{N}_{i})\geq 3.2\omega\)_._ Indeed, assume that such \(\mathsf{N}_{1}\) and \(\mathsf{N}_{2}\) do exist. Note that both \(\mathsf{N}_{1}\) and \(\mathsf{N}_{2}\) are nonempty by Lemma 10.8. By Lemma 10.13(i), any two of the endpoints of the images of \(\mathsf{N}_{1}\) and \(\mathsf{N}_{2}\) in \(\Gamma_{\alpha}\) are close. Then we can shorten \(X_{1}\) to its subword \(X_{2}\) so that \(X_{2}u^{\prime}\) is conjugate to \(X\) in \(G_{\alpha}\) for some \(u^{\prime}\in\mathcal{H}_{\alpha}\) contrary to the choice of \(X_{1}\) (see Figure 35a; in the figure we have \(\mathsf{N}_{2}\ll s_{Y,\bar{\mathsf{Y}}}\mathsf{N}_{1}\) in \(\mathsf{X}_{1}^{(i+1)}\) but in all other cases we can easily find an appropriate path \(\mathsf{X}_{2}\) with \(|\mathsf{X}_{2}|<\mathsf{X}_{1}\) and take \(X_{2}:=\text{label}(\mathsf{X}_{2})\)). 1. 
(iii) _There exist no fragments \(\mathsf{N}_{1}\) and \(\mathsf{N}_{2}\) of rank \(\alpha\) in \(\mathsf{X}_{1}^{(i)}\) and in \(\mathsf{P}_{i}\) or \(\mathsf{P}_{i-1}\), respectively, such that \(\mathsf{N}_{1}\sim\mathsf{N}_{2}\), \(\mu_{\mathrm{f}}(\mathsf{N}_{1})\geq 3.2\omega\) and \(\mu_{\mathrm{f}}(\mathsf{N}_{2})\geq\lambda\). (Otherwise, using (i) we can shorten \(X_{1}\) to \(X_{2}:=\text{label}(\mathsf{X}_{2})\) as shown in Figure 35b.)_

Let \(Q\) be a word reduced in \(G_{\alpha-1}\) which is equal to \(X_{1}v_{1}P\) in \(G_{\alpha-1}\). We denote by \(\mathsf{Q}_{i}\) the corresponding path in \(\Gamma_{\alpha-1}\) joining \(\iota(\mathsf{X}_{1}^{(i)})\) with \(\tau(\mathsf{P}_{i})\). Using (iii), Proposition 8.8 and Lemma 11.9 we conclude that:

(iv) There are no fragments \(\mathsf{M}\) of rank \(\alpha\) in \(\mathsf{Q}_{i}\) with \(\mu_{\mathrm{f}}(\mathsf{M})\geq\rho+\lambda+6.2\omega\).

Assume that \(\mathsf{K}\) is a fragment of rank \(\alpha\) in \(\bar{\mathsf{Y}}\) with \(\mu_{\mathrm{f}}(\mathsf{K})\geq\rho+2\lambda+16\omega\) and \(|\mathsf{K}|\leq|Y|\). By (iv) and Proposition 8.9, for some \(i\) there are fragments \(\mathsf{M}_{1}\) and \(\mathsf{M}_{2}\) of rank \(\alpha\) in \(\mathsf{Q}_{i}\) and \(\mathsf{Q}_{i+1}\) respectively such that \(\mathsf{M}_{j}\sim\mathsf{K}\) \((j=1,2)\) and \(\mu_{\mathrm{f}}(\mathsf{M}_{j})>\lambda+6.8\omega\). By Proposition 8.8 there is a fragment \(\mathsf{N}_{1}\) of rank \(\alpha\) such that \(\mathsf{M}_{1}\sim\mathsf{N}_{1}\) and either \(\mathsf{N}_{1}\) occurs in \(\mathsf{X}_{1}^{(i)}\) and \(\mu_{\mathrm{f}}(\mathsf{N}_{1})>3.2\omega\) or \(\mathsf{N}_{1}\) occurs in \(\mathsf{P}_{i}\) and \(\mu_{\mathrm{f}}(\mathsf{N}_{1})>\lambda\). Similarly, there is a fragment \(\mathsf{N}_{2}\) of rank \(\alpha\) such that \(\mathsf{M}_{2}\sim\mathsf{N}_{2}\) and either \(\mathsf{N}_{2}\) occurs in \(\mathsf{X}_{1}^{(i+1)}\) and \(\mu_{\mathrm{f}}(\mathsf{N}_{2})>3.2\omega\) or \(\mathsf{N}_{2}\) occurs in \(\mathsf{P}_{i+1}\) and \(\mu_{\mathrm{f}}(\mathsf{N}_{2})>\lambda\). If \(\mathsf{N}_{1}\) occurs in \(\mathsf{X}_{1}^{(i)}\) and \(\mathsf{N}_{2}\) occurs in \(\mathsf{X}_{1}^{(i+1)}\) we get a contradiction with (ii). If \(\mathsf{N}_{1}\) occurs in \(\mathsf{P}_{i}\) and \(\mathsf{N}_{2}\) occurs in \(\mathsf{X}_{1}^{(i+1)}\), or \(\mathsf{N}_{1}\) occurs in \(\mathsf{X}_{1}^{(i)}\) and \(\mathsf{N}_{2}\) occurs in \(\mathsf{P}_{i+1}\), we get a contradiction with (iii). Finally, if \(\mathsf{N}_{1}\) occurs in \(\mathsf{P}_{i}\) and \(\mathsf{N}_{2}\) occurs in \(\mathsf{P}_{i+1}\) then by (i), we have \(s_{Y,\bar{\mathsf{Y}}}\mathsf{N}_{1}\sim\mathsf{N}_{2}\) and hence \(\mathsf{K}\sim s_{Y,\bar{\mathsf{Y}}}\mathsf{K}\). By Proposition 8.16(i)\({}_{\alpha-1}\) this implies that \(Y\) is conjugate in \(G_{\alpha-1}\) to a power of the root of a relator of rank \(\alpha\). This finishes the proof. 

**11.11 Proposition**.: _Let \(R\) be a relator of rank \(\beta\leq\alpha\) and let \(R=R_{0}^{n}\) where \(R_{0}\) is the root of \(R\). Then \(R_{0}\) has order \(n\) in \(G_{\alpha}\)._

Proof.: Let \(k\) be a proper divisor of \(n\). By Lemma 10.8, \(R_{0}^{k}\) contains no fragments \(K\) of rank \(\gamma\) with \(\mu_{\mathrm{f}}(K)\geq 3.2\omega\), for all \(\gamma=\beta+1,\ldots,\alpha\). By Proposition 11.10\({}_{\beta}\), \(R_{0}^{k}\) is cyclically reduced in \(G_{\beta}\) and hence also in rank \(\alpha\). Hence \(R_{0}^{k}\neq 1\) in \(G_{\alpha}\). 
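The order computation concluding Proposition 11.11 can be unwound in one line; the following routine step is added here for readability (it uses only that \(R=R_{0}^{n}\) is a relator, hence \(R_{0}^{n}=1\) in \(G_{\alpha}\)):

\[R_{0}^{n}=1\quad\text{and}\quad R_{0}^{k}\neq 1\ \text{for every proper divisor }k\text{ of }n\quad\Longrightarrow\quad\operatorname{ord}_{G_{\alpha}}(R_{0})=n,\]

since the order \(d\) of \(R_{0}\) divides \(n\), and if \(d<n\) then \(d\) is a proper divisor of \(n\) with \(R_{0}^{d}=1\), contradicting the displayed non-vanishing.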
**11.12 Proposition** (conjugate powers of relator roots).: _Let \(R\) be a relator of rank \(1\leq\beta\leq\alpha\) and let \(R=R_{0}^{n}\) where \(R_{0}\) is the root of \(R\). If \(R_{0}^{k}=g^{-1}R_{0}^{l}g\) in \(G_{\alpha}\) for some \(k,l\not\equiv 0\pmod{n}\) then \(g\in\langle R_{0}\rangle\) and \(k\equiv l\pmod{n}\)._

Proof.: By Proposition 11.11, if \(R_{0}^{k}=g^{-1}R_{0}^{l}g\) in \(G_{\alpha}\) and \(g\in\langle R_{0}\rangle\) then \(k\equiv l\pmod{n}\). It remains to prove that the equality \(R_{0}^{k}=g^{-1}R_{0}^{l}g\) for \(k,l\not\equiv 0\pmod{n}\) implies \(g\in\langle R_{0}\rangle\).

By Proposition 11.10 we can assume that \(R_{0}^{k}\) and \(R_{0}^{l}\) are cyclically reduced in \(G_{\alpha}\). We represent \(g\) by a word \(Z\) and consider an annular diagram \(\Delta\) of rank \(\alpha\) with two cyclic sides \(\mathsf{X}_{1}\) and \(\mathsf{X}_{2}\) labeled \(R_{0}^{-k}\) and \(R_{0}^{l}\) which is obtained from a disk diagram with boundary label \(R_{0}^{-k}Z^{-1}R_{0}^{l}Z\) by gluing two boundary segments labeled \(Z^{-1}\) and \(Z\). Let \(\mathsf{Z}\) be the path in \(\Delta\) with \(\mathit{label}(\mathsf{Z})=Z\) that joins the starting vertices of \(\mathsf{X}_{2}\) and \(\mathsf{X}_{1}\). We apply to \(\Delta\) the reduction process 5.7. By Lemma 4.8, we can replace \(\mathsf{Z}\) by a new path \(\mathsf{Z}_{1}\) with the same endpoints such that \(\mathit{label}(\mathsf{Z}_{1})=Z\) in \(G_{\alpha}\) (so \(\mathit{label}(\mathsf{Z}_{1})\) represents \(g\) in \(G_{\alpha}\)). We can assume also that \(\Delta\) has a tight set \(\mathscr{T}\) of contiguity subdiagrams.

_Case_ 1: \(\Delta\) has a cell \(\mathsf{D}\) of rank \(\alpha\). By Proposition 7.13(i), \(\mathsf{D}\) has a contiguity subdiagram \(\Pi_{i}\in\mathscr{T}\) to each of the sides \(\mathsf{X}_{i}\) of \(\Delta\). Moreover, if \(\delta\Pi_{i}=\mathsf{S}_{i}\mathsf{u}_{i}\mathsf{Q}_{i}\mathsf{v}_{i}\) where \(\mathsf{S}_{i}^{-1}\) is a contiguity arc occurring in \(\delta\mathsf{D}\) then \(\mu(\mathsf{S}_{i})>\lambda\). By Lemma 10.8 this implies \(\beta=\alpha\). Let \(\mathit{label}(\delta\mathsf{D})=R^{\prime}\) where \(R^{\prime}\) is a relator of rank \(\alpha\). Consider lines \(\bar{\mathsf{X}}_{1}\), \(\bar{\mathsf{X}}_{2}\) and \(\bar{\mathsf{R}}\) in \(\Gamma_{\alpha-1}\) labeled \(R^{\pm\infty}\), \(R^{\pm\infty}\) and \(R^{\prime\infty}\) which are obtained by mapping the universal cover of the subgraph of \(\Delta\) shown in Figure 36. By Corollary 8.2 we get \(\bar{\mathsf{X}}_{1}=\bar{\mathsf{X}}_{2}=\bar{\mathsf{R}}\). This implies that \(\mathit{label}(\mathsf{Z}_{1})\) is equal in \(G_{\alpha-1}\) to a power of \(R_{0}\), as required.

_Case_ 2: \(\Delta\) has no cells of rank \(\alpha\). Then we have the equality \(R_{0}^{k}=Z_{1}^{-1}R_{0}^{l}Z_{1}\) in \(G_{\alpha-1}\), where \(Z_{1}=\mathit{label}(\mathsf{Z}_{1})\). If \(\beta<\alpha\) then the statement follows from Proposition 11.12\({}_{\alpha-1}\). Let \(\beta=\alpha\). If \(kl>0\) then the statement follows from Proposition 13.8\({}_{\alpha-1}\). If \(kl<0\) then by Corollary 13.10(i)\({}_{\alpha-1}\) we obtain \(R_{0}=g^{-1}R_{0}^{-1}g\) which contradicts our condition (S3) on the presentation of \(G_{\alpha}\). 

**11.13 Proposition**.: _Every element of \(G_{\alpha}\) of infinite order has the form \(h^{m}\) where \(h\) is a non-power._

Proof.: We need to prove this only in the case \(\alpha\geq 1\). Let \(g\in G_{\alpha}\) be an element of infinite order. It is enough to find an upper bound on \(|m|\) in equalities of the form \(g=h^{m}\). 
Up to conjugation, we represent \(g\) and \(h\) by strongly cyclically reduced in \(G_{\alpha}\) words \(X\) and \(Y\), using Proposition 11.5. Let \(\beta\) be the maximal rank with \(1\leq\beta\leq\alpha\) such that a cyclic shift of \(X\) contains a fragment \(K\) of rank \(\beta\) with \(\mu_{\mathrm{f}}(K)\geq\xi_{0}\). (If there is no such \(K\) then by Proposition 9.16, \(X\) is conjugate to \(Y^{m}\) in the free group \(G_{0}\) and then \(|m|\leq|X|\).) Using Propositions 10.24(i) and 8.16(ii) we find \(m\) pairwise non-compatible fragments \(M\) of rank \(\beta\) with \(\mu_{\mathrm{f}}(M)\geq\xi_{0}-2\lambda-3.4\omega\) in a cyclic shift of \(X\). This again implies \(|m|\leq|X|\). 

## 12. Coarsely periodic words and segments over \(G_{\alpha}\)

In this section we analyze words which are "geometrically close" in \(G_{\alpha}\) to periodic words. In Sections 12 and 13 we use the following notation for numeric parameters:

\[\xi_{1}=\xi_{0}-2.6\omega,\quad\xi_{2}=\xi_{1}-2\lambda-3.4\omega.\]

**12.1 Definition**.: A _simple period over \(G_{\alpha}\)_ is a strongly cyclically reduced word representing a non-power element of \(G_{\alpha}\). According to 2.5, if \(A\) is a simple period over \(G_{\alpha}\) then any word \(A^{n}\) is reduced over \(G_{\alpha}\). Proposition 7.6 implies that \(A\) has infinite order in \(G_{\alpha}\).

Figure 36.

**12.2 Definition**.: Let \(A\) be a simple period over \(G_{\alpha}\). The _activity rank_ of \(A\) is the maximal rank \(\beta\) such that an \(A\)-periodic word contains a fragment \(K\) of rank \(\beta\geq 1\) with \(\mu_{\mathrm{f}}(K)\geq\xi_{1}\), or it is \(0\) if no such fragments exist.

**12.3 Case of activity rank 0**. The arguments below differ depending on whether the activity rank \(\beta\) of a simple period over \(G_{\alpha}\) is positive or \(0\). However, the difference is only that in the case \(\beta\geq 1\) we use various conditions on the size \(\mu_{\mathrm{f}}(\mathsf{F})\) of fragments \(\mathsf{F}\) of rank \(\beta\). _All definitions, statements and proofs in Sections 12 and 13 apply in cases when the activity rank \(\beta\) of a simple period over \(G_{\alpha}\) is \(0\), simply ignoring conditions of the form \(\mu_{\mathrm{f}}(\cdot)\geq\dots\) for fragments of rank \(\beta\) (i.e. assuming that these conditions are all formally true in case \(\beta=0\))._ Below we do not distinguish this special case \(\beta=0\).

We will use the following notations. If \(\mathsf{K}\) and \(\mathsf{M}\) are fragments of the same rank \(0\leq\beta\leq\alpha\) occurring in a reduced path \(\mathsf{X}\) in \(\Gamma_{\gamma}\) then \(\mathsf{K}\lesssim\mathsf{M}\) means \(\mathsf{K}<\mathsf{M}\) or \(\mathsf{K}\sim\mathsf{M}\); similarly, \(\mathsf{K}\nleq\mathsf{M}\) means \(\mathsf{K}<\mathsf{M}\) and \(\mathsf{K}\not\sim\mathsf{M}\). Note that by Corollary 9.24(ii), for fragments \(\mathsf{K}\), \(\mathsf{M}\) of rank \(\beta\geq 1\) with \(\mu_{\mathrm{f}}(\mathsf{K}),\mu_{\mathrm{f}}(\mathsf{M})\geq\gamma+2.6\omega\) the relation '\(\mathsf{K}\lesssim\mathsf{M}\)' depends only on their equivalence classes with respect to compatibility. Thus, for fixed \(\mathsf{X}\) and \(\beta\) it induces a linear order on the set of equivalence classes of '\(\sim\)' of fragments \(\mathsf{N}\) of rank \(\beta\) in \(\mathsf{X}\) with \(\mu_{\mathrm{f}}(\mathsf{N})\geq\gamma+2.6\omega\). (In case \(\beta=0\) the relation \(\mathsf{K}\lesssim\mathsf{M}\) is defined on subpaths of length 1 and means \(\mathsf{K}\ll\mathsf{M}\) or \(\mathsf{K}=\mathsf{M}\).) 
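For orientation, the parameters \(\xi_{1}\), \(\xi_{2}\) (and \(\xi_{3}\), introduced in Section 13) unwind to explicit linear expressions in \(\lambda\) and \(\omega\). The computation below is not part of the text; it combines the recursions above with the closed form \(\xi_{3}=3\lambda-10.9\omega\) stated in the proof of Proposition 13.4, which forces \(\xi_{0}=7\lambda-1.5\omega\) (presumably the value fixed earlier in the paper):

\[\xi_{1}=\xi_{0}-2.6\omega=7\lambda-4.1\omega,\qquad\xi_{2}=\xi_{1}-2\lambda-3.4\omega=5\lambda-7.5\omega,\qquad\xi_{3}=\xi_{2}-2\lambda-3.4\omega=3\lambda-10.9\omega.\]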
**12.4 Definition**.: Let \(A\) be a simple period over \(G_{\alpha}\) and \(\beta\) the activity rank of \(A\). A reduced path \(\mathsf{S}\) in \(\Gamma_{\alpha}\) is a _coarsely periodic segment with period \(A\)_ (or a _coarsely \(A\)-periodic segment_ for short) if there exist a path \(\mathsf{P}\) labeled by an \(A\)-periodic word, fragments \(\mathsf{K}_{0}\), \(\mathsf{K}_{1}\) of rank \(\beta\) in \(\mathsf{P}\) and fragments \(\mathsf{M}_{0}\), \(\mathsf{M}_{1}\) of rank \(\beta\) in \(\mathsf{S}\) such that:

* \(\mathsf{P}\) starts with \(\mathsf{K}_{0}\) and ends with \(\mathsf{K}_{1}\); \(\mathsf{S}\) starts with \(\mathsf{M}_{0}\) and ends with \(\mathsf{M}_{1}\);
* \(\mathsf{K}_{0}\sim\mathsf{M}_{0}^{\pm 1}\), \(\mathsf{K}_{1}\sim\mathsf{M}_{1}^{\pm 1}\) and \(\mathsf{K}_{0}\not\sim\mathsf{K}_{1}\);
* \(\mu_{\mathrm{f}}(\mathsf{K}_{i})\geq\xi_{1}\), \(\mu_{\mathrm{f}}(\mathsf{M}_{i})\geq\xi_{2}\) (\(i=0,1\));
* \(s_{A,\mathsf{P}}\mathsf{K}_{0}\lesssim\mathsf{K}_{1}\) (informally, \(\mathsf{P}\) "contains at least one period \(A\)").

The path \(\mathsf{P}\) is a _periodic base_ for \(\mathsf{S}\). The infinite \(A\)-periodic extension of \(\mathsf{P}\) is an _axis_ for \(\mathsf{S}\). Note that the starting fragment \(\mathsf{M}_{0}\) and the ending fragment \(\mathsf{M}_{1}\) of \(\mathsf{S}\) are defined up to compatibility. Note also that by Lemma 10.13(i) and Proposition 9.10, \(\mathsf{P}\) and \(\mathsf{S}\) are close in rank \(\beta\). In particular, if \(\beta=0\) then \(\mathsf{S}=\mathsf{P}\) and thus \(\mathsf{S}\) is an \(A\)-periodic segment.

We will be assuming that a coarsely \(A\)-periodic segment is always considered with a fixed associated axis. (In fact, we prove later that the axis of a coarsely \(A\)-periodic segment is defined in a unique way; see Corollary 13.9.) Note that under this assumption, the periodic base \(\mathsf{P}\) for \(\mathsf{S}\) is defined up to changing the starting and the ending fragments \(\mathsf{K}_{0}\) and \(\mathsf{K}_{1}\) of rank \(\beta\) by compatible ones.

The label of a coarsely \(A\)-periodic segment in \(\Gamma_{\alpha}\) is a _coarsely \(A\)-periodic word over \(G_{\alpha}\)_. Note that a simple period \(A\) over \(G_{0}\) is any cyclically freely reduced word that is not a proper power. A coarsely \(A\)-periodic word over \(G_{0}\) is simply any \(A\)-periodic word \(P\) with \(|P|>|A|\).

**12.5 Definition**.: We measure the size of a coarsely \(A\)-periodic segment \(\mathsf{S}\), which roughly corresponds to the number of periods \(A\), in the following way. Let \(\mathsf{P}\) be the periodic base for \(\mathsf{S}\) and \(\mathsf{K}_{0}\), \(\mathsf{K}_{1}\) as in Definition 12.4. Then we write \(\ell_{A}(\mathsf{S})=t\) where \(t\) is the maximal integer such that \(s_{A,\mathsf{P}}^{t}\mathsf{K}_{0}\lesssim\mathsf{K}_{1}\). Thus, we always have \(\ell_{A}(\mathsf{S})\geq 1\). Since we consider a fixed associated axis for \(\mathsf{S}\), the number \(\ell_{A}(\mathsf{S})\) does not depend on the choice of a periodic base \(\mathsf{P}\). If \(S\) is a coarsely \(A\)-periodic word over \(G_{\alpha}\) then we formally define \(\ell_{A}(S)\) to be the maximal possible value of \(\ell_{A}(\mathsf{S})\) where \(\mathsf{S}\) is a coarsely \(A\)-periodic segment labeled \(S\).

**12.6 Remark**.: (i) It immediately follows from the definition that \(t\) is also the maximal integer such that \(\mathsf{K}_{0}\lesssim s_{A,\mathsf{P}}^{-t}\mathsf{K}_{1}\). Thus, \(\ell_{A}(S)=\ell_{A^{-1}}(S^{-1})\). 
(ii) To compute \(\ell_{A}(S)\) we have to take a path \(\mathsf{S}\) in \(\Gamma_{\alpha}\) with \(\mathit{label}(\mathsf{S})=S\) and then choose a periodic base \(\mathsf{P}\) for \(\mathsf{S}\) so that \(\ell_{A}(\mathsf{S})\) is maximal possible; it will follow from Proposition 13.7 that any choice of \(\mathsf{P}\) gives in fact the same value of \(\ell_{A}(\mathsf{S})\).

**12.7 Remark**.: Up to changing the periodic base \(\mathsf{P}\), we can always assume in Definition 12.5 that both \(\mathsf{K}_{0}\) and its translation \(s_{A,\mathsf{P}}^{t}\mathsf{K}_{0}\) occur in \(\mathsf{P}\). In this case we have \(|\mathsf{P}|\geq\ell_{A}(\mathsf{S})|A|\).

**12.8 Definition**.: Let \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) be coarsely \(A\)-periodic segments in \(\Gamma_{\alpha}\). We say that \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) are _compatible_ if they have the same axis and _strongly compatible_ if they share a common periodic base. We use the notations \(\mathsf{S}_{1}\sim\mathsf{S}_{2}\) and \(\mathsf{S}_{1}\approx\mathsf{S}_{2}\) for compatibility and strong compatibility, respectively. Note that in the case \(\mathsf{S}_{1}\approx\mathsf{S}_{2}\) any periodic base for \(\mathsf{S}_{1}\) is a periodic base for \(\mathsf{S}_{2}\) and vice versa; this easily follows from Definition 12.4. If \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) are coarsely \(A\)-periodic segments in \(\Gamma_{0}\) then \(\mathsf{S}_{1}\sim\mathsf{S}_{2}\) if and only if they have a common periodic extension, and \(\mathsf{S}_{1}\approx\mathsf{S}_{2}\) if and only if \(\mathsf{S}_{1}=\mathsf{S}_{2}\).

**12.9 Proposition**.: _Let \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) be coarsely \(A\)-periodic segments in \(\Gamma_{\alpha}\)._

(i) _If \(\mathsf{S}_{1}\approx\mathsf{S}_{2}\) then \(\ell_{A}(\mathsf{S}_{1})=\ell_{A}(\mathsf{S}_{2})\)._

(ii) _Assume that \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) occur in a reduced path \(\mathsf{X}\) in \(\Gamma_{\alpha}\) and \(\mathsf{S}_{1}\sim\mathsf{S}_{2}\). Then the union of \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) in \(\mathsf{X}\) is a coarsely \(A\)-periodic segment, where a periodic base for \(\mathsf{S}_{1}\cup\mathsf{S}_{2}\) is the union of periodic bases for \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) in their common infinite \(A\)-periodic extension._

Proof.: (i) is an immediate consequence of Definition 12.8. (ii) follows from Proposition 10.23(ii).

**12.10**. We describe a procedure of shortening a coarsely \(A\)-periodic segment \(\mathsf{S}\) by a "given number \(k\) of periods". Let \(k\geq 1\) and \(\ell_{A}(\mathsf{S})\geq k+1\). Let \(\beta\) be the activity rank of \(A\), let \(\mathsf{P}\) be a periodic base for \(\mathsf{S}\) and let \(\mathsf{K}_{i}\) and \(\mathsf{M}_{i}\) (\(i=0,1\)) be the starting and ending fragments of rank \(\beta\) of \(\mathsf{P}\) and \(\mathsf{S}\), respectively, as in Definition 12.4. We have \(\mathsf{K}_{0}<s_{A,\mathsf{P}}^{k}\mathsf{K}_{0}\lesssim s_{A,\mathsf{P}}^{-1}\mathsf{K}_{1}<\mathsf{K}_{1}\) and it follows from Proposition 8.16(ii) that \(s_{A,\mathsf{P}}^{k}\mathsf{K}_{0}\not\sim\mathsf{K}_{0}\) and \(s_{A,\mathsf{P}}^{k}\mathsf{K}_{0}\not\sim\mathsf{K}_{1}\). By Proposition 10.23(i) there exists a fragment \(\mathsf{N}\) of rank \(\beta\) in \(\mathsf{S}\) with \(\mu_{\mathrm{f}}(\mathsf{N})\geq\xi_{2}\) such that \(s_{A,\mathsf{P}}^{k}\mathsf{K}_{0}\sim\mathsf{N}^{\pm 1}\). Then \(\mathsf{S}_{1}=\mathsf{N}\cup\mathsf{M}_{1}\) is an end of \(\mathsf{S}\) which is a coarsely \(A\)-periodic segment with periodic base \(\mathsf{P}_{1}=s_{A,\mathsf{P}}^{k}\mathsf{K}_{0}\cup\mathsf{K}_{1}\) and \(\ell_{A}(\mathsf{S}_{1})=\ell_{A}(\mathsf{S})-k\). We note that:

(i) The result of the operation is defined up to strong compatibility.

(ii) We have \(\mathsf{P}=\mathsf{X}\mathsf{P}_{1}\) where \(|\mathsf{X}|=k|A|\).

(iii) If \(k\geq 2\) then by Proposition 10.23(i) we can find also a fragment \(\mathsf{N}^{\prime}\) of rank \(\beta\) in \(\mathsf{S}\) with \(\mu_{\mathrm{f}}(\mathsf{N}^{\prime})\geq\xi_{2}\) such that \(s_{A,\mathsf{P}}^{k-1}\mathsf{K}_{0}\sim\mathsf{N}^{\prime\pm 1}\) and \(\mathsf{N}^{\prime}\) and \(\mathsf{N}\) are disjoint. Then \(\mathsf{S}=\mathsf{S}_{0}\mathsf{u}\mathsf{S}_{1}\) where \(\mathsf{S}_{0}=\mathsf{M}_{0}\cup\mathsf{N}^{\prime}\) is a coarsely \(A\)-periodic segment with periodic base \(\mathsf{K}_{0}\cup s_{A,\mathsf{P}}^{k-1}\mathsf{K}_{0}\) and \(\ell_{A}(\mathsf{S}_{0})=k-1\). 
(iv) The starting position of \(\mathsf{S}_{1}\) depends only on the starting position of \(\mathsf{S}\); more precisely, if \(\mathsf{S}^{\prime}\) is a start of \(\mathsf{S}\) and \(\mathsf{S}_{1}\) and \(\mathsf{S}^{\prime}_{1}\) are obtained from \(\mathsf{S}\) and \(\mathsf{S}^{\prime}\) as above then \(\mathsf{S}^{\prime}_{1}\) is a start of \(\mathsf{S}_{1}\) up to strong compatibility of \(\mathsf{S}^{\prime}_{1}\); if \(\mathsf{S}\approx\mathsf{S}^{\prime}\) then \(\mathsf{S}_{1}\approx\mathsf{S}^{\prime}_{1}\).

**12.11 Definition**.: If \(\mathsf{S}_{1}\) is obtained from \(\mathsf{S}\) by the procedure in 12.10 then we say that \(\mathsf{S}_{1}\) is obtained by _shortening of \(\mathsf{S}\) by \(k\) periods from the start_. In the symmetric way, we define _shortening of \(\mathsf{S}\) by \(k\) periods from the end_. If \(\ell_{A}(\mathsf{S})\geq 2k+1\) and \(\mathsf{S}^{\prime}\) is obtained from \(\mathsf{S}\) by applying the operation from both sides then \(\mathsf{S}^{\prime}\) is the result of _truncation of \(\mathsf{S}\) by \(k\) periods_.

**12.12 Definition**.: We define two numeric parameters associated with a simple period \(A\) over \(G_{\alpha}\): the _stable size \([A]_{\alpha}\) of \(A\) in rank \(\alpha\)_,

\[[A]_{\alpha}=\inf_{m\geq 1}\frac{|(A^{m})^{\circ}|_{\alpha}}{m}\]

and _the stability decrement \(h_{\alpha}(A)\)_:

\[h_{\alpha}(A)=\left\lceil\frac{1.2}{[A]_{\alpha}}\right\rceil+1.\]

If \(\ell_{A}(\mathsf{S})\geq 2h_{\alpha}(A)+1\) then the result of truncation of \(\mathsf{S}\) by \(h_{\alpha}(A)\) periods is _the stable part of \(\mathsf{S}\)_. By claim 12.10(iv) and its symmetric version, the function '\(\mathsf{S}\to\) stable part of \(\mathsf{S}\)' respects strong compatibility: if \(\mathsf{S}_{1}\approx\mathsf{S}_{2}\) and \(\mathsf{S}^{*}_{i}\) is the stable part of \(\mathsf{S}_{i}\) then \(\mathsf{S}^{*}_{1}\approx\mathsf{S}^{*}_{2}\).

The basic fact about \([A]_{\alpha}\) and \(h_{\alpha}(A)\) is the following observation.

**12.13 Lemma**.: _If \(X\) is an \(A\)-periodic word and \(|X|\geq m|A|\) then \(|X|_{\alpha}\geq m[A]_{\alpha}\). In particular, if \(|X|\geq(h_{\alpha}(A)-1)|A|\) then \(|X|_{\alpha}\geq 1.2\)._

Proof.: We have

\[|X|_{\alpha}\geq|A_{1}^{m}|_{\alpha}\geq|(A^{m})^{\circ}|_{\alpha}\geq m[A]_{\alpha}\]

where \(A_{1}\) is the cyclic shift of \(A\) at which \(X\) starts. The second statement follows from the first since \(h_{\alpha}(A)-1\geq 1.2/[A]_{\alpha}\) by the definition of \(h_{\alpha}(A)\).

The principal role of the stable part is described by the following proposition.

**12.14 Proposition** (stability of coarsely periodic words).: _Let \(\mathsf{S}\) be a coarsely \(A\)-periodic segment in \(\Gamma_{\alpha}\) with \(\ell_{A}(\mathsf{S})\geq 2h_{\alpha}(A)+1\) and let \(\mathsf{S}^{*}\) be the stable part of \(\mathsf{S}\). If \(\mathsf{X}\) and \(\mathsf{Y}\) are close reduced paths in \(\Gamma_{\alpha}\) and \(\mathsf{S}\) is a subpath of \(\mathsf{X}\) then \(\mathsf{Y}\) contains a coarsely \(A\)-periodic segment \(\mathsf{T}\) such that \(\mathsf{T}\approx\mathsf{S}^{*}\)._

Proof.: Let \(\mathsf{P}\) and \(\mathsf{P}^{*}\) be periodic bases for \(\mathsf{S}\) and \(\mathsf{S}^{*}\), respectively. Let \(\beta\) be the activity rank of \(A\) and let \(\mathsf{K}_{i}\) and \(\mathsf{M}_{i}\) (\(i=0,1\)) be the fragments of rank \(\beta\) in \(\mathsf{P}\) and in \(\mathsf{S}\), respectively, from Definition 12.4 applied to \(\mathsf{P}\) and \(\mathsf{S}\). Denote \(t=h_{\alpha}(A)\). Let \(\mathsf{X}\) and \(\mathsf{Y}\) be as in the proposition. If \(\alpha=0\) then \(\mathsf{X}=\mathsf{Y}\) and there is nothing to prove. 
Let \(\alpha>0\). We claim that \(\mathsf{P}=\mathsf{z}_{1}\mathsf{P}^{\prime}\mathsf{z}_{2}\) where \(\mathsf{P}^{\prime}\) is close in rank \(\beta\) to a subpath of \(\mathsf{Y}\) and \(|\mathsf{z}_{i}|_{\alpha}<1.2\). Indeed, if \(\beta=\alpha\) then it easily follows from Proposition 10.6 and Lemma 10.13(i) that \(\mathsf{P}\) is already close to a subpath of \(\mathsf{Y}\). If \(\beta<\alpha\) then we observe that \(\mathsf{S}\) contains no fragments \(\mathsf{K}\) of rank \(\gamma\) with \(\beta<\gamma\leq\alpha\) and \(\mu_{\mathrm{f}}(\mathsf{K})\geq\xi_{0}\), due to the definition of the activity rank and Proposition 8.7\({}_{\leq\alpha}\). Then the claim follows by Proposition 10.22.

By Lemma 12.13 we have \(|\mathsf{z}_{i}|<(t-1)|A|\). This implies that \(s_{A,\mathsf{P}}^{t-1}\mathsf{K}_{0}\cup s_{A,\mathsf{P}}^{-t+1}\mathsf{K}_{1}\) is contained in \(\mathsf{P}^{\prime}\). Note that \(\mathsf{P}^{*}=s_{A,\mathsf{P}}^{t}\mathsf{K}_{0}\cup s_{A,\mathsf{P}}^{-t}\mathsf{K}_{1}\) where \(\mu_{\mathrm{f}}(\mathsf{K}_{0}),\mu_{\mathrm{f}}(\mathsf{K}_{1})\geq\xi_{1}\). Then by Proposition 10.23(i) we find a subpath \(\mathsf{T}\) of \(\mathsf{Y}\) which is a coarsely \(A\)-periodic segment with periodic base \(\mathsf{P}^{*}\) and, consequently, we have \(\mathsf{T}\approx\mathsf{S}^{*}\).

We use the parameter \(h_{\alpha}(A)\) also in several other situations.

**12.15 Proposition**.: _Let \(\mathsf{P}\) be a periodic segment in \(\Gamma_{\alpha}\) with a simple period \(A\) over \(G_{\alpha}\). Assume that \(|\mathsf{P}|\geq m|A|\) where \(m\geq 2h_{\alpha}(A)+3\). Let \(\mathsf{X}\) be a reduced path in \(\Gamma_{\alpha}\) such that \(\mathsf{P}\) and \(\mathsf{X}\) are close. Then there exist a subpath \(\mathsf{P}_{1}\) of \(\mathsf{P}\) and a subpath \(\mathsf{X}_{1}\) of \(\mathsf{X}\) such that \(\mathsf{X}_{1}\) is a coarsely \(A\)-periodic segment with periodic base \(\mathsf{P}_{1}\) and \(\ell_{A}(\mathsf{X}_{1})=m-2h_{\alpha}(A)-2\)._

Proof.: Let \(\beta\) be the activity rank of \(A\). Using Corollary 9.13 and Lemma 12.13 we find close in rank \(\beta\) subpaths \(\mathsf{P}_{2}\) of \(\mathsf{P}\) and \(\mathsf{X}_{2}\) of \(\mathsf{X}\) with \(|\mathsf{P}_{2}|\geq(m-2h_{\alpha}(A)+2)|A|\). By Proposition 8.16(iii) any fragment \(\mathsf{K}\) of rank \(\beta\) in \(\mathsf{P}\) with \(\mu_{\mathrm{f}}(\mathsf{K})\geq 2\lambda+5.3\omega\) satisfies \(|\mathsf{K}|<2|A|\), so according to Definition 12.2 there exists a fragment \(\mathsf{K}\) of rank \(\beta\) in \(\mathsf{P}\) with \(\mu_{\mathrm{f}}(\mathsf{K})\geq\xi_{1}\). Shortening \(\mathsf{K}\) from the end by Proposition 8.12 if \(\beta\geq 1\) and using again Proposition 8.16(ii) we find a fragment \(\mathsf{K}_{1}\) of rank \(\beta\) with \(\mu_{\mathrm{f}}(\mathsf{K}_{1})>\xi_{1}-\lambda-2.7\omega\) that is a start of \(\mathsf{K}\) disjoint from \(s_{A,\mathsf{P}}\mathsf{K}\); hence \(|\mathsf{K}_{1}|\leq|A|\). We can assume that \(\mathsf{K}\) occurs in \(\mathsf{P}_{2}\) and is closest to the start of \(\mathsf{P}_{2}\). Then \(\mathsf{P}_{2}\) contains the \(m-2h_{\alpha}(A)\) translates \(s_{A,\mathsf{P}}^{i}\mathsf{K}\) of \(\mathsf{K}\) for \(i=0,\ldots,m-2h_{\alpha}(A)-1\) and contains also \(s_{A,\mathsf{P}}^{m-2h_{\alpha}(A)}\mathsf{K}_{1}\). Applying Proposition 10.23(i) we find fragments \(\mathsf{M}_{i}\) (\(i=1,\ldots,m-2h_{\alpha}(A)-1\)) of rank \(\beta\) in \(\mathsf{X}_{2}\) with \(\mu_{\mathrm{f}}(\mathsf{M}_{i})\geq\xi_{2}\) such that \(s_{A,\mathsf{P}}^{i}\mathsf{K}\sim\mathsf{M}_{i}^{\pm 1}\). 
Then \(\mathsf{X}_{1}=\mathsf{M}_{1}\cup\mathsf{M}_{m-2h_{\alpha}(A)-1}\) is a coarsely \(A\)-periodic segment with periodic base \(s_{A,\mathsf{P}}\mathsf{K}\cup s_{A,\mathsf{P}}^{m-2h_{\alpha}(A)-1}\mathsf{K}\) and we have \(\ell_{A}(\mathsf{X}_{1})=m-2h_{\alpha}(A)-2\).

**12.16 Proposition**.: _Let \(S\) be a coarsely \(A\)-periodic word over \(G_{\alpha}\) and \(B\) a simple period over \(G_{\alpha}\) conjugate to \(A\). Let \(\ell_{A}(S)\geq 2h_{\alpha}(A)+3\). Then a subword \(T\) of \(S\) is a coarsely \(B\)-periodic word over \(G_{\alpha}\) with \(\ell_{B}(T)\geq\ell_{A}(S)-2h_{\alpha}(A)-2\)._

Proof.: We represent \(S\) by a coarsely \(A\)-periodic segment \(\mathsf{S}\) in \(\Gamma_{\alpha}\). Let \(\mathsf{P}\) be a periodic base for \(\mathsf{S}\), let \(\mathsf{L}_{1}\) be the axis of \(\mathsf{S}\) and let \(\mathsf{L}_{2}\) be the \(B\)-periodic line parallel to \(\mathsf{L}_{1}\). Denote by \(\beta_{1}\) and \(\beta_{2}\) the activity ranks of \(A\) and \(B\), respectively. According to Definition 12.2, either \(\mathsf{L}_{1}\) or \(\mathsf{L}_{2}\) contains no fragments \(\mathsf{K}\) of rank \(\gamma\) with \(\beta_{1}<\gamma\leq\alpha\) and \(\mu_{\mathrm{f}}(\mathsf{K})\geq\xi_{1}\). Let \(\mathsf{K}_{0}\) and \(\mathsf{K}_{1}\) be fragments of rank \(\beta_{1}\) with \(\mu_{\mathrm{f}}(\mathsf{K}_{i})\geq\xi_{1}\) that are a start and an end of \(\mathsf{P}\), respectively. We have \(s_{A,\mathsf{L}_{1}}^{\ell_{A}(\mathsf{S})}\mathsf{K}_{0}\lesssim\mathsf{K}_{1}\). By Proposition 10.24(i), there exist fragments \(\mathsf{M}_{0}\) and \(\mathsf{M}_{1}\) of rank \(\beta_{1}\) in \(\mathsf{L}_{2}\) with \(\mu_{\mathrm{f}}(\mathsf{M}_{i})\geq\xi_{2}\) such that \(\mathsf{K}_{i}\sim\mathsf{M}_{i}^{\pm 1}\). Since \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) are parallel, we have \(s_{A,\mathsf{L}_{1}}=s_{B,\mathsf{L}_{2}}\) and hence \(s_{B,\mathsf{L}_{2}}^{\ell_{A}(\mathsf{S})}\mathsf{M}_{0}\lesssim\mathsf{M}_{1}\) by Proposition 10.24(ii). Then \(\mathsf{Q}=\mathsf{M}_{0}\cup s_{B,\mathsf{L}_{2}}^{\ell_{A}(\mathsf{S})}\mathsf{M}_{0}\cup\mathsf{M}_{1}\) is close in rank \(\beta_{1}\) to \(\mathsf{P}\), \(|\mathsf{Q}|\geq\ell_{A}(\mathsf{S})|B|\) and the statement follows by Proposition 12.15.

## 13. Overlapped coarse periodicity

The main result of this section is Proposition 13.4, which can be thought of as an analogue of a well-known property of periodic words: if two periodic words have a sufficiently large overlap then they have a common period. We need such an analogue in a more general context where closeness plays the role of overlapping. As a main technical tool, instead of the coincidence of letters in the overlapping case we use the correspondence of fragments of rank \(\beta\leq\alpha\) in strictly close in rank \(\beta\) segments in \(\Gamma_{\alpha}\) given by Proposition 10.23. A difficulty is caused by the "fading effect" of this correspondence: a fragment size can decrease when passing from one segment to the other. To overcome this difficulty, we use a special combinatorial argument [9, Lemma 6.4].

**13.1 Lemma** (penetration lemma, [9, Lemma 6.4]).: _Let \(S_{0}\), \(S_{1}\), \(\dots\), \(S_{k}\) be a finite collection of disjoint sets. Assume that the following assertions hold:_

(i) _Each \(S_{i}\) is pre-ordered, i.e. endowed with a transitive relation '\(<_{i}\)'._

(ii) _There is an equivalence relation \(a\sim b\) on the union \(\bigcup_{i}S_{i}\) such that for any \(a,b\) in the same set \(S_{i}\) we have either \(a<_{i}b\), \(b<_{i}a\) or \(a\sim b\); in other words, we have an induced linear ordering on the set of equivalence classes on each \(S_{i}\)._

(iii)
_We assume that the equivalence preserves the pre-ordering in neighboring sets: if \(a,b\in S_{i}\), \(a^{\prime},b^{\prime}\in S_{i+1}\), \(a\sim a^{\prime}\) and \(b\sim b^{\prime}\) then \(a<_{i}b\Leftrightarrow a^{\prime}<_{i+1}b^{\prime}\)._

_If \(c\in S_{i}\), \(a,b\in S_{j}\) and \(a\lesssim_{j}b\) (where \(a\lesssim_{j}b\) denotes '\(a<_{j}b\) or \(a\sim b\)') then we say that \(c\) penetrates between \(a\) and \(b\) if there exists \(c^{\prime}\sim c\) such that \(a\lesssim_{j}c^{\prime}\lesssim_{j}b\)._

(iv) _There is a subset of \(\bigcup_{i}S_{i}\) of stable elements that have the following property: if \(c\in S_{i}\) is stable, \(a\lesssim_{i}c\lesssim_{i}b\), \(a^{\prime},b^{\prime}\in S_{j}\), \(a^{\prime}\lesssim_{j}b^{\prime}\), \(a\sim a^{\prime}\) and \(b\sim b^{\prime}\) then \(c\) penetrates between \(a^{\prime}\) and \(b^{\prime}\)._

(v) _For each \(i\leq k-1\), there are stable elements \(a_{i},b_{i}\in S_{i}\) and \(a^{\prime}_{i},b^{\prime}_{i}\in S_{i+1}\) such that \(a_{i}\sim a^{\prime}_{i}\), \(b_{i}\sim b^{\prime}_{i}\) and \(a_{i}<_{i}b_{i}\)._

_Finally, let \(c_{0}\in S_{0}\) be stable and \(a_{0}\lesssim_{0}c_{0}\lesssim_{0}b_{0}\). Assume that \(c_{0}\) penetrates between \(a_{i}\) and \(b_{i}\) for each \(i=1,2,\dots,k-1\). Then \(c_{0}\) penetrates between \(a_{k}\) and \(b_{k}\)._

The following observation is a special case of [9, Lemma 6.2].

**13.2 Lemma**.: _Suppose a group \(G\) acts on a set \(X\). Let \(g,h\in G\), \(x_{0},x_{1},\dots,x_{t}\in X\) and for some \(r,s\geq 0\) with \(\gcd(r,s)=1\) and \(r+s\leq t\),_

\[gx_{i}=x_{i+r}\ (i=0,1,\dots,t-r),\quad hx_{i}=x_{i+s}\ (i=0,1,\dots,t-s).\]

_Assume that the stabilizer \(H\) of \(x_{0}\) is malnormal in \(G\). Then either \(g,h\in H\) (and hence \(x_{0}=x_{1}=\dots=x_{t}\)) or there exists \(d\in G\) such that \(g=d^{r}\) and \(h=d^{s}\)._

Proof.: Induction on \(r+s\). We can assume that \(r\leq s\). If \(r>0\) then we have \(g^{-1}hx_{i}=x_{i+s-r}\) for \(0\leq i\leq t-s\) and the statement follows from the inductive hypothesis with \(h:=g^{-1}h\), \(s:=s-r\) and \(t:=t-r\). Otherwise we have \(r=0\) and \(s=1\). Then \(h^{-1}ghx_{0}=gx_{0}=x_{0}\) and by malnormality of \(H\), we have either \(g,h\in H\) or \(g=1\) (and then \(g=h^{0}\) and \(h=h^{1}\)).

**13.3 Definition**.: Let \(\mathsf{X}\) and \(\mathsf{Y}\) be reduced paths in \(\Gamma_{\alpha}\). We say that \(\mathsf{X}\) and \(\mathsf{Y}\) are _strictly close in rank \(\beta\leq\alpha\)_ if there are fragments \(\mathsf{K}_{0}\), \(\mathsf{K}_{1}\) of rank \(\beta\) in \(\mathsf{X}\) and fragments \(\mathsf{M}_{0}\), \(\mathsf{M}_{1}\) of rank \(\beta\) in \(\mathsf{Y}\) such that:

* \(\mu_{\mathrm{f}}(\mathsf{K}_{i}),\mu_{\mathrm{f}}(\mathsf{M}_{i})\geq\xi_{2}\ (i=0,1)\);
* \(\mathsf{X}\) starts with \(\mathsf{K}_{0}\) and ends with \(\mathsf{K}_{1}\); \(\mathsf{Y}\) starts with \(\mathsf{M}_{0}\) and ends with \(\mathsf{M}_{1}\);
* \(\mathsf{K}_{0}\sim\mathsf{M}_{0}^{\pm 1}\), \(\mathsf{K}_{1}\sim\mathsf{M}_{1}^{\pm 1}\) and \(\mathsf{K}_{0}\not\sim\mathsf{K}_{1}\).

By Lemma 10.13(i), paths which are strictly close in rank \(\beta\) are also close in rank \(\beta\). One of the advantages of strict closeness is that this relation is transitive (this follows immediately from Definition 13.3). 
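Before proceeding, we record a worked instance of the descent in the proof of Lemma 13.2; it is purely illustrative and not part of the argument. Take \(r=2\), \(s=3\), \(t=5\), and suppose the degenerate alternative \(g,h\in H\) never occurs along the way. Setting \(h_{1}:=g^{-1}h\) (which shifts indices by \(1\)), then \(g_{1}:=h_{1}^{-1}g\) and \(g_{2}:=h_{1}^{-1}g_{1}\), the pairs of shifts descend as

\[(2,3)\longrightarrow(2,1)\longrightarrow(1,1)\longrightarrow(0,1),\]

and the base case gives \(g_{2}=1\). Back-substituting, \(g=h_{1}g_{1}=h_{1}^{2}\) and \(h=gh_{1}=h_{1}^{3}\), so \(d=h_{1}=g^{-1}h\) satisfies \(g=d^{r}\) and \(h=d^{s}\).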
Note that a coarsely periodic segment \(\mathsf{S}\) in \(\Gamma_{\alpha}\) and its periodic base \(\mathsf{P}\) are strictly close according to Definition 12.4 (and the condition in Definition 12.4 is slightly stronger because of the lower bound on the size of the starting and the ending fragments of \(\mathsf{P}\)).

**13.4 Proposition**.: _Let \(A\) be a simple period over \(G_{\alpha}\), \(\beta\) the activity rank of \(A\) and \(\mathsf{P}_{i}\) \((i=0,1)\) two \(A\)-periodic segments in \(\Gamma_{\alpha}\). Let \(\mathsf{S}_{i}\) \((i=0,1)\) be a reduced path in \(\Gamma_{\alpha}\) which is strictly close to \(\mathsf{P}_{i}\). Assume that \(\mathsf{S}_{0}\) is contained in \(\mathsf{S}_{1}\). Assume also that \(\mathsf{P}_{0}\) contains at least one period \(A\) in the sense that there exist fragments \(\mathsf{K}\) and \(\mathsf{K}^{\prime}\) of rank \(\beta\) in \(\mathsf{P}_{0}\) such that \(\mu_{\mathrm{f}}(\mathsf{K}),\mu_{\mathrm{f}}(\mathsf{K}^{\prime})\geq\xi_{2}\) and \(\mathsf{K}^{\prime}\sim s_{A,\mathsf{P}_{0}}\mathsf{K}\). Then \(\mathsf{P}_{0}\) and \(\mathsf{P}_{1}\) have a common periodic extension._

Proof.: Denote

\[\xi_{3}=\xi_{2}-2\lambda-3.4\omega=3\lambda-10.9\omega.\]

Throughout the proof, "fragment \(\mathsf{M}\)" means "fragment \(\mathsf{M}\) of rank \(\beta\) with \(\mu_{\mathrm{f}}(\mathsf{M})\geq\xi_{3}\)" (or simply "fragment \(\mathsf{M}\) of rank \(0\)" if \(\beta=0\), see 12.3). Let the line \(\mathsf{L}_{i}\) be the infinite periodic extension of \(\mathsf{P}_{i}\) and let \(g\) be an element of \(G_{\alpha}\) such that \(\mathsf{L}_{1}=g\mathsf{L}_{0}\), so \(s_{A,\mathsf{P}_{1}}=gs_{A,\mathsf{P}_{0}}g^{-1}\).

Our argument relies on establishing a correspondence between fragments of rank \(\beta\) in \(\mathsf{P}_{i}\) and \(\mathsf{S}_{i}\). It will be convenient to consider fragments of rank \(\beta\) in the four paths \(\mathsf{P}_{i}\) and \(\mathsf{S}_{i}\) as four disjoint sets, i.e. we will formally consider pairs \((\mathsf{M},\mathsf{X})\) where \(\mathsf{X}\in\{\mathsf{P}_{0},\mathsf{P}_{1},\mathsf{S}_{0},\mathsf{S}_{1}\}\) and \(\mathsf{M}\) is a fragment occurring in \(\mathsf{X}\). We will refer to \(\mathsf{M}\) as a "fragment belonging to \(\mathsf{X}\)" or simply as a "fragment in \(\mathsf{X}\)".

We introduce two operations on fragments in \(\mathsf{P}_{i}\) and \(\mathsf{S}_{i}\). Let \(\mathsf{M}\) and \(\mathsf{N}\) be fragments each belonging to some \(\mathsf{P}_{i}\) or \(\mathsf{S}_{i}\).

(i) If \(\mathsf{M}\) belongs to \(\mathsf{P}_{i}\), \(\mathsf{N}\) belongs to \(\mathsf{S}_{i}\) and \(\mathsf{M}\sim\mathsf{N}^{\pm 1}\) then either of \(\mathsf{M}\) and \(\mathsf{N}\) _jumps_ to the other.

(ii) \(\mathsf{M}\) _translates_ to \(\mathsf{N}\) in the following cases (a)-(d):

(a) \(\mathsf{M}\) and \(\mathsf{N}\) belong to the same \(\mathsf{P}_{i}\) and \(\mathsf{N}\sim s_{A,\mathsf{P}_{i}}^{k}\mathsf{M}\) for some \(k\in\mathbb{Z}\); or

(b) \(\mathsf{M}\) belongs to \(\mathsf{P}_{0}\), \(\mathsf{N}\) belongs to \(\mathsf{P}_{1}\) and \(\mathsf{N}\sim gs_{A,\mathsf{P}_{0}}^{k}\mathsf{M}\) for some \(k\in\mathbb{Z}\); or

(c) \(\mathsf{M}\) belongs to \(\mathsf{P}_{1}\), \(\mathsf{N}\) belongs to \(\mathsf{P}_{0}\) and \(\mathsf{N}\sim g^{-1}s_{A,\mathsf{P}_{1}}^{k}\mathsf{M}\) for some \(k\in\mathbb{Z}\) (in other words, \(\mathsf{M}\) translates to \(\mathsf{N}\) in cases (a)-(c) if they have the same position in their corresponding periodic lines \(\mathsf{L}_{i}\) with respect to the period \(A\), up to compatibility); or

(d) an "identical" case: \(\mathsf{M}\sim\mathsf{N}\) and they belong to some \(\mathsf{S}_{i}\) and \(\mathsf{S}_{j}\), respectively.

Note that the two operations are reversible and are defined up to compatibility. Let \(\mathsf{K}\) and \(\mathsf{K}^{\prime}\) be fragments in \(\mathsf{P}_{0}\) such that \(\mu_{\mathrm{f}}(\mathsf{K}),\mu_{\mathrm{f}}(\mathsf{K}^{\prime})\geq\xi_{2}\) and \(\mathsf{K}^{\prime}\sim s_{A,\mathsf{P}_{0}}\mathsf{K}\), as assumed in the proposition. Let \(\mathcal{M}\) be a maximal set of pairwise non-compatible fragments which can be obtained by operations (i) and (ii) starting from \(\mathsf{K}\). 
By Proposition 8.10, neither of any two fragments in \(\mathcal{M}\) is contained in the other, so \(\mathcal{M}\) is a finite set. The following assertion is the principal step of the proof.

_Claim: The jump operation is always possible inside \(\mathcal{M}\); that is, for any \(\mathsf{M}\in\mathcal{M}\) in \(\mathsf{P}_{i}\) or in \(\mathsf{S}_{i}\), \(i\in\{0,1\}\), there exists a fragment \(\mathsf{N}\) of rank \(\beta\) in \(\mathsf{S}_{i}\) or, respectively, in \(\mathsf{P}_{i}\) such that \(\mathsf{M}\sim\mathsf{N}^{\pm 1}\)._

_Proof of the claim._ We assume that some \(\mathsf{M}\in\mathcal{M}\) is given and prove the existence of the required \(\mathsf{N}\). The proof consists of an application of Lemma 13.1; we first do the necessary preparation. According to the definition of \(\mathcal{M}\), there is a sequence \(\mathsf{T}_{0}=\mathsf{K}\), \(\mathsf{T}_{1}\), ..., \(\mathsf{T}_{l}=\mathsf{M}\) of fragments \(\mathsf{T}_{j}\in\mathcal{M}\) such that \(\mathsf{T}_{j+1}\) is obtained from \(\mathsf{T}_{j}\) by one of the operations (i) or (ii). We can assume that the sequence has no two translations in a row (otherwise we can replace them by a single translation) and no two jumps in a row (otherwise they cancel each other). Assume also for convenience that \(\mathsf{T}_{0}\to\mathsf{T}_{1}\) is a translation (by inserting a trivial translation if needed). Thus for each \(j\), \(\mathsf{T}_{2j}\) translates to \(\mathsf{T}_{2j+1}\) and \(\mathsf{T}_{2j+1}\) jumps to \(\mathsf{T}_{2j+2}\). We can assume that the last step \(\mathsf{T}_{l-1}\to\mathsf{T}_{l}\) is a translation, so \(l=2k-1\) for some \(k\).

Now, roughly speaking, we move all fragments \(\mathsf{T}_{j}\), along with the corresponding paths \(\mathsf{P}_{i}\) or \(\mathsf{S}_{i}\) containing them, to the same location up to compatibility. We define a sequence \(\mathsf{Y}_{0}\), \(\mathsf{Y}_{1}\), ..., \(\mathsf{Y}_{k}\) of paths in \(\Gamma_{\alpha}\) and a sequence \(\mathsf{W}_{j}\) of fragments in \(\mathsf{Y}_{j}\) for \(j=0,1,\ldots,k-1\). For each \(j\) we will have \(\mathsf{W}_{j}=f_{j}\mathsf{T}_{2j+1}\) for some \(f_{j}\in G_{\alpha}\). The definition of \(\mathsf{Y}_{j}\) and \(f_{j}\) goes as follows. Denote \((\mathsf{X}_{1},\mathsf{X}_{2},\mathsf{X}_{3},\mathsf{X}_{4})=(\mathsf{P}_{0},\mathsf{S}_{0},\mathsf{P}_{1},\mathsf{S}_{1})\) and let \(J(i)\) denote the index such that a fragment in \(\mathsf{X}_{i}\) jumps to a fragment in \(\mathsf{X}_{J(i)}\) (i.e. \((J(1),J(2),J(3),J(4))=(2,1,4,3)\)). Denote also by \(I(j)\) the index such that \(\mathsf{T}_{2j-1}\) belongs to \(\mathsf{X}_{I(j)}\). Thus, \(\mathsf{T}_{2j}\) belongs to \(\mathsf{X}_{J(I(j))}\).

We start with \(\mathsf{Y}_{0}=\mathsf{X}_{I(0)}\) and \(\mathsf{W}_{0}=\mathsf{T}_{1}\), so \(f_{0}=1\). Assume that \(j<k-1\) and \(\mathsf{Y}_{j}\) and \(f_{j}\) are already defined. If \(\mathsf{T}_{2j}\to\mathsf{T}_{2j+1}\) is a translation by (a)-(c) then there exists \(f_{j+1}\in G_{\alpha}\) such that \(f_{j+1}\mathsf{X}_{I(j+1)}\) and \(f_{j}\mathsf{X}_{J(I(j))}\) belong to the same \(A\)-periodic line and \(f_{j+1}\mathsf{T}_{2j+1}\sim f_{j}\mathsf{T}_{2j}\). We take \(\mathsf{Y}_{j+1}=f_{j+1}\mathsf{X}_{I(j+1)}\cup f_{j}\mathsf{X}_{J(I(j))}\). Otherwise \(\mathsf{T}_{2j}\to\mathsf{T}_{2j+1}\) is a translation by (d), i.e. \(\mathsf{X}_{J(I(j))}\) is either \(\mathsf{S}_{0}\) or \(\mathsf{S}_{1}\). In this case we take \(f_{j+1}=f_{j}\) and \(\mathsf{Y}_{j+1}=f_{j}\mathsf{S}_{1}\). Finally, define \(\mathsf{Y}_{k}=f_{k-1}\mathsf{X}_{J(I(k-1))}\). 
We have \(f_{j+1}\mathsf{T}_{2j+2}^{\pm 1}\sim f_{j+1}\mathsf{T}_{2j+1}\sim f_{j}\mathsf{T}_{2j}\) for all \(j=0,1,\ldots,k-2\) and hence \(\mathsf{W}_{0}\sim\mathsf{W}_{1}^{\pm 1}\sim\cdots\sim\mathsf{W}_{k-1}^{\pm 1}\). Figure 37 illustrates the construction.

By strict closeness of the pairs \((\mathsf{P}_{0},\mathsf{S}_{0})\) and \((\mathsf{P}_{1},\mathsf{S}_{1})\), each \(\mathsf{X}_{i}\) starts with a fragment \(\mathsf{U}_{i}\) and ends with a fragment \(\mathsf{V}_{i}\) such that \(\mu_{\mathrm{f}}(\mathsf{U}_{i}),\mu_{\mathrm{f}}(\mathsf{V}_{i})\geq\xi_{2}\), \(\mathsf{U}_{i}\not\sim\mathsf{V}_{i}\), and we have \(\mathsf{U}_{i}\sim\mathsf{U}_{J(i)}^{\pm 1}\) and \(\mathsf{V}_{i}\sim\mathsf{V}_{J(i)}^{\pm 1}\).

We now apply Lemma 13.1 where:

* \(S_{j}\) is the set of all fragments \(\mathsf{N}\) in \(\mathsf{Y}_{j}\) with \(\mu_{\mathrm{f}}(\mathsf{N})\geq\xi_{3}\).
* \(\mathsf{N}<_{j}\mathsf{N}^{\prime}\) is defined as '\(\mathsf{N}\not\sim\mathsf{N}^{\prime}\) and \(\mathsf{N}<\mathsf{N}^{\prime}\) in \(\mathsf{Y}_{j}\)'.
* Equivalence of \(\mathsf{N},\mathsf{N}^{\prime}\in\bigcup_{j}S_{j}\) is defined as \(\mathsf{N}\sim\mathsf{N}^{\prime\pm 1}\).
* \(\mathsf{N}\in\bigcup_{j}S_{j}\) is defined to be stable iff \(\mu_{\mathrm{f}}(\mathsf{N})\geq\xi_{2}\).
* For \(a_{j}\), \(b_{j}\), \(a_{j}^{\prime}\) and \(b_{j}^{\prime}\) we take appropriate translates of \(\mathsf{U}_{i}\) and \(\mathsf{V}_{i}\), namely, \(f_{j}\mathsf{U}_{I(j)}\), \(f_{j}\mathsf{V}_{I(j)}\), \(f_{j}\mathsf{U}_{J(I(j))}\) and \(f_{j}\mathsf{V}_{J(I(j))}\), respectively.

We have conditions (i)-(v) of Lemma 13.1 satisfied: condition (i) holds in case \(\beta\geq 1\) by Corollary 9.24(ii), condition (ii) holds by Proposition 8.10\({}_{\beta}\), conditions (iii) and (iv) hold by Proposition 10.23 in view of the inequality \(\xi_{3}\geq 2\lambda+9.1\omega\) and, finally, condition (v) holds immediately by construction.

For \(c_{0}\) in Lemma 13.1 we take \(\mathsf{T}_{1}\). Note that up to compatibility, we can assume that \(\mu_{\mathrm{f}}(\mathsf{T}_{1})\geq\xi_{2}\), so \(\mathsf{T}_{1}\) is stable. (By construction, \(\mathsf{T}_{1}\) is obtained from \(\mathsf{T}_{0}=\mathsf{K}\) by a translation to \(\mathsf{X}_{I(0)}\); if \(\mathsf{T}_{1}\) is compatible with the starting or the ending fragment of \(\mathsf{X}_{I(0)}\) then we can assume \(\mu_{\mathrm{f}}(\mathsf{T}_{1})\geq\xi_{2}\) due to Definition 13.3; otherwise we can assume that \(\mathsf{T}_{1}\) is a literal translation of \(\mathsf{K}\) and then \(\mu_{\mathrm{f}}(\mathsf{T}_{1})=\mu_{\mathrm{f}}(\mathsf{K})\geq\xi_{2}\).) Since \(\mathsf{T}_{1}=\mathsf{W}_{0}\sim\mathsf{W}_{1}^{\pm 1}\sim\cdots\sim\mathsf{W}_{k-1}^{\pm 1}\) and each \(\mathsf{W}_{j}\) occurs in \(f_{j}\mathsf{X}_{I(j)}\), \(\mathsf{T}_{1}\) penetrates between each pair \(f_{j}\mathsf{U}_{I(j)}\) and \(f_{j}\mathsf{V}_{I(j)}\) for \(j=0,1,\ldots,k-1\). All the hypotheses of Lemma 13.1 are satisfied, and applying it we find a fragment \(\mathsf{W}_{k}\) in \(f_{k-1}\mathsf{X}_{J(I(k-1))}\) such that \(\mathsf{W}_{k}^{\pm 1}\sim\mathsf{W}_{k-1}=f_{k-1}\mathsf{M}\). Then \(\mathsf{M}\to f_{k-1}^{-1}\mathsf{W}_{k}\) is the required jump. This finishes the proof of the claim.

We finish the proof of the proposition. Let \(\mathsf{K}_{0}=\mathsf{K}\), \(\mathsf{K}_{1}\), ..., \(\mathsf{K}_{m}\sim s_{A,\mathsf{P}_{0}}\mathsf{K}\) be all fragments in \(\mathcal{M}\) between \(\mathsf{K}\) and \(s_{A,\mathsf{P}_{0}}\mathsf{K}\) in their natural order, i.e. 
we have \(\mathsf{K}_{0}<\mathsf{K}_{1}<\cdots<\mathsf{K}_{m}\). Let \(\mathsf{M}_{0},\ldots,\mathsf{M}_{m}\in\mathcal{M}\) be fragments in \(\mathsf{P}_{1}\) such that \(\mathsf{M}_{i}\sim\mathsf{K}_{i}^{\pm 1}\) for all \(i\) (each \(\mathsf{M}_{i}\) is obtained from \(\mathsf{K}_{i}\) by two jumps). Note that \(\mathsf{M}_{0}<\mathsf{M}_{1}<\cdots<\mathsf{M}_{m}\) by Proposition 10.23. Since \(\mathcal{M}\) is closed under translations, the number of fragments in \(\mathcal{M}\) between \(\mathsf{M}_{0}\) and \(s_{A,\mathsf{P}_{1}}\mathsf{M}_{0}\) is the same as the number of fragments in \(\mathcal{M}\) between \(\mathsf{K}\) and \(s_{A,\mathsf{P}_{0}}\mathsf{K}\), i.e. we have \(\mathsf{M}_{m}\sim s_{A,\mathsf{P}_{1}}\mathsf{M}_{0}\). This implies that \(\mathsf{K}_{0}\) translates to some \(\mathsf{M}_{q}\), i.e. \(\mathsf{M}_{q}\sim gs_{A,\mathsf{P}_{0}}^{t}\mathsf{K}_{0}^{\pm 1}\) for some \(t\) and hence

\[\mathsf{M}_{i+q}\sim gs_{A,\mathsf{P}_{0}}^{t}\mathsf{K}_{i}^{\pm 1}\text{ for }i=0,1,\ldots,m-q,\quad\mathsf{M}_{i+q-m}\sim gs_{A,\mathsf{P}_{0}}^{t-1}\mathsf{K}_{i}^{\pm 1}\text{ for }i=m-q+1,\ldots,m.\]

Note that \(\gcd(m,q)=1\) since \(\mathcal{M}\) is generated by a single fragment \(\mathsf{K}\). By Propositions 8.16(i), 11.12 and Corollary 9.24(iii), the subgroup \(\{h\in G_{\alpha}\mid h\mathsf{M}_{0}\sim\mathsf{M}_{0}^{\pm 1}\}\) is malnormal in \(G_{\alpha}\). We now apply Lemma 13.2 where for \(x_{i}\) we take the equivalence class of \(\mathsf{M}_{i}\) in the set of fragments of rank \(\beta\) in \(\Gamma_{\alpha}\) under compatibility up to inversion. By the lemma, \(\langle g,s_{A,\mathsf{P}_{0}}\rangle\) is cyclic. Since \(A\) is a non-power, we get \(g\in\langle s_{A,\mathsf{P}_{0}}\rangle\), which means that \(\mathsf{L}_{0}=\mathsf{L}_{1}\).

As an immediate consequence of Proposition 13.4 we get:

**13.5 Corollary** (overlapping coarse periodicity).: _Let \(\mathsf{S}_{0}\) and \(\mathsf{S}_{1}\) be coarsely periodic segments in \(\Gamma_{\alpha}\) with the same simple period \(A\) over \(G_{\alpha}\). If \(\mathsf{S}_{0}\) is contained in \(\mathsf{S}_{1}\) then \(\mathsf{S}_{0}\sim\mathsf{S}_{1}\)._

**13.6 Corollary**.: _Let \(\mathsf{S}\) and \(\mathsf{T}\) be non-compatible coarsely periodic segments in \(\Gamma_{\alpha}\) with the same simple period \(A\) which occur in a reduced path \(\mathsf{X}\). Let \(\ell_{A}(\mathsf{S})\geq 3\). Assume that \(\mathsf{S}_{1}\) is obtained from \(\mathsf{S}\) by shortening by 2 periods from the end if \(\mathsf{S}<\mathsf{T}\) or by shortening by 2 periods from the start if \(\mathsf{S}>\mathsf{T}\). Then \(\mathsf{S}_{1}\) and \(\mathsf{T}\) are disjoint._

Proof.: Without loss of generality, we assume that \(\mathsf{S}<\mathsf{T}\) and \(\mathsf{S}_{1}\) is obtained from \(\mathsf{S}\) by shortening by 2 periods from the end. By 12.10(iii) we have \(\mathsf{S}=\mathsf{S}_{1}\mathsf{u}\mathsf{S}_{2}\) where \(\mathsf{S}_{2}\) is a coarsely \(A\)-periodic segment with \(\mathsf{S}_{2}\sim\mathsf{S}\). By hypothesis we have \(\mathsf{S}_{2}\not\sim\mathsf{T}\) and then by Corollary 13.5, neither \(\mathsf{S}_{2}\) nor \(\mathsf{T}\) is contained in the other. This implies that \(\mathsf{S}_{1}\) and \(\mathsf{T}\) are disjoint.

**13.7 Proposition** (strictly close periodic paths with one period).: _Let \(A\) be a simple period over \(G_{\alpha}\) and \(\beta\) the activity rank of \(A\). 
Let \(\mathsf{P}_{0}\) and \(\mathsf{P}_{1}\) be strictly close in rank \(\beta\) paths in \(\Gamma_{\alpha}\) labeled by periodic words with period \(A\). Assume that there exist fragments \(\mathsf{K},\mathsf{K}^{\prime}\) of rank \(\beta\) in \(\mathsf{P}_{0}\) such that \(\mu_{\mathrm{f}}(\mathsf{K}),\mu_{\mathrm{f}}(\mathsf{K}^{\prime})\geq\xi_{2}\) and \(s_{A,\mathsf{P}_{0}}\mathsf{K}\sim\mathsf{K}^{\prime}\). Then \(\mathsf{P}_{0}\) and \(\mathsf{P}_{1}\) have a common periodic extension._

Proof.: This is a special case of Proposition 13.4 with \(\mathsf{S}_{0}=\mathsf{S}_{1}=\mathsf{P}_{1}\).

**13.8 Proposition**.: _Let \(g\in G_{\alpha}\) be a non-power of infinite order and let \(h\in G_{\alpha}\). If \(g^{k}=h^{-1}g^{l}h\) for some \(k,l>0\) then \(h\in\langle g\rangle\) and \(k=l\)._

Proof.: By Proposition 11.5, up to conjugation we can assume that \(g\) is represented by a simple period \(A\) over \(G_{\alpha}\). It is enough to prove that \(h\in\langle A\rangle\). Consider two periodic lines \(\mathsf{L}_{0}\) and \(\mathsf{L}_{1}\) in \(\Gamma_{\alpha}\) with period \(A\) which represent the conjugacy relation. We have \(h\in\langle A\rangle\) if and only if \(\mathsf{L}_{0}=\mathsf{L}_{1}\). Let \(\beta\) be the activity rank of \(A\). By Proposition 10.24 we find strictly close in rank \(\beta\) subpaths \(\mathsf{P}_{i}\) of \(\mathsf{L}_{i}\) with any desired bound \(|\mathsf{P}_{0}|\geq t|A|\). Then the statement follows from Proposition 13.7.

As an immediate consequence we get:

**13.9 Corollary**.: _Let \(\mathsf{S}_{0}\) and \(\mathsf{S}_{1}\) be coarsely \(A\)-periodic segments in \(\Gamma_{\alpha}\) and let \(\mathsf{L}_{i}\) \((i=0,1)\) be an axis for \(\mathsf{S}_{i}\). If \(\mathsf{S}_{0}\sim\mathsf{S}_{1}\) then \(\mathsf{L}_{0}=\mathsf{L}_{1}\)._

**13.10 Corollary**.: _Let \(g\in G_{\alpha}\) be an element of infinite order. Then the following is true._

(i) _\(g\) has a unique root; i.e. there exists a unique non-power element \(g_{0}\in G_{\alpha}\) such that \(g=g_{0}^{t}\) for some \(t\geq 1\)._

(ii) _If \(h^{r}\in\langle g\rangle\) and \(h^{r}\neq 1\) then \(h\in\langle g_{0}\rangle\) where \(g_{0}\) is the root of \(g\)._

(iii) _If \(g\) is conjugate to \(g^{-1}\) then \(g\) is the product of two involutions._

Proof.: (i) is a direct consequence of Propositions 11.13 and 13.8. (ii) follows from (i) and Proposition 13.8 because \(g_{0}^{t}=h^{r}\) implies \(g_{0}^{t}=h^{-1}g_{0}^{t}h\). (iii) Assume that \(g=h^{-1}g^{-1}h\). From \(g=h^{-2}gh^{2}\) we conclude that \(h^{2}=1\) by (ii). Similarly, we have \((hg)^{2}=1\) and then \(g=h\cdot hg\).

**13.11 Corollary**.: _Assume that each relator \(R\) of each rank \(\beta\leq\alpha\) has the form \(R=R_{0}^{n}\) where \(R_{0}\) is the root of \(R\) and \(n\) is odd (\(n\) can vary for different relators \(R\)). Then \(G_{\alpha}\) has no involutions and no nontrivial element of \(G_{\alpha}\) is conjugate to its inverse._

Proof.: By Proposition 11.5, any element of finite order of \(G_{\alpha}\) is conjugate to some power \(R_{0}^{t}\) of the root \(R_{0}\) of a relator \(R\) of rank \(\beta\leq\alpha\). By Proposition 11.11, \(R_{0}^{t}\) has odd order and cannot be an involution. The second statement follows from the first by Corollary 13.10(iii).

**13.12 Lemma**.: _Let \(\mathsf{P}\) be an \(A\)-periodic segment in \(\Gamma_{\alpha}\) with a simple period \(A\) over \(G_{\alpha}\). 
Let \(\mathsf{S}\) be a coarsely periodic segment in \(\mathsf{P}\) with another simple period \(B\) over \(G_{\alpha}\) and assume that \(A\) and \(B\) are not conjugate in \(G_{\alpha}\). Then the following is true._

(i) _\(\mathsf{S}\not\sim s_{A,\mathsf{P}}^{t}\mathsf{S}\) for any \(t\neq 0\)._

(ii) _If \(\ell_{B}(\mathsf{S})\geq 3\) then \(|\mathsf{S}|<2|A|\)._

Proof.: (i) Assume that \(\mathsf{S}\sim s_{A,\mathsf{P}}^{t}\mathsf{S}\) for some \(t\neq 0\). Let \(\mathsf{L}_{1}\) be the infinite periodic extension of \(\mathsf{P}\), and let \(\mathsf{L}_{2}\) be the axis for \(\mathsf{S}\), i.e. the infinite periodic extension of a periodic base \(\mathsf{Q}\) for \(\mathsf{S}\). By Corollary 13.9 we have \(\mathsf{L}_{2}=s_{A,\mathsf{P}}^{t}\mathsf{L}_{2}\), so \(s_{A,\mathsf{P}}^{t}=s_{B,\mathsf{Q}}^{r}\) for some \(r\neq 0\). Since \(A\) and \(B\) are non-powers, by Corollary 13.10(ii) \(s_{A,\mathsf{P}}^{\varepsilon}=s_{B,\mathsf{Q}}\) for \(\varepsilon=\pm 1\) and hence \(\mathsf{L}_{1}^{\varepsilon}\) and \(\mathsf{L}_{2}\) are parallel. From the fact that \(\mathsf{S}\) is a subpath of \(\mathsf{P}\) we easily deduce by Proposition 10.23 (taking for \(\beta\) the activity rank of \(A\)) that \(\varepsilon=1\). We obtain a contradiction with the assumption that \(A\) and \(B\) are not conjugate in \(G_{\alpha}\).

(ii) By 12.10(iii) we represent \(\mathsf{S}\) as \(\mathsf{S}=\mathsf{S}_{1}\mathsf{u}\mathsf{S}_{2}\) where \(\mathsf{S}_{1}\) and \(\mathsf{S}_{2}\) are coarsely periodic segments with period \(B\) and \(\ell_{B}(\mathsf{S}_{1})\geq\ell_{B}(\mathsf{S})-2\). By (i) and Corollary 13.5, \(s_{A,\mathsf{P}}^{-1}\mathsf{S}\) does not contain \(\mathsf{S}_{1}\) and \(s_{A,\mathsf{P}}\mathsf{S}\) does not contain \(\mathsf{S}_{2}\). This implies \(|\mathsf{S}|<2|A|\).

**13.13 Proposition**.: _Let \(\mathsf{P}\) and \(\mathsf{Q}\) be close periodic segments in \(\Gamma_{\alpha}\) with the same simple period \(A\) over \(G_{\alpha}\). If \(|\mathsf{P}|\geq(2h_{\alpha}(A)+1)|A|\) (where \(h_{\alpha}(A)\) is defined in 12.12) then \(\mathsf{P}\) and \(\mathsf{Q}\) belong to the same \(A\)-periodic line._

Proof.: Follows from Propositions 12.15 and 13.7.

We finish the section by formulating technical statements which we will need in the construction of relations of Burnside groups. We use the notation \(\mathsf{S}\lessapprox\mathsf{T}\) for '\(\mathsf{S}<\mathsf{T}\) or \(\mathsf{S}\approx\mathsf{T}\)'.

**13.14 Lemma**.: _Let \(\mathsf{S}\) and \(\mathsf{T}\) be coarsely \(A\)-periodic segments occurring in a reduced path \(\mathsf{X}\) in \(\Gamma_{\alpha}\). Assume that some periodic bases for \(\mathsf{S}\) and \(\mathsf{T}\) have the same label. If \(\mathsf{S}\) is contained in \(\mathsf{T}\) then \(\mathsf{S}\approx\mathsf{T}\)._

Proof.: Assume that \(\mathsf{S}\) is contained in \(\mathsf{T}\). Let \(\mathsf{P}_{i}\) (\(i=1,2\)) be periodic bases for \(\mathsf{S}\) and \(\mathsf{T}\), respectively, with \(\mathit{label}(\mathsf{P}_{1})=\mathit{label}(\mathsf{P}_{2})\). Let \(\beta\) be the activity rank of \(A\). By Proposition 13.4, \(\mathsf{P}_{1}\) and \(\mathsf{P}_{2}\) have a common periodic extension. Let \(\mathsf{K}_{i}\) and \(\mathsf{M}_{i}\) (\(i=0,1,2,3\)) be fragments of rank \(\beta\) with \(\mu_{\mathrm{f}}(\mathsf{K}_{i}),\mu_{\mathrm{f}}(\mathsf{M}_{i})\geq\xi_{2}\) such that \(\mathsf{P}_{1}=\mathsf{K}_{0}\cup\mathsf{K}_{1}\), \(\mathsf{P}_{2}=\mathsf{K}_{2}\cup\mathsf{K}_{3}\), \(\mathsf{S}=\mathsf{M}_{0}\cup\mathsf{M}_{1}\), \(\mathsf{T}=\mathsf{M}_{2}\cup\mathsf{M}_{3}\) and \(\mathsf{K}_{i}\sim\mathsf{M}_{i}\) for all \(i\). 
We have \(\mathsf{M}_{2}\lesssim\mathsf{M}_{0}\lessapprox\mathsf{M}_{1}\lesssim\mathsf{M}_{3}\), which by Proposition 13.4 implies \(\mathsf{K}_{2}\lesssim\mathsf{K}_{0}\) and \(\mathsf{K}_{1}\lesssim\mathsf{K}_{3}\). Now from \(\mathit{label}(\mathsf{P}_{1})=\mathit{label}(\mathsf{P}_{2})\) we conclude that \(\mathsf{K}_{2}\sim\mathsf{K}_{0}\) and \(\mathsf{K}_{1}\sim\mathsf{K}_{3}\), i.e. \(\mathsf{S}\approx\mathsf{T}\).

**13.15 Lemma**.: _Let \(\mathsf{X}\) and \(\mathsf{Y}\) be close reduced paths in \(\Gamma_{\alpha}\). Let \(\mathsf{S}_{0},\mathsf{S}_{1}\) be coarsely \(A\)-periodic segments in \(\mathsf{X}\) and \(\mathsf{T}_{0},\mathsf{T}_{1}\) be coarsely \(A\)-periodic segments in \(\mathsf{Y}\) such that \(\ell_{A}(\mathsf{S}_{i})\geq 2h_{\alpha}(A)+1\), \(\mathsf{S}_{i}\approx\mathsf{T}_{i}\) for \(i=0,1\) and \(\mathsf{S}_{0}\not\sim\mathsf{S}_{1}\). Then \(\mathsf{S}_{0}<\mathsf{S}_{1}\) if and only if \(\mathsf{T}_{0}<\mathsf{T}_{1}\)._

Proof.: By Corollary 13.5, none of \(\mathsf{S}_{0}\) and \(\mathsf{S}_{1}\) is contained in the other, and the same is true for \(\mathsf{T}_{0}\) and \(\mathsf{T}_{1}\). Assume, for example, that \(\mathsf{S}_{0}<\mathsf{S}_{1}\) and \(\mathsf{T}_{1}<\mathsf{T}_{0}\). Let \(\mathsf{X}_{1}\) and \(\mathsf{Y}_{1}\) be the starting segments of \(\mathsf{X}\) and \(\mathsf{Y}\) ending with \(\mathsf{S}_{1}\) and \(\mathsf{T}_{1}\), respectively. By Proposition 12.14 with \(\mathsf{X}:=\mathsf{X}_{1}\) and \(\mathsf{Y}:=\mathsf{Y}_{1}\) there exists \(\mathsf{U}\) in \(\mathsf{Y}_{1}\) such that \(\mathsf{U}\approx\mathsf{S}_{0}^{*}\) where \(\mathsf{S}_{0}^{*}\) is the stable part of \(\mathsf{S}_{0}\). Then \(\mathsf{U}\cup\mathsf{T}_{0}\) is a coarsely \(A\)-periodic segment containing \(\mathsf{T}_{1}\) and we get a contradiction with Corollary 13.5.

**13.16 Lemma**.: _Let \(\mathsf{X}\) and \(\mathsf{Y}\) be reduced paths in \(\Gamma_{\alpha}\). Let \(\mathsf{S}_{0},\mathsf{S}_{1}\) be coarsely \(A\)-periodic segments in \(\mathsf{X}\) and \(\mathsf{T}_{0},\mathsf{T}_{1}\) be coarsely \(A\)-periodic segments in \(\mathsf{Y}\) such that \(\mathsf{S}_{0}\lessapprox\mathsf{S}_{1}\), \(\mathsf{T}_{0}\lessapprox\mathsf{T}_{1}\) and \(\mathsf{S}_{i}\approx\mathsf{T}_{i}\), \(i=0,1\)._

(i) _Let \(\mathsf{U}\) be a coarsely \(A\)-periodic segment in \(\mathsf{X}\) such that \(\mathsf{S}_{0}\lessapprox\mathsf{U}\lessapprox\mathsf{S}_{1}\), \(\ell_{A}(\mathsf{U})\geq h_{\alpha}(A)+1\) and \(\mathsf{U}\) is the stable part of some other coarsely \(A\)-periodic segment in \(\mathsf{X}\). Then there exists a coarsely \(A\)-periodic segment \(\mathsf{V}\) in \(\mathsf{Y}\) such that \(\mathsf{T}_{0}\lessapprox\mathsf{V}\lessapprox\mathsf{T}_{1}\) and \(\mathsf{U}\approx\mathsf{V}\)._

(ii) _Let \(\mathsf{U}_{i}\) \((i=1,2)\) be coarsely \(A\)-periodic segments in \(\mathsf{X}\) and \(\mathsf{V}_{i}\) \((i=1,2)\) be coarsely \(A\)-periodic segments in \(\mathsf{Y}\) such that \(\ell_{A}(\mathsf{U}_{i})\geq 2h_{\alpha}(A)+1\) \((i=1,2)\), \(\mathsf{S}_{0}\lessapprox\mathsf{U}_{i}\lessapprox\mathsf{S}_{1}\), \(\mathsf{T}_{0}\lessapprox\mathsf{V}_{i}\lessapprox\mathsf{T}_{1}\) and \(\mathsf{U}_{i}\approx\mathsf{V}_{i}\) for \(i=1,2\). Assume that \(\mathsf{U}_{2}\approx g\mathsf{U}_{1}\) for some \(g\in G_{\alpha}\), i.e. \(\mathsf{U}_{1}\) and \(\mathsf{U}_{2}\) have periodic bases with the same label. 
Then \(\mathsf{U}_{1}\lessapprox\mathsf{U}_{2}\) if and only if \(\mathsf{V}_{1}\lessapprox\mathsf{V}_{2}\)._

Proof.: Let \(\beta\) be the activity rank of \(A\).

(i): Let \(\mathsf{U}\) be the stable part of \(\bar{\mathsf{U}}\) and \(\bar{\mathsf{U}}=\mathsf{Z}_{1}\mathsf{U}\mathsf{Z}_{2}\). We consider several cases.

_Case_ 1: \(\mathsf{U}\not\sim\mathsf{S}_{i}\) for \(i=0,1\). Then by Corollary 13.5 we have \(\mathsf{S}_{0}<\bar{\mathsf{U}}<\mathsf{S}_{1}\). Since \(\mathsf{S}_{0}\cup\mathsf{S}_{1}\) and \(\mathsf{T}_{0}\cup\mathsf{T}_{1}\) are close, the existence of \(\mathsf{V}\) follows from Proposition 12.14.

_Case_ 2: Exactly one of the relations \(\mathsf{U}\sim\mathsf{S}_{i}\) (\(i=0,1\)) holds. Without loss of generality, assume that \(\mathsf{U}\sim\mathsf{S}_{0}\) and \(\mathsf{U}\not\sim\mathsf{S}_{1}\). By Corollary 13.5 we have \(\bar{\mathsf{U}}<\mathsf{S}_{1}\). If \(\mathsf{U}\approx\mathsf{S}_{0}\) there is nothing to prove. Assume that \(\mathsf{U}\not\approx\mathsf{S}_{0}\) and hence \(\mathsf{U}\mathsf{Z}_{2}\) is contained in \(\mathsf{S}_{0}\cup\mathsf{S}_{1}\). By the construction of the stable part, \(\mathsf{U}\mathsf{Z}_{2}\) is a coarsely \(A\)-periodic segment with \(\ell_{A}(\mathsf{U}\mathsf{Z}_{2})=\ell_{A}(\mathsf{U})+h_{\alpha}(A)\geq 2h_{\alpha}(A)+1\). Let \(\mathsf{W}\) be the stable part of \(\mathsf{U}\mathsf{Z}_{2}\). Using Proposition 12.14 with \(\mathsf{X}:=\mathsf{S}_{0}\cup\mathsf{S}_{1}\) and \(\mathsf{Y}:=\mathsf{T}_{0}\cup\mathsf{T}_{1}\) we find a coarsely \(A\)-periodic segment \(\mathsf{W}^{\prime}\) in \(\mathsf{T}_{0}\cup\mathsf{T}_{1}\) such that \(\mathsf{W}\approx\mathsf{W}^{\prime}\). By Proposition 12.9(ii), \(\mathsf{S}_{0}\cup\mathsf{U}\) is a coarsely \(A\)-periodic segment and, since \(\mathsf{W}^{\prime}\sim\mathsf{T}_{0}\), \(\mathsf{T}_{0}\cup\mathsf{W}^{\prime}\) is a coarsely \(A\)-periodic segment as well. By 12.10(iv) (more formally, by the symmetric version of 12.10(iv)) \(\mathsf{W}\) is an end of \(\mathsf{U}\), which implies \(\mathsf{S}_{0}\cup\mathsf{U}\approx\mathsf{T}_{0}\cup\mathsf{W}^{\prime}\).

Now let \(\mathsf{P}\) be a periodic base for \(\mathsf{U}\). By the construction of the stable part, \(\mathsf{P}\) starts with a fragment \(\mathsf{N}\) of rank \(\beta\) with \(\mu_{\mathrm{f}}(\mathsf{N})\geq\xi_{1}\). Since \(\mathsf{P}\) is contained in a periodic base for \(\mathsf{T}_{0}\cup\mathsf{W}^{\prime}\), by Proposition 10.23 we find a fragment \(\mathsf{N}^{\prime}\) of rank \(\beta\) in \(\mathsf{T}_{0}\cup\mathsf{W}^{\prime}\) such that \(\mu_{\mathrm{f}}(\mathsf{N}^{\prime})\geq\xi_{2}\) and \(\mathsf{N}^{\prime}\sim\mathsf{N}\). Then for the desired \(\mathsf{V}\) we can take the end of \(\mathsf{T}_{0}\cup\mathsf{W}^{\prime}\) starting with \(\mathsf{N}^{\prime}\).

_Case_ 3: \(\mathsf{U}\sim\mathsf{S}_{0}\sim\mathsf{S}_{1}\). Then a periodic base \(\mathsf{P}\) for \(\mathsf{U}\) is contained in a periodic base for \(\mathsf{S}_{0}\cup\mathsf{S}_{1}\). By the construction of the stable part, \(\mathsf{P}\) starts and ends with fragments \(\mathsf{N}_{0}\) and \(\mathsf{N}_{1}\) of rank \(\beta\) with \(\mu_{\mathrm{f}}(\mathsf{N}_{i})\geq\xi_{1}\). Then using Proposition 10.23 we find fragments \(\mathsf{N}^{\prime}_{i}\) (\(i=0,1\)) of rank \(\beta\) in \(\mathsf{T}_{0}\cup\mathsf{T}_{1}\) such that \(\mu_{\mathrm{f}}(\mathsf{N}^{\prime}_{i})\geq\xi_{2}\) and \(\mathsf{N}^{\prime}_{i}\sim\mathsf{N}_{i}\) (\(i=0,1\)). We can take \(\mathsf{V}=\mathsf{N}^{\prime}_{0}\cup\mathsf{N}^{\prime}_{1}\).

(ii): We consider two cases. 
_Case_ 1: \(\mathsf{U}_{1}\sim\mathsf{U}_{2}\). Let \(\mathsf{P}_{1}\) and \(\mathsf{P}_{2}\) be periodic bases for \(\mathsf{U}_{1}\) and \(\mathsf{U}_{2}\) with \(\mathit{label}(\mathsf{P}_{1})=\mathit{label}(\mathsf{P}_{2})\) which have a common periodic extension. It easily follows from Proposition 10.23(ii) that \(\mathsf{U}_{1}<\mathsf{U}_{2}\Leftrightarrow\mathsf{P}_{1}<\mathsf{P}_{2}\) and \(\mathsf{U}_{1}\approx\mathsf{U}_{2}\Leftrightarrow\mathsf{P}_{1}=\mathsf{P}_{2}\). Since \(\mathsf{P}_{i}\) is also a periodic base for \(\mathsf{V}_{i}\), a similar statement holds for \(\mathsf{V}_{i}\)'s which clearly implies the required conclusion. _Case_ 2: \(\mathsf{U}_{1}\not\sim\mathsf{U}_{2}\). Without loss of generality, we assume that \(\mathsf{U}_{1}<\mathsf{U}_{2}\), \(\mathsf{V}_{1}>\mathsf{V}_{2}\) and come to a contradiction. We can assume also that \(\mathsf{X}=\mathsf{S}_{0}\cup\mathsf{S}_{1}\), \(\mathsf{Y}=\mathsf{T}_{0}\cup\mathsf{T}_{1}\) and hence \(\mathsf{X}\) and \(\mathsf{Y}\) are close in rank \(\alpha\). Let \(\mathsf{U}_{i}^{*}\) and \(\mathsf{V}_{i}^{*}\) be stable parts of \(\mathsf{U}_{i}\) and \(\mathsf{V}_{i}\). By Corollary 13.6, \(\mathsf{U}_{1}\) is disjoint from \(\mathsf{U}_{2}^{*}\). Let \(\mathsf{X}=\mathsf{X}_{1}\mathsf{U}_{1}\mathsf{X}_{2}\mathsf{U}_{2}^{*} \mathsf{X}_{3}\) and \(\mathsf{Y}=\mathsf{Y}_{1}\mathsf{V}_{2}^{*}\mathsf{Y}_{2}\mathsf{V}_{1}\mathsf{ Y}_{3}\). By Proposition 12.14 with \(\mathsf{X}=\mathsf{X}_{1}\mathsf{U}_{1}\mathsf{X}_{2}\) and \(\mathsf{Y}:=\mathsf{Y}_{1}\) there exists a coarsely \(A\)-periodic segment \(\mathsf{W}\) in \(\mathsf{Y}_{1}\) such that \(\mathsf{W}\approx\mathsf{U}_{1}^{*}\). Then \(\mathsf{W}\sim\mathsf{U}_{1}\sim\mathsf{V}_{1}\) and by Proposition 12.9(ii) and Corollary 13.5 we get \(\mathsf{U}_{1}\sim\mathsf{W}\sim\mathsf{W}\cup\mathsf{V}_{1}\sim\mathsf{V}_{2}\sim \mathsf{U}_{2}\), the desired contradiction. ## 14. Comparing \(\alpha\)-length of close words In this section, we prove the following proposition. **14.1 Proposition**.: _Let \(X,Y\in\mathcal{R}_{\alpha}\) be close in rank \(\alpha\). Then_ \[|Y|_{\alpha}<1.3|X|_{\alpha}+2.2.\] Recall that a fragment word \(F\) of rank \(\alpha\) is considered with fixed associated words \(S\), \(u\), \(v\) and a relator \(R\) of rank \(\alpha\) such that \(F=uSv\) in \(G_{\alpha-1}\), \(u,v\in\mathcal{R}_{\alpha-1}\) and \(S\) is a subword of \(R^{k}\) for some \(k>0\). If \(\mathsf{F}\) is a path in \(\Gamma_{\alpha-1}\) labeled \(F\) then this uniquely defines the base \(\mathsf{S}\) for \(\mathsf{F}\). Let \(F\) and \(G\) be fragments of rank \(\alpha\) in a word \(X\). Let \(\mathsf{X}\) be a path in \(\Gamma_{\alpha-1}\) labeled \(X\) and \(\mathsf{F}\), \(\mathsf{G}\) the corresponding subpaths of \(\mathsf{X}\). We write \(F\sim G\) if \(\mathsf{F}\sim\mathsf{G}\) (so the relation is formally defined for the occurrences of \(F\) and \(G\) in \(X\)). Recall that the size \(|X|_{\alpha}\) of a word \(X\) in rank \(\alpha\) is the minimal possible value of \(\operatorname{weight}_{\alpha}(\mathcal{F})\) of a fragmentation \(\mathcal{F}\) of rank \(\alpha\) of \(X\). A fragmentation \(\mathcal{F}\) of rank \(\alpha\) of \(X\) is a partition \(X=F_{1}\cdot F_{2}\cdots F_{k}\) where \(F_{i}\) is a nonempty subword of a fragment of rank \(\beta\leq\alpha\). 
Assuming that each \(F_{i}\) is assigned a unique value of \(\beta\), the weight in rank \(\alpha\) of \(\mathcal{F}\) is defined by the formula

\[\operatorname{weight}_{\alpha}(\mathcal{F})=m_{\alpha}+\zeta m_{\alpha-1}+ \zeta^{2}m_{\alpha-2}+\cdots+\zeta^{\alpha}m_{0}\]

where \(m_{\beta}\) is the number of subwords of fragments of rank \(\beta\) in \(\mathcal{F}\). We call a fragmentation \(\mathcal{F}\) of \(X\)_minimal_ if \(\operatorname{weight}_{\alpha}(\mathcal{F})=|X|_{\alpha}\). We call a subword \(F\) of a fragment of rank \(\beta\geq 1\) a _truncated fragment of rank \(\beta\)_. We will be assuming that with a truncated fragment \(F\) of rank \(\beta\) there is an associated genuine fragment \(\bar{F}\) of rank \(\beta\) such that \(F\) is a subword of \(\bar{F}\). If \(\mathsf{F}\) is a path in \(\Gamma_{\alpha}\) with \(\mathit{label}(\mathsf{F})=F\) then we have the associated fragment \(\bar{\mathsf{F}}\) in \(\Gamma_{\alpha}\) such that \(\mathsf{F}\) is a subpath of \(\bar{\mathsf{F}}\). Note that a truncated fragment of rank \(1\) is simply a fragment of rank \(1\). We extend the compatibility relation to truncated fragments of rank \(\beta\) in a word \(X\) in the following natural way. If \(F\) and \(G\) are truncated fragments of rank \(\beta\) in \(X\) and \(\bar{F}\) and \(\bar{G}\) their associated fragments of rank \(\beta\) in \(\Gamma_{\alpha}\) then \(F\sim G\) if and only if \(\bar{F}\sim\bar{G}\).

Let \(\mathcal{F}=F_{1}\cdot F_{2}\cdot\ldots\cdot F_{k}\) be a fragmentation of rank \(\alpha\) of a word \(X\). Let \(F_{i}\) be a truncated fragment of rank \(\beta\geq 1\) in \(\mathcal{F}\). Assume that \(F_{i}\) can be extended in \(X\) to a larger truncated fragment \(G\) of rank \(\beta\), i.e.

\[X=F_{1}F_{2}\ldots F_{p}^{\prime}F_{p}^{\prime\prime}\ldots F_{i}\ldots F_{q} ^{\prime}F_{q}^{\prime\prime}\ldots F_{k}\]

where \(F_{p}=F_{p}^{\prime}F_{p}^{\prime\prime}\), \(F_{q}=F_{q}^{\prime}F_{q}^{\prime\prime}\) and \(G=F_{p}^{\prime\prime}\ldots F_{i}\ldots F_{q}^{\prime}\) (here we consider the case \(1<i<k\); cases \(i=1\) and \(i=k\) differ only in notation). Then we can produce a new fragmentation \(\mathcal{F}^{\prime}\) of rank \(\alpha\), \(X=F_{1}\cdots F_{p-1}\cdot[F_{p}^{\prime}]\cdot G\cdot[F_{q}^{\prime\prime}] \cdot F_{q+1}\cdots F_{k}\) where square brackets mean that \(F_{p}^{\prime}\) and \(F_{q}^{\prime\prime}\) are absent if empty. We say that \(\mathcal{F}^{\prime}\) is obtained from \(\mathcal{F}\) by _extending \(F_{i}\) to \(G\)_. Note that if \(\mathcal{F}\) is minimal then in the case \(i>1\), we necessarily have \(p=i-1\) and nonempty \(F_{p}^{\prime}\) and in the case \(i<k\) we necessarily have \(q=i+1\) and nonempty \(F_{q}^{\prime\prime}\).

**14.3 Lemma**.: _Let \(\mathcal{F}=F_{1}\cdot F_{2}\cdot\ldots\cdot F_{k}\) be a minimal fragmentation of rank \(\alpha\geq 1\) of a word \(X\in\mathcal{R}_{\alpha}\)._

1. _Let_ \(F_{i}\) _be a truncated fragment of rank_ \(\alpha\) _in_ \(\mathcal{F}\)_. Then_ \(|F_{i}|_{\alpha-1}\geq\frac{1}{\zeta}\) _and_ \(F_{i}=uFv\) _where_ \(F\) _is a fragment of rank_ \(\alpha\)_,_ \(F_{i}\sim F\)_,_ \(|u|_{\alpha-1},|v|_{\alpha-1}<\zeta\) _and the base_ \(\mathsf{P}\) _for the corresponding fragment_ \(\mathsf{F}\) _in_ \(\Gamma_{\alpha-1}\) _satisfies_ \(|\mathsf{P}|_{\alpha-1}>13\)_._
2. _If_ \(K\) _is a fragment of rank_ \(\alpha\) _in_ \(X\) _and_ \(\mu_{\mathrm{f}}(K)\geq 3\lambda+15\omega\) _then_ \(F_{i}\sim K\) _for some_ \(i\)_._
3. 
_Let_ \(X=P_{0}K_{1}P_{1}\ldots K_{r}P_{r}\) _where_ \(K_{i}\) _are fragments of rank_ \(\alpha\) _with_ \(\mu_{\mathrm{f}}(K_{i})\geq 3\lambda+13\omega\) _for all_ \(i\)_. Then there exists another minimal fragmentation_ \(\mathcal{F}^{\prime}\) _of rank_ \(\alpha\) _of_ \(X\) _such that each_ \(K_{i}\) _is contained in a compatible truncated fragment of rank_ \(\alpha\) _in_ \(\mathcal{F}^{\prime}\)_._

Proof.: (i) If \(|F_{i}|_{\alpha-1}<\frac{1}{\zeta}\) then we could replace \(F_{i}\) by its fragmentation of rank \(\alpha-1\) which would decrease the weight of \(\mathcal{F}\). By Proposition 9.21\({}_{\alpha-1}\) in the case \(\alpha\geq 2\) (in the case \(\alpha=1\) we take \(u\) and \(v\) empty) we have \(F_{i}=uFv\) where \(F\) is a fragment of rank \(\alpha\), \(F_{i}\sim F\) and \(|u|_{\alpha-1},|v|_{\alpha-1}<\zeta\). If \(\mathsf{F}\) is the corresponding fragment of rank \(\alpha\) in \(\Gamma_{\alpha-1}\) and \(\mathsf{P}\) is the base for \(\mathsf{F}\) then by Proposition 14.1\({}_{\alpha-1}\)

\[|\mathsf{P}|_{\alpha-1}>\frac{1}{1.3}\left(\frac{1}{\zeta}-2\zeta-2.2\right)>13.\]

(ii) Let \(K\) be a fragment of rank \(\alpha\) in \(X\) and \(\mu_{\mathrm{f}}(K)\geq 3\lambda+15\omega\). We assume that there is no truncated fragment \(F_{i}\) of rank \(\alpha\) such that \(F_{i}\sim K\). By Proposition 8.10 and the assumption, if \(H\) is a common part of \(K\) and some \(F_{i}\) of rank \(\alpha\) then \(H\) contains no fragment \(K^{\prime}\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K^{\prime})\geq\lambda+2.6\omega\). By Lemma 10.8, if \(H\) is a common part of \(K\) and some \(F_{i}\) of rank \(\beta<\alpha\) then \(H\) contains no fragment \(K^{\prime}\) of rank \(\alpha\) with \(\mu_{\mathrm{f}}(K^{\prime})\geq 3.2\omega\). In particular, \(K\) is not contained in any \(F_{i}\). Let

\[X=F_{1}F_{2}\ldots F_{p}^{\prime}F_{p}^{\prime\prime}\ldots F_{q}^{\prime}F_{q}^ {\prime\prime}\ldots F_{k}\quad\text{where}\quad F_{p}=F_{p}^{\prime}F_{p}^{ \prime\prime},\quad F_{q}=F_{q}^{\prime}F_{q}^{\prime\prime},\quad K=F_{p}^{ \prime\prime}F_{p+1}\ldots F_{q}^{\prime}.\]

If some \(F_{i}\) is contained in \(K\) and has rank \(\alpha\) then by the remark above and 14.2, \(K\) is covered by at most three of the \(F_{j}\)'s. In this case, by Proposition 8.11 we would have

\[\mu_{\mathrm{f}}(K)\leq 3(\lambda+2.6\omega)+2\zeta\omega<3\lambda+15\omega\]

contrary to the hypothesis. Therefore, each \(F_{i}\) that is contained in \(K\) has rank \(\beta<\alpha\). Now by Proposition 8.11, \(F_{p}F_{p+1}\ldots F_{q}\) contains a fragment \(K^{\prime}\) of rank \(\alpha\) with

\[\mu_{\mathrm{f}}(K^{\prime})\geq\mu_{\mathrm{f}}(K)-2(\lambda+2.6\omega)-2\zeta\omega>29\omega.\]

For a base \(P\) of \(K^{\prime}\) we have \(|P|_{\alpha-1}>29\) and by Proposition 14.1\({}_{\alpha-1}\), \(|K^{\prime}|_{\alpha-1}>20\). This implies that \({\rm weight}_{\alpha}(F_{p}\cdot F_{p+1}\cdot\ldots\cdot F_{q})>1\) and we get a contradiction with minimality of \(\mathcal{F}\) since we can replace \(F_{p}F_{p+1}\ldots F_{q}\) in \(\mathcal{F}\) by a single truncated fragment of rank \(\alpha\). This finishes the proof of (ii).

(iii) By (ii), for each \(i=1,2,\ldots,r\) there exists a truncated fragment \(F_{t_{i}}\) of rank \(\alpha\) in \(\mathcal{F}\) such that \(K_{i}\sim F_{t_{i}}\). Proposition 8.13 easily implies that \(F_{t_{i}}\cup K_{i}\) is a truncated fragment of rank \(\alpha\). For each \(i=1,2,\ldots,r\) we successively replace \(F_{t_{i}}\) in \(\mathcal{F}\) by \(F_{t_{i}}\cup K_{i}\). Since we do not increase \(\operatorname{weight}_{\alpha}(\mathcal{F})\), the resulting fragmentation \(\mathcal{F}^{\prime}\) of \(X\) is also minimal. 
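For orientation, the numeric bound in part (i) can be checked concretely. The exact value of the parameter \(\zeta\) is fixed in a part of the paper not reproduced here, but the identity \(k+5\zeta(r-4k)=0.25r\) in the proof of Proposition 16.4 below forces \(\zeta=1/20\); with this value,

\[\frac{1}{1.3}\left(\frac{1}{\zeta}-2\zeta-2.2\right)=\frac{1}{1.3}\,(20-0.1-2.2)=\frac{17.7}{1.3}\approx 13.6>13.\]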
**14.4 Lemma**.: _Let \(\alpha\geq 1\) and \(X,Y\in\mathcal{R}_{\alpha}\) be close in rank \(\alpha-1\). Then_

\[|Y|_{\alpha}<1.3|X|_{\alpha}+2.2\zeta.\]

Proof.: Let \(\mathcal{F}\) be a minimal fragmentation of rank \(\alpha\) of \(X\). We represent \(X\) and \(Y\) by close paths \(\mathsf{X}\) and \(\mathsf{Y}\) in \(\Gamma_{\alpha-1}\). Then \(\mathcal{F}\) induces the partition of \(\mathsf{X}\), denoted \(\bar{\mathcal{F}}\), into (path) truncated fragments of ranks \(\leq\alpha\). Let

\[\mathsf{X}=\mathsf{P}_{0}\mathsf{H}_{1}\mathsf{P}_{1}\ldots\mathsf{H}_{r}\mathsf{P}_{r}\]

where \(\mathsf{H}_{1},\ldots,\mathsf{H}_{r}\) are all truncated fragments of rank \(\alpha\) in \(\bar{\mathcal{F}}\). If \(r=0\) then \(|X|_{\alpha}=\zeta|X|_{\alpha-1}\), \(|Y|_{\alpha}\leq\zeta|Y|_{\alpha-1}\) and the statement simply follows from Proposition 14.1\({}_{\alpha-1}\). We assume \(r>0\). By Lemma 14.3(i), for each \(i\) we have \(\mathsf{H}_{i}=\mathsf{u}_{i}\mathsf{H}_{i}^{\prime}\mathsf{v}_{i}\) where \(\mathsf{H}_{i}^{\prime}\) is a fragment of rank \(\alpha\), \(\mathsf{H}_{i}^{\prime}\sim\mathsf{H}_{i}\), \(|\mathsf{u}_{i}|_{\alpha-1},|\mathsf{v}_{i}|_{\alpha-1}<\zeta\), and the base \(\mathsf{S}_{i}\) for \(\mathsf{H}_{i}\) satisfies \(|\mathsf{S}_{i}|_{\alpha-1}>13\). Using Proposition 10.16\({}_{\alpha-1}\) we find fragments \(\mathsf{H}_{i}^{\prime\prime}\) and \(\mathsf{G}_{i}\) of rank \(\alpha\) in \(\mathsf{X}\) and \(\mathsf{Y}\) respectively where \(\mathsf{H}_{i}^{\prime}=\mathsf{w}_{i}\mathsf{H}_{i}^{\prime\prime}\mathsf{z}_{i}\), \(|\mathsf{w}_{i}|_{\alpha-1},|\mathsf{z}_{i}|_{\alpha-1}<1.15\), \(\mathsf{H}_{i}\sim\mathsf{H}_{i}^{\prime\prime}\sim\mathsf{G}_{i}\) and \(\mathsf{H}_{i}^{\prime\prime}\) and \(\mathsf{G}_{i}\) are close in rank \(\alpha-1\). Using Lemma 10.13(i)\({}_{\alpha-1}\) after each application of Proposition 10.16\({}_{\alpha-1}\) we can assume that \(\mathsf{G}_{i}\) are disjoint, i.e.

\[\mathsf{Y}=\mathsf{Q}_{0}\mathsf{G}_{1}\mathsf{Q}_{1}\ldots\mathsf{G}_{r}\mathsf{Q}_{r}.\]

By Proposition 14.1\({}_{\alpha-1}\) we have

\[|\mathsf{Q}_{0}|_{\alpha-1}<1.3|\mathsf{P}_{0}\mathsf{u}_{1}\mathsf{w}_{1}|_{\alpha-1}+2.2,\]
\[|\mathsf{Q}_{i}|_{\alpha-1}<1.3|\mathsf{z}_{i}\mathsf{v}_{i}\mathsf{P}_{i}\mathsf{u}_{i+1}\mathsf{w}_{i+1}|_{\alpha-1}+2.2\quad(i=1,\ldots,r-1),\]
\[|\mathsf{Q}_{r}|_{\alpha-1}<1.3|\mathsf{z}_{r}\mathsf{v}_{r}\mathsf{P}_{r}|_{\alpha-1}+2.2.\]

We have also

\[|X|_{\alpha}=r+\zeta\sum_{i=0}^{r}|\mathsf{P}_{i}|_{\alpha-1}\quad\text{and}\quad|Y|_{\alpha}\leq r+\zeta\sum_{i=0}^{r}|\mathsf{Q}_{i}|_{\alpha-1}.\]

Then

\[|Y|_{\alpha} <r+1.3\zeta\sum_{i=0}^{r}|\mathsf{P}_{i}|_{\alpha-1}+1.3r\zeta(2.3+2 \zeta)+2.2\zeta(r+1)\]
\[\leq(1+1.3\zeta(4.5+2\zeta))r+1.3\zeta\sum_{i=0}^{r}|\mathsf{P}_{i}|_{ \alpha-1}+2.2\zeta\]
\[<1.3|X|_{\alpha}+2.2\zeta,\]

where the final inequality uses \(1.3\zeta(4.5+2\zeta)<0.3\), which holds since \(\zeta\) is small. 

Proof of Proposition 14.1.: Let \(X,Y\in\mathcal{R}_{\alpha}\) be close in rank \(\alpha\). Let \(\mathcal{F}\) be a minimal fragmentation of \(X\). We consider close paths \(\mathsf{X}\) and \(\mathsf{Y}\) in \(\Gamma_{\alpha}\) labeled \(X\) and \(Y\) respectively. Then \(\mathcal{F}\) induces the partition of \(\mathsf{X}\) into (path) truncated fragments of ranks \(\leq\alpha\),

\[\mathsf{X}=\mathsf{F}_{1}\cdot\mathsf{F}_{2}\cdot\ldots\cdot\mathsf{F}_{k}.\]

Let \(\mathsf{X}^{-1}\mathsf{u}\mathsf{Y}\mathsf{v}\) be a coarse bigon. 
We fix some bridge partitions of \(\mathsf{u}\) and \(\mathsf{v}\). Let \(\Delta\) be a filling diagram of rank \(\alpha\) with boundary loop \(\tilde{\mathsf{X}}^{-1}\tilde{\mathsf{u}}\tilde{\mathsf{Y}}\tilde{\mathsf{v}}\). Up to switching of \(\mathsf{u}\) and \(\mathsf{v}\) we can assume that \(\Delta\) is reduced and has a tight set \(\mathcal{T}\) of contiguity subdiagrams. Let \(\mathsf{D}_{1}\), ..., \(\mathsf{D}_{r}\) be all cells of rank \(\alpha\) of \(\Delta\). In the process of forming \(\mathcal{T}\) we assume that we pick first the contiguity subdiagrams of \(\mathsf{D}_{i}\) to \(\tilde{\mathsf{X}}^{-1}\), choosing them with maximal possible contiguity segment occurring in \(\tilde{\mathsf{X}}^{-1}\). Let

\[\mathsf{X}=\mathsf{P}_{0}\mathsf{K}_{1}\mathsf{P}_{1}\ldots\mathsf{K}_{r} \mathsf{P}_{r}\quad\text{and}\quad\mathsf{Y}=\mathsf{Q}_{0}\mathsf{M}_{1} \mathsf{Q}_{1}\ldots\mathsf{M}_{r}\mathsf{Q}_{r}\]

where \(\mathsf{K}_{i}\) and \(\mathsf{M}_{i}\) are the corresponding active fragments of rank \(\alpha\) in \(\mathsf{X}\) and \(\mathsf{Y}\). By the way we produce \(\mathcal{T}\) and by Proposition 9.21\({}_{\alpha-1}\) in the case \(\alpha\geq 2\) we have the following:

(*) _For all \(i\), the fragment \(\mathsf{K}_{i}\) cannot be extended in \(\mathsf{P}_{i-1}\mathsf{K}_{i}\mathsf{P}_{i}\). In particular, if \(\mathsf{F}\) is a truncated fragment of rank \(\alpha\) contained in \(\mathsf{P}_{i-1}\mathsf{K}_{i}\mathsf{P}_{i}\) and containing \(\mathsf{K}_{i}\) then \(\mathsf{F}=\mathsf{w}_{1}\mathsf{K}_{i}\mathsf{w}_{2}\) where \(|\mathsf{w}_{i}|_{\alpha-1}<\zeta\)\((i=1,2)\)_

\(0.4\zeta\) respectively. Assume that case (c) holds. Then \(\mathsf{P}_{i}=\mathsf{u}_{1}\mathsf{S}\mathsf{u}_{2}\) and \(\mathsf{Q}_{i}=\mathsf{v}_{1}\mathsf{T}\mathsf{v}_{2}\) where \(\mathsf{S}\) and \(\mathsf{T}\) are close in rank \(\alpha-1\) and \(|\mathsf{u}_{i}|_{\alpha},|\mathsf{v}_{i}|_{\alpha}\leq 4\zeta^{2}\eta<0.4\zeta\). Using Lemma 14.4, we get

\[|\mathsf{Q}_{i}|_{\alpha}<1.3|\mathsf{P}_{i}|_{\alpha}+3\zeta.\]

Note that this inequality holds also in cases (a) and (b). Now let \(i=0\) or \(i=r\). If \(r>0\) then the difference of the case \(i=0\) from the case \(1\leq i\leq r-1\) is that we can have an extra contiguity subdiagram between \(\mathsf{Y}\) and the central arc of \(\tilde{\mathsf{u}}\) (see Figure 39). We then have

\[|\mathsf{Q}_{0}|_{\alpha}<1+1.3|\mathsf{P}_{0}|_{\alpha}+3\zeta\]

and, similarly,

\[|\mathsf{Q}_{r}|_{\alpha}<1+1.3|\mathsf{P}_{r}|_{\alpha}+3\zeta.\]

If \(r=0\) we have a single bound instead,

\[|\mathsf{Q}_{0}|_{\alpha}<2+1.3|\mathsf{P}_{0}|_{\alpha}+3\zeta.\]

Summarizing, with (14-1) we get

\[|Y|_{\alpha} \leq r+1.3\sum_{i}|\mathsf{P}_{i}|_{\alpha}+2+3\zeta(r+1)\]
\[=(1+3\zeta)r+1.3\sum_{i}|\mathsf{P}_{i}|_{\alpha}+2+3\zeta\]
\[<1.3|X|_{\alpha}+2.2.\]

**14.5 Corollary**.: _If \(F\) is a fragment of rank \(\alpha\) and \(\mu_{\mathrm{f}}(F)\geq t\omega\) then \(|F|_{\alpha-1}>\frac{1}{1.3}(t-2.2)\). In particular, \(|F|>\frac{1}{1.3}\zeta^{1-\alpha}(t-2.2)\)._

**14.6 Corollary**.: _Let \(Y=u_{1}X_{1}u_{2}X_{2}u_{3}\) in \(\Gamma_{\alpha}\) where \(X_{i},Y\in\mathcal{R}_{\alpha}\) and \(u_{i}\in\mathcal{H}_{\alpha}\). Then \(|Y|_{\alpha}\leq 1.3(|X_{1}|_{\alpha}+|X_{2}|_{\alpha})+4.8\)._

Proof.: Follows from Propositions 9.19(i) and 14.1. 

The following two statements are proved under the assumption that a normalized presentation (2-1) of \(G\) satisfies the iterated small cancellation condition (S0)-(S3) for all \(\alpha\geq 1\). 
We therefore will be assuming that all statements starting from Section 5 hold for all values of \(\alpha\).

**14.7 Proposition**.: _Let \(W\) be a word with \(|W|\leq\alpha\) and let \(W=X\) in \(G_{\alpha}\) where \(X\in\mathcal{R}_{\alpha}\). Then \(|X|_{\alpha}<0.3\), \(X\) contains no fragments \(F\) of rank \(\beta>\alpha\) with \(\mu_{\mathrm{f}}(F)\geq 3\omega\) and, in particular, \(X\in\cap_{\alpha\geq 1}\mathcal{R}_{\alpha}\)._

Proof.: By Corollary 14.5 it is enough to prove that \(|X|_{\alpha}<0.3\). We proceed by induction on \(\alpha\). If \(\alpha=1\) then \(X\) is the freely reduced form of \(W\) and \(|X|_{1}\leq\zeta|X|<0.3\). Let \(\alpha>1\). Let \(W=W_{1}a\), \(a\in\mathcal{A}^{\pm 1}\) and \(W_{1}=X_{1}\) in \(G_{\alpha-1}\) where \(X_{1}\in\mathcal{R}_{\alpha-1}\). By Corollary 14.5, the inductive hypothesis and Proposition 9.15, the equality \(X=X_{1}a\) holds already in \(G_{\alpha-1}\). By Corollary 14.6\({}_{\alpha-1}\),

\[|X|_{\alpha}\leq\zeta|X|_{\alpha-1}\leq\zeta(1.3(0.3+0.3)+4.8)<0.3.\]

**14.8 Corollary**.: _Every element of \(G\) can be represented by a word \(X\) reduced in \(G\) such that for some \(\alpha\geq 1\), \(X\) contains no fragments \(F\) of rank \(\beta\geq\alpha\) with \(\mu_{\mathrm{f}}(F)\geq 3\omega\)._

## 15. A graded presentation for the Burnside group

In this section we show that for sufficiently large odd \(n\) the Burnside group \(B(m,n)\) has a graded presentation which satisfies the iterated small cancellation condition formulated in Section 2. We fix an odd number \(n>2000\). We are going to construct a graded presentation of the form

(15-1) \[\big{\langle}\mathcal{A}\ \big{|}\ \ C^{n}=1\ (C\in\bigcup_{\alpha\geq 1} \mathcal{E}_{\alpha})\big{\rangle}\]

where all relators of all ranks \(\alpha\) are \(n\)-th powers. We assume that values of the parameters \(\lambda\) and \(\Omega\) are chosen as in Theorem 3, i.e.

\[\lambda=\frac{80}{n},\quad\Omega=0.25n.\]

We will also use the following extra parameters:

\[p_{0}=39,\quad p_{1}=p_{0}+26=65.\]

In what follows, we define the set \(\mathcal{E}_{\alpha+1}\) under the assumption that sets \(\mathcal{E}_{\beta}\) are already defined for all \(\beta\leq\alpha\). We fix the value of rank \(\alpha\geq 0\) and assume that the presentation (15-1) satisfies the small cancellation conditions (S0)-(S3) in 2.8, 2.9 and is normalized in the sense of Definition 2.10 for all values of the rank up to \(\alpha\). We can therefore assume that all statements in Sections 5-13 are true for the current value of \(\alpha\) and below. According to Propositions 11.5 and 11.13 each element of infinite order of \(G_{\alpha}\) is conjugate to a power of a simple period over \(G_{\alpha}\). We will define \(\mathcal{E}_{\alpha+1}\) as a certain set of simple periods over \(G_{\alpha}\). This will automatically imply condition (S0) with \(\alpha:=\alpha+1\). Since \(n\) is odd, by Corollary 13.11 we obtain also that (S3) holds with \(\alpha:=\alpha+1\). Before going to the chain of definitions below, we formulate the following two conditions (P1) and (P2) on \(\mathcal{E}_{\alpha+1}\) (which can be viewed as "periodic" versions of (S1) and (S2) for the value of rank \(\alpha:=\alpha+1\)).

(P1) For each \(A\in\mathcal{E}_{\alpha+1}\), \([A]_{\alpha}\geq 0.25\).

(P2) Let \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) be periodic lines in \(\Gamma_{\alpha}\) with periods \(A,B\in\mathcal{E}_{\alpha+1}\) respectively. 
Assume that a subpath \(\mathsf{P}\) of \(\mathsf{L}_{1}\) and a subpath \(\mathsf{Q}\) of \(\mathsf{L}_{2}\) are close and \(|\mathsf{P}|\geq p_{1}|A|\). Then \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) are parallel.

The main goal of the construction of \(\mathcal{E}_{\alpha+1}\) will be to satisfy (P1) and (P2). Note that (P1) immediately implies (S1) for \(\alpha:=\alpha+1\) because of the assumption \(n>2000\). Later we prove that (P2) implies (S2)\({}_{\alpha+1}\). (The difference between (P2) and (S2)\({}_{\alpha+1}\) is that in (P2) we measure periodic words by the number of periods while in (S2)\({}_{\alpha+1}\) we use the length function \(|\cdot|_{\alpha}\). An appropriate bound will be given in Proposition 16.6.) Our first step is to define a set of simple periods over \(G_{\alpha}\) which potentially violate (P2) (they will be excluded in the definition of \(\mathcal{E}_{\alpha+1}\)).

**15.1 Definition**.: A simple period \(A\) over \(G_{\alpha}\) is _suspended of level 0_ if there exist a simple period \(B\) not conjugate in \(G_{\alpha}\) to \(A\) and words \(P\in\operatorname{Per}(A)\) and \(Q\in\operatorname{Per}(B)\) such that \(P\) and \(Q\) are close in \(G_{\alpha}\) and \(|Q|\geq p_{1}|B|\).

At first sight, we could simply define \(\mathcal{E}_{\alpha+1}\) by excluding periods \(A\) as in Definition 15.1 from the set of all simple periods over \(G_{\alpha}\). However, in this case we cannot guarantee a necessary lower bound on \([A]_{\alpha}\) for \(A\in\mathcal{E}_{\alpha+1}\) in (P1). Roughly speaking, we need to claim that a fragment of rank \(\beta\leq\alpha\) can cover only a "small" part of a periodic word with a period \(A\in\mathcal{E}_{\alpha+1}\); moreover, we need an exponentially decreasing upper bound on the size of this part when \(\beta\) decreases (compare with the definition of the function \(|\cdot|_{\alpha}\) in 2.7). To achieve this, we enlarge the set of excluded simple periods over \(G_{\alpha}\) by adding potentially "bad" examples of this sort.

**15.2 Definition**.: A simple period \(A\) over \(G_{\alpha}\) is _suspended of level \(m\geq 1\)_ if there exist a suspended period \(B\) of level \(m-1\) not conjugate to \(A\) in \(G_{\alpha}\), and a reduced in \(G_{\alpha}\) word of the form \(XQY\) such that \(Q\in\operatorname{Per}(B)\), \(|Q|\geq 4|B|\) and \(XQY\) is close in \(G_{\alpha}\) to a word \(P\in\operatorname{Per}(A)\).

**15.3 Definition**.: Let \(\mathcal{P}_{\alpha}\) denote the set of all simple periods over \(G_{\alpha}\) and \(\mathcal{S}_{\alpha}\) denote the set of all suspended simple periods over \(G_{\alpha}\) of all levels \(m\geq 0\). For \(\mathcal{E}_{\alpha+1}\) we take any set of representatives of equivalence classes in \(\mathcal{P}_{\alpha}\setminus\mathcal{S}_{\alpha}\) with respect to the equivalence

\[A\sim B\Leftrightarrow A\text{ is conjugate to }B^{\pm 1}\text{ in }G_{\alpha}.\]

The definition implies that any simple period over \(G_{\alpha}\) in \(\mathcal{P}_{\alpha}\setminus\mathcal{S}_{\alpha}\) has finite order in \(G_{\alpha+1}\). Since \(\mathcal{P}_{\alpha+1}\subseteq\mathcal{P}_{\alpha}\), it follows that any simple period over \(G_{\alpha+1}\) and, in particular, any word in \(\mathcal{E}_{\beta}\) for \(\beta\geq\alpha+1\) belongs to \(\mathcal{S}_{\alpha}\). As a consequence, we prove now that a fragment of rank \(\alpha+1\) cannot cover a large periodic word with a simple period \(A\) over \(G_{\alpha+1}\). (So here is the trick: the definition of the set of suspended periods over \(G_{\alpha}\) of levels \(m\geq 1\) serves condition (P1) for the _future_ rank \(\alpha+1\).)
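Schematically, if \(\mathcal{S}_{\alpha}^{(m)}\) denotes the set of suspended periods of level \(m\) (a notation used only in this summary), Definitions 15.1-15.3 amount to

\[\mathcal{S}_{\alpha}=\bigcup_{m\geq 0}\mathcal{S}_{\alpha}^{(m)}\subseteq\mathcal{P}_{\alpha},\qquad \mathcal{E}_{\alpha+1}\ \text{a set of representatives of}\ \bigl(\mathcal{P}_{\alpha}\setminus\mathcal{S}_{\alpha}\bigr)/\!\sim,\]

where \(A\sim B\) if and only if \(A\) is conjugate to \(B^{\pm 1}\) in \(G_{\alpha}\).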
**15.4 Remark**.: By construction, we obtain a normalized presentation (15-1) (see Definition 2.10).

**15.5 Proposition**.: _Let \(A\) be a simple period over \(G_{\alpha+1}\). If an \(A\)-periodic word \(P\) is a subword of a fragment of rank \(\alpha+1\) then \(|P|<4|A|\)._

Proof.: As observed above, \(A\in\mathcal{S}_{\alpha}\). Let \(UPV\) be a fragment of rank \(\alpha+1\) where \(P\in\operatorname{Per}(A)\). Then \(UPV\) is close in \(G_{\alpha}\) to a word \(Q\in\operatorname{Per}(B)\) where \(B\in\mathcal{E}_{\alpha+1}\). Since \(A\) is of infinite order in \(G_{\alpha+1}\), it is not conjugate to \(B\) in \(G_{\alpha}\). In this case, Definition 15.2 says that if \(|P|\geq 4|A|\) then \(B\in\mathcal{S}_{\alpha}\) which would contradict Definition 15.3. 

Proposition 15.5 with \(\alpha:=\alpha-1\) is an important but not sufficient ingredient in the proof of (P1). We need also to ensure that if a subword of a fragment of rank \(\beta<\alpha\) is a subword of an \(A\)-periodic word with \(A\in\mathcal{E}_{\alpha+1}\) then its length compared to \(|A|\) is "exponentially decreasing when \(\beta\) decreases". We prove a precise form of this statement in the next section by showing that coarsely periodic words have a certain property of hierarchical containment: a coarsely \(A\)-periodic word \(S\) over \(G_{\alpha}\) has \(t\) disjoint occurrences of coarsely periodic words over \(G_{\alpha-1}\) with a sufficiently large number of periods, where \(t\) is approximately the number of periods \(A\) in \(S\).

## 16. Hierarchical containment of coarsely periodic words

_Starting from this point, all statements are formulated and proved under the assumption that the group \(G\) has a specific presentation (15-1) defined in Section 15._

The goal of this section is to prove the following property of suspended periods over \(G_{\alpha}\) and to finalize the proof of the fact that the presentation (15-1) satisfies conditions (S0)-(S3). As in Section 15, we fix the value of the rank \(\alpha\geq 0\) and assume that the normalized presentation (15-1) satisfies conditions (S0)-(S3) for ranks less than or equal to \(\alpha\); so we can use all statements in Sections 5-15 for any rank up to \(\alpha\).

**16.1 Proposition**.: _Let \(A\) be a suspended period over \(G_{\alpha}\). Then there exists a simple period \(B\) over \(G_{\alpha}\) such that:_

* (i) _A cyclic shift of_ \(A\) _contains a coarsely_ \(B\)_-periodic word_ \(T\) _over_ \(G_{\alpha}\) _with_ \(\ell_{B}(T)\geq p_{0}\)_._
* (ii) _Moreover, this subword_ \(T\) _has the following property. Let_ \(\mathsf{S}\) _be a coarsely_ \(A\)_-periodic segment in_ \(\Gamma_{\alpha}\) _with_ \(\ell_{A}(\mathsf{S})\geq 4\)_. 
Then there are an_ \(A\)_-periodic base_ \(\mathsf{P}\) _for_ \(\mathsf{S}\)_,_ \(\ell_{A}(\mathsf{S})-3\) _translates_ \(\mathsf{T}\)_,_ \(s_{A,\mathsf{P}}\mathsf{T}\)_,_ \(\dots\)_,_ \(s_{A,\mathsf{P}}^{\ell_{A}(\mathsf{S})-4}\mathsf{T}\) _of a coarsely_ \(B\)_-periodic segment_ \(\mathsf{T}\) _in_ \(\mathsf{P}\) _with_ \(\text{label}(\mathsf{T})=T\) _and_ \(\ell_{A}(\mathsf{S})-3\) _disjoint coarsely_ \(B\)_-periodic segments_ \(\mathsf{V}_{i}\)__\((i=0,1,\dots,\ell_{A}(\mathsf{S})-4)\) _in_ \(\mathsf{S}\) _such that_ \(\mathsf{V}_{i}\approx s_{A,\mathsf{P}}^{i}\mathsf{T}\) _for all_ \(i\)_._

We start by showing how Proposition 16.1\({}_{\alpha-1}\) implies (P1) in the case \(\alpha\geq 1\).

**16.2 Lemma**.: _Let \(A\) be a simple period over \(G_{\alpha}\) and let \(\mathsf{S}\) and \(\mathsf{V}_{i}\)\((i=0,1,\dots,\ell_{A}(\mathsf{S})-4)\) be as in Proposition 16.1\({}_{\alpha-1}\). Then for any \(i\), \(\mathsf{V}_{i}\cup\mathsf{V}_{i+4}\) is not contained in a fragment of rank \(\alpha\)._

Proof.: As in Proposition 16.1\({}_{\alpha-1}\), let \(\mathsf{P}\) be an \(A\)-periodic base for \(\mathsf{S}\) in \(\Gamma_{\alpha-1}\) containing \(t-3\) translates \(\mathsf{T}\), \(s_{A,\mathsf{P}}\mathsf{T}\), \(\dots\), \(s_{A,\mathsf{P}}^{t-4}\mathsf{T}\), where \(t=\ell_{A}(\mathsf{S})\) and \(\mathsf{T}\) is a coarsely periodic segment with another period \(B\) and \(\ell_{B}(\mathsf{T})\geq p_{0}\). Assume that a fragment \(\mathsf{K}\) of rank \(\alpha\) in \(\Gamma_{\alpha-1}\) contains \(\mathsf{V}_{i}\) and \(\mathsf{V}_{i+4}\). Let \(\mathsf{L}\) be the base axis for \(\mathsf{K}\), so \(\mathsf{L}\) is a \(C\)-periodic line with \(C\in\mathcal{E}_{\alpha}\). Denoting \(\mathsf{V}_{i}^{*}\) the stable part of \(\mathsf{V}_{i}\), by Proposition 12.14\({}_{\alpha-1}\) we find \(\mathsf{W}\) and \(\mathsf{W}^{\prime}\) in \(\mathsf{L}\) such that \(\mathsf{W}\approx\mathsf{V}_{i}^{*}\) and \(\mathsf{W}^{\prime}\approx\mathsf{V}_{i+4}^{*}\). Then \(\mathsf{W}\cup\mathsf{W}^{\prime}\) is close to \(s_{A,\mathsf{P}}^{i}\mathsf{T}^{*}\cup s_{A,\mathsf{P}}^{i+4}\mathsf{T}^{*}\). Since \(A\in\mathcal{S}_{\alpha-1}\), according to Definition 15.2\({}_{\alpha-1}\) this should imply \(C\in\mathcal{S}_{\alpha-1}\), a contradiction. 

**16.3 Lemma**.: _Let \(\alpha\geq 1\). Assume that a (linear or cyclic) word \(X\) has \(r\) disjoint occurrences of coarsely \(A\)-periodic words \(U_{i}\)\((i=1,\ldots,r)\) over \(G_{\alpha-1}\) with \(\ell_{A}(U_{i})\geq p_{0}\). Then \(|X|_{\alpha-1}\geq 5r\)._

Proof.: The statement is immediate if \(\alpha=1\). Assume that \(\alpha>1\). Consider a fragmentation \(\mathcal{F}\) of rank \(\alpha-1\) of \(X\) (Definition 2.7). Let \(S_{1}\), ..., \(S_{k}\) be the subwords of fragments of rank \(\alpha-1\) in \(\mathcal{F}\). By Proposition 16.1\({}_{\alpha-1}\) each \(U_{i}\) contains \(p_{0}-3=36\) disjoint coarsely \(B\)-periodic words \(V_{i,j}\)\((j=1,\ldots,36)\) over \(G_{\alpha-2}\) with \(\ell_{B}(V_{i,j})\geq p_{0}\). We can assume that \(U_{i}\) and \(V_{i,j}\) are indexed in their natural order from the start to the end in \(X\). By Lemma 16.2, each \(S_{i}\) intersects at most \(6\) consecutive subwords \(V_{i,j},V_{i,j+1},\ldots,V_{i,j+5}\). Excluding \(V_{i,j}\) with \(1\leq j\leq 6\), we obtain that each \(S_{i}\) intersects at most \(6\) of all the remaining \(V_{i,j}\). By induction, we conclude that

\[|X|_{\alpha-1}\geq k+5\zeta\max\{0,\;30r-6k\}.\]

With fixed \(r\), the minimal value of the right-hand side is achieved when \(30r-6k=0\). This gives the bound \(|X|_{\alpha-1}\geq 5r\). 
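The closing minimization can be spelled out. For \(0\leq k\leq 5r\) the right-hand side equals \(k(1-30\zeta)+150\zeta r\), and with \(\zeta=1/20\) (the value forced by the computation in the proof of Proposition 16.4 below) the coefficient \(1-30\zeta=-0.5\) is negative, so the minimum on this range is attained at \(k=5r\), i.e. where \(30r-6k=0\); for \(k\geq 5r\) the right-hand side is just \(k\geq 5r\). Hence in all cases

\[k+5\zeta\max\{0,\;30r-6k\}\geq 5r.\]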
We prove the following stronger form of (P1):

**16.4 Proposition**.: _For any simple period \(A\) over \(G_{\alpha}\) we have \([A]_{\alpha}\geq 0.25\) and, consequently, \(h_{\alpha}(A)\leq 6\)._

Proof.: If \(\alpha=0\) then \([A]_{0}\geq 1\) by the definition of \([\cdot]_{0}\). Let \(\alpha\geq 1\). Take any \(r\geq 1\). Consider a fragmentation \(\mathcal{F}\) of rank \(\alpha\) of the cyclic word \((A^{r})^{\circ}\). Assume that \(\mathcal{F}\) consists of words \(S_{i}\), \(i=1,2,\ldots,N\) where the first \(k\) are subwords of fragments of rank \(\alpha\). By Proposition 15.5\({}_{\alpha-1}\) we have \(|S_{i}|<4|A|\) for \(i=1,2,\ldots,k\). This implies that the cyclic word \((A^{r-4k})^{\circ}\) can be partitioned into subwords of words in some subset of the remaining \(S_{i}\), \(i=k+1,k+2,\ldots,N\). Therefore,

\[|(A^{r})^{\circ}|_{\alpha}\geq k+\zeta|(A^{r-4k})^{\circ}|_{\alpha-1}.\]

Proposition 16.1\({}_{\alpha-1}\) says that \((A^{r-4k})^{\circ}\) has at least \(r-4k\) disjoint occurrences of a coarsely \(B\)-periodic word \(K\) over \(G_{\alpha-1}\) with \(\ell_{B}(K)\geq p_{0}\). Then by Lemma 16.3,

\[|(A^{r})^{\circ}|_{\alpha}\geq k+5\zeta(r-4k)=0.25r.\]

This holds for all \(r\geq 1\), so by Definition 12.12 we get \([A]_{\alpha}\geq 0.25\) and hence \(h_{\alpha}(A)\leq 6\). 
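The final equality also records the numeric value of \(\zeta\): since \(k+5\zeta(r-4k)=k(1-20\zeta)+5\zeta r\), the expression is independent of \(k\) and equal to \(0.25r\) exactly when

\[20\zeta=1,\qquad\text{i.e.}\qquad\zeta=\tfrac{1}{20},\]

which is consistent with all the estimates involving \(\zeta\) in Section 14.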
The following lemma is a key tool in the proof of Proposition 16.1. Very roughly, it corresponds to the statement "if a word \(W\) is periodic with two simple periods \(A\) and \(B\) at the same time, and if \(|W|\geq 2|A|\), \(|W|\geq 2|B|\) then \(B\) is a cyclic shift of \(A\)".

**16.5 Lemma**.: _Let \(\mathsf{L}_{0}\) and \(\mathsf{L}_{1}\) be periodic lines in \(\Gamma_{\alpha}\) with simple periods \(A\) and \(B\) over \(G_{\alpha}\), respectively. Let \(\mathsf{S}\) be a coarsely \(C\)-periodic segment in \(\mathsf{L}_{0}\) where \(C\) is another simple period over \(G_{\alpha}\), \(\ell_{C}(\mathsf{S})\geq 25\). Assume that there exist coarsely \(C\)-periodic segments \(\mathsf{T}_{0},\mathsf{T}_{1},\mathsf{T}_{2}\) in \(\mathsf{L}_{1}\) such that \(\mathsf{T}_{0}<\mathsf{T}_{1}<\mathsf{T}_{2}\) and \(\mathsf{T}_{i}\approx s_{A,\mathsf{L}_{0}}^{i}\mathsf{S}\), \(i=0,1,2\)._

_If \(\mathsf{T}_{0}\lesssim s_{B,\mathsf{L}_{1}}^{-1}\mathsf{T}_{1}\) or \(s_{B,\mathsf{L}_{1}}\mathsf{T}_{1}\lesssim\mathsf{T}_{2}\) then, in fact, \(\mathsf{T}_{0}\approx s_{B,\mathsf{L}_{1}}^{-1}\mathsf{T}_{1}\), \(s_{B,\mathsf{L}_{1}}\mathsf{T}_{1}\approx\mathsf{T}_{2}\), words \(A\) and \(B\) represent conjugate elements of \(G_{\alpha}\) and periodic lines \(\mathsf{L}_{0}\) and \(\mathsf{L}_{1}\) are parallel._

Proof.: Denote \(\mathsf{P}_{0}=\mathsf{S}\cup s_{A,\mathsf{L}_{0}}^{2}\mathsf{S}\) and \(\mathsf{P}_{1}=\mathsf{T}_{0}\cup\mathsf{T}_{2}\). Let \(\mathsf{S}^{*}\) and \(\mathsf{T}_{i}^{*}\) be stable parts of \(\mathsf{S}\) and \(\mathsf{T}_{i}\). The crucial argument is similar to the one in the proof of Proposition 13.4. Denote \(\mathcal{P}\) the set of all coarsely \(C\)-periodic segments \(\mathsf{U}\) in \(\Gamma_{\alpha}\) such that \(\mathsf{U}\approx g\mathsf{S}^{*}\) for some \(g\in G_{\alpha}\) (i.e. \(\mathsf{U}\) and \(\mathsf{S}^{*}\) have the same labels of their periodic bases). We introduce translations and jumps on the set of coarsely \(C\)-periodic segments \(\mathsf{U}\in\mathcal{P}\) which occur in \(\mathsf{P}_{0}\) or \(\mathsf{P}_{1}\).

As in the proof of Proposition 13.4, it will be convenient to consider two disjoint sets of those \(\mathsf{U}\in\mathcal{P}\) which occur in \(\mathsf{P}_{0}\) and in \(\mathsf{P}_{1}\). (So formally we introduce the set \(\mathcal{P}_{i}\) (\(i=0,1\)) of pairs \((\mathsf{U},\mathsf{P}_{i})\) where \(\mathsf{U}\) occurs in \(\mathsf{P}_{i}\); thus \(s^{i}_{A,\mathsf{L}_{0}}\mathsf{S}^{*}\) belongs to \(\mathcal{P}_{0}\) and \(\mathsf{T}^{*}_{i}\) belongs to \(\mathcal{P}_{1}\) for \(i=0,1,2\). For a coarsely \(C\)-periodic segment \(\mathsf{U}\in\mathcal{P}\), saying '\(\mathsf{U}\) occurs in \(\mathsf{P}_{i}\)' we mean the corresponding element of \(\mathcal{P}_{i}\).) Let \(\mathsf{U},\mathsf{V}\in\mathcal{P}\) be coarsely \(C\)-periodic segments each occurring in some \(\mathsf{P}_{i}\).

(i) If \(\mathsf{U}\) and \(\mathsf{V}\) occur in different paths \(\mathsf{P}_{i}\) and \(\mathsf{U}\approx\mathsf{V}\) then \(\mathsf{U}\)_jumps_ to \(\mathsf{V}\).

(ii) \(\mathsf{U}\)_translates_ to \(\mathsf{V}\) in the following cases: \(\mathsf{U}\) and \(\mathsf{V}\) occur in \(\mathsf{P}_{0}\) and \(\mathsf{U}\approx s^{k}_{A,\mathsf{L}_{0}}\mathsf{V}\) for some \(k\in\mathbb{Z}\); or \(\mathsf{U}\) and \(\mathsf{V}\) occur in \(\mathsf{P}_{1}\) and \(\mathsf{U}\approx s^{k}_{B,\mathsf{L}_{1}}\mathsf{V}\) for some \(k\in\mathbb{Z}\).

Let \(\mathcal{M}\) be a maximal set of pairwise non-(strictly compatible) segments which can be obtained by these two operations from \(\mathsf{S}^{*}\). Lemma 13.14 implies that \(\mathcal{M}\) is a finite set. As in the proof of Proposition 13.4 we prove the following claim.

_Claim: The jump operation is always possible inside \(\mathcal{M}\); that is, for any \(\mathsf{U}\in\mathcal{M}\) in \(\mathsf{P}_{i}\), \(i\in\{0,1\}\), there exists \(\mathsf{V}\in\mathcal{P}\) in \(\mathsf{P}_{1-i}\) such that \(\mathsf{V}\approx\mathsf{U}\)._

To prove the claim, we will apply Lemma 13.1 and do the necessary preparatory work. Assume that \(\mathsf{U}\in\mathcal{M}\) belongs to \(\mathsf{P}_{0}\) (the other case differs only in notation). Let \(\mathsf{V}_{0}=\mathsf{S}^{*}\), \(\mathsf{V}_{1}\), ..., \(\mathsf{V}_{l}=\mathsf{U}\) be a sequence of coarsely \(C\)-periodic segments \(\mathsf{V}_{i}\in\mathcal{M}\) such that \(\mathsf{V}_{i+1}\) is obtained from \(\mathsf{V}_{i}\) by one of the operations (i) or (ii). We can assume that \(\mathsf{V}_{2j}\to\mathsf{V}_{2j+1}\) are translations and \(\mathsf{V}_{2j+1}\to\mathsf{V}_{2j+2}\) are jumps, so \(l=2k-1\) for some \(k\). Under this assumption, \(\mathsf{V}_{2j}\to\mathsf{V}_{2j+1}\) is a translation inside \(\mathsf{P}_{0}\) if \(j\) is even and inside \(\mathsf{P}_{1}\) if \(j\) is odd. We then define a sequence \(\mathsf{Y}_{0}\), \(\mathsf{Y}_{1}\), ..., \(\mathsf{Y}_{k}\) of paths in \(\Gamma_{\alpha}\) (\(\mathsf{Y}_{j}\) will be periodic segments with alternating periods \(A\) and \(B\)) and a sequence \(\mathsf{W}_{j}\in\mathcal{P}\) of coarsely \(C\)-periodic segments in \(\mathsf{Y}_{j}\) for \(j=0,1,\ldots,k-1\) such that \(\mathsf{W}_{0}=\mathsf{V}_{1}\) and \(\mathsf{W}_{i}\approx\mathsf{W}_{0}\) for all \(i\). For each \(j\) we will have \(\mathsf{W}_{j}=f_{j}\mathsf{V}_{2j+1}\) for some \(f_{j}\in G_{\alpha}\). The definition of \(\mathsf{Y}_{j}\) and \(f_{j}\) goes as follows. We start with \(\mathsf{Y}_{0}=\mathsf{P}_{0}\) and \(\mathsf{W}_{0}=\mathsf{V}_{1}\), so \(f_{0}=1\). Assume that \(j<k-1\) and \(\mathsf{Y}_{j}\) and \(f_{j}\) are already defined. 
For even \(j\), \(\mathsf{V}_{2j}\) translates to \(\mathsf{V}_{2j+1}\) inside \(\mathsf{P}_{0}\), so there exists \(f_{j+1}\in G_{\alpha}\) of the form \(f_{j}s^{t}_{A,\mathsf{P}_{0}}\) such that \(f_{j+1}\mathsf{V}_{2j+1}\approx f_{j}\mathsf{V}_{2j}\). Thus, \(f_{j}\mathsf{P}_{0}\) and \(f_{j+1}\mathsf{P}_{0}\) have a common \(A\)-periodic extension and we take \(\mathsf{Y}_{j+1}=f_{j}\mathsf{P}_{0}\cup f_{j+1}\mathsf{P}_{0}\). Similarly, for odd \(j\), \(\mathsf{V}_{2j}\) translates to \(\mathsf{V}_{2j+1}\) inside \(\mathsf{P}_{1}\). We take \(f_{j+1}\in G_{\alpha}\) of the form \(f_{j}s^{t}_{B,\mathsf{P}_{1}}\) such that \(f_{j+1}\mathsf{V}_{2j+1}\approx f_{j}\mathsf{V}_{2j}\) and take \(\mathsf{Y}_{j+1}=f_{j}\mathsf{P}_{1}\cup f_{j+1}\mathsf{P}_{1}\) inside a common \(B\)-periodic extension of \(f_{j}\mathsf{P}_{1}\) and \(f_{j+1}\mathsf{P}_{1}\). Note that \(k\) is odd because \(\mathsf{V}_{2k-1}=\mathsf{U}\) is assumed to occur in \(\mathsf{P}_{0}\). We finally set \(\mathsf{Y}_{k}=f_{k-1}\mathsf{P}_{1}\). We now apply Lemma 13.1 where:

* \(S_{j}\) is the set of all coarsely \(C\)-periodic segments \(\mathsf{V}\in\mathcal{P}\) in \(\mathsf{Y}_{j}\).
* \(S_{j}\) is pre-ordered by '\(\mathrel{\mathop{\hbox to 0.0pt{\lower 3.0pt\hbox{$\sim$}}}\limits^{\lower 3.0pt\hbox{$\sim$}}}\)'.
* Equivalence is strict compatibility.
* A segment \(\mathsf{V}\in\bigcup_{j}S_{j}\) is defined to be stable if \(\mathsf{V}\) is the stable part of some coarsely \(C\)-periodic segment in \(\mathsf{Y}_{j}\).
* For \(a_{j}\), \(b_{j}\), \(a^{\prime}_{j}\) and \(b^{\prime}_{j}\) we take appropriate translates of \(\mathsf{S}^{*}\) and \(\mathsf{T}^{*}_{i}\); namely, \(f_{j}\mathsf{S}^{*}\), \(f_{j}s^{2}_{A,\mathsf{L}_{0}}\mathsf{S}^{*}\), \(f_{j}\mathsf{T}^{*}_{0}\) and \(f_{j}\mathsf{T}^{*}_{2}\) if \(j\) is even or \(f_{j}\mathsf{T}^{*}_{0}\), \(f_{j}\mathsf{T}^{*}_{2}\), \(f_{j}\mathsf{S}^{*}\) and \(f_{j}s^{2}_{A,\mathsf{L}_{0}}\mathsf{S}^{*}\) if \(j\) is odd, respectively.
* \(c_{0}\) is \(\mathsf{V}_{1}\).

Note that by Proposition 16.4 we have \(h_{\alpha}(C)\leq 6\). Hence the hypothesis \(\ell_{C}(\mathsf{S})\geq 25\) implies \(\ell_{C}(\mathsf{V})\geq 13\geq 2h_{\alpha}(C)+1\) for any \(\mathsf{V}\in\mathcal{P}\). Condition (ii) of Lemma 13.1 holds by Lemma 13.14. Conditions (iii) and (iv) of Lemma 13.1 hold by Lemma 13.16. By the lemma, there exists a coarsely \(C\)-periodic segment \(\mathsf{V}_{k}\in\mathcal{P}\) in \(f_{k-1}\mathsf{P}_{1}\) such that \(\mathsf{V}_{k}\approx f_{k-1}\mathsf{U}\). This gives the required jump \(\mathsf{U}\to f_{k-1}^{-1}\mathsf{V}_{k}\). The claim is proved.

Let \(r\) be the number of coarsely \(C\)-periodic segments \(\mathsf{V}\in\mathcal{M}\) such that \(\mathsf{S}^{*}\lessapprox\mathsf{V}\lessapprox s_{A,\mathsf{L}_{0}}\mathsf{S}^{*}\) and let \(q\) be the number of coarsely \(C\)-periodic segments \(\mathsf{V}\in\mathcal{M}\) such that \(\mathsf{T}_{1}^{*}\lessapprox\mathsf{V}\lessapprox s_{B,\mathsf{L}_{1}}\mathsf{T}_{1}^{*}\) (in other words, \(r\) and \(q\) are the numbers of coarsely \(C\)-periodic segments \(\mathsf{V}\in\mathcal{M}\) in one period \(A\) and in one period \(B\), respectively). Note that \(\gcd(r,q)=1\) because \(\mathcal{M}\) is generated by a single segment \(\mathsf{S}^{*}\). We assume first that either \(\mathsf{T}_{0}\lessapprox s_{B,\mathsf{L}_{1}}^{-1}\mathsf{T}_{1}\) or \(s_{B,\mathsf{L}_{1}}\mathsf{T}_{1}\lessapprox\mathsf{T}_{2}\). 
Since \(\mathcal{M}\) is closed under translations modulo equivalence '\(\approx\)', each of these relations implies \(q\leq r\) and hence implies the other one. Let \(\mathsf{U}_{0},\mathsf{U}_{1},\ldots,\mathsf{U}_{t}\) be all coarsely \(C\)-periodic segments in \(\mathcal{M}\) belonging to \(\mathsf{P}_{0}\) arranged in their order in \(\mathsf{P}_{0}\) (so \(\mathsf{U}_{i}\) form a set of representatives of coarsely \(C\)-periodic segments in \(\mathcal{M}\) modulo '\(\approx\)'). The group \(G_{\alpha}\) acts on the set \(\mathcal{P}/\!\approx\). It follows from Corollary 13.9 that the action is free. For equivalence classes \([\mathsf{U}_{i}]\) of \(\mathsf{U}_{i}\) we have

\[s_{A,\mathsf{L}_{0}}[\mathsf{U}_{i}]=[\mathsf{U}_{i+r}],\ i=0,1,\ldots,t-r,\qquad s _{B,\mathsf{L}_{1}}[\mathsf{U}_{i}]=[\mathsf{U}_{i+q}],\ i=0,1,\ldots,t-q.\]

Note also that \(t\geq 2r+1\). Applying Lemma 13.2 we get \(s_{A,\mathsf{L}_{0}}=d^{q}\) and \(s_{B,\mathsf{L}_{1}}=d^{r}\) for some \(d\in G_{\alpha}\). Since \(A\) and \(B\) are non-powers we get \(q=r=1\) which immediately implies the conclusion of the proposition.

For the proof, it remains to consider the cases \(\mathsf{T}_{0}\sim s_{B,\mathsf{L}_{1}}^{-1}\mathsf{T}_{1}\) and \(s_{B,\mathsf{L}_{1}}\mathsf{T}_{1}\sim\mathsf{T}_{2}\). We consider the case \(s_{B,\mathsf{L}_{1}}\mathsf{T}_{1}\sim\mathsf{T}_{2}\) (the case \(\mathsf{T}_{0}\sim s_{B,\mathsf{L}_{1}}^{-1}\mathsf{T}_{1}\) is symmetric). By the already proved part, we can assume that \(\mathsf{T}_{2}\lessapprox s_{B,\mathsf{L}_{1}}\mathsf{T}_{1}\). We show that the assumption leads to a contradiction. We have \(\mathsf{T}_{0}\lessapprox s_{B,\mathsf{L}_{1}}^{-1}\mathsf{T}_{2}\lessapprox \mathsf{T}_{1}\), so there exists \(\mathsf{T}_{3}\in\mathcal{M}\) such that \(\mathsf{T}_{3}\approx s_{B,\mathsf{L}_{1}}^{-1}\mathsf{T}_{2}\). \(\mathsf{T}_{3}\) jumps to some \(\mathsf{S}_{3}\in\mathcal{M}\) in \(\mathsf{L}_{0}\) such that \(\mathsf{S}_{3}\sim\mathsf{S}\) and \(\mathsf{S}_{3}\lessapprox\mathsf{S}\). Then \(\mathsf{S}_{3}\) translates to \(\mathsf{S}_{4}\approx s_{A,\mathsf{L}_{0}}\mathsf{S}_{3}\) and we have \(\mathsf{S}_{4}\sim\mathsf{S}_{2}\) and \(\mathsf{S}_{4}\lessapprox\mathsf{S}_{2}\). Then \(\mathsf{S}_{4}\) jumps to some \(\mathsf{T}_{4}\) in \(\mathsf{L}_{1}\) and we can continue the process infinitely (see Figure 40). 
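In the proof below, the case \(m=0\) uses two numeric thresholds; both follow from \(p_{0}=39\), \(p_{1}=65\) and the bound \(h_{\alpha}(B)\leq 6\) of Proposition 16.4:

\[p_{1}-2h_{\alpha}(B)-2\geq 65-12-2=51,\qquad 51-2h_{\alpha}(B)\geq 51-12=39=p_{0}.\]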
Proof of Proposition 16.1.: Let \(A\) be a suspended period of level \(m\) over \(G_{\alpha}\). Assume first that \(m=0\). Then by Definition 15.1 and Proposition 12.15 an \(A\)-periodic segment \(\mathsf{R}\) in \(\Gamma_{\alpha}\) contains a coarsely \(B\)-periodic segment \(\tilde{\mathsf{T}}\) with \(\ell_{B}(\tilde{\mathsf{T}})\geq p_{1}-2h_{\alpha}(B)-2\geq 51\) where \(B\) is not conjugate to \(A\) in \(G_{\alpha}\). By Lemma 13.12 we have \(\tilde{\mathsf{T}}\not\sim s_{A,\mathsf{R}}\tilde{\mathsf{T}}\) and \(|\tilde{\mathsf{T}}|<2|A|\). Let \(\mathsf{T}\) be the stable part of \(\tilde{\mathsf{T}}\). Since \(h_{\alpha}(B)\geq 2\) by Definition 12.12, we have \(|\mathsf{T}|<|A|\) by Corollary 13.6. Note also that \(\ell_{B}(\mathsf{T})\geq\ell_{B}(\tilde{\mathsf{T}})-2h_{\alpha}(B)\geq p_{0}\). Let \(T=\mathit{label}(\mathsf{T})\). We show that \(T\) has the required property (ii) formulated in Proposition 16.1. Let \(\mathsf{S}\) be a coarsely \(A\)-periodic segment in \(\Gamma_{\alpha}\) with \(\ell_{A}(\mathsf{S})\geq 4\) and let \(\mathsf{P}\) be a periodic base for \(\mathsf{S}\). Denote \(t=\ell_{A}(\mathsf{S})\). By Remark 12.7 we can assume that \(|\mathsf{P}|\geq t|A|\). Up to placing \(\tilde{\mathsf{T}}\) in \(\Gamma_{\alpha}\) we can assume that \(\mathsf{P}\) contains \(t-2\) translates \(\mathsf{\tilde{T}}\), \(s_{A,\mathsf{P}}\mathsf{\tilde{T}}\), ..., \(s_{A,\mathsf{P}}^{t-3}\mathsf{\tilde{T}}\) of \(\mathsf{\tilde{T}}\). Using Lemma 10.13(i) (which implies that strictly compatible coarsely periodic segments are close) and Proposition 12.14 we find disjoint \(\mathsf{V}_{i}\) (\(i=0,\ldots,t-3\)) in \(\mathsf{S}\) such that \(\mathsf{V}_{i}\approx s_{A,\mathsf{P}}^{i}\mathsf{T}\). This proves the proposition in the case \(m=0\).

Let \(m\geq 1\). The proof consists of two parts. First we provide a construction of a coarsely \(B\)-periodic segment \(T\) satisfying condition (i) of Proposition 16.1 and then we prove (ii).

_Construction of \(T\)._ According to Definition 15.2, there exists a sequence \(A_{0}\), \(A_{1}\), ..., \(A_{m}=A\) of simple periods over \(G_{\alpha}\) where \(A_{0}\) is suspended of level \(0\), for each \(i\leq m-1\)\(A_{i}\) is not conjugate to \(A_{i+1}\) and there are reduced in \(G_{\alpha}\) close words \(X_{i}Q_{i}Y_{i}\) and \(P_{i+1}\in\mathrm{Per}(A_{i+1})\) where \(Q_{i}\in\mathrm{Per}(A_{i})\) and \(|Q_{i}|\geq 4|A_{i}|\). For each \(i\), we consider corresponding close paths \(\mathsf{X}_{i}\mathsf{Q}_{i}\mathsf{Y}_{i}\) and \(\mathsf{P}_{i+1}\) in \(\Gamma_{\alpha}\) and place them in such a way that \(\mathsf{Q}_{i}\) and \(\mathsf{P}_{i}\) have the common infinite \(A_{i}\)-periodic extension \(\mathsf{L}_{i}\). We denote also \(\mathsf{L}_{0}\) the infinite \(A_{0}\)-periodic extension of \(\mathsf{Q}_{0}\). As we proved above, there is a coarsely \(B\)-periodic segment \(\mathsf{\tilde{T}}_{0}\) in \(\mathsf{Q}_{0}\) with \(\ell_{B}(\mathsf{\tilde{T}}_{0})\geq 51\) and the stable part \(\mathsf{T}_{0}\) satisfying \(\ell_{B}(\mathsf{T}_{0})\geq p_{0}\) and \(|\mathsf{T}_{0}|<|A_{0}|\). Up to positioning \(\mathsf{\tilde{T}}_{0}\) in \(\mathsf{L}_{0}\) we can assume that \(\mathsf{Q}_{0}\) contains the translates \(s_{A_{0},\mathsf{L}_{0}}^{-1}\mathsf{T}_{0}\) and \(s_{A_{0},\mathsf{L}_{0}}\mathsf{T}_{0}\) of \(\mathsf{T}_{0}\). In what follows, if \(\mathsf{Z}\) is a coarsely \(B\)-periodic segment in \(\Gamma_{\alpha}\) then \(\mathsf{Z}^{*}\) denotes the stable part of \(\mathsf{Z}\). By Lemma 13.12, \(s_{A_{0},\mathsf{L}_{0}}^{t}\mathsf{T}_{0}\not\sim\mathsf{T}_{0}\) for any \(t\neq 0\) and hence \(s_{A_{0},\mathsf{L}_{0}}^{-1}\mathsf{T}_{0}\lessapprox\mathsf{\tilde{T}}_{0} \lessapprox s_{A_{0},\mathsf{L}_{0}}\mathsf{T}_{0}\). By Proposition 12.14 there are \(\mathsf{T}_{1}\), \(\mathsf{U}_{1,1}\) and \(\mathsf{W}_{1,1}\) in \(\mathsf{P}_{1}\) such that \(\mathsf{T}_{1}\approx\mathsf{T}_{0}\), \(\mathsf{U}_{1,1}\approx s_{A_{0},\mathsf{L}_{0}}^{-1}\mathsf{T}_{0}^{*}\) and \(\mathsf{W}_{1,1}\approx s_{A_{0},\mathsf{L}_{0}}\mathsf{T}_{0}^{*}\). Application of Lemma 16.5 with \(\mathsf{S}:=\mathsf{T}_{0}^{*}\) (note that \(\ell_{B}(\mathsf{T}_{0}^{*})\geq p_{0}-12\geq 27\)) gives \(s_{A_{1},\mathsf{L}_{1}}^{-1}\mathsf{T}_{1}\lessapprox\mathsf{U}_{1,1}\) and \(\mathsf{W}_{1,1}\lessapprox s_{A_{1},\mathsf{L}_{1}}\mathsf{T}_{1}\). In particular, we have \(|\mathsf{T}_{1}|\leq|A_{1}|\). In the case \(m=1\) we take \(T:=\mathit{label}(\mathsf{T}_{1})\). Assume that \(m\geq 2\). We continue a procedure of finding coarsely \(B\)-periodic segments \(\mathsf{T}_{i}\) in \(\mathsf{P}_{i}\). 
Up to positioning \(\mathsf{Q}_{1}\) in \(\mathsf{L}_{1}\) we can assume that \(\mathsf{Q}_{1}\) contains both \(s_{A_{1},\mathsf{L}_{1}}^{-1}\mathsf{T}_{1}\) and \(s_{A_{1},\mathsf{L}_{1}}\mathsf{T}_{1}\). Using Proposition 12.14 we find \(\mathsf{U}_{2,2}\), \(\mathsf{U}_{2,1}\), \(\mathsf{W}_{2,1}\) and \(\mathsf{W}_{2,2}\) in \(\mathsf{P}_{2}\) such that \(\mathsf{U}_{2,2}\approx s_{A_{1},\mathsf{L}_{1}}^{-1}\mathsf{T}_{1}^{*}\), \(\mathsf{U}_{2,1}\approx\mathsf{U}_{1,1}^{*}\), \(\mathsf{W}_{2,1}\approx\mathsf{W}_{1,1}^{*}\) and \(\mathsf{W}_{2,2}\approx s_{A_{1},\mathsf{L}_{1}}\mathsf{T}_{1}^{*}\). By Lemma 13.15, \(\mathsf{U}_{2,2}\lessapprox\mathsf{U}_{2,1}\lessapprox\mathsf{W}_{2,1}\lessapprox\mathsf{W}_{2,2}\). We have \(\mathsf{U}_{2,1}\approx s_{A_{0},\mathsf{L}_{0}}^{-1}\mathsf{T}_{0}^{**}\), \(\mathsf{W}_{2,1}\approx s_{A_{0},\mathsf{L}_{0}}\mathsf{T}_{0}^{**}\) and using Proposition 12.14 once more with \(\mathsf{X}:=s_{A_{0},\mathsf{L}_{0}}^{-1}\mathsf{T}_{0}^{**}\cup s_{A_{0}, \mathsf{L}_{0}}\mathsf{T}_{0}^{**}\) and \(\mathsf{Y}:=\mathsf{U}_{2,1}\cup\mathsf{W}_{2,1}\) we find \(\mathsf{T}_{2}\) in \(\mathsf{P}_{2}\) such that \(\mathsf{T}_{2}\approx\mathsf{T}_{0}\). Application of Lemma 16.5 gives \(s_{A_{2},\mathsf{L}_{2}}^{-1}\mathsf{T}_{2}\lessapprox\mathsf{U}_{2,2}\) and \(\mathsf{W}_{2,2}\lessapprox s_{A_{2},\mathsf{L}_{2}}\mathsf{T}_{2}\). In particular, \(|\mathsf{T}_{2}|\leq|A_{2}|\). Repeating in a similar manner, we find \(\mathsf{U}_{m,m}\), \(\mathsf{U}_{m,m-1}\), \(\mathsf{W}_{m,m-1}\) and \(\mathsf{W}_{m,m}\) in \(\mathsf{P}_{m}\) such that \(\mathsf{U}_{m,m}\approx s_{A_{m-1},\mathsf{L}_{m-1}}^{-1}\mathsf{T}_{m-1}^{*}\), \(\mathsf{U}_{m,m-1}\approx\mathsf{U}_{m-1,m-1}^{*}\), \(\mathsf{W}_{m,m-1}\approx\mathsf{W}_{m-1,m-1}^{*}\), \(\mathsf{W}_{m,m}\approx s_{A_{ m-1},\mathsf{L}_{m-1}}\mathsf{T}_{m-1}^{*}\) and \(\mathsf{U}_{m,m}\lessapprox\mathsf{U}_{m,m-1}\lessapprox\mathsf{W}_{m,m-1}\lessapprox \mathsf{W}_{m,m}\). Then we successively find \(\mathsf{U}_{m,m-2}\), \(\mathsf{W}_{m,m-2}\), \(\mathsf{U}_{m,m-3}\), \(\mathsf{W}_{m,m-3}\), ..., \(\mathsf{U}_{m,1}\), \(\mathsf{W}_{m,1}\) such that \(\mathsf{U}_{m,i}\approx\mathsf{U}_{i,i}^{*}\approx s_{A_{i-1},\mathsf{L}_{i-1}}^{-1} \mathsf{T}_{i-1}^{**}\) and \(\mathsf{W}_{m,i}\approx\mathsf{W}_{i,i}^{*}\approx s_{A_{i-1},\mathsf{L}_{i-1 }}\mathsf{T}_{i-1}^{**}\). Finally, we find \(\mathsf{T}_{m}\) in \(\mathsf{P}_{m}\) such that \(\mathsf{T}_{m}\approx\mathsf{T}_{0}\). Application of Lemma 16.5 gives \(s_{A_{m},\mathsf{L}_{m}}^{-1}\mathsf{T}_{m}\lessapprox\mathsf{U}_{m,m}\) and \(\mathsf{W}_{m,m}\lessapprox s_{A_{m},\mathsf{L}_{m}}\mathsf{T}_{m}\) which implies \(|\mathsf{T}_{m}|\leq|A_{m}|\). We take \(T:=\mathit{label}(\mathsf{T}_{m})\).

_Verification of (ii)._ Let \(\mathsf{S}\) be a coarsely \(A\)-periodic segment in \(\Gamma_{\alpha}\) with \(\ell_{A}(\mathsf{S})\geq 4\), let \(\mathsf{P}\) be an \(A\)-periodic base for \(\mathsf{S}\), let \(\mathsf{T}\) be a coarsely \(B\)-periodic segment in \(\mathsf{P}\) with \(\mathit{label}(\mathsf{T})=T\) and denote \(t=\ell_{A}(\mathsf{S})\). It suffices to find coarsely \(B\)-periodic segments \(\mathsf{V}_{i}\) (\(i=1,\ldots,t-3\)) in \(\mathsf{S}\) such that \(\mathsf{V}_{i}\approx s^{i}_{A,\mathsf{P}}\mathsf{T}\) and \(\mathsf{V}_{i}\) are all disjoint. Since \(\ell_{B}(\mathsf{V}_{i})=\ell_{B}(\mathsf{T}_{m})\geq p_{0}\) this will finish the proof. Fix an index \(k\) in the interval \(1\leq k\leq t-3\). Up to positioning \(\mathsf{P}\) and \(\mathsf{S}\) in \(\Gamma_{\alpha}\) we can assume that \(\mathsf{P}\) and \(\mathsf{P}_{m}\) have the common \(A_{m}\)-periodic extension \(\mathsf{L}_{m}\) and \(s^{k}_{A,\mathsf{P}}\mathsf{T}=\mathsf{T}_{m}\). By Lemma 16.5, \(s^{-1}_{A_{m},\mathsf{L}_{m}}\mathsf{T}\lessapprox\mathsf{U}_{m,m}\) and \(\mathsf{W}_{m,m}\lessapprox s_{A_{m},\mathsf{L}_{m}}\mathsf{T}\). 
Then using Proposition 12.14 as in the procedure above, we successively find pairs \((\mathsf{U}_{i},\mathsf{W}_{i})\) for \(i=m,m-1,\ldots,1\) such that \(\mathsf{Z}_{k-1}\lessapprox\mathsf{U}_{m}\lessapprox\mathsf{U}_{m-1} \lessapprox\cdots\lessapprox\mathsf{U}_{1}\lessapprox\mathsf{Z}_{k} \lessapprox\mathsf{W}_{1}\lessapprox\cdots\lessapprox\mathsf{W}_{m} \lessapprox\mathsf{Z}_{k+1}\) and \(\mathsf{U}_{i}\approx\mathsf{U}_{i,i}^{*}\), \(\mathsf{W}_{i}\approx\mathsf{W}_{i,i}^{*}\) for \(i=m,m-1,\ldots,1\) (here \(\mathsf{Z}_{j}\) denotes the translate \(s^{j}_{A,\mathsf{P}}\mathsf{T}\)). Then using Proposition 12.14 again with \(\mathsf{X}:=s^{-1}_{A_{0},\mathsf{L}_{0}}\mathsf{T}^{**}_{0}\cup s_{A_{0}, \mathsf{L}_{0}}\mathsf{T}^{**}_{0}\), \(\mathsf{Y}:=\mathsf{U}_{1}\cup\mathsf{W}_{1}\) and \(\mathsf{S}:=\tilde{\mathsf{T}}_{0}\) gives \(\mathsf{V}_{k}\) with \(\mathsf{U}_{1}\lessapprox\mathsf{V}_{k}\lessapprox\mathsf{W}_{1}\) and \(\mathsf{V}_{k}\approx\mathsf{T}_{0}\approx s^{k}_{A,\mathsf{P}}\mathsf{T}\). The proof is finished. 

**16.6 Proposition**.: _Let \(A\in\mathcal{E}_{\alpha+1}\) and \(t\geq 1\) be an integer. Let \(P\) be an \(A\)-periodic word with \(|P|=t|A|\). Then_

\[\frac{t}{n+t}<\mu(P)<\frac{t}{n-t}+\omega.\]

_Moreover, for \(t\geq 200\) we have also_

\[0.89\frac{t}{n}<\mu(P)<1.12\frac{t}{n}.\]

Proof.: Denote \(N=|(A^{n})^{\circ}|_{\alpha}\). Recall that \(\mu(P)=|P|_{\alpha}/N\). Up to cyclic shift of \(A\), we assume that \(P=A^{t}\). For the lower bound on \(\mu(P)\) in the first inequality, we observe that the cyclic word \((A^{n})^{\circ}\) can be covered with \(\lceil\frac{n}{t}\rceil\) copies of \(P\). By 4.14, this implies

\[N<\left(\frac{n}{t}+1\right)|P|_{\alpha}\]

which is equivalent to \(\frac{t}{n+t}<\mu(P)\). Similarly, for the upper bound we observe that \(\lfloor\frac{n}{t}\rfloor\) disjoint copies of \(P\) can be placed inside \((A^{n})^{\circ}\). Then again by 4.14,

\[N\geq\left\lfloor\frac{n}{t}\right\rfloor(|P|_{\alpha}-1)>\left(\frac{n}{t}-1 \right)(|P|_{\alpha}-1)\]

which implies by (S1) with \(\alpha:=\alpha+1\)

\[\mu(P)<\frac{t}{n-t}+\frac{1}{N}\leq\frac{t}{n-t}+\omega.\]

If \(t\geq 200\) then we partition \(A^{t}\) into \(k\) subwords \(A^{t_{i}}\) with \(80\leq t_{i}\leq 120\). We have

\[\sum_{i}|A^{t_{i}}|_{\alpha}-(k-1)\leq|P|_{\alpha}\leq\sum_{i}|A^{t_{i}}|_{ \alpha},\]

and by the already proved bounds on \(\mu(A^{t_{i}})\), for each \(i\) we have

\[0.94\frac{t_{i}}{n}<\mu(A^{t_{i}})<1.07\frac{t_{i}}{n}+\frac{1}{N}.\]

Then

\[\mu(P)\geq\sum_{i}\mu(A^{t_{i}})-\frac{k-1}{N}>0.94\frac{t}{n}-\frac{k}{N}.\]

By Proposition 16.4, \(N\geq 0.25n\). Hence

\[\frac{k}{N}\leq\frac{t}{80}\left(\frac{n}{N}\right)\frac{1}{n}\leq 0.05\frac{t }{n}\]

and we obtain the required bound \(\mu(P)>0.89\frac{t}{n}\). Similarly, for the upper bound on \(\mu(P)\) we get

\[\mu(P)\leq\sum_{i}\mu(A^{t_{i}})<1.07\frac{t}{n}+\frac{k}{N}\leq 1.12\frac{t}{n}.\]

**16.7 Corollary**.: _(P2) implies (S2)\({}_{\alpha+1}\)._

Proof.: By Proposition 16.6, if \(P\) is a subword of \(A^{n}\) with \(A\in\mathcal{E}_{\alpha+1}\) and \(\mu(P)\geq\lambda\) then \(|P|\geq t|A|\) where \(t\) satisfies

\[\frac{t}{n-t}\geq\lambda-\omega\geq\frac{1}{24}-\frac{1}{480}\]

and hence \(t>76\). Since \(76>p_{1}\), the required implication is straightforward. 
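The constants appearing in the case \(t\geq 200\) of Proposition 16.6 can be traced explicitly. For \(80\leq t_{i}\leq 120\) and \(n>2000\),

\[\frac{t_{i}}{n+t_{i}}\geq\frac{t_{i}}{n}\cdot\frac{n}{n+120}>0.94\,\frac{t_{i}}{n},\qquad\frac{t_{i}}{n-t_{i}}\leq\frac{t_{i}}{n}\cdot\frac{n}{n-120}<1.07\,\frac{t_{i}}{n},\]

while \(N\geq 0.25n\) gives \(k/N\leq(t/80)\cdot(4/n)=0.05\,t/n\), whence \(0.94-0.05=0.89\) and \(1.07+0.05=1.12\).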
**16.8 Proposition**.: _Presentation (15-1) satisfies (P2) and therefore satisfies the iterated small cancellation condition (S0)-(S3) for all \(\alpha\geq 1\)._

Proof.: Indeed, assume that \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\) are periodic lines in \(\Gamma_{\alpha}\) with periods \(A,B\in\mathcal{E}_{\alpha+1}\) respectively. Let \(\mathsf{P}\) and \(\mathsf{Q}\) be close subpaths of \(\mathsf{L}_{1}\) and \(\mathsf{L}_{2}\), respectively, such that \(|\mathsf{P}|\geq p_{1}|A|\). If \(A\) is conjugate to \(B\) in \(G_{\alpha}\) then \(A=B\) according to Definition 15.3 and the statement follows from Proposition 13.13. If \(A\) is not conjugate to \(B\) in \(G_{\alpha}\) then \(B\) is suspended of level \(0\) as a simple period over \(G_{\alpha}\) and hence cannot belong to \(\mathcal{E}_{\alpha+1}\). 

From this point, we may assume that all statements in Sections 5-16 are true for all values of rank \(\alpha\).

**16.9 Proposition**.: _Every element of \(G\) is conjugate to a power of some \(C\in\bigcup_{\alpha\geq 1}\mathcal{E}_{\alpha}\)._

Proof.: Let \(g\in G\). If \(g\) has finite order then by Proposition 11.5, \(g\) is conjugate to a power of some \(C\in\bigcup_{\alpha\geq 1}\mathcal{E}_{\alpha}\). We assume that \(g\) has infinite order and come to a contradiction. By Corollary 14.8 we represent \(g\) by a word \(X\) reduced in \(G\) such that for some \(\alpha\geq 1\), \(X\) contains no fragments \(F\) of rank \(\beta\geq\alpha\) with \(\mu_{\mathrm{f}}(F)\geq 3\omega\). By our assumption, \(X\) has infinite order in all \(G_{\beta}\) for \(\beta\geq\alpha\). By Propositions 11.13 and 11.5, \(X\) is conjugate in \(G_{\alpha}\) to a word of the form \(A^{t}\) where \(A\) is a simple period over \(G_{\alpha}\). Using Proposition 7.13(iii) we conclude that \(X\) is conjugate to \(A^{t}\) already in \(G_{\alpha-1}\). Then applying Proposition 8.9 with \(\beta:=\alpha,\alpha+1,\dots\) we see that no cyclic shift of \(A\) contains a fragment \(K\) of rank \(\beta\geq\alpha\) with \(\mu_{\mathrm{f}}(K)\geq 9\omega\) and that \(A\) is cyclically reduced in \(G_{\beta}\) for all \(\beta>\alpha\). Moreover, by Propositions 8.16(iii) and 8.11, \(A\) is strongly cyclically reduced in \(G_{\beta}\) for all \(\beta>\alpha\). Assume that for some \(\beta\geq\alpha\), \(A\) is conjugate in \(G_{\beta}\) to a power \(B^{r}\) of a simple period over \(G_{\beta}\). By Proposition 9.16, \(A\) and \(B^{r}\) are conjugate already in \(G_{\alpha}\). Since \(A\) is a non-power in \(G_{\alpha}\), we have \(r=1\) and then by Propositions 11.13 and 11.5, \(A\) is a non-power in \(G_{\beta}\). We showed that \(A\) is a simple period over \(G_{\beta}\) for any \(\beta\geq\alpha\). But this is impossible because by Proposition 16.4 we should have \([A]_{\beta}\geq 0.25\), hence \(|A|_{\beta}\geq 0.25\) and \(|A|\geq 0.25\zeta^{-\beta}\) for any \(\beta\geq\alpha\). 

As an immediate consequence we get:

**16.10 Corollary**.: \(G\) _satisfies the identity \(x^{n}=1\) and therefore is isomorphic to the free Burnside group \(B(m,n)\)._
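The deduction of Corollary 16.10, which the text leaves implicit, is the standard two-way comparison. Writing \(F\) for the free group on \(\mathcal{A}\) and \(N\trianglelefteq F\) for the normal closure of the relators of (15-1), every relator is an \(n\)-th power, so

\[N=\langle\!\langle C^{n}:C\in\bigcup_{\alpha\geq 1}\mathcal{E}_{\alpha}\rangle\!\rangle\subseteq F^{n},\]

and the identity on \(\mathcal{A}\) induces a surjection \(G=F/N\twoheadrightarrow F/F^{n}=B(m,n)\). Conversely, by Proposition 16.9 every \(g\in G\) is conjugate to a power of some \(C\) with \(C^{n}=1\) in \(G\), so \(g^{n}=1\); hence \(F^{n}\subseteq N\) and the identity on \(\mathcal{A}\) also induces a surjection \(B(m,n)\twoheadrightarrow G\). The two maps are mutually inverse, so \(G\simeq B(m,n)\).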
2307.08872
A Refined scissors congruence group and the third homology of $\textrm{SL}_2$
There is a natural connection between the third homology of $\textrm{SL}_2(A)$ and the refined Bloch group $\mathcal{RB}(A)$ of a commutative ring $A$. In this article we investigate this connection and as the main result we show that if $A$ is a universal $\textrm{GE}_2$-domain such that $-1 \in A^{\times 2}$, then we have the exact sequence $H_3(\textrm{SM}_2(A),\mathbb{Z}) \to H_3(\textrm{SL}_2(A),\mathbb{Z}) \to \mathcal{RB}(A) \to 0$, where $\textrm{SM}_2(A)$ is the group of monomial matrices in $\textrm{SL}_2(A)$. Moreover we show that $\mathcal{RP}_1(A)$, the refined scissors congruence group of $A$, is naturally isomorphic to the relative homology group $H_3(\textrm{SL}_2(A), \textrm{SM}_2(A),\mathbb{Z})$.
Behrooz Mirzaii, Elvis Torres Pérez
2023-07-17T21:55:47Z
http://arxiv.org/abs/2307.08872v2
# A refined scissors congruence group and the third homology of \(\mathrm{SL}_{2}\) ###### Abstract. There is a natural connection between the third homology of \(\mathrm{SL}_{2}(A)\) and the refined Bloch group \(\mathcal{RB}(A)\) of a commutative ring \(A\). In this article we investigate this connection and as the main result we show that if \(A\) is a universal \(\mathrm{GE}_{2}\)-domain such that \(-1\in{A^{\times}}^{2}\), then we have the exact sequence \[H_{3}(\mathrm{SM}_{2}(A),\mathbb{Z})\to H_{3}(\mathrm{SL}_{2}(A),\mathbb{Z})\rightarrow\mathcal{RB}(A)\to 0,\] where \(\mathrm{SM}_{2}(A)\) is the group of monomial matrices in \(\mathrm{SL}_{2}(A)\). Moreover we show that \(\mathcal{RP}_{1}(A)\), the refined scissors congruence group of \(A\), is naturally isomorphic to the relative homology group \(H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\). For a commutative ring \(A\), the study of the third homology of the group \(\mathrm{SL}_{2}(A)\) is important because of its close connection to the third \(K\)-group of \(A\) [18], [12], its appearance in the scissors congruence problem in 3-dimensional hyperbolic geometry [4], [17], etc. An important method to study this group is by means of its connection to the refined scissors congruence group \(\mathcal{RP}_{1}(A)\) of \(A\), introduced and studied by Hutchinson [6], [7], [9], [12], [3]. Let \(\mathcal{RB}(A)\subseteq\mathcal{RP}_{1}(A)\) be the refined Bloch group of \(A\). Usually there is a natural map from the third homology of \(\mathrm{SL}_{2}(A)\) to \(\mathcal{RB}(A)\). In the current paper we study this map assuming minimal conditions on \(A\). Let \(T(A)\) and \(B(A)\) be the groups of diagonal and upper triangular matrices in \(\mathrm{SL}_{2}(A)\), respectively. Assume that (i) \(A\) is a universal \(\mathrm{GE}_{2}\)-ring, (ii) \(\mu_{2}(A)=\{\pm 1\}\) and \(-1\in{A^{\times}}^{2}\), (iii) \(H_{n}(T(A),\mathbb{Z})\simeq H_{n}(B(A),\mathbb{Z})\) for \(n=2,3\). Then as the first main result of this article we show that the sequence \[H_{3}(\mathrm{SM}_{2}(A),\mathbb{Z})\to H_{3}(\mathrm{SL}_{2}(A),\mathbb{Z})\rightarrow\mathcal{RB}(A)\to 0 \tag{0.1}\] is exact, where \(\mathrm{SM}_{2}(A)\) is the group of monomial matrices in \(\mathrm{SL}_{2}(A)\) (Theorem 6.6). Moreover, if \(A\) satisfies conditions (i) and (iii), then as the second main result we show that there is an exact sequence \[I(A)\otimes\mu_{2}(A)\to H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\rightarrow\frac{\mathcal{RP}_{1}(A)}{\langle\psi_{1}(a^{2}):a\in A^{\times}\rangle}\to 0, \tag{0.2}\] where \(I(A)\) is the fundamental ideal of \(A\) (see Theorem 8.2). As a particular case, we show that if \(-1\in{A^{\times}}^{2}\), then we have the isomorphism \[H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\simeq\mathcal{RP}_{1}(A).\] The homology groups of \(\operatorname{SL}_{2}(A)\) relative to its subgroups \(T(A)\) and \(\operatorname{SM}_{2}(A)\) seem to be important. In this article we show that for any ring \(A\) satisfying conditions (i) and (iii), we have the isomorphisms \[H_{2}(\operatorname{SL}_{2}(A),\operatorname{SM}_{2}(A),\mathbb{Z})\simeq W(A),\hskip 28.452756ptH_{2}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\simeq K_{1}^{\operatorname{MW}}(A),\] where \(W(A)\) is the Witt ring of \(A\) and \(K_{1}^{\operatorname{MW}}(A)\) is the first Milnor-Witt \(K\)-group of \(A\). 
Moreover we show that \[H_{3}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z}\big{[}\tfrac{1}{2}\big{]})\simeq\mathcal{RP}_{1}(A)\big{[}\tfrac{1}{2}\big{]}\] (for the last two isomorphisms we need to assume that \(\operatorname{SL}_{2}(A)\) is perfect). It seems that \(K_{1}^{\operatorname{MW}}(A)\big{[}\tfrac{1}{2}\big{]}\) and \(\mathcal{RP}_{1}(A)\big{[}\tfrac{1}{2}\big{]}\) should be part of a chain of groups [19, App. A] with certain properties similar to \(K\)-groups. These two groups appear in the unstable analogues of the fundamental theorem of \(K\)-theory for the second and third homology of \(\operatorname{SL}_{2}\) over an infinite field [8], which can be used to calculate the low-dimensional homology of \(\operatorname{SL}_{2}\) of Laurent polynomials over certain fields. Moreover they have a certain interesting localization property [5, Theorem 6.3], [12, Theorem A]. We briefly outline the organization of the present paper. In Section 1 we study the \(\operatorname{GE}_{2}\)-rings and rings universal for \(\operatorname{GE}_{2}\) and introduce a spectral sequence which will be our main tool in handling the homology of \(\operatorname{SL}_{2}(A)\). In Section 2 we introduce Hutchinson's refined scissors congruence group and some of its elementary properties. In Section 3 we compare the homologies of the groups \(T(A)\) and \(B(A)\) on a certain class of rings. In Section 4 we introduce the refined Bloch group of \(A\) and study its appearance in the spectral sequence discussed in Section 1. In Section 5, we study the low dimensional homologies of the group of the monomial matrices in \(\operatorname{SL}_{2}(A)\). Section 6 is devoted to the proof of the exact sequence (0.1). In Section 7 we develop a spectral sequence suitable for the study of the relative homology of groups, which will be used in Section 8. This spectral sequence might be known to experts but we did not find any suitable reference to the general form discussed here. In Section 8 we prove the exact sequence (0.2) and study some other relative homology groups. **Notations.** In this article all rings are commutative, except possibly group rings, and have the unit element \(1\). For a ring \(A\), let \(\mathcal{G}_{A}\) be the group \(A^{\times}/(A^{\times})^{2}\) and \(\mathcal{W}_{A}\) be the set of \(a\in A^{\times}\) such that \(1-a\in A^{\times}\). Thus \[\mathcal{G}_{A}:=A^{\times}/(A^{\times})^{2},\hskip 28.452756pt\mathcal{W}_{A}:=\{a\in A:a(1-a)\in A^{\times}\}.\] ## 1. The \(\operatorname{GE}_{2}\)-rings and the complex of unimodular vectors Let \(A\) be a commutative ring. Let \(\operatorname{E}_{2}(A)\) be the subgroup of \(\operatorname{GL}_{2}(A)\) generated by the elementary matrices \(E_{12}(a):=\begin{pmatrix}1&a\\ 0&1\end{pmatrix}\) and \(E_{21}(a):=\begin{pmatrix}1&0\\ a&1\end{pmatrix}\), \(a\in A\). The group \(\operatorname{E}_{2}(A)\) is generated by the matrices \[E(a):=\begin{pmatrix}a&1\\ -1&0\end{pmatrix},\hskip 14.226378pta\in A.\] In fact we have the following formulas \[E_{12}(a)=E(-a)E(0)^{-1},\ \ \ \ E_{21}(a)=E(0)^{-1}E(a),\ \ \ \ E(0)=E_{12}(1)E_{21}(-1)E_{12}(1).\] Let \(D_{2}(A)\) be the subgroup of \(\operatorname{GL}_{2}(A)\) generated by diagonal matrices. Let \(\operatorname{GE}_{2}(A)\) be the subgroup of \(\operatorname{GL}_{2}(A)\) generated by \(D_{2}(A)\) and \(\operatorname{E}_{2}(A)\). 
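These generator formulas are elementary \(2\times 2\) matrix identities; the following short SymPy script (our own sanity check, not part of the paper's arguments; the helper names are ours) verifies them symbolically.

```python
from sympy import symbols, Matrix, simplify

a = symbols('a')
E = lambda x: Matrix([[x, 1], [-1, 0]])      # the matrices E(a)
E12 = lambda x: Matrix([[1, x], [0, 1]])
E21 = lambda x: Matrix([[1, 0], [x, 1]])

assert simplify(E12(a) - E(-a) * E(0).inv()) == Matrix.zeros(2)   # E_12(a) = E(-a)E(0)^{-1}
assert simplify(E21(a) - E(0).inv() * E(a)) == Matrix.zeros(2)    # E_21(a) = E(0)^{-1}E(a)
assert E(0) == E12(1) * E21(-1) * E12(1)                          # E(0) = E_12(1)E_21(-1)E_12(1)
```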
A ring \(A\) is called a _\(\operatorname{GE}_{2}\)-ring_ if \[\operatorname{GE}_{2}(A)=\operatorname{GL}_{2}(A).\] Since \(\operatorname{E}_{2}(A)=\operatorname{SL}_{2}(A)\cap\operatorname{GE}_{2}(A)\) and \(\operatorname{GL}_{2}(A)=\operatorname{SL}_{2}(A)D_{2}(A)\), this condition is equivalent to \(\operatorname{E}_{2}(A)=\operatorname{SL}_{2}(A)\). For any \(a\in A^{\times}\), let \(D(a):=\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}\). Observe that \(D(-a)=E(a)E(a^{-1})E(a)\). Thus \(D(a)\in\operatorname{E}_{2}(A)\). For any \(x,y\in A\) and \(a\in A^{\times}\), we have the following relations between the matrices \(E(x)\) and \(D(a)\): 1. \(E(x)E(0)E(y)=D(-1)E(x+y)\), 2. \(E(x)D(a)=D(a^{-1})E(a^{2}x)\), 3. \(D(a)D(b)=D(ab)\). A ring \(A\) is called _universal for \(\operatorname{GE}_{2}\)_ if the relations (1), (2) and (3) form a complete set of defining relations for \(\operatorname{E}_{2}(A)\). A \(\operatorname{GE}_{2}\)-ring which is universal for \(\operatorname{GE}_{2}\) is called a _universal \(\operatorname{GE}_{2}\)-ring_. Thus a universal \(\operatorname{GE}_{2}\)-ring is characterized by the property that \(\operatorname{SL}_{2}(A)\) is generated by the matrices \(E(x)\) and \(D(a)\), with (1)-(3) as a complete set of defining relations. Any local ring is a universal \(\operatorname{GE}_{2}\)-ring [2, Theorem 4.1]. Moreover Euclidean domains are \(\operatorname{GE}_{2}\)-rings [2, §2]. For more examples of \(\operatorname{GE}_{2}\)-rings and rings universal for \(\operatorname{GE}_{2}\) see [2] and [10]. A (column) vector \(\boldsymbol{u}=\begin{pmatrix}u_{1}\\ u_{2}\end{pmatrix}\in A^{2}\) is said to be unimodular if there exists a vector \(\boldsymbol{v}=\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}\) such that the matrix \((\boldsymbol{u},\boldsymbol{v}):=\begin{pmatrix}u_{1}&v_{1}\\ u_{2}&v_{2}\end{pmatrix}\) is an invertible matrix. For any non-negative integer \(n\), let \(X_{n}(A^{2})\) be the free abelian group generated by the set of all \((n+1)\)-tuples \((\langle\boldsymbol{v}_{0}\rangle,\ldots,\langle\boldsymbol{v}_{n}\rangle)\), where every \(\boldsymbol{v}_{i}\in A^{2}\) is unimodular and for any two distinct vectors \(\boldsymbol{v}_{i},\boldsymbol{v}_{j}\), the matrix \((\boldsymbol{v}_{i},\boldsymbol{v}_{j})\) is invertible. Observe that \(\langle\boldsymbol{v}\rangle\subseteq A^{2}\) is the line \(\{\boldsymbol{v}a:a\in A\}\). We consider \(X_{l}(A^{2})\) as a left \(\operatorname{GL}_{2}(A)\)-module (resp. left \(\operatorname{SL}_{2}(A)\)-module) in a natural way. If necessary, we convert this action to a right action by the definition \(m.g:=g^{-1}m\). Let us define the \(l\)-th differential operator \[\partial_{l}:X_{l}(A^{2})\to X_{l-1}(A^{2}),\ \ l\geq 1,\] as an alternating sum of face operators which throws away the \(i\)-th component of generators. Let \(\partial_{-1}=\epsilon:X_{0}(A^{2})\to\mathbb{Z}\) be defined by \(\sum_{i}n_{i}(\langle v_{0,i}\rangle)\mapsto\sum_{i}n_{i}\). Hence we have the complex \[X_{\bullet}(A^{2})\to\mathbb{Z}:\ \cdots\longrightarrow X_{2}(A^{2})\stackrel{{\partial_{2}}}{{\longrightarrow}}X_{1}(A^{2})\stackrel{{\partial_{1}}}{{\longrightarrow}}X_{0}(A^{2})\to\mathbb{Z}\to 0.\] We say that the above complex is exact in dimension \(<k\) if the complex \[X_{k}(A^{2})\stackrel{{\partial_{k}}}{{\longrightarrow}}X_{k-1}(A^{2})\stackrel{{\partial_{k-1}}}{{\longrightarrow}}\cdots\stackrel{{\partial_{2}}}{{\longrightarrow}}X_{1}(A^{2})\stackrel{{\partial_{1}}}{{\longrightarrow}}X_{0}(A^{2})\to\mathbb{Z}\to 0\] is exact. 
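The identity \(D(-a)=E(a)E(a^{-1})E(a)\) and the defining relations (1)-(3) above are again direct matrix computations; here is a SymPy sketch of ours that confirms them (the names \(E\), \(D\) mirror the notation of this section).

```python
from sympy import symbols, Matrix, simplify, S

x, y = symbols('x y')
a, b = symbols('a b', nonzero=True)
E = lambda z: Matrix([[z, 1], [-1, 0]])
D = lambda z: Matrix([[z, 0], [0, S(1)/z]])

assert simplify(E(a) * E(1/a) * E(a) - D(-a)) == Matrix.zeros(2)           # D(-a) = E(a)E(a^{-1})E(a)
assert simplify(E(x) * E(0) * E(y) - D(-1) * E(x + y)) == Matrix.zeros(2)  # relation (1)
assert simplify(E(x) * D(a) - D(1/a) * E(a**2 * x)) == Matrix.zeros(2)     # relation (2)
assert simplify(D(a) * D(b) - D(a * b)) == Matrix.zeros(2)                 # relation (3)
```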
**Proposition 1.1** (Hutchinson).: _Let \(A\) be a commutative ring._ (i) _The complex \(X_{\bullet}(A^{2})\to\mathbb{Z}\) is exact in dimension \(<1\) if and only if \(A\) is a \(\operatorname{GE}_{2}\)-ring._ (ii) _If \(A\) is universal for \(\operatorname{GE}_{2}\), then \(X_{\bullet}(A^{2})\) is exact in dimension \(1\), i.e. \(H_{1}(X_{\bullet}(A^{2}))=0\)._ Proof.: See [10, Theorem 3.3, Theorem 7.2 and Corollary 7.3]. **Remark 1.2**.: In [10, Theorem 3.3, Theorem 7.2] Hutchinson calculated \(H_{0}\) and \(H_{1}\) of the complex \(X_{\bullet}(A^{2})\) for any commutative ring \(A\). Let the complex \(X_{\bullet}(A^{2})\to\mathbb{Z}\) be exact in dimension \(<1\) (i.e. \(A\) is a \(\operatorname{GE}_{2}\)-ring by Proposition 1.1), and let \(Z_{1}(A^{2}):=\ker(\partial_{1})\). From the complex \[0\to Z_{1}(A^{2})\stackrel{{\operatorname{inc}}}{{\to}}X_{1}(A^{2})\stackrel{{\partial_{1}}}{{\to}}X_{0}(A^{2})\to 0, \tag{1.1}\] we obtain the double complex \[D_{\bullet,\bullet}:0\to F_{\bullet}\otimes_{\operatorname{SL}_{2}(A)}Z_{1}(A^{2})\stackrel{{\operatorname{id}_{F_{\bullet}}\otimes\operatorname{inc}}}{{\longrightarrow}}F_{\bullet}\otimes_{\operatorname{SL}_{2}(A)}X_{1}(A^{2})\stackrel{{\operatorname{id}_{F_{\bullet}}\otimes\partial_{1}}}{{\longrightarrow}}F_{\bullet}\otimes_{\operatorname{SL}_{2}(A)}X_{0}(A^{2})\to 0,\] where \(F_{\bullet}\to\mathbb{Z}\) is a projective resolution of \(\mathbb{Z}\) over \(\operatorname{SL}_{2}(A)\). This gives us the first quadrant spectral sequence \[E_{p,q}^{1}=\left\{\begin{array}{ll}H_{q}(\operatorname{SL}_{2}(A),X_{p}(A^{2}))&p=0,1\\ H_{q}(\operatorname{SL}_{2}(A),Z_{1}(A^{2}))&p=2\\ 0&p>2\end{array}\right.\implies H_{p+q}(\operatorname{SL}_{2}(A),\mathbb{Z}).\] In our calculations we usually use the bar resolution \(B_{\bullet}(\operatorname{SL}_{2}(A))\to\mathbb{Z}\) [1, Chap. I, §5]. The group \(\operatorname{SL}_{2}(A)\) acts transitively on the sets of generators of \(X_{i}(A^{2})\) for \(i=0,1\). Let \[\boldsymbol{\infty}:=\langle\boldsymbol{e}_{1}\rangle,\quad\boldsymbol{0}:=\langle\boldsymbol{e}_{2}\rangle,\quad\boldsymbol{a}:=\langle\boldsymbol{e}_{1}+a\boldsymbol{e}_{2}\rangle,\quad a\in A^{\times},\] where \(\boldsymbol{e}_{1}:=\begin{pmatrix}1\\ 0\end{pmatrix}\) and \(\boldsymbol{e}_{2}:=\begin{pmatrix}0\\ 1\end{pmatrix}\). We choose \((\boldsymbol{\infty})\) and \((\boldsymbol{\infty},\boldsymbol{0})\) as representatives of the orbits of the generators of \(X_{0}(A^{2})\) and \(X_{1}(A^{2})\), respectively. Therefore \[X_{0}(A^{2})\simeq\operatorname{Ind}_{B(A)}^{\operatorname{SL}_{2}(A)}\mathbb{Z},\qquad\qquad X_{1}(A^{2})\simeq\operatorname{Ind}_{T(A)}^{\operatorname{SL}_{2}(A)}\mathbb{Z},\] where \[B(A):=\operatorname{Stab}_{\operatorname{SL}_{2}(A)}(\boldsymbol{\infty})=\Big{\{}\begin{pmatrix}a&b\\ 0&a^{-1}\end{pmatrix}:a\in A^{\times},b\in A\Big{\}},\] \[T(A):=\operatorname{Stab}_{\operatorname{SL}_{2}(A)}(\boldsymbol{\infty},\boldsymbol{0})=\Big{\{}\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}:a\in A^{\times}\Big{\}}.\] Note that \(T(A)\simeq A^{\times}\). In our calculations we usually identify \(T(A)\) with \(A^{\times}\). Thus by Shapiro's lemma we have \[E_{0,q}^{1}\simeq H_{q}(B(A),\mathbb{Z}),\qquad E_{1,q}^{1}\simeq H_{q}(T(A),\mathbb{Z}).\] In particular, \(E^{1}_{0,0}\simeq\mathbb{Z}\simeq E^{1}_{1,0}\). Moreover \(d^{1}_{1,q}=H_{q}(\sigma)-H_{q}(\mathrm{inc})\), where \(\sigma:T(A)\to B(A)\) is given by \(\sigma(X)=wXw^{-1}=X^{-1}\) for \(w:=E(0)=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\). 
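The conjugation formula \(\sigma(X)=wXw^{-1}=X^{-1}\) on \(T(A)\) just stated is easy to confirm symbolically (a sketch of ours, not needed for the arguments):

```python
from sympy import symbols, Matrix, simplify, S

a = symbols('a', nonzero=True)
w = Matrix([[0, 1], [-1, 0]])            # w = E(0)
X = Matrix([[a, 0], [0, S(1)/a]])        # a generic element of T(A)

assert simplify(w * X * w.inv() - X.inv()) == Matrix.zeros(2)
```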
This easily implies that \(d^{1}_{1,0}\) is trivial, \(d^{1}_{1,1}\) is induced by the map \(T(A)\to B(A)\), \(X\mapsto X^{-2}\), and \(d^{1}_{1,2}\) is trivial. Thus \(\ker(d^{1}_{1,1})=\mu_{2}(A)=\{a\in A^{\times}:a^{2}=1\}\). It is straightforward to check that \(d^{1}_{2,0}:H_{0}(\mathrm{SL}_{2}(A),Z_{1}(A^{2}))\to\mathbb{Z}\) is surjective and for any \(b\in\mu_{2}(A)\), \(d^{1}_{2,1}([b]\otimes\partial_{2}(\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a}))=b\). Hence \(E^{2}_{1,0}=0\) and \(E^{2}_{1,1}=0\). ## 2. The refined scissors congruence group Let \(Z_{2}(A^{2}):=\ker(\partial_{2})\). Following Coronado and Hutchinson [3, § 3] we define \[\mathcal{RP}(A):=H_{0}(\mathrm{SL}_{2}(A),Z_{2}(A^{2}))=Z_{2}(A^{2})_{\mathrm{SL}_{2}(A)}.\] Note that \(\mathcal{RP}(A)\) is a \(\mathcal{G}_{A}\)-module. The inclusion \(\mathrm{inc}:Z_{2}(A^{2})\to X_{2}(A^{2})\) induces the map \[\lambda:\mathcal{RP}(A)=Z_{2}(A^{2})_{\mathrm{SL}_{2}(A)}\stackrel{{\mathrm{inc}}}{{\longrightarrow}}X_{2}(A^{2})_{\mathrm{SL}_{2}(A)}.\] The orbits of the action of \(\mathrm{SL}_{2}(A)\) on \(X_{2}(A^{2})\) are represented by \(\langle a\rangle[\;]:=(\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a})\), \(\langle a\rangle\in\mathcal{G}_{A}\). Therefore \(X_{2}(A^{2})_{\mathrm{SL}_{2}(A)}\simeq\mathbb{Z}[\mathcal{G}_{A}]\). The \(\mathcal{G}_{A}\)-module \[\mathcal{RP}_{1}(A):=\ker\big{(}\lambda:\mathcal{RP}(A)\to\mathbb{Z}[\mathcal{G}_{A}]\big{)}\] is called the _refined scissors congruence group_ of \(A\). We call \[\mathrm{GW}(A):=H_{0}(\mathrm{SL}_{2}(A),Z_{1}(A^{2}))\] the _Grothendieck-Witt group_ of \(A\). Let \(\epsilon:=d^{1}_{2,0}:\mathrm{GW}(A)\to\mathbb{Z}\). The kernel of \(\epsilon\) is called the _fundamental ideal_ of \(A\) and is denoted by \(I(A)\). Consider the sequence \[X_{4}(A^{2})_{\mathrm{SL}_{2}(A)}\stackrel{{\overline{\partial_{4}}}}{{\to}}X_{3}(A^{2})_{\mathrm{SL}_{2}(A)}\stackrel{{\overline{\partial_{3}}}}{{\to}}\mathcal{RP}(A)\to 0\] of \(\mathcal{G}_{A}\)-modules. The orbits of the action of \(\mathrm{SL}_{2}(A)\) on \(X_{3}(A^{2})\) and \(X_{4}(A^{2})\) are represented by \[\langle a\rangle[x]:=(\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a},\boldsymbol{a}\boldsymbol{x}),\;\;\text{ and }\;\langle a\rangle[x,y]:=(\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a},\boldsymbol{a}\boldsymbol{x},\boldsymbol{a}\boldsymbol{y}),\;\;\langle a\rangle\in\mathcal{G}_{A},x,y,x/y\in\mathcal{W}_{A},\] respectively. Thus \(X_{3}(A^{2})_{\mathrm{SL}_{2}(A)}\) is the free \(\mathbb{Z}[\mathcal{G}_{A}]\)-module generated by the symbols \([x]\), \(x\in\mathcal{W}_{A}\), and \(X_{4}(A^{2})_{\mathrm{SL}_{2}(A)}\) is the free \(\mathbb{Z}[\mathcal{G}_{A}]\)-module generated by the symbols \([x,y]\), \(x,y,x/y\in\mathcal{W}_{A}\). It is straightforward to check that \[\overline{\partial_{4}}([x,y])=[x]-[y]+\langle x\rangle\Big{[}\frac{y}{x}\Big{]}-\langle x^{-1}-1\rangle\bigg{[}\frac{1-x^{-1}}{1-y^{-1}}\bigg{]}+\langle 1-x\rangle\bigg{[}\frac{1-x}{1-y}\bigg{]}.\] Let \(\overline{\mathcal{RP}}(A)\) be the quotient of the free \(\mathcal{G}_{A}\)-module generated by the symbols \([x]\), \(x\in\mathcal{W}_{A}\), by the subgroup generated by the elements \[[x]-[y]+\langle x\rangle\Big{[}\frac{y}{x}\Big{]}-\langle x^{-1}-1\rangle\bigg{[}\frac{1-x^{-1}}{1-y^{-1}}\bigg{]}+\langle 1-x\rangle\bigg{[}\frac{1-x}{1-y}\bigg{]},\qquad x,y,x/y\in\mathcal{W}_{A}.\] Thus we have the natural map \(\overline{\mathcal{RP}}(A)\to\mathcal{RP}(A)\). 
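As a sanity check, not needed for anything that follows, one can verify numerically that the \(\mathcal{G}_{A}\)-linear map \([z]\mapsto-\langle\!\langle z\rangle\!\rangle\langle\!\langle 1-z\rangle\!\rangle\) (the composite computed in the next paragraph, with \(\langle\!\langle a\rangle\!\rangle:=\langle a\rangle-1\)) kills the five-term element \(\overline{\partial_{4}}([x,y])\); this is exactly what makes that composite well defined on \(\overline{\mathcal{RP}}(A)\). The sketch below, entirely ours, works over \(A=\mathbb{Q}\), where square classes in \(\mathcal{G}_{\mathbb{Q}}\) are represented by squarefree integers.

```python
from fractions import Fraction
from sympy import factorint

def sq_class(q):
    """Squarefree-integer representative of the square class <q> in Q^x/(Q^x)^2."""
    q = Fraction(q)
    n = q.numerator * q.denominator          # q and n differ by a square
    s = -1 if n < 0 else 1
    for p, e in factorint(abs(n)).items():
        if e % 2:
            s *= p
    return s

def add(u, v, s=1):                          # u + s*v in the group ring Z[G_Q]
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + s * c
    return {k: c for k, c in w.items() if c}

def mul(u, v):                               # product in Z[G_Q]
    w = {}
    for k1, c1 in u.items():
        for k2, c2 in v.items():
            k = sq_class(Fraction(k1) * k2)
            w[k] = w.get(k, 0) + c1 * c2
    return {k: c for k, c in w.items() if c}

cls = lambda q: {sq_class(q): 1}             # the basis element <q>
bracket = lambda q: add(cls(q), {1: 1}, -1)  # <<q>> = <q> - 1

def lam(c, z):                               # <c>[z] |-> -<c><<z>><<1-z>>
    out = mul(cls(c), mul(bracket(z), bracket(1 - Fraction(z))))
    return {k: -v for k, v in out.items()}

def five_term_image(x, y):                   # image of the five-term element
    x, y = Fraction(x), Fraction(y)
    t = add(lam(1, x), lam(1, y), -1)
    t = add(t, lam(x, y / x))
    t = add(t, lam(1 / x - 1, (1 - 1 / x) / (1 - 1 / y)), -1)
    return add(t, lam(1 - x, (1 - x) / (1 - y)))

assert five_term_image(-1, 2) == {} == five_term_image(Fraction(2, 3), 5)
```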
It is straightforward to check that the composite \[\overline{\mathcal{RP}}(A)\to\mathcal{RP}(A)\stackrel{{\lambda}}{{\longrightarrow}}\mathbb{Z}[\mathcal{G}_{A}]\] is given by \[[x]\mapsto-\langle\!\langle x\rangle\!\rangle\langle\!\langle 1-x\rangle\!\rangle.\] Let \(\overline{\mathcal{RP}}_{1}(A)\) be the kernel of this composite. Thus we have a natural map \[\overline{\mathcal{RP}}_{1}(A)\to\mathcal{RP}_{1}(A).\] The sequence \[X_{3}(A^{2})_{\mathrm{SL}_{2}(A)}\stackrel{{\overline{\partial_{3}}}}{{\rightarrow}}X_{2}(A^{2})_{\mathrm{SL}_{2}(A)}\stackrel{{\overline{\partial_{2}}}}{{\rightarrow}}\mathrm{GW}(A)\to 0\] induces the natural map \[\overline{\mathrm{GW}}(A):=\mathbb{Z}[\mathcal{G}_{A}]/\big{\langle}\langle\!\langle a\rangle\!\rangle\langle\!\langle 1-a\rangle\!\rangle:a\in\mathcal{W}_{A}\big{\rangle}\to\mathrm{GW}(A).\] Let \(\mathcal{I}_{A}\) be the kernel of the augmentation map \(\mathbb{Z}[\mathcal{G}_{A}]\to\mathbb{Z}\) and set \[\overline{I}(A):=\mathcal{I}_{A}/\big{\langle}\langle\!\langle a\rangle\!\rangle\langle\!\langle 1-a\rangle\!\rangle:a\in\mathcal{W}_{A}\big{\rangle}.\] Thus we have a natural map \(\overline{I}(A)\to I(A)\). If the complex \(X_{\bullet}(A^{2})\to\mathbb{Z}\) is exact in dimension \(<2\), then \(\overline{I}(A)\to I(A)\) is surjective. If the complex is exact in dimension \(<3\), then the maps \[\overline{\mathcal{RP}}(A)\to\mathcal{RP}(A)\text{ and }\overline{\mathcal{RP}}_{1}(A)\to\mathcal{RP}_{1}(A)\] are surjective and \(\overline{I}(A)\simeq I(A)\). Moreover, if the complex is exact in dimension \(<4\), then \(\overline{\mathcal{RP}}(A)\simeq\mathcal{RP}(A)\) and \(\overline{\mathcal{RP}}_{1}(A)\simeq\mathcal{RP}_{1}(A)\). **Remark 2.1**.: Let \(X_{\bullet}(A^{2})\to\mathbb{Z}\) be exact in dimension \(<2\). From the exact sequence \[0\to Z_{2}(A^{2})\to X_{2}(A^{2})\to Z_{1}(A^{2})\to 0\] we obtain the exact sequence \(\mathcal{RP}(A)\stackrel{{\lambda}}{{\longrightarrow}}\mathbb{Z}[\mathcal{G}_{A}]\to\mathrm{GW}(A)\to 0\). This induces the exact sequence \[\mathcal{RP}(A)\stackrel{{\lambda}}{{\longrightarrow}}\mathcal{I}_{A}\to I(A)\to 0.\] If we set \[[a]^{\prime}=(\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a})+(\boldsymbol{0},\boldsymbol{\infty},\boldsymbol{a})-(\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{1})-(\boldsymbol{0},\boldsymbol{\infty},\boldsymbol{1})\in\mathcal{RP}(A),\] then \(\lambda([a]^{\prime})=p_{-1}^{+}\langle\!\langle a\rangle\!\rangle\), where \(p_{-1}^{+}:=\langle-1\rangle+1\in\mathbb{Z}[\mathcal{G}_{A}]\). This induces a natural surjection \[\mathcal{I}_{A}/p_{-1}^{+}\mathcal{I}_{A}\twoheadrightarrow I(A).\] ## 3. The map \(H_{n}(T(A),\mathbb{Z})\to H_{n}(B(A),\mathbb{Z})\) The groups \(B(A)\) and \(T(A)\) sit in the extension \(1\to N(A)\to B(A)\to T(A)\to 1\), where \[N(A):=\left\{\begin{pmatrix}1&b\\ 0&1\end{pmatrix}:b\in A\right\}\simeq A.\] This extension splits canonically and \(T(A)\) acts as follows on \(N(A)\): \[a.\begin{pmatrix}1&b\\ 0&1\end{pmatrix}:=\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}\begin{pmatrix}1&b\\ 0&1\end{pmatrix}\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}^{-1}=\begin{pmatrix}1&a^{2}b\\ 0&1\end{pmatrix}.\] So if we identify \(T(A)\) with \(A^{\times}\) and \(N(A)\) with \(A\), then the action of \(A^{\times}\) on \(A\) is given by \(a.b:=a^{2}b\). Thus \[H_{n}(B(A),\mathbb{Z})\simeq H_{n}(T(A),\mathbb{Z})\oplus H_{n}(B(A),T(A),\mathbb{Z}).\] We denote the relative homology group \(H_{n}(B(A),T(A),\mathbb{Z})\) by \(\mathcal{S}_{n}\). 
(See Section 8 for an exact sequence involving this relative homology group.) By studying the Lyndon/Hochschild-Serre spectral sequence of the above extension, it follows that \[\mathcal{S}_{1}\simeq H_{0}(A^{\times},A)=A_{A^{\times}}=A/\langle a^{2}-1|a\in A^{\times}\rangle\] and \(\mathcal{S}_{2}\) sits in the exact sequence \[H_{2}(A^{\times},A)\to H_{2}(A,\mathbb{Z})_{A^{\times}}\to\mathcal{S}_{2}\to H_{1}(A^{\times},A)\to 0.\] **Lemma 3.1**.: _Let \(G\) be an abelian group, \(A\) a commutative ring, \(M\) an \(A\)-module and \(\varphi:G\to A^{\times}\) a homomorphism of groups which turns \(A\) and \(M\) into \(G\)-modules. If \(H_{0}(G,A)=0\), then for any \(n\geq 0\), \(H_{n}(G,M)=0\)._ Proof.: See [16, Lemma 1.8]. **Corollary 3.2**.: _Let \(A\) be a ring and let \(A^{\times}\) act on \(A\) as \(a.x:=a^{2}x\). If \(H_{0}(A^{\times},A)=0\), then \(H_{n}(A^{\times},A)=0\) for any \(n\geq 0\)._ Proof.: Use the above lemma by considering \(\varphi:A^{\times}\to A^{\times}\), \(a\mapsto a^{2}\). **Example 3.3**.: (i) If \(A\) is a local ring such that \(|A/\mathfrak{m}_{A}|>3\), then we can always find \(a\in A^{\times}\) such that \(a^{2}-1\in A^{\times}\). Thus \(H_{0}(A^{\times},A)=0\). (ii) Let \(A\) be a ring such that \(6\in A^{\times}\). Then \[1=3(2^{2}-1)+(-1)(3^{2}-1)\in\langle a^{2}-1:a\in A^{\times}\rangle.\] Hence \(H_{0}(A^{\times},A)=0\). **Example 3.4**.: If \(H_{0}(A^{\times},A)=0\), then by the above corollary \(H_{n}(A^{\times},A)=0\) for \(n\geq 0\). Thus \(\mathcal{S}_{1}=0\) and \(\mathcal{S}_{2}\simeq H_{2}(A,\mathbb{Z})_{A^{\times}}\). Therefore \(H_{1}(T(A),\mathbb{Z})\simeq H_{1}(B(A),\mathbb{Z})\) and we have the exact sequence \[0\to H_{2}(A,\mathbb{Z})_{A^{\times}}\to H_{2}(B(A),\mathbb{Z})\to H_{2}(T(A),\mathbb{Z})\to 0.\] Moreover we have the exact sequence \[H_{3}(A,\mathbb{Z})_{A^{\times}}\to\mathcal{S}_{3}\to H_{1}(A^{\times},A\wedge A)\to 0.\] **Lemma 3.5**.: _If \(A\) is a subring of \(\mathbb{Q}\), then for any \(n\geq 0\),_ \[H_{n}(B(A),\mathbb{Z})\simeq H_{n}(T(A),\mathbb{Z})\oplus H_{n-1}(A^{\times},A).\] _In particular if \(6\in A^{\times}\), then \(H_{n}(T(A),\mathbb{Z})\simeq H_{n}(B(A),\mathbb{Z})\)._ Proof.: It is well known that any finitely generated subgroup of \(\mathbb{Q}\) is cyclic. Thus \(A\) is a direct limit of infinite cyclic groups. Since \(H_{n}(\mathbb{Z},\mathbb{Z})=0\) for any \(n\geq 2\) [1, page 58] and since homology commutes with direct limits [1, Exer. 6, § 5, Chap. V], we have \(H_{n}(A,\mathbb{Z})=0\) for \(n\geq 2\). Now the claim follows from an easy analysis of the Lyndon/Hochschild-Serre spectral sequence associated to the split extension \(1\to N(A)\to B(A)\to T(A)\to 1\). If \(6\in A^{\times}\), then by Example 3.3(ii) we have \(H_{0}(A^{\times},A)=0\). So by Corollary 3.2, \(H_{n}(A^{\times},A)=0\) for any \(n\). Therefore the claim follows from the first part of the lemma. **Example 3.6**.: (i) Let \(A=\mathbb{Z}\). Since \(\mathbb{Z}^{\times}=\{\pm 1\}\), the action of \(\mathbb{Z}^{\times}\) on \(A=\mathbb{Z}\) is trivial. Thus \(H_{n}(\mathbb{Z}^{\times},\mathbb{Z})\) is \(\mathbb{Z}\) if \(n=0\), is trivial if \(n\geq 2\) is even and is \(\mathbb{Z}/2\) if \(n\) is odd. 
Now by the previous lemma we have \[H_{1}(B(\mathbb{Z}),\mathbb{Z})\simeq H_{1}(T(\mathbb{Z}),\mathbb{Z})\oplus\mathbb{Z},\] and for any positive integer \(m\), \[H_{2m}(B(\mathbb{Z}),\mathbb{Z})\simeq H_{2m}(T(\mathbb{Z}),\mathbb{Z})\oplus\mathbb{Z}/2\simeq\mathbb{Z}/2,\] \[H_{2m+1}(B(\mathbb{Z}),\mathbb{Z})\simeq H_{2m+1}(T(\mathbb{Z}),\mathbb{Z})\simeq\mathbb{Z}/2.\] (ii) Let \(p\) be a prime and let \(A:=\mathbb{Z}_{(p)}=\{a/b\in\mathbb{Q}|a,b\in\mathbb{Z},p\nmid b\}\). Then \(\mathbb{Z}_{(p)}\) is local and its residue field is isomorphic to \(\mathbb{F}_{p}\). If \(p\neq 2,3\), then the residue field of \(A\) has more than \(3\) elements. Thus \[H_{n}(T(\mathbb{Z}_{(p)}),\mathbb{Z})\simeq H_{n}(B(\mathbb{Z}_{(p)}),\mathbb{Z})\] for any \(n\geq 0\) (Example 3.3). Let \(B=\mathbb{Z}_{(2)}\). Consider the action of \(B^{\times}\) on \(\mathbb{Q}\) as usual: \(b.x:=b^{2}x\). It is straightforward to check that \(H_{0}(B^{\times},\mathbb{Q})=0\). Thus by Lemma 3.1, \(H_{n}(B^{\times},\mathbb{Q})=0\) for any \(n\geq 0\). Consider the exact sequence \(0\to B\to\mathbb{Q}\to\mathbb{Q}/B\to 0\). Note that \(\mathbb{Q}/B\simeq\mathbb{Z}_{2^{\infty}}:=\mathbb{Z}\big{[}\frac{1}{2}\big{]}/\mathbb{Z}\). From the long exact sequence associated to this short exact sequence, we obtain \[H_{n-1}(B^{\times},B)\simeq H_{n}(B^{\times},\mathbb{Z}_{2^{\infty}}),\qquad n\geq 1.\] We have a similar result for \(B=\mathbb{Z}_{(3)}\). Therefore for \(p=2,3\), we have \[H_{n}(B(\mathbb{Z}_{(p)}),\mathbb{Z})\simeq H_{n}(T(\mathbb{Z}_{(p)}),\mathbb{Z})\oplus H_{n}(\mathbb{Z}_{(p)}^{\times},\mathbb{Z}_{p^{\infty}}).\] Note that \(H_{n}(\mathbb{Z}_{(2)}^{\times},\mathbb{Z}_{2^{\infty}})\) and \(H_{n}(\mathbb{Z}_{(3)}^{\times},\mathbb{Z}_{3^{\infty}})\) are \(2\)-power and \(3\)-power torsion groups, respectively. One can easily show that \(H_{0}(\mathbb{Z}_{(2)}^{\times},\mathbb{Z}_{(2)})\simeq\mathbb{Z}/8\) and \(H_{0}(\mathbb{Z}_{(3)}^{\times},\mathbb{Z}_{(3)})\simeq\mathbb{Z}/3\) (the square of any unit of \(\mathbb{Z}_{(2)}\) is congruent to \(1\) modulo \(8\) and \(3^{2}-1=8\); similarly unit squares in \(\mathbb{Z}_{(3)}\) are congruent to \(1\) modulo \(3\) and \(2^{2}-1=3\)). **Lemma 3.7**.: _Let \(p\) be a prime number and let \(A_{p}=\mathbb{Z}[\frac{1}{p}]\). Then_ (i) _\(H_{1}(B(A_{p}),\mathbb{Z})\simeq H_{1}(T(A_{p}),\mathbb{Z})\oplus\mathbb{Z}/(p^{2}-1)\),_ (ii) _for any \(n\geq 2\), \(H_{n}(T(A_{2}),\mathbb{Z})\simeq H_{n}(B(A_{2}),\mathbb{Z})\),_ (iii) _for \(p\neq 2\) and \(n\geq 2\), we have \(H_{n}(B(A_{p}),\mathbb{Z})\simeq H_{n}(T(A_{p}),\mathbb{Z})\oplus\mathbb{Z}/2\)._ Proof.: We need to calculate \(H_{n}(A_{p}^{\times},A_{p})\). The rest follows from Lemma 3.5. In the following we will use the calculation of the homology groups of cyclic groups [1, page 58]. From the extension \(1\to\mu_{2}(A_{p})\to A_{p}^{\times}\to\langle p\rangle\to 1\) we obtain the Lyndon/Hochschild-Serre spectral sequence \[{E^{\prime}}^{2}_{\ r,s}=H_{r}(\langle p\rangle,H_{s}(\mu_{2}(A_{p}),A_{p}))\Rightarrow H_{r+s}(A_{p}^{\times},A_{p}).\] Since \(\langle p\rangle\) is an infinite cyclic group, we have \({E^{\prime}}^{2}_{\ r,s}=0\) for \(r\geq 2\). Moreover \[H_{s}(\mu_{2}(A_{p}),A_{p})\simeq\begin{cases}A_{p}&\text{if $s=0$}\\ A_{p}/2&\text{if $s$ is odd}\\ 0&\text{if $s\geq 2$ is even.}\end{cases}\] (i) This follows from the isomorphism \(H_{0}(A_{p}^{\times},A_{p})=A_{p}/\langle p^{2}-1\rangle\simeq\mathbb{Z}/(p^{2}-1)\). (ii) Since \(2\in A_{2}^{\times}\), \(A_{2}/2=0\). This implies that \({E^{\prime}}^{2}_{\ r,s}=0\) for any \(s\geq 1\). Now from the above spectral sequence we obtain \(H_{n}(A_{2}^{\times},A_{2})=0\) for any \(n\geq 1\). 
(iii) We need to calculate \({E^{\prime}}^{2}_{\ 0,s}\) and \({E^{\prime}}^{2}_{\ 1,s}\) for any \(s\geq 1\). Note that \(A_{p}/2\simeq\mathbb{Z}/2\). Now it is easy to see that \(H_{0}(\langle p\rangle,A_{p}/2)\simeq\mathbb{Z}/2\) and \(H_{1}(\langle p\rangle,A_{p}/2)\simeq\mathbb{Z}/2\). Thus for any \(s\geq 1\), \[{E^{\prime}}^{2}_{\ 0,s}\simeq{E^{\prime}}^{2}_{\ 1,s}\simeq\begin{cases}0&\text{if $s$ is even}\\ \mathbb{Z}/2&\text{if $s$ is odd.}\end{cases}\] Now from the above spectral sequence it follows that \(H_{n}(A_{p}^{\times},A_{p})\simeq\mathbb{Z}/2\) for any \(n\geq 1\). **Proposition 3.8**.: (i) _Let \(A\) be a local domain such that either \(A/\mathfrak{m}_{A}\) is infinite or \(|A/\mathfrak{m}_{A}|=p^{d}\) with \((p-1)d>2n\). Then \(H_{n}(T(A),\mathbb{Z})\simeq H_{n}(B(A),\mathbb{Z})\)._ (ii) _Let \(A\) be a local ring such that either \(A/\mathfrak{m}_{A}\) is infinite or \(|A/\mathfrak{m}_{A}|=p^{d}\) with \((p-1)d>2(n+1)\). Then \(H_{n}(T(A),\mathbb{Z})\simeq H_{n}(B(A),\mathbb{Z})\)._ Proof.: (i) For this see [9, Proposition 3.19]. (ii) Similar to the proof of part (i) presented in [9, Proposition 3.19], we can show that \(H_{n}(T(A),k)\simeq H_{n}(B(A),k)\), where \(k\) is a prime field and \((p-1)d>2n\). Now the claim follows from [14, Lemma 2.3]. ## 4. The refined Bloch group Let the complex \(X_{\bullet}(A^{2})\to\mathbb{Z}\) be exact in dimension \(<2\). Then from the exact sequence \[0\to Z_{2}(A^{2})\to X_{2}(A^{2})\to Z_{1}(A^{2})\to 0\] we obtain the long exact sequence \[H_{1}(\operatorname{SL}_{2}(A),Z_{2}(A^{2}))\!\to\!H_{1}(\operatorname{SL}_{2}(A),X_{2}(A^{2}))\!\to\!H_{1}(\operatorname{SL}_{2}(A),Z_{1}(A^{2}))\!\stackrel{{\delta}}{{\to}}\!H_{0}(\operatorname{SL}_{2}(A),Z_{2}(A^{2}))\] \[\to H_{0}(\operatorname{SL}_{2}(A),X_{2}(A^{2}))\to H_{0}(\operatorname{SL}_{2}(A),Z_{1}(A^{2}))\to 0.\] Choose \((\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a})\), \(\langle a\rangle\in\mathcal{G}_{A}\), as representatives of the orbits of the generators of \(X_{2}(A^{2})\). Then \[X_{2}(A^{2})\simeq\bigoplus_{\langle a\rangle\in\mathcal{G}_{A}}\operatorname{Ind}_{\mu_{2}(A)}^{\operatorname{SL}_{2}(A)}\mathbb{Z}\langle a\rangle,\] where \(\mu_{2}(A)\simeq\operatorname{Stab}_{\operatorname{SL}_{2}(A)}(\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a})\). Thus \[H_{1}(\operatorname{SL}_{2}(A),X_{2}(A^{2}))\simeq\bigoplus_{\langle a\rangle\in\mathcal{G}_{A}}H_{1}(\mu_{2}(A),\mathbb{Z})\simeq\mathbb{Z}[\mathcal{G}_{A}]\otimes\mu_{2}(A).\] From the above exact sequence we obtain the exact sequence \[H_{1}(\operatorname{SL}_{2}(A),Z_{2}(A^{2}))\to\mathbb{Z}[\mathcal{G}_{A}]\otimes\mu_{2}(A)\to H_{1}(\operatorname{SL}_{2}(A),Z_{1}(A^{2}))\to\mathcal{RP}_{1}(A)\to 0.\] The exact sequence \(0\to Z_{1}(A^{2})\stackrel{{\operatorname{inc}}}{{\to}}X_{1}(A^{2})\stackrel{{\partial_{1}}}{{\to}}X_{0}(A^{2})\) induces a commutative diagram of exact sequences. By the Snake lemma we have the exact sequence \[H_{1}(\operatorname{SL}_{2}(A),Z_{2}(A^{2}))\to\mathcal{I}_{A}\otimes\mu_{2}(A)\stackrel{{\gamma}}{{\to}}E_{2,1}^{2}\to\mathcal{RP}_{1}(A)\to 0.\] Let \(G\) be a group and let \(g,g^{\prime}\) be two commuting elements of \(G\). 
Set \[\mathbf{c}(g,g^{\prime}):=([g|g^{\prime}]-[g^{\prime}|g])\otimes 1\in H_{2}(G, \mathbb{Z})=H_{2}(B_{\bullet}(G)\otimes_{G}\mathbb{Z}).\] **Lemma 4.1**.: _The composite_ \[\mathcal{I}_{A}\otimes\mu_{2}(A)\stackrel{{\gamma}}{{\longrightarrow }}E_{2,1}^{2}\stackrel{{ d_{2,1}^{2}}}{{\longrightarrow}}H_{2}(B(A ),\mathbb{Z})\simeq(A^{\times}\wedge A^{\times})\oplus\mathcal{S}_{2}\] _sends \(\langle\!\langle a\rangle\!\rangle\otimes b\) to \(\big{(}a\wedge b,\mathbf{c}(\begin{pmatrix}1&a+1\\ 0&1\end{pmatrix},\begin{pmatrix}b&0\\ 0&b\end{pmatrix})\big{)}\)._ Proof.: The element \(\langle\!\langle a\rangle\!\rangle\otimes b\in\mathcal{I}_{A}\otimes\mu_{2}(A)\) is represented by \([b]\otimes((\mathbf{\infty},\mathbf{0},\mathbf{a})-(\mathbf{\infty},\mathbf{0},\mathbf{1}))\). Now we want to apply \(\gamma\) (that is induced by \(\partial_{2}\)). We see that \(\gamma(\langle\!\langle a\rangle\!\rangle\otimes(b))\) is represented by \([b]\otimes\partial_{2}((\mathbf{\infty},\mathbf{0},\mathbf{a})-(\mathbf{\infty},\mathbf{0},\mathbf{1}) )\in B_{1}(\operatorname{SL}_{2}(A))\otimes Z_{1}(A^{2})\). Consider the diagram \[B_{2}(\operatorname{SL}_{2}(A))\otimes X_{0}(A^{2})\xleftarrow{ \operatorname{id}_{B_{2}}\otimes\partial_{1}}B_{2}(\operatorname{SL}_{2}(A)) \otimes X_{1}(A^{2})\] \[B_{1}(\operatorname{SL}_{2}(A))\otimes X_{1}(A^{2})\xleftarrow{ \operatorname{id}_{B_{1}}\otimes\operatorname{inc}}B_{1}(\operatorname{SL}_{2 }(A))\otimes Z_{1}(A^{2}).\] If \(X_{a,b}:=[b]\otimes\partial_{2}((\mathbf{\infty},\mathbf{0},\mathbf{a})-(\mathbf{\infty},\mathbf{ 0},\mathbf{1}))\), then \[(\operatorname{id}_{B_{1}}\otimes\operatorname{inc})(X_{a,b})= [b]\otimes((\mathbf{0},\mathbf{a})-(\mathbf{\infty},\mathbf{a})-(\mathbf{0},\mathbf{1})+( \mathbf{\infty},\mathbf{1}))\] \[= (g_{a}^{-1}-h_{a}^{-1}-g_{1}^{-1}+h_{1}^{-1})[b]\otimes(\mathbf{\infty },\mathbf{0})\] \[= (d_{2}\otimes\operatorname{id}_{X_{1}})(Z_{a,b}\otimes(\mathbf{\infty },\mathbf{0}))\] where \[Z_{a,b}:=[g_{a}^{-1}|b]-[b|g_{a}^{-1}]-[g_{1}^{-1}|b]+[b|g_{1}^{-1}]-[h_{a}^{-1}|b ]+[b|h_{a}^{-1}]+[h_{1}^{-1}|b]-[b|h_{1}^{-1}],\] with \(g_{z}=\begin{pmatrix}0&1\\ -1&z\end{pmatrix}\) and \(h_{z}=\begin{pmatrix}1&z^{-1}\\ 0&1\end{pmatrix}\) for \(z\in A^{\times}\). Applying \(\operatorname{id}_{B_{2}}\otimes\partial_{1}\) we have \[(\operatorname{id}_{B_{2}}\otimes\partial_{1})(Z_{a,b}\otimes(\boldsymbol{ \infty},\boldsymbol{0}))=(wZ_{a,b}-Z_{a,b})\otimes(\boldsymbol{\infty}).\] Now \((wZ_{a,b}-Z_{a,b})\otimes 1\) is a representative of \((d_{2,1}^{2}\circ\gamma)(\langle\!\langle a\rangle\!\rangle\otimes b)\). We have the following facts: 1. For any \(g\in\operatorname{SL}_{2}(A)\), \(h\in B(A)\) and \(b,b^{\prime}\in\mu_{2}(A)\), \[\boldsymbol{c}(hg,b)=\boldsymbol{c}(h,b)+\boldsymbol{c}(g,b),\ \ \ \ \ \boldsymbol{c}(h,bb^{\prime})= \boldsymbol{c}(h,b)+\boldsymbol{c}(h,b^{\prime}).\] 2. For any \(g\in\operatorname{SL}_{2}(A)\), \(w([g|b]-[b|g])\otimes 1\) is a representative of \(\boldsymbol{c}(wg,b)-\boldsymbol{c}(w,b)\), i.e. \[\boldsymbol{c}(wg,b)-\boldsymbol{c}(w,b)=\overline{w([g|b]-[b|g])\otimes 1}.\] 3. 
For any \(h\in B(A)\) and \(b\in\mu_{2}(A)\), we have \[\boldsymbol{c}(h^{-1},b)=-\boldsymbol{c}(h,b)=\boldsymbol{c}(h,b^{-1})=\boldsymbol{c}(h,b).\] Now, for any \(z\in A^{\times}\), from the identity \(g_{z}^{-1}=-h_{z^{-1}}w\), we obtain \[\boldsymbol{c}(g_{z}^{-1},b)=\boldsymbol{c}(h_{z^{-1}},b)+\boldsymbol{c}(w,b)+\boldsymbol{c}(-1,b)\] (by just adding the null element \((d_{3}\otimes\operatorname{id})([-h_{a^{-1}}|w|b]+[b|-h_{a^{-1}}|w]-[-h_{a^{-1}}|b|w])\) and using the first fact above). On the other hand, the second fact above gives, for any \(z\in A^{\times}\), the equality \[\overline{w([g_{z}^{-1}|b]-[b|g_{z}^{-1}])\otimes 1}=\boldsymbol{c}(wg_{z}^{-1},b)-\boldsymbol{c}(w,b).\] Moreover the formula \(wg_{z}^{-1}=z^{-1}h_{z^{-1}}^{-1}wh_{z}^{-1}\) and (1) above give the equality \[\overline{w([g_{z}^{-1}|b]-[b|g_{z}^{-1}])\otimes 1}=\boldsymbol{c}(z^{-1},b)+\boldsymbol{c}(h_{z^{-1}}^{-1},b)+\boldsymbol{c}(wh_{z}^{-1},b)-\boldsymbol{c}(w,b).\] Also using (2) we have \[\overline{w([h_{z}^{-1}|b]-[b|h_{z}^{-1}])\otimes 1}=\boldsymbol{c}(wh_{z}^{-1},b)-\boldsymbol{c}(w,b).\] Now joining all the formulas above we have: \[\overline{(wZ_{a,b}-Z_{a,b})\otimes 1}=\boldsymbol{c}(a^{-1},b)+\boldsymbol{c}(h_{a^{-1}}^{-1},b)-\boldsymbol{c}(h_{1}^{-1},b)-\boldsymbol{c}(h_{a^{-1}},b)+\boldsymbol{c}(h_{a},b)=\boldsymbol{c}(a,b)+\boldsymbol{c}(h_{a}h_{1},b)=\boldsymbol{c}(a,b)+\boldsymbol{c}\Big{(}\begin{pmatrix}1&a^{-1}+1\\ 0&1\end{pmatrix},\begin{pmatrix}b&0\\ 0&b\end{pmatrix}\Big{)}\] (in the last equality, we use (1) and (3)). Substituting \(a\) with \(a^{-1}\) we see that \[(d_{2,1}^{2}\circ\gamma)(\langle\!\langle a\rangle\!\rangle\otimes b)=\boldsymbol{c}(a,b)+\boldsymbol{c}\big{(}\begin{pmatrix}1&a+1\\ 0&1\end{pmatrix},\begin{pmatrix}b&0\\ 0&b\end{pmatrix}\big{)}.\] We believe that the element \(\boldsymbol{c}(\begin{pmatrix}1&a+1\\ 0&1\end{pmatrix},\begin{pmatrix}b&0\\ 0&b\end{pmatrix})\), appearing in the previous lemma, is trivial for many interesting rings. For \(a\in A\) and \(b\in\mu_{2}(A)\), let \(x_{a}:=\boldsymbol{c}(\begin{pmatrix}1&a\\ 0&1\end{pmatrix},\begin{pmatrix}b&0\\ 0&b\end{pmatrix})\in H_{2}(B(A),\mathbb{Z})\). This element has order \(2\) and \(x_{a}=x_{-a}\). Since \(\begin{pmatrix}c&0\\ 0&c^{-1}\end{pmatrix}\begin{pmatrix}1&a\\ 0&1\end{pmatrix}\begin{pmatrix}c&0\\ 0&c^{-1}\end{pmatrix}^{-1}=\begin{pmatrix}1&ac^{2}\\ 0&1\end{pmatrix}\), for any \(c\in A^{\times}\) we have \(x_{a}=x_{ac^{2}}\). (In particular \(x_{c^{2}}=x_{1}\).) Thus \[x_{a(c^{2}-1)}=0,\ \ \ \ x_{c}=x_{c^{-1}}.\] For example if \(a\in\mathcal{W}_{A}\), then \(a+1=\frac{1}{a-1}(a^{2}-1)\) and hence \[x_{a+1}=x_{(a-1)^{-1}(a^{2}-1)}=0.\] **Example 4.2**.: (i) If \(H_{0}(A^{\times},A)=0\), then \(A=\langle c^{2}-1|c\in A^{\times}\rangle\). Thus any \(a\in A\) is of the form \(a=\sum d(c^{2}-1)\). This implies that \(x_{a}=0\) for any \(a\in A\). (ii) If \(2\in A^{\times}\), then for any \(a\in A^{\times}\) we have \(x_{a}=x_{2(a/2)}=2x_{(a/2)}=0\). (iii) If \(F=A\) is a field, then \(x_{a}=0\): If \(\operatorname{char}(F)=2\), then \(b=1\) and thus \(x_{a}=0\). If \(\operatorname{char}(F)\neq 2\), then \(2\in F^{\times}\), and the claim follows from (ii). (iv) If \(A\) is a local ring such that \(A/\mathfrak{m}_{A}\) has at least \(3\) elements, then \(x_{a}=0\): If \(|A/\mathfrak{m}_{A}|=3\), then \(2\in A^{\times}\), and thus the claim follows from (ii). If \(|A/\mathfrak{m}_{A}|>3\), then there is \(c\in A^{\times}\) such that \(c^{2}-1\in A^{\times}\). 
Thus \(H_{0}(A^{\times},A)=0\) and the claim follows from (i). (v) Let \(A=\mathbb{Z}_{(p)}\), where \(p\) is a prime. Then \(x_{a+1}=0\) for any \(a\in A^{\times}\): For \(p>2\) the claim follows from (iv). Let \(p=2\) and let \(a=a^{\prime}/b^{\prime}\in\mathbb{Z}_{(2)}\). Then \(a^{\prime},b^{\prime}\) are odd and so \(a+1=(a^{\prime}+b^{\prime})/b^{\prime}=2c^{\prime}\), \(c^{\prime}\in\mathbb{Z}_{(2)}\). Now \(x_{a+1}=x_{2c^{\prime}}=2x_{c^{\prime}}=0\). (vi) Let \(A=\mathbb{Z}\big{[}\frac{1}{p}\big{]}\), where \(p\) is a prime. Then \(x_{a+1}=0\) for any \(a\in A^{\times}\): If \(p=2\), then by (ii), \(x_{a+1}=x_{a}=0\). If \(p\neq 2\), then \(a=\pm p^{n}\), \(n\in\mathbb{Z}\). Now we have \(a+1=2c\), where \(c\in A\). Thus \(x_{a+1}=0\). (vii) If \(\mu_{2}(A)=1\), then \(x_{a}=0\): Since \(\mu_{2}(A)=1\), we have \(b=1\) and thus \(x_{a}=0\). In the rest of this article we will mostly assume that \(x_{a+1}=0\) for any \(a\in A^{\times}\), i.e. \[\operatorname{im}(d_{2,1}^{2}\circ\gamma)=A^{\times}\wedge\mu_{2}(A).\] For example in our important results, for technical reasons, we will assume that \[H_{2}(B(A),\mathbb{Z})\simeq H_{2}(T(A),\mathbb{Z}),\] i.e. \(\mathcal{S}_{2}=0\), so the above condition will be satisfied. Now by the above lemma we have the commutative diagram with exact rows (4.1). Let \(\psi_{1}(a):=[a]+\langle-1\rangle[a^{-1}]\in\overline{\mathcal{RP}}(A)\). It is easy to check that \[g(a):=p_{-1}^{+}[a]+\langle\!\langle 1-a\rangle\!\rangle\psi_{1}(a)\in\overline{\mathcal{RP}}_{1}(A),\] where \(p_{-1}^{+}=\langle-1\rangle+1\in\mathbb{Z}[\mathcal{G}_{A}]\). We denote the image of this element in \(\mathcal{RP}_{1}(A)\) by \(g(a)\) again. **Proposition 4.3**.: _Under the composite_ \[\mathcal{RP}_{1}(A)\to\frac{A^{\times}\wedge A^{\times}}{A^{\times}\wedge\mu_{2}(A)}\oplus\mathcal{S}_{2}\to\frac{A^{\times}\wedge A^{\times}}{A^{\times}\wedge\mu_{2}(A)}\] _we have_ \[g(a)\mapsto a\wedge(1-a).\] Proof.: From the complex \(0\to Z_{1}(A^{2})\overset{\mathrm{inc}}{\to}X_{1}(A^{2})\overset{\partial_{1}}{\to}X_{0}(A^{2})\to 0\) we obtain the first quadrant spectral sequence \[\mathcal{E}_{p,q}^{1}=\left\{\begin{array}{ll}H_{q}(\mathrm{GL}_{2}(A),X_{p}(A^{2}))&p=0,1\\ H_{q}(\mathrm{GL}_{2}(A),Z_{1}(A^{2}))&p=2\\ 0&p>2\end{array}\right.\implies H_{p+q}(\mathrm{GL}_{2}(A),\mathbb{Z}).\] This spectral sequence has been studied in [13, §3]. Let \(\mathcal{P}(A):=H_{0}(\mathrm{GL}_{2}(A),Z_{2}(A^{2}))\). We have a \(\mathcal{R}_{A}\)-map \(\mathcal{RP}(A)\to\mathcal{P}(A)\), where \(\mathcal{P}(A)\) has the trivial action of \(\mathcal{G}_{A}\). Under this map \(g(a)\mapsto 2[a]\). This induces a map \(\mathcal{RP}_{1}(A)\to\mathcal{P}(A)\). One can show that \(\mathcal{E}_{2,1}^{2}\simeq\mathcal{P}(A)\) (see [13, Lemma 3.2]). The map \(\mathrm{SL}_{2}(A)\to\mathrm{GL}_{2}(A)\) induces a morphism of spectral sequences. This induces a commutative diagram where \[B_{2}(A):=\mathrm{Stab}_{\mathrm{GL}_{2}(A)}(\boldsymbol{\infty})=\bigg{\{}\begin{pmatrix}a&b\\ 0&d\end{pmatrix}:a,d\in A^{\times},b\in A\Big{\}},\] \[T_{2}(A):=\mathrm{Stab}_{\mathrm{GL}_{2}(A)}(\boldsymbol{\infty},\boldsymbol{0})=\bigg{\{}\begin{pmatrix}a&0\\ 0&d\end{pmatrix}:a,d\in A^{\times}\Big{\}}.\] This, together with diagram (4.1), induces a commutative diagram where \(S^{2}_{\mathbb{Z}}(A):=(A^{\times}\otimes A^{\times})/\langle a\otimes b+b\otimes a:a,b\in A^{\times}\rangle\). 
Moreover the vertical map on the right is given by \(a\wedge b\mapsto(2(a\wedge b),2(a\otimes b))\) and the bottom horizontal map is given by \([a]\mapsto(a\wedge(1-a),-a\otimes(1-a))\). Now the claim follows from the fact that the composite \[\mathcal{RP}_{1}(A)\to\mathcal{P}(A)\to(A^{\times}\wedge A^{\times})\oplus S^{2}_{\mathbb{Z}}(A)\] maps \(g(a)\) to \(2(a\wedge(1-a),-a\otimes(1-a))\). We denote the differential \(d^{2}_{2,1}\) by \(\lambda_{1}\): \[\lambda_{1}:\mathcal{RP}_{1}(A)\to H_{2}(B(A),\mathbb{Z})\simeq\frac{A^{\times}\wedge A^{\times}}{A^{\times}\wedge\mu_{2}(A)}\oplus\mathcal{S}_{2}.\] The kernel of \(\lambda_{1}\) is called the _refined Bloch group_ of \(A\) and is denoted by \(\mathcal{RB}(A)\): \[\mathcal{RB}(A):=\ker(\lambda_{1}).\] From the spectral sequence we obtain a natural surjective map \[H_{3}(\operatorname{SL}_{2}(A),\mathbb{Z})\twoheadrightarrow\mathcal{RB}(A).\] Let \(\Sigma^{\prime}_{2}=\{1,\sigma^{\prime}\}\) be the symmetric group of order 2. This group acts on \(\operatorname{Tor}^{\mathbb{Z}}_{1}(\mu(A),\mu(A))\) as \((\sigma^{\prime},x)\mapsto-\sigma_{1}(x)\), where \(\sigma_{1}:\operatorname{Tor}^{\mathbb{Z}}_{1}(\mu(A),\mu(A))\to\operatorname{Tor}^{\mathbb{Z}}_{1}(\mu(A),\mu(A))\) is obtained by interchanging the two copies of \(\mu(A)\). **Theorem 4.4** (Refined Bloch-Wigner in \(\operatorname{char}=2\) [15]).: _Let \(A\) be a ring such that_ (i) _\(\mu_{2}(A)=1\),_ (ii) _\(X_{\bullet}(A^{2})\to\mathbb{Z}\) is exact in dimension \(<2\),_ (iii) _\(H_{3}(T(A),\mathbb{Z})\simeq H_{3}(B(A),\mathbb{Z})\)._ _Then we have the exact sequence_ \[\operatorname{Tor}^{\mathbb{Z}}_{1}(\mu(A),\mu(A))^{\Sigma^{\prime}_{2}}\to H_{3}(\operatorname{SL}_{2}(A),\mathbb{Z})\to\mathcal{RB}(A)\to 0.\] _If \(A\) is a domain then we have the exact sequence_ \[0\to\operatorname{Tor}^{\mathbb{Z}}_{1}(\mu(A),\mu(A))\to H_{3}(\operatorname{SL}_{2}(A),\mathbb{Z})\to\mathcal{RB}(A)\to 0.\] Proof.: This is a slight generalization of [15, Theorem 6.1] and the proof is the same. Let us study the map \(\mathcal{I}_{A}\otimes\mu_{2}(A)\to A^{\times}\wedge\mu_{2}(A)\subseteq A^{\times}\wedge A^{\times}\) given by \(\langle\!\langle a\rangle\!\rangle\otimes b\mapsto a\wedge b\) (when \(A\) is a domain). Clearly \(\mathcal{I}^{2}_{A}\otimes\mu_{2}(A)\) is in the kernel of this map. This induces the map \[\mathcal{G}_{A}\otimes\mu_{2}(A)\simeq(\mathcal{I}_{A}/\mathcal{I}^{2}_{A})\otimes\mu_{2}(A)\to A^{\times}\wedge\mu_{2}(A),\] \[\langle a\rangle\otimes b\mapsto\langle\!\langle a\rangle\!\rangle\otimes b\mapsto a\wedge b.\] **Lemma 4.5**.: _Let \(A\) be a domain. Then the kernel of the map \(\mathcal{G}_{A}\otimes\mu_{2}(A)\to A^{\times}\wedge A^{\times}\), given by \(\langle a\rangle\otimes(-1)\mapsto a\wedge(-1)\), has at most two elements._ Proof.: We may assume that \(\operatorname{char}(A)\neq 2\). In this case \(\mathcal{G}_{A}\otimes\mu_{2}(A)\simeq\mathcal{G}_{A}\). Let \(a\wedge(-1)=0\) in \(A^{\times}\wedge A^{\times}\). We know that \(A^{\times}=\varinjlim H\), where \(H\) runs through all finitely generated subgroups of \(A^{\times}\). As the direct limit commutes with wedge product, we have \(A^{\times}\wedge A^{\times}=\varinjlim H\wedge H\). We may take a finitely generated subgroup \(H\) such that \(a,-1\in H\) and \(a\wedge(-1)=0\) in \(H\wedge H\). Let \(H\simeq F\times T\), where \(F\) is torsion free and \(T\) is a finite cyclic group. Thus \(-1\in T\) and we have \[H\wedge H\simeq(F\wedge F)\oplus(F\otimes T)\oplus(T\wedge T).\] Clearly \(T\wedge T=0\). 
Let \(a=p\omega\) with \(p\in F\) and \(T=\langle\omega\rangle\). From \(a\wedge(-1)=0\in H\wedge H\), it follows that \(p\otimes(-1)=0\) and \(\omega\wedge(-1)=0\). As \(-1\in T\), \(T\) has even order. Thus \(p\otimes(-1)=0\) implies that \(p\) is a square. Therefore \(\langle a\rangle=\langle\omega\rangle\). This completes the proof. Now let \(A\) be a domain. Then from the commutative diagram (4.1), we obtain the exact sequence \[H_{1}(\operatorname{SL}_{2}(A),Z_{2}(A^{2}))\to J\overset{\gamma}{\to}E_{2,1}^{3}\to\mathcal{RB}(A)\to 0,\] where \(J\) sits in the exact sequence \(\mathcal{I}_{A}^{2}\otimes\mu_{2}(A)\to J\to(\mathbb{Z}/2)^{\prime}\to 0\) with \((\mathbb{Z}/2)^{\prime}\) a subgroup of \(\mathbb{Z}/2\) (Lemma 4.5). ## 5. The low dimensional homology of \(\operatorname{SM}_{2}\) Let \(\operatorname{SM}_{2}(A)\) denote the group of monomial matrices in \(\operatorname{SL}_{2}(A)\). Then \(\operatorname{SM}_{2}(A)\) consists of the matrices \(\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}\) and \(\begin{pmatrix}0&a\\ -a^{-1}&0\end{pmatrix}\), where \(a\in A^{\times}\). Let \(\hat{X}_{0}(A^{2})\) and \(\hat{X}_{1}(A^{2})\) be the free \(\mathbb{Z}\)-modules generated by the sets \[\operatorname{SM}_{2}(A)(\boldsymbol{\infty}):=\{g.(\boldsymbol{\infty}):g\in\operatorname{SM}_{2}(A)\},\quad\operatorname{SM}_{2}(A)(\boldsymbol{\infty},\boldsymbol{0}):=\{g.(\boldsymbol{\infty},\boldsymbol{0}):g\in\operatorname{SM}_{2}(A)\},\] respectively. It is easy to see that the sequence of \(\operatorname{SM}_{2}(A)\)-modules \[\hat{X}_{1}(A^{2})\overset{\hat{\partial}_{1}}{\to}\hat{X}_{0}(A^{2})\overset{\hat{\epsilon}}{\to}\mathbb{Z}\to 0\] is exact and \[\ker(\hat{\partial}_{1})=\mathbb{Z}\{(\boldsymbol{\infty},\boldsymbol{0})+(\boldsymbol{0},\boldsymbol{\infty})\}.\] We denote this kernel by \(\hat{Z}_{1}(A^{2})\). Observe that \(\hat{Z}_{1}(A^{2})\simeq\mathbb{Z}\) and \(\operatorname{SM}_{2}(A)\) acts trivially on it. From the complex \[0\to\hat{Z}_{1}(A^{2})\overset{\text{inc}}{\to}\hat{X}_{1}(A^{2})\overset{\hat{\partial}_{1}}{\to}\hat{X}_{0}(A^{2})\to 0, \tag{5.1}\] we obtain the first quadrant spectral sequence \[\hat{E}_{p,q}^{1}=\left\{\begin{array}{ll}H_{q}(\operatorname{SM}_{2}(A),\hat{X}_{p}(A^{2}))&p=0,1\\ H_{q}(\operatorname{SM}_{2}(A),\hat{Z}_{1}(A^{2}))&p=2\\ 0&p>2\end{array}\right.\Rightarrow H_{p+q}(\operatorname{SM}_{2}(A),\mathbb{Z}).\] Since the complex (5.1) is a \(\operatorname{SM}_{2}(A)\)-subcomplex of (1.1), we have a natural morphism of spectral sequences \[\begin{CD}\hat{E}^{1}_{p,q}@>{}>{}>H_{p+q}(\operatorname{SM}_{2}(A),\mathbb{Z})\\ @V{}V{}V@V{}V{}V\\ E^{1}_{p,q}@>{}>{}>H_{p+q}(\operatorname{SL}_{2}(A),\mathbb{Z}).\end{CD} \tag{5.2}\] As in the case of \(\operatorname{SL}_{2}(A)\), we have \(\hat{X}_{0}\simeq\operatorname{Ind}_{T(A)}^{\operatorname{SM}_{2}(A)}\mathbb{Z}\) and \(\hat{X}_{1}\simeq\operatorname{Ind}_{T(A)}^{\operatorname{SM}_{2}(A)}\mathbb{Z}\). Thus by Shapiro's lemma we have \[\hat{E}^{1}_{0,q}\simeq H_{q}(T(A),\mathbb{Z}),\ \ \ \ \hat{E}^{1}_{1,q}\simeq H_{q}(T(A),\mathbb{Z}).\] Therefore \[\hat{E}^{1}_{p,q}=\left\{\begin{array}{ll}H_{q}(T(A),\mathbb{Z})&p=0,1\\ H_{q}(\operatorname{SM}_{2}(A),\mathbb{Z})&p=2\\ 0&p>2\end{array}\right.\Rightarrow H_{p+q}(\operatorname{SM}_{2}(A),\mathbb{Z}).\] Moreover, \(\hat{d}^{1}_{1,q}=H_{q}(\hat{\sigma})-H_{q}(\hat{\operatorname{inc}})=\hat{\sigma}_{*}-\hat{\operatorname{inc}}_{*}\), where \(\hat{\sigma}:T(A)\to T(A)\) is given by \(X\mapsto wXw^{-1}=X^{-1}\). 
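The matrix bookkeeping behind this section is easy to confirm symbolically; the following sketch of ours checks that the antidiagonal elements of \(\operatorname{SM}_{2}(A)\) are exactly the products \(D(a)w\), that \(w^{2}=D(-1)\in T(A)\) (so the class of \(w\) has order \(2\) in \(\operatorname{SM}_{2}(A)/T(A)\)), and the conjugation formula defining \(\hat{\sigma}\).

```python
from sympy import symbols, Matrix, simplify, S

a = symbols('a', nonzero=True)
w = Matrix([[0, 1], [-1, 0]])
D = lambda t: Matrix([[t, 0], [0, S(1)/t]])

assert simplify(D(a) * w - Matrix([[0, a], [-S(1)/a, 0]])) == Matrix.zeros(2)
assert w**2 == D(-1)                                                   # w-bar has order 2
assert simplify(w * D(a) * w.inv() - D(a).inv()) == Matrix.zeros(2)    # sigma-hat
```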
Thus \(\hat{d}^{1}_{1,0}\) is trivial, \(\hat{d}^{1}_{1,1}\) is induced by the map \(X\mapsto X^{-2}\) and \(\hat{d}^{1}_{1,2}\) is trivial. A direct calculation shows that the map \(\hat{d}^{1}_{2,q}:H_{q}(\operatorname{SM}_{2}(A),\mathbb{Z})\to H_{q}(T(A),\mathbb{Z})\) is the transfer map [1, §9, Chap. III]. Hence the composite \[H_{q}(\operatorname{SM}_{2}(A),\mathbb{Z})\stackrel{{\hat{d}^{1}_{2,q}}}{{\rightarrow}}H_{q}(T(A),\mathbb{Z})\stackrel{{\operatorname{inc}_{*}}}{{\rightarrow}}H_{q}(\operatorname{SM}_{2}(A),\mathbb{Z})\] coincides with multiplication by \(2\) [1, Proposition 9.5, Chap. III]. In particular, \(\hat{d}^{1}_{2,0}:\mathbb{Z}\rightarrow\mathbb{Z}\) is multiplication by \(2\). From these we obtain the exact sequence \[1\rightarrow\mathcal{G}_{A}\to H_{1}(\operatorname{SM}_{2}(A),\mathbb{Z})\rightarrow\mathbb{Z}/2\to 0.\] In fact this can be obtained directly from the extension \(1\to T(A)\rightarrow\operatorname{SM}_{2}(A)\rightarrow\langle\overline{w}\rangle\to 1\): \[1\rightarrow\mathcal{G}_{A}\to H_{1}(\operatorname{SM}_{2}(A),\mathbb{Z})\rightarrow\langle\overline{w}\rangle\to 1.\] Observe that \(w^{2}=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\in T(A)\). A direct calculation shows that \(\hat{d}^{1}_{2,1}(\overline{w})=-1\) and \(\hat{d}^{1}_{2,1}\mid_{\mathcal{G}_{A}}=0\). Thus \[\hat{E}^{2}_{1,1}=\mu_{2}(A)/\{\pm 1\},\ \ \ \ \ \hat{E}^{2}_{2,1}=\mathcal{G}_{A}.\] Again a direct calculation shows that \[\hat{d}^{2}_{2,1}:\mathcal{G}_{A}\to H_{2}(T(A),\mathbb{Z})\simeq A^{\times}\wedge A^{\times}\] is given by \(\langle a\rangle\mapsto a\wedge(-1)\). Therefore from the spectral sequence \(\hat{E}^{1}_{p,q}\Rightarrow H_{p+q}(\operatorname{SM}_{2}(A),\mathbb{Z})\) we obtain the exact sequence \[0\rightarrow\frac{A^{\times}\wedge A^{\times}}{A^{\times}\wedge\{\pm 1\}}\to H_{2}(\operatorname{SM}_{2}(A),\mathbb{Z})\rightarrow\mu_{2}(A)/\{\pm 1\}\to 1.\] Thus we have:
**Lemma 6.1**.: _The composite map \(\delta\circ\alpha:\mathcal{G}_{A}\to\mathcal{RP}_{1}(A)\) is given by \(\langle a\rangle\mapsto\psi_{1}(a^{2})\)._ Proof.: The element \(\langle a\rangle\in\mathcal{G}_{A}\) is represented by \[[a]\otimes\{(\boldsymbol{\infty},\boldsymbol{0})+(\boldsymbol{0}, \boldsymbol{\infty})\}\in H_{1}(\mathrm{SM}_{2}(A),\hat{Z}_{1}(A^{2})).\] Its image in \(H_{1}(\mathrm{SL}_{2}(A),Z_{1}(A^{2}))\), through \(\alpha\), is represented by the element \[S:=[a]\otimes\partial_{2}((\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a} ^{2})+(\boldsymbol{0},\boldsymbol{\infty},\boldsymbol{a}^{2})).\] We have \[\delta(S) =(d_{1}\otimes\mathrm{id}_{Z_{2}(X^{2})})\Big{(}[a]\otimes \partial_{2}((\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a}^{2})+( \boldsymbol{0},\boldsymbol{\infty},\boldsymbol{a}^{2}))\Big{)}\] \[=[\ ]\otimes\Big{(}(\boldsymbol{\infty},\boldsymbol{0}, \boldsymbol{1})+(\boldsymbol{0},\boldsymbol{\infty},\boldsymbol{1})-( \boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a}^{2})-(\boldsymbol{0}, \boldsymbol{\infty},\boldsymbol{a}^{2})\Big{)}\] \[=[\ ]\otimes\partial_{3}\Big{(}(\boldsymbol{\infty}, \boldsymbol{0},\boldsymbol{1},\boldsymbol{a}^{2})+(\boldsymbol{0},\boldsymbol{ \infty},\boldsymbol{a}^{2},\boldsymbol{1})\Big{)}.\] It is straightforward to check that this element represent \(-\psi_{1}(a^{2})\). Thus \[\delta(S)=-\psi_{1}(a^{2})=\psi_{1}(a^{2}).\] For any \(a\in A^{\times}\), let \(X_{a}\) and \(X^{\prime}_{a}\) denote the elements \((\boldsymbol{\infty},\boldsymbol{0},\boldsymbol{a})\) and \((\boldsymbol{0},\boldsymbol{\infty},\boldsymbol{a})\) of \(X_{2}(A^{2})\) respectively. Let \(\chi_{a}\in H_{1}(\mathrm{SL}_{2}(A),Z_{1}(A^{2}))\) be represented by \([wa]\otimes\partial_{2}(X_{-a}-X_{a})\). We usually write \[\chi_{a}:=[wa]\otimes\partial_{2}(X_{-a}-X_{a}).\] We remind that usually \(\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}\) is denoted by \(a\). **Lemma 6.2**.: _For any \(a\in A^{\times}\), \(\gamma(\langle\!\langle a\rangle\!\rangle\otimes(-1))-\alpha(\langle a\rangle)= \langle-1\rangle\langle\!\langle a\rangle\!\rangle.\chi_{1}\)._ Proof.: Let \(Y:=(\boldsymbol{\infty},\boldsymbol{0})+(\boldsymbol{0},\boldsymbol{\infty}) \in Z_{1}(A^{2})\). For any \(a\in A^{\times}\), we have 1. \(d_{2}([wa|wa])=wa[wa]-[-1]+[wa]\), 2. \(d_{2}([w|a])=w[a]-[wa]+[w]\). Thus modulo \(\operatorname{im}(d_{2}\otimes\operatorname{id}_{Z_{1}(A^{2})})\), we have 1. \([-1]\otimes\partial_{2}(X_{-a})=[wa]\otimes\partial_{2}(X_{a}^{\prime})+[wa] \otimes\partial_{2}(X_{-a})\), 2. \([wa]\otimes Y=[a]\otimes Y+[w]\otimes Y\). 
Hence \[[wa]\otimes\partial_{2}(X_{-a}-X_{a})= [wa]\otimes\partial_{2}(X_{-a})-[wa]\otimes\partial_{2}(X_{a})\] \[= [-1]\otimes\partial_{2}(X_{-a})-[wa]\otimes\partial_{2}(X_{a}^{\prime})-[wa]\otimes\partial_{2}(X_{a})\] \[= [-1]\otimes\partial_{2}(X_{-a})-[wa]\otimes Y\] \[= [-1]\otimes\partial_{2}(X_{-a})-([a]\otimes Y+[w]\otimes Y)\] \[= [-1]\otimes\partial_{2}(X_{-a})-[w]\otimes Y-\alpha(\langle a\rangle)\] \[= [-1]\otimes\partial_{2}(X_{-a})-[w]\otimes\partial_{2}(X_{1}+X_{1}^{\prime})-\alpha(\langle a\rangle)\] Now, using the identity (1) above for \(a=1\), we get \[[wa]\otimes\partial_{2}(X_{-a}-X_{a})-[w]\otimes\partial_{2}(X_{-1}-X_{1})=[-1]\otimes\partial_{2}(X_{-a}-X_{-1})-\alpha(\langle a\rangle)=\langle-1\rangle\gamma(\langle\!\langle a\rangle\!\rangle\otimes(-1))-\alpha(\langle a\rangle).\] On the other hand, \[[wa]\otimes\partial_{2}(X_{-a}\!-X_{a})\!-\![w]\otimes\partial_{2}(X_{-1}\!-\!X_{1})\!=\langle a\rangle([w]\otimes\partial_{2}(X_{-1}\!-\!X_{1}))-[w]\otimes\partial_{2}(X_{-1}\!-\!X_{1})=\langle\!\langle a\rangle\!\rangle([w]\otimes\partial_{2}(X_{-1}-X_{1}))=\langle\!\langle a\rangle\!\rangle\chi_{1}.\] Therefore \(\langle\!\langle a\rangle\!\rangle\cdot\chi_{1}=\langle-1\rangle\gamma(\langle\!\langle a\rangle\!\rangle\otimes(-1))-\alpha(\langle a\rangle)\). **Remark 6.3**.: It is straightforward to show that \(\delta(\chi_{1})=\psi_{1}(-1)\in\mathcal{RP}_{1}(A)\). **Corollary 6.4**.: _If \(-1\in(A^{\times})^{2}\), then for any \(a\in A^{\times}\), \(\gamma(\langle\!\langle a\rangle\!\rangle\otimes(-1))=\alpha(\langle a\rangle)\)._ Proof.: First observe that for any \(s\in A^{\times}\) and \(X\in X_{2}(A^{2})\), we have \[[w]\otimes(sX-X)=[s]\otimes(wX+sX).\] Now if \(i^{2}=-1\), then by the above relation we have \[[w]\otimes\partial_{2}(X_{-1}-X_{1})=[w]\otimes\partial_{2}(iX_{1}-X_{1})=[i]\otimes\partial_{2}(wX_{1}+iX_{1})=[i]\otimes\partial_{2}(X_{1}^{\prime}+X_{1})=[i]\otimes Y=\alpha(\langle i\rangle).\] Now the claim follows from Lemma 6.2. **Corollary 6.5**.: _Let \(\mu_{2}(A)=\{\pm 1\}\) and \(-1\in(A^{\times})^{2}\). Then \(\gamma(\mathcal{I}_{A}^{2}\otimes\mu_{2}(A))=0\). In particular, we have the exact sequence_ \[\mathcal{G}_{A}\stackrel{{\alpha}}{{\longrightarrow}}E_{2,1}^{2}\stackrel{{\delta}}{{\longrightarrow}}\mathcal{RP}_{1}(A)\to 0.\] Proof.: The ideal \(\mathcal{I}_{A}^{2}\) is generated by the elements \(\langle\!\langle a\rangle\!\rangle\langle\!\langle b\rangle\!\rangle=\langle\!\langle ab\rangle\!\rangle-\langle\!\langle a\rangle\!\rangle-\langle\!\langle b\rangle\!\rangle\). Thus by the above corollary \[\gamma(\langle\!\langle a\rangle\!\rangle\langle\!\langle b\rangle\!\rangle\otimes(-1))=\alpha(\langle ab\rangle)-\alpha(\langle a\rangle)-\alpha(\langle b\rangle)=\alpha(\langle aba^{-1}b^{-1}\rangle)=\alpha(\langle 1\rangle)=0.\] The second part follows from the first part and the fact that \(\mathcal{I}_{A}/\mathcal{I}_{A}^{2}\simeq\mathcal{G}_{A}\) and \(\operatorname{im}(\gamma)=\operatorname{im}(\alpha)\). 
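Several steps above rest on the matrix relation \((wD(a))^{2}=-1\) (written \([wawa]=[-1]\) in the proof of Lemma 6.2, and, for \(a=i\) with \(i^{2}=-1\), underlying the computation in Corollary 6.4). A quick symbolic confirmation of ours:

```python
from sympy import symbols, Matrix, simplify, S

a = symbols('a', nonzero=True)
w = Matrix([[0, 1], [-1, 0]])
D = lambda t: Matrix([[t, 0], [0, S(1)/t]])

# (wD(a))^2 = -1 for every unit a:
assert simplify((w * D(a))**2 + Matrix.eye(2)) == Matrix.zeros(2)
```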
**Theorem 6.6**.: _Let \(A\) be a commutative ring such that_ (i)_\(\mu_{2}(A)=\{\pm 1\}\) and \(-1\in(A^{\times})^{2}\),_ (ii)_\(X_{\bullet}(A^{2})\to\mathbb{Z}\) is exact in dimension \(<2\)._ (iii)_\(H_{i}(T(A),\mathbb{Z})\simeq H_{i}(B(A),\mathbb{Z})\) for \(i=2,3\)._ _Then we have the exact sequence_ \[H_{3}(\operatorname{SM}_{2}(A),\mathbb{Z})\to H_{3}(\operatorname{SL}_{2}(A),\mathbb{Z})\to\mathcal{RB}(A)\to 0.\] Proof.: The morphism of spectral sequences (5.2) induces a map of filtrations \[\begin{array}{ccccccccc}0\subseteq&\hat{F}_{0}&\subseteq&\hat{F}_{1}&\subseteq&\hat{F}_{2}&\subseteq&\hat{F}_{3}=H_{3}(\operatorname{SM}_{2}(A),\mathbb{Z})\\ &\downarrow&&\downarrow&&\downarrow&&\downarrow\\ 0\subseteq&F_{0}&\subseteq&F_{1}&\subseteq&F_{2}&\subseteq&F_{3}=H_{3}(\operatorname{SL}_{2}(A),\mathbb{Z})\end{array}\] where \(E_{p,3-p}^{\infty}=F_{p}/F_{p-1}\) and \(\hat{E}_{p,3-p}^{\infty}=\hat{F}_{p}/\hat{F}_{p-1}\). Clearly \(F_{2}=F_{3}\) and \(\hat{F}_{2}=\hat{F}_{3}\). Consider the following commutative diagram with exact rows (6.1) By Corollary 6.5, we have the exact sequence \(\hat{E}_{2,1}^{2}\to E_{2,1}^{2}\to\mathcal{RP}_{1}(A)\to 0\). From the commutative diagram with exact rows we obtain the exact sequence \[\hat{E}_{2,1}^{\infty}\to E_{2,1}^{\infty}\to\mathcal{RB}(A)\to 0.\] Now consider the commutative diagram with exact rows. Since \(\hat{E}^{1}_{0,3}\simeq E^{1}_{0,3}\), the natural map \(\hat{F}_{0}\to F_{0}\) is surjective. Moreover, since \(\hat{E}^{1}_{1,2}\simeq E^{1}_{1,2}\), the map \(\hat{E}^{\infty}_{1,2}\to E^{\infty}_{1,2}\) is surjective. These imply that the map \(\hat{F}_{1}\to F_{1}\) is surjective. Now the claim follows by applying the snake lemma to the diagram (6.1). **Remark 6.7**.: We think that the condition \(-1\in(A^{\times})^{2}\) in Theorem 6.6 is not essential (at least when \(A\) is a domain). To remove this condition we need to prove that, under the map \(\gamma:\mathcal{I}_{A}\otimes\mu_{2}(A)\to E^{2}_{2,1}\), the subgroup \(\mathcal{I}^{2}_{A}\otimes\mu_{2}(A)\) maps to zero. Granting this, the maps \[\mathcal{G}_{A}\simeq\mathcal{G}_{A}\otimes\mu_{2}(A)\stackrel{{\bar{\gamma}}}{{\longrightarrow}}E^{2}_{2,1}\to A^{\times}\wedge\mu_{2}(A)\ \ \text{and}\ \ \mathcal{G}_{A}\stackrel{{\alpha}}{{\longrightarrow}}E^{2}_{2,1}\to A^{\times}\wedge\mu_{2}(A)\] have the same kernel by Lemma 4.5, and we can then proceed as in the above proof. **Example 6.8**.: Here we give examples of rings that satisfy the conditions of Theorem 6.6: (1) Any local domain of characteristic \(2\) whose residue field has more than \(64\) elements satisfies the conditions of the theorem (Proposition 1.1, Theorem 3.8). (2) Let \(B\) be a domain such that \(-1\) is a square. Let \(\mathfrak{p}\) be a prime ideal of \(B\) such that either \(B/\mathfrak{p}\) is infinite or, if \(|B/\mathfrak{p}|=p^{d}\), then \((p-1)d>6\). Then \(A:=B_{\mathfrak{p}}\) satisfies the conditions of Theorem 6.6 (Proposition 1.1, Theorem 3.8). (3) Any domain with many units such that \(-1\) is a square (e.g., \(F\)-algebras that are domains, where \(F\) is algebraically closed) [13, §2]. (4) Let \(A=\mathbb{Z}[\frac{1}{m}]\), where \(m\) can be expressed as a product of primes \(m=p_{1}^{\alpha_{1}}\cdots p_{t}^{\alpha_{t}}\) (\(\alpha_{i}\geq 1\)) with the property that \((\mathbb{Z}/p_{i})^{\times}\) is generated by the residue classes \(\{-1,p_{1},\ldots,p_{i-1}\}\) for all \(i\leq t\). In particular, \(p_{1}\in\{2,3\}\). 
Then \(A\) satisfies the conditions of Theorem 6.6 (Lemma 3.5, [10, Example 6.14]). ## 7. A spectral sequence for relative homology Let \(G\) be a group and \(M\) a \(G\)-module. We denote these by a pair \((G,M)\). A morphism of pairs \((f,\sigma):(G^{\prime},M^{\prime})\to(G,M)\) is a pair of group homomorphisms \(f:G^{\prime}\to G\) and \(\sigma:M^{\prime}\to M\) such that \[\sigma(g^{\prime}m^{\prime})=f(g^{\prime})\sigma(m^{\prime}).\] This means that \(\sigma\) is a map of \(G^{\prime}\)-modules. For a group \(H\), let \(C_{\bullet}(H)\to\mathbb{Z}\) be the standard resolution of \(\mathbb{Z}\) over \(\mathbb{Z}[H]\) [1, Chap. I, §5]. The map \(f:G^{\prime}\to G\) induces in a natural way a morphism of complexes \(f_{\bullet}:C_{\bullet}(G^{\prime})\to C_{\bullet}(G)\). The morphism of pairs \((f,\sigma):(G^{\prime},M^{\prime})\to(G,M)\) induces a morphism of complexes \[f_{\bullet}\otimes\sigma:C_{\bullet}(G^{\prime})\otimes_{G^{\prime}}M^{\prime}\to C_{\bullet}(G)\otimes_{G}M.\] Let \(G^{\prime}\) be a subgroup of \(G\) and \(M^{\prime}\) be a \(G^{\prime}\)-submodule of \(M\). We take \((i,\sigma):(G^{\prime},M^{\prime})\hookrightarrow(G,M)\) as the natural pair of inclusion maps. Then the morphism \[i_{\bullet}\otimes\sigma:C_{\bullet}(G^{\prime})\otimes_{G^{\prime}}M^{\prime}\to C_{\bullet}(G)\otimes_{G}M\] is injective. We denote the \(n\)-th homology of the quotient complex \(C_{\bullet}(G)\otimes_{G}M/C_{\bullet}(G^{\prime})\otimes_{G^{\prime}}M^{\prime}\) by \(H_{n}(G,G^{\prime},M,M^{\prime})\): \[H_{n}(G,G^{\prime},M,M^{\prime}):=H_{n}(C_{\bullet}(G)\otimes_{G}M/C_{\bullet}(G^{\prime})\otimes_{G^{\prime}}M^{\prime}).\] If \(M^{\prime}=M\), then \(H_{n}(G,G^{\prime},M,M^{\prime})\) is the usual relative homology group \(H_{n}(G,G^{\prime},M)\). From the exact sequence of complexes \[0\to C_{\bullet}(G^{\prime})\otimes_{G^{\prime}}M^{\prime}\to C_{\bullet}(G)\otimes_{G}M\to C_{\bullet}(G)\otimes_{G}M/C_{\bullet}(G^{\prime})\otimes_{G^{\prime}}M^{\prime}\to 0\] we obtain the long exact sequence \[\cdots\to H_{n}(G^{\prime},M^{\prime})\to H_{n}(G,M)\to H_{n}(G,G^{\prime},M,M^{\prime})\to H_{n-1}(G^{\prime},M^{\prime})\] \[\to H_{n-1}(G,M)\to H_{n-1}(G,G^{\prime},M,M^{\prime})\to\cdots\] **Proposition 7.1**.: _Let \(G^{\prime}\) be a subgroup of \(G\). Let \(L^{\prime}_{\bullet}\to M^{\prime}\) be an exact \(G^{\prime}\)-subcomplex of an exact \(G\)-complex \(L_{\bullet}\to M\). Then we have the first quadrant spectral sequence_ \[\mathbb{E}^{1}_{p,q}=H_{q}(G,G^{\prime},L_{p},L^{\prime}_{p})\Rightarrow H_{p+q}(G,G^{\prime},M,M^{\prime}).\] Proof.: Let \(i:G^{\prime}\hookrightarrow G\) and \(\sigma_{\bullet}:L^{\prime}_{\bullet}\hookrightarrow L_{\bullet}\) be the usual inclusions. The morphism of double complexes \[i_{\bullet}\otimes\sigma_{\bullet}:C_{\bullet}(G^{\prime})\otimes_{G^{\prime}}L^{\prime}_{\bullet}\to C_{\bullet}(G)\otimes_{G}L_{\bullet}\] is injective. We denote its quotient by \(D_{\bullet,\bullet}\): \(D_{\bullet,\bullet}=\operatorname{coker}(i_{\bullet}\otimes\sigma_{\bullet})\). 
This double complex induces two spectral sequences \[\mathcal{E}^{1}_{p,q}(I)=H_{q}(D_{p,\bullet})\Rightarrow H_{p+q}(\operatorname{Tot}(D_{\bullet,\bullet})),\ \ \ \ \mathcal{E}^{1}_{p,q}(II)=H_{q}(D_{\bullet,p})\Rightarrow H_{p+q}(\operatorname{Tot}(D_{\bullet,\bullet})).\] These are the spectral sequences \[\mathcal{E}^{1}_{p,q}(I)=H_{q}\bigg{(}\frac{C_{p}(G)\otimes_{G}L_{\bullet}}{C_{p}(G^{\prime})\otimes_{G^{\prime}}L^{\prime}_{\bullet}}\bigg{)}\Rightarrow H_{p+q}(\operatorname{Tot}(D_{\bullet,\bullet})),\] and \[\mathcal{E}^{1}_{p,q}(II)=H_{q}\bigg{(}\frac{C_{\bullet}(G)\otimes_{G}L_{p}}{C_{\bullet}(G^{\prime})\otimes_{G^{\prime}}L^{\prime}_{p}}\bigg{)}\Rightarrow H_{p+q}(\operatorname{Tot}(D_{\bullet,\bullet})).\] By definition, \(\mathcal{E}^{1}_{p,q}(II)=H_{q}(G,G^{\prime},L_{p},L^{\prime}_{p})\). Moreover, since \(L_{\bullet}\) and \(L^{\prime}_{\bullet}\) are exact in dimension \(>0\), we have \(\mathcal{E}^{1}_{p,q}(I)=0\) for any \(q>0\). For \(q=0\), we have \(\mathcal{E}^{1}_{p,0}(I)\simeq\dfrac{C_{p}(G)\otimes_{G}M}{C_{p}(G^{\prime})\otimes_{G^{\prime}}M^{\prime}}\). Now the homology of the sequence \(\mathcal{E}^{1}_{p+1,0}(I)\to\mathcal{E}^{1}_{p,0}(I)\to\mathcal{E}^{1}_{p-1,0}(I)\) is \[\mathcal{E}^{2}_{p,0}(I)\simeq H_{p}(G,G^{\prime},M,M^{\prime}).\] Now, by an easy analysis of the spectral sequence \(\mathcal{E}^{1}_{p,q}(I)\), for any \(n\geq 0\) we obtain the isomorphism \[H_{n}(\operatorname{Tot}(D_{\bullet,\bullet}))\simeq H_{n}(G,G^{\prime},M,M^{\prime}).\] Thus, if we take \(\mathbb{E}^{1}_{p,q}:=\mathcal{E}^{1}_{p,q}(II)\), then we obtain the spectral sequence \[\mathbb{E}^{1}_{p,q}=H_{q}(G,G^{\prime},L_{p},L^{\prime}_{p})\Rightarrow H_{p+q}(G,G^{\prime},M,M^{\prime}).\] ## 8. The groups \(\mathcal{RP}_{1}(A)\) and \(H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\) Let \(\mu_{2}(A)=\{\pm 1\}\) and let the complex \(X_{\bullet}(A^{2})\to\mathbb{Z}\) be exact in dimension \(<1\). 
The complex \[0\to\hat{Z}_{1}(A^{2})\to\hat{X}_{1}(A^{2})\to\hat{X}_{0}(A^{2})\to 0\] is an \(\mathrm{SM}_{2}(A)\)-subcomplex of the \(\mathrm{SL}_{2}(A)\)-complex \[0\to Z_{1}(A^{2})\to X_{1}(A^{2})\to X_{0}(A^{2})\to 0.\] By Proposition 7.1, from the morphism of complexes we obtain the first quadrant spectral sequence \[\mathbb{E}^{1}_{p,q}\!=\!\begin{cases}H_{q}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),X_{p}(A^{2}),\hat{X}_{p}(A^{2}))&\text{if $p=0,1$}\\ H_{q}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),Z_{1}(A^{2}),\hat{Z}_{1}(A^{2}))&\text{if $p=2$}\\ 0&\text{if $p>2$}\end{cases}\Rightarrow\!H_{p+q}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z}).\] Consider the long exact sequence \[\cdots\to H_{q}(\mathrm{SM}_{2}(A),\hat{X}_{p}(A^{2}))\to H_{q}(\mathrm{SL}_{2}(A),X_{p}(A^{2}))\to\mathbb{E}^{1}_{p,q}\to H_{q-1}(\mathrm{SM}_{2}(A),\hat{X}_{p}(A^{2}))\] \[\to H_{q-1}(\mathrm{SL}_{2}(A),X_{p}(A^{2}))\to\cdots.\] Since \[H_{q}(\mathrm{SL}_{2}(A),X_{0}(A^{2}))\simeq H_{q}(B(A),\mathbb{Z}),\quad H_{q}(\mathrm{SL}_{2}(A),X_{1}(A^{2}))\simeq H_{q}(T(A),\mathbb{Z}),\] and \[H_{q}(\mathrm{SM}_{2}(A),\hat{X}_{0}(A^{2}))\simeq H_{q}(T(A),\mathbb{Z}),\quad H_{q}(\mathrm{SM}_{2}(A),\hat{X}_{1}(A^{2}))\simeq H_{q}(T(A),\mathbb{Z}),\] from the above exact sequence, for any \(q\), we get \[\mathbb{E}^{1}_{0,q}\simeq\mathcal{S}_{q}\simeq H_{q}(B(A),T(A),\mathbb{Z}),\quad\mathbb{E}^{1}_{1,q}=0.\] Therefore \[\mathbb{E}^{2}_{0,q}\simeq\mathbb{E}^{1}_{0,q},\quad\mathbb{E}^{2}_{1,q}=0,\quad\mathbb{E}^{2}_{2,q}\simeq\mathbb{E}^{1}_{2,q}.\] Now, by an easy analysis of the spectral sequence, we get the exact sequence \[\cdots\!\to\!H_{n+2}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\!\to\!\mathbb{E}^{2}_{2,n}\!\to\!H_{n+1}(B(A),T(A),\mathbb{Z})\!\to\!H_{n+1}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\] \[\to\mathbb{E}^{2}_{2,n-1}\to H_{n}(B(A),T(A),\mathbb{Z})\to\cdots\] where the map \(H_{n}(B(A),T(A),\mathbb{Z})\to H_{n}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\) is induced by the natural inclusion of pairs \((B(A),T(A))\hookrightarrow(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A))\). It is easy to see that \(\mathbb{E}_{0,0}^{2}=0=\mathbb{E}_{1,0}^{2}\). Moreover, we have the exact sequence \[H_{0}(\mathrm{SM}_{2}(A),\mathbb{Z})\to H_{0}(\mathrm{SL}_{2}(A),Z_{1}(A^{2}))\rightarrow\mathbb{E}_{2,0}^{2}\to 0.\] Note that \(H_{0}(\mathrm{SM}_{2}(A),\mathbb{Z})\simeq\mathbb{Z}\) and \(H_{0}(\mathrm{SL}_{2}(A),Z_{1}(A^{2}))=\mathrm{GW}(A)\). Moreover, the map \(\mathbb{Z}\rightarrow\mathrm{GW}(A)\) is injective and sends \(1\) to \(p_{-1}^{+}=\langle-1\rangle+1\). Thus \[\mathbb{E}_{2,0}^{2}\simeq\mathrm{GW}(A)/\langle\langle-1\rangle+1\rangle\simeq W(A),\] where \(W(A)\) is the Witt group of \(A\). 
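For the reader's convenience, we recall the standard fact behind this last identification, assuming the usual definition of the Witt group as the quotient of the Grothendieck-Witt ring by the subgroup generated by the class of the hyperbolic plane: \[h=1+\langle-1\rangle,\qquad W(A)=\mathrm{GW}(A)/\mathbb{Z}\cdot h.\] 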
Furthermore, we have the exact sequence \[H_{1}(\mathrm{SM}_{2}(A),\mathbb{Z})\to H_{1}(\mathrm{SL}_{2}(A),Z_{1}(A^{2}))\rightarrow\mathbb{E}_{2,1}^{2}\to 0.\] From the commutative diagram we obtain the exact sequence \[\mathcal{G}_{A}\stackrel{{\alpha}}{{\longrightarrow}}E_{2,1}^{2}\rightarrow\mathbb{E}_{2,1}^{2}\to 0.\] On the other hand, we have the exact sequence \[H_{3}(B(A),T(A),\mathbb{Z})\to H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\rightarrow\mathbb{E}_{2,1}^{2}\to H_{2}(B(A),T(A),\mathbb{Z})\to\] \[H_{2}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\to W(A)\to A_{A^{\times}}\to H_{1}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\to 0.\] **Proposition 8.1**.: _Let \(A\) be a \(\mathrm{GE}_{2}\)-ring such that \(H_{i}(T(A),\mathbb{Z})\simeq H_{i}(B(A),\mathbb{Z})\) for \(i\leq 3\). Then_ (i)_\(H_{2}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\simeq W(A)\simeq\mathrm{GW}(A)/\langle\langle-1\rangle+1\rangle\)_ (ii)_\(H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\simeq\mathbb{E}_{2,1}^{2}\). In particular, we have the exact sequence_ \[\mathcal{G}_{A}\stackrel{{\alpha}}{{\longrightarrow}}E_{2,1}^{2}\to H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\to 0.\] Proof.: It follows from our hypothesis that \(H_{i}(B(A),T(A),\mathbb{Z})=0\) for \(0\leq i\leq 3\). Now the claims follow from the discussion above. **Theorem 8.2**.: _Let \(A\) be a universal \(\mathrm{GE}_{2}\)-ring such that \(H_{i}(T(A),\mathbb{Z})\simeq H_{i}(B(A),\mathbb{Z})\) for \(i\leq 3\). Then we have an exact sequence_ \[I(A)\otimes\mu_{2}(A)\to H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\rightarrow\frac{\mathcal{RP}_{1}(A)}{\langle\psi_{1}(a^{2}):a\in A^{\times}\rangle}\to 0.\] _In particular, if \(-1\in(A^{\times})^{2}\), then \(H_{3}(\mathrm{SL}_{2}(A),\mathrm{SM}_{2}(A),\mathbb{Z})\simeq\mathcal{RP}_{1}(A)\)._ Proof.: The first claim follows from the above proposition, Lemma 6.1 and the following diagram with exact row and column: (Note that in the above diagram we may replace \(\mathcal{I}_{A}\) with \(I(A)\).) The second claim follows from the first claim, Corollary 6.4 and the fact that \(\psi_{1}(a^{2})=0\). **Theorem 8.3**.: _Let \(A\) be a ring such that \(H_{i}(T(A),\mathbb{Z})\simeq H_{i}(B(A),\mathbb{Z})\) for \(i\leq 3\). Let \(H_{1}(\operatorname{SL}_{2}(A),\mathbb{Z})=0\)._ (i) _If \(A\) is a \(\operatorname{GE}_{2}\)-ring, then \(H_{2}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\simeq K_{1}^{\operatorname{MW}}(A)\)._ (ii) _If \(A\) is a universal \(\operatorname{GE}_{2}\)-ring, then \(H_{3}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z}\big{[}\frac{1}{2}\big{]})\simeq\mathcal{RP}_{1}(A)\big{[}\frac{1}{2}\big{]}\)._ Proof.: (i) From the inclusions \(T(A)\subseteq\operatorname{SM}_{2}(A)\subseteq\operatorname{SL}_{2}(A)\), we obtain the long exact sequence \[\cdots\to H_{n}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\to H_{n}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\to H_{n}(\operatorname{SL}_{2}(A),\operatorname{SM}_{2}(A),\mathbb{Z})\to\] \[H_{n-1}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\to\cdots\] Since \(H_{1}(\operatorname{SL}_{2}(A),\mathbb{Z})=0\), we have \[H_{1}(\operatorname{SL}_{2}(A),\operatorname{SM}_{2}(A),\mathbb{Z})=0=H_{1}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z}).\] It is easy to see that \[H_{1}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\simeq\mathbb{Z}/2.\] We have already seen that \(H_{2}(\operatorname{SL}_{2}(A),\operatorname{SM}_{2}(A),\mathbb{Z})\simeq W(A)\) (Proposition 8.1). 
From the exact sequences \[H_{2}(T(A),\mathbb{Z})\!\to\!H_{2}(\operatorname{SL}_{2}(A),\mathbb{Z})\!\to\!H_{2}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\!\to\!H_{1}(T(A),\mathbb{Z})\!\to\!H_{1}(\operatorname{SL}_{2}(A),\mathbb{Z})=0\] and \[H_{2}(T(A),\mathbb{Z})\to H_{2}(\operatorname{SL}_{2}(A),\mathbb{Z})\to I^{2}(A)\to 0\] we obtain the exact sequence \[0\to I^{2}(A)\to H_{2}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\to K_{1}^{M}(A)\to 0. \tag{8.1}\] Now consider the exact sequence \[H_{2}(T(A),\mathbb{Z})\to H_{2}(\operatorname{SM}_{2}(A),\mathbb{Z})\to H_{2}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\to H_{1}(T(A),\mathbb{Z})\] \[\to H_{1}(\operatorname{SM}_{2}(A),\mathbb{Z})\to H_{1}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\to 0.\] Since \(H_{2}(T(A),\mathbb{Z})\to H_{2}(\operatorname{SM}_{2}(A),\mathbb{Z})\) is surjective (by Lemma 5.1) and \(H_{1}(\operatorname{SM}_{2}(A),\mathbb{Z})\) sits in the exact sequence \(1\to\mathcal{G}_{A}\to H_{1}(\operatorname{SM}_{2}(A),\mathbb{Z})\to\mathbb{Z}/2\to 0\), we have \[H_{2}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\simeq{A^{\times}}^{2}\simeq 2K_{1}^{M}(A).\] Thus we have the exact sequence \[0\to 2K_{1}^{M}(A)\to H_{2}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\to I(A)\to 0. \tag{8.2}\] It is known that the first Milnor-Witt \(K\)-group of \(A\), \(K_{1}^{\operatorname{MW}}(A)\), fits into the exact sequences (8.1) and (8.2) ([11, §2]). From the exact sequences (8.1) and (8.2) we obtain the commutative diagram. Since \(I(A)/I^{2}(A)\simeq\mathcal{G}_{A}\simeq K_{1}^{M}(A)/2K_{1}^{M}(A)\), the above diagram is Cartesian. Thus \[H_{2}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\simeq K_{1}^{M}(A)\times_{I(A)/I^{2}(A)}I(A).\] But it is well known that \(K_{1}^{\operatorname{MW}}(A)\) is the fiber product of the maps \(K_{1}^{M}(A)\to I(A)/I^{2}(A)\) and \(I(A)\to I(A)/I^{2}(A)\) (or we can take this as a definition). Thus \[H_{2}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\simeq K_{1}^{M}(A)\times_{I(A)/I^{2}(A)}I(A)\simeq K_{1}^{\operatorname{MW}}(A).\] (ii) Consider the long exact sequence \[H_{3}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\to H_{3}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\to H_{3}(\operatorname{SL}_{2}(A),\operatorname{SM}_{2}(A),\mathbb{Z})\] \[\to 2K_{1}^{M}(A)\to K_{1}^{\operatorname{MW}}(A)\to W(A).\] This gives us the exact sequence \[H_{3}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\to H_{3}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z})\to H_{3}(\operatorname{SL}_{2}(A),\operatorname{SM}_{2}(A),\mathbb{Z})\to 0.\] Consider the exact sequence \[H_{3}(T(A),\mathbb{Z})\!\to\!H_{3}(\operatorname{SM}_{2}(A),\mathbb{Z})\!\to\!H_{3}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z})\!\to\!H_{2}(T(A),\mathbb{Z})\!\to\!H_{2}(\operatorname{SM}_{2}(A),\mathbb{Z})\] We have seen that the kernel of the right-hand side map is isomorphic to \(A^{\times}\wedge\mu_{2}(A)\). 
Moreover, using the spectral sequence \(\hat{E}_{p,q}\Rightarrow H_{p+q}(\operatorname{SM}_{2}(A),\mathbb{Z})\), we obtain the exact sequence \[0\to(A^{\times}\wedge A^{\times})/2\to H_{3}(\operatorname{SM}_{2}(A),\mathbb{Z})/H_{3}(T(A),\mathbb{Z})\to\mathcal{G}_{A}\to A^{\times}\wedge A^{\times}.\] These show that \(H_{3}(\operatorname{SM}_{2}(A),T(A),\mathbb{Z}\big{[}\frac{1}{2}\big{]})=0\). Thus \[H_{3}(\operatorname{SL}_{2}(A),T(A),\mathbb{Z}\big{[}\frac{1}{2}\big{]})\simeq H_{3}(\operatorname{SL}_{2}(A),\operatorname{SM}_{2}(A),\mathbb{Z}\big{[}\frac{1}{2}\big{]})\simeq\mathcal{RP}_{1}(A)\big{[}\frac{1}{2}\big{]}.\] **Remark 8.4**.: It is known that \(K_{1}^{\operatorname{MW}}(A)\) and \(\mathcal{RP}_{1}(A)\) satisfy certain localization properties [5, Theorem 6.3], [12, Theorem A]. Wendt [19, App. A] has introduced a higher version of these groups. It would be interesting to see how these groups are connected to the relative homology groups \(H_{n}(\operatorname{SL}_{2}(A),\operatorname{SM}_{2}(A),\mathbb{Z}\big{[}\frac{1}{2}\big{]})\).
2302.13332
Practicing carpe diem in the journey of studying physics: A brief review of the scientific contribution of Ru-Keng Su
We briefly review the scientific contributions of the late Prof. Ru-Keng Su in his academic life. In the area of intermediate and high-energy nuclear physics, Su explored various topics in high-energy nuclear physics and particle physics, inclusively about the finite temperature field theory, effective models for nuclear and quark matter, soliton, and quasiparticle models, among others. In gravity and cosmology, Su's research primarily embraces black hole thermodynamics, quasinormal modes, cosmological microwave background radiation, modified theories of gravity, and AdS/CFT correspondence and its applications. Besides, many aspects of Su's distinguished impact on the Chinese academic physics community are discussed. We also summarize the biographical and academic career of Su. This article is an elaborated version of the memorial article that will be published in \href{https://www.mdpi.com/journal/symmetry}{\it symmetry}.
Shaoyu Yin, Wei-Liang Qian, Ping Wang, Bin Wang, Rong-Gen Cai
2023-02-26T15:08:19Z
http://arxiv.org/abs/2302.13332v1
# Practicing carpe diem in the journey of studying physics ###### Abstract We briefly review the scientific contributions of the late Prof. Ru-Keng Su in his academic life. In the area of intermediate and high-energy nuclear physics, Su explored various topics in high-energy nuclear physics and particle physics, inclusively about the finite temperature field theory, effective models for nuclear and quark matter, soliton, and quasiparticle models, among others. In gravity and cosmology, Su's research primarily embraces black hole thermodynamics, quasinormal modes, cosmological microwave background radiation, modified theories of gravity, and AdS/CFT correspondence and its applications. Besides, many aspects of Su's distinguished impact on the Chinese academic physics community are discussed. We also summarize the biographical and academic career of Su. This article is an elaborated version of the memorial article that will be published in _symmetry_. ## I Introduction Ru-Keng Su (May 27, 1938 - June 3, 2022) was a highly respected Chinese theoretical physicist whose research interests spanned from the depths of the subatomic realm to the far reaches of the universe. During his academic career, Su made notable contributions to intermediate and high-energy nuclear physics, general relativity, and cosmology. As one of the leading physicists in China, he contributed to the development of the Chinese physics community throughout his career. This article elaborates on a few scientific results in memory of Su's contributions to the community. The remainder of the article is organized as follows. Secs. II.1 and II.2 are devoted to discussing Su's contributions, respectively, to the areas of nuclear and particle physics and general relativity. We give a brief account of relevant studies, including those carried out in collaboration with his students and collaborators. The appendices consist of a brief biography and a summary of Su's academic achievements. As his disciples, we understand that it would be beneficial and instructive for those who follow to revisit his academic achievements as well as the personal life of our beloved supervisor. With the help of his family and friends, we tried to collect as much information as possible. Although we are aware that the writing may be subjective, as it is presented from our point of view, we sincerely hope it will help the readers appreciate how extraordinary Su's life has been. ## II Publications Overall, Prof. Su published more than 200 academic papers in prestigious journals in physics, covering a wide range of topics, from microscopic particles to the immense universe. His research interests can be categorized mainly into (1) nuclear and particle physics and (2) general relativity (GR) and cosmology. ### Nuclear and Particle Physics Su explored various topics in intermediate and high-energy nuclear physics and particle physics, including gauge fields [1], the Higgs mechanism [2; 3], soliton solutions and confinement [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], and vacuum and fractional charge [18; 19; 20; 21]. Notably, Su was well known for his significant contributions to finite-temperature field theory [22], embracing both the real-time [4; 7; 10; 13; 20; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35] and imaginary-time [11; 12; 16; 36; 37; 38; 39; 40; 41] formalisms. 
Among others, these studies were primarily carried out with his students Song Gao, Yi-Jun Zhang, Zhi-Xing Qian _et al._ In [4], using a generalized Bogoliubov transformation associated with coherent states, a kink-soliton solution in \((1+1)\) dimensional real \(\phi^{4}\) theory is derived. The solution is interpreted as a transition from a normal to a superfluid state triggered by a spontaneous symmetry breaking of the vacuum state. Subsequently, the spectra of the underlying system at zero and finite temperature are evaluated using the real-time Green function formalism. The properties of the phase transition are elaborated, and, as expected, the symmetry is restored at higher temperatures, from which the critical temperature is derived. In [11], the imaginary-time formalism is employed to explore the non-topological solitons proposed by Friedberg and Lee [42; 43], applied to the context of scalar soliton stars. This type of soliton is triggered by the specific form of the effective potential and does not feature a topologically non-trivial profile. Starting from the form of an effective Lagrangian at finite temperature, the gravitational field equations are derived and solved analytically under reasonable approximations. The physical characteristics of the soliton star are evaluated and compared to those obtained under mean-field scenarios and/or at vanishing temperatures. Su's main contributions to nuclear physics consist primarily of studies on effective models for nuclear and quark matter. They have explored topics such as Coulomb instability [44; 45; 46; 47; 15], the speed of sound [48], hadrons in medium [49], thermodynamics [50; 51; 52; 53], phase transition [54; 55] of hadronic matter, and the quasiparticle models. Figure 1: Ru-Keng Su, in 2007 at Zhang Jiajie, Hunan. In [15], the Coulomb instability in hot nuclei is explored in the context of the Skyrme model. As a self-consistent mean-field approach comprising a zero-range interaction with kinetic and density-dependent terms, the model is a viable tool for investigating the nuclear structure and low-energy dynamics, where collective correlations have been a priori built into the theoretical framework. Utilizing finite-temperature real-time Green's functions, a detailed analysis is carried out, focusing on the stability in the low-temperature and high-density regions of the system. The stability bound, the limiting temperature, is found to be approximately proportional to the critical temperature. By comparing various models, it is also argued that the analysis discriminates between different approaches. The liquid-gas phase transition in hadronic matter as a two-component system was explored in [54]. It was found that the isospin degree of freedom plays a crucial role in the phase transition of the underlying system. The phase diagram in the parameter space regarding isospin asymmetry and concentration is significantly modified when the interaction becomes isospin dependent. In particular, a "cut-off" feature is observed, and the critical temperature in the original system is replaced by a limiting temperature. The latter, subsequently, gives rise to various scenarios of phase transition depending on how the system is initially prepared. The hadron properties [56; 57; 58; 59; 60; 61; 62; 63], thermodynamics [64; 65], and phase transition [66; 47; 67] of strange hadronic matter were investigated. 
Studies were also performed for systems with quark degrees of freedom to scrutinize the thermodynamics of quark matter [68; 69; 70; 71; 72] and of strange quark matter [73; 74; 75; 76; 77]. The stability conditions for strangelets and strange matter were closely analyzed. Su also made notable contributions to various aspects of the quasiparticle model, as well as to its thermodynamic properties and consistency. In [78], the role of an additional contribution to the thermal potential and its consequent effect on strange quark matter were explored. A series of studies regarding the quark mass density- and temperature-dependent (QMDTD) model was performed in [70; 73; 74; 76]. The temperature dependence of the stable radius of a strangelet was discussed in [73]. The temperature dependence of the bag constant \(B\) was explored and shown to cure the divergence that occurred at vanishing baryon density in the phase diagram for the bulk strange quark matter of the original QMDTD model [74]. A systematic analysis regarding the stability of strangelets was performed in [76] in the framework of the QMDTD model. It was observed that stable strangelets are more likely to be encountered in the region with a sizeable negative electric charge and significant strangeness. The analysis was then extended to dibaryon systems [70] regarding different decay channels, and the results were found to be in good agreement with those obtained by the chiral SU(3) quark model. The QMDTD setup was then applied to the Friedberg-Lee soliton bag [42; 43; 79] nonlinearly coupled to the sigma [71] and omega [72] mesons. The model was further extended to investigate the properties of deconfinement [72; 80] and nuclear matter [81]. As an alternative approach to address the thermodynamic consistency, an additional fictitious degree of freedom was introduced [82; 83] to formulate a generalized version of the first law of thermodynamics. These works were primarily carried out in collaboration with Hong-Qiu Song, as well as Su's students Zhi-Xin Qian, Ping Wang, Yun Zhang, Wei-Liang Qian, Li Yang, Chen Wu, and Shaoyu Yin. ### General Relativity and Cosmology In GR and cosmology, Prof. Su's research was carried out mainly with his students Rong-Gen Cai, Bin Wang, Cheng-Gang Shao, Li-Hui Xue, Da-Ping Du, Weigang Qiu, Jian-Yong Shen, Songbai Chen, Qiyuan Pan, Shaoyu Yin, Chang Feng, etc., along with other collaborators. Prof. Su's interest in GR and cosmology dated back to his early publications after the Cultural Revolution. In several papers published in Chinese and later in English, he discussed cosmological responses, gravitational radiation, cosmological microwave background radiation, the open universe model, Dirac's cosmological model and the large number hypothesis [84], the Einstein-Dirac equation, wormholes, higher-order gravitational theories, among others. Though these publications primarily aimed to introduce the research frontier to more Chinese readers, the selection revealed his insight, fast response, and personal interest. Such work prepared him for innovative research in GR, and some topics had long been among his favorite ones, on which he dwelled throughout his research career. Prof. Su's formal research on GR began in earnest in 1992 with two publications on black holes and wormholes [16; 85]. Since then, his research was carried out simultaneously on both fronts of nuclear-particle physics and GR. 
Soon there was a big boost owing to the excellent collaboration with Rong-Gen Cai, a doctoral candidate under the supervision of Prof. Su. They studied various properties of different types of black objects, including thermodynamics of dilaton black holes, neutral or charged [86; 87; 88], as well as black strings and p-branes [89], phase transitions of black holes [90; 91], stability of Cauchy horizons [92], and statistical mechanics [93] and Hawking radiation [94] of BTZ black holes. In this period, the most fruitful direction of their research was the thermodynamics and statistical properties of black objects. Prof. Su and his collaborators successfully applied Landau's theory of nonequilibrium fluctuations and phase transitions to discuss the phase transitions of various types of black holes. Specifically, the discussions of some divergent second moments helped to clarify the nature of the phase transitions [86; 87; 90; 91]. They also showed particular interest in the dilaton black hole [86; 87; 90; 91], which stems from string theory and has many interesting characteristics, such as secondary hair [95] and unusual thermal properties [96; 97]. A second boost came when Bin Wang joined Prof. Su's group as a doctoral candidate and later as a permanent staff member. Prof. Su and Prof. Wang then formed a minority group in the Department of Physics at Fudan University, where most of the teams focused on condensed matter, but their group remained the most productive one as judged by its academic output. Their study of black objects was continued and grew even more diversified and productive [98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115], and numerical simulations of quasinormal modes were also developed later [116; 117; 118; 119; 120; 121; 122; 123]. Meanwhile, their research extended to cosmology, with rich topics including the influence of particle processes on cosmological evolution [124], the relation between the Cardy formula of entropy and the Friedmann equation in the context of AdS/CFT duality for the (A)dS universes [125] as well as the brane universes [126], quasinormal modes of de Sitter spacetime [127], tachyonic inflation [128], brane-world [129], curvature of the universe based on supernova measurements [130], the cosmological constant and dark sectors [131; 132; 133; 134; 135; 136], cosmic microwave background radiation [137], modified gravity, such as using the Dirac cosmology, which he favored all along, to explain the accelerated expansion of the contemporary universe [138] and the \(1/R\) gravity in solar system tests [139], thermal effects in inflation [140], entanglement entropy in holographic models [141], etc. It can be inferred from the publications' titles that the scope of the research was not limited to a thin branch. Generally speaking, a visible trend in their research direction was to expand from purely mathematical studies, such as the stability and geometry of black hole horizons in various dimensions of spacetime [98; 99; 100], to topics more related to actual observations. This revealed their crucial value in keeping pace with the fast progress of cosmology and in positively and vigorously participating in the international competition. They did well, and their group became one of China's most active and prestigious research teams on GR and cosmology. The collaboration between Su and Wang significantly contributed to the field of black hole quasinormal modes and constraints on cosmological models. 
Among the dozens of articles published in high-impact journals, the most frequently cited works include testing the viability of the interacting holographic dark energy model [135]: combining the latest gold type Ia supernova samples, Wilkinson Microwave Anisotropy Probe observations, the baryon acoustic oscillation measurement, and \(H(z)\) and lookback time measurements, the joint statistical analysis provided a state-of-the-art test of the interacting holographic dark energy model at the time. With a similar approach, they also studied the constraints on dark energy from the holographic connection to the small-\(l\) CMB suppression [132], on dark energy from holography [131], and on the mutual coupling of dark energy and dark matter [136], from which it can be seen how Prof. Su's group strove to keep up with the academic frontier and to compete with the world-leading research teams. Prof. Su's research on GR had some prominent characteristics. First of all, taking advantage of his expertise in nuclear and high-energy physics, he could carry out cross-field studies combining topics in GR with nuclear and particle physics, or utilize thermal field theory as a handy tool, such as in the study of the scalar wormhole at finite temperature [142] and the astrophysics of compact objects involving both GR and nuclear/particle models [143; 16; 114]. Secondly, he encouraged students' initiative and preferred to let able students choose their favorite topics rather than simply making assignments for them. For instance, when Cheng-Gang Shao worked as a postdoc in Su's group, he studied gravitational tests [139], slightly deviating from the mainstream of the group's research, while Li-Hui Xue, a student with excellent programming skills, contributed significantly to developing the numerical codes. As a matter of fact, many new research branches in Su's group were initiated by energetic students. However, it is worth emphasizing that, in all the projects, Prof. Su always kept pace with the progress and provided the necessary guidance with excellent insight. In Prof. Su's group, there was always an active and democratic atmosphere, open to any discussion. One of his slogans was "I only worry about those of my students who do not argue with me". ## III Other scientific writings Prof. Su authored several highly appreciated and widely used textbooks, including "Quantum Mechanics" (first edition in 1997 by Fudan University Press, second edition in 2002 by Higher Education Press), "Statistical Physics" (first edition in 1990 by Fudan University Press, second edition in 2004 by Higher Education Press), "Advanced Quantum Mechanics" (English version, co-authored with Bin Wang in 2004 by Fudan University Press), and "Challenges in Physics - Selected Frontiers and Basic Topics of Physics" (in 1991 by Liaoning Education Press). He also translated A. Messiah's "Quantum Mechanics" (Volume 1, in collaboration with Jiayong Tang, published in 1986 by Science Press). ## Appendix A Life, Education, and Academic career Born on May 27, 1938, Prof. Ru-Keng Su was a native of Shunde, a prosperous town close to the Pearl (Zhujiang) River delta in the Guangdong Province of China. Though he rarely talked about his childhood, it can be inferred that he was born into a well-off family, as he suffered during the Cultural Revolution due to his parentage. This explains the excellent education he received and his exquisite taste in literature and classical Chinese poems. 
At eighteen, he was admitted to Peking University, the top university in China, to study physics. Subsequently, he left the southern border of China and traveled to the capital in the far north. The journey, which took several days, was not an easy one at that time. He told us more than once, very vividly, how he took a boat to cross the Yangtze River at Wuhan, when the famous bridge crossing the river had not yet been built, and experienced the distinct folk customs of Central China. Though harassed by continual political movements and limited food rations during that special period, Mr. Su won top-notch scores in his undergraduate studies, despite the disturbance of coercive collective labor on farmland in the daytime and volunteer work at night. In particular, Mr. Su worked extensively to assist Prof. Zhuxi Wang in composing a booklet of Concise 10-digit Logarithmic Tables as an offering for the first-decade celebration of the P. R. China. He used a hand-cranked mechanical calculator, a state-of-the-art facility in China at the time, while Prof. Wang double-checked his results using two abacuses. His talent was recognized, and when he graduated in 1960, he was chosen to work at Fudan University, a prestigious university in Shanghai. Since then, he started his life there and contributed all his passion to teaching and research in the Department of Physics at Fudan University for more than half a century. He told us that on his first night in Shanghai, he wandered alone in the university stadium under the bright mid-autumn moon, hungry and pondering his possible future at Fudan. Prof. Su felt fortunate that he was assigned to the division of theoretical physics, where he joined the research group of senior professor Shixun Zhou. Prof. Su enthusiastically threw himself into heavy teaching and research tasks. Prof. Zhou composed a concise textbook on quantum mechanics, which became popular in numerous Chinese universities in the 1960s. This inspired Prof. Su to enrich the contents, notably with more modern developments, and to develop his renowned _Quantum Mechanics_, which has become one of China's most widely adopted textbooks. Even during the difficult period, Su still managed to be active in research for as long as possible, which brought him trouble and punishment. Nevertheless, his optimism allowed him to be determined in his research and pursue a valuable life, even if he was persecuted and forced to take on unpleasant chores. After the 1970s, his efforts were rewarded, as demonstrated by a burst of publications. He was promoted to associate professor in 1982, then to full professor in 1987, owing to his distinctive achievements in teaching and research. In 1987, Prof. Su started to serve as a Ph.D. supervisor. Encouraged by Prof. Chen-Ning Yang, Fudan University established a local research team on nuclear physics. The team was led by Prof. Fujia Yang, whose members included, among others, Prof. Chaohao Gu and Prof. Daqian Li. Prof. Su was an active team member, and his expertise and devotion were highly appreciated. Subsequently, he was invited by Prof. Yang and visited the nuclear physics research group at Stony Brook, the State University of New York, three times between 1984 and 1990. During his stay in the United States, Prof. Su also visited the Institute of Nuclear Physics at the University of Washington in Seattle in 1985, where he collaborated with E.M. Henley. Besides, he visited the Department of Physics and Astronomy, University of Kentucky, from September 1989 to February 1990. Prof. 
Su loved to tell the following story, especially to those heading for the States to pursue their studies. Once, he had to transfer in Chicago on a journey from Seattle to New York. Having heard of violent crimes in the city, he was quite worried about being robbed of the hard-earned salary he was carrying with him. Taking advice from some Chinese fellows, he decided to wear a pair of sunglasses and a black overcoat with his hands stuck in the pockets, forming the shape of a gun. Having camouflaged himself as such an over-the-top stereotype of the local Chinese gangster, Prof. Su strode defiantly through the crowd and arrived at his destination safely and happily, with himself and the cash intact. In the 1990s, he worked at the City University of Hong Kong for a couple of years, where he made some good friends, including Prof. Jiju Xiao, who used Prof. Su's _Quantum Mechanics_ as a textbook in his lessons. It was the first Chinese physics textbook adopted at the university, extending its readership beyond Mainland China. At the beginning of the 1990s, he was invited by Prof. Tsung-Dao Lee and became a member of the advisory committee at the Chinese Center of Advanced Science and Technology. In recognition of his academic activity and support for the center, as well as a personal expression of friendship, Prof. Su for many years received greeting cards or books with Prof. Lee's paintings as New Year gifts. ## Appendix B Notable status and awards Given his academic achievements, Prof. Su was elected vice-chairman of the Chinese Society of High Energy Physics and a member of the Senate in the Division of Mathematics and Physics of the National Natural Science Foundation of China (NSFC). Prof. Su was also an active member of the Center of Theoretical Nuclear Physics in the National Laboratory of Heavy Ion Collisions in Lanzhou. In his public services, his integrity and insight were highly appreciated by his colleagues and peers. Many junior scientists are still grateful for the unbiased and unselfish support they received from him. Su three times received the second prize for Scientific and Technological Development issued by the Ministry of Education. He was awarded in 1988 for "Vacuum stability, spontaneous symmetry breaking, and thermal field theory" (in collaboration with Guang-Jiong Ni), in 1992 for "Theoretical research on phase transition in nuclear matter at finite temperature and density", and in 1996 for "Temperature field theory and its application in nuclear physics and astrophysics". In 1999, he won the second prize in the prestigious Natural Science Award issued by the Chinese Academy of Sciences for "Critical phenomena in nuclear systems and the effect of many-body correlation" (in collaboration with Hongqiu Song). In 2003, he received the second prize in the Shanghai Science and Technology Achievement Award for "Theoretical research of holographic principle in black hole physics and cosmology" (in collaboration with Bin Wang). He served on national boards of physics and astronomy foundations and played a pivotal role in promoting the development of the Chinese physics community. In particular, he consistently served as a member of the core evaluation committee of the National Natural Science Foundation of China (NSFC) and the academic advisory committee of the China Center of Advanced Science and Technology (CCAST). ## Appendix C Miscellaneous contributions Su's interest in physics was not limited to the above topics in theoretical physics. 
He also significantly contributed to science popularization in China, fulfilling the vital task of making scientific knowledge understandable and accessible to the lay public. Moreover, Su also delved deeply into other fundamental problems in theoretical physics. Relevant topics include the arrow of time, negative temperature, hidden variables of quantum mechanics, neutrino mass, the essence of light, coherent states, and the Klein paradox, among others. He was not only among the first few Chinese physicists working on finite-temperature field theory but also played a vital role in promoting its usage by fellow Chinese physicists. Besides scholastic publications, Su was well known in the Chinese high-energy physics community for his talent for classical Chinese poems. For a period in the past, Su and his peers were accustomed to sending correspondence in the form of poems to express their feelings about research and life. Su later showed those pieces to the students who attended his lectures. He also published two reviews in the "Journal of Football" which, according to some readers, demonstrated better logic than that of some experts of the sport. ## Appendix D Teaching and orientations Prof. Su taught at Fudan University for more than half a century. His lectures covered all major physics courses and contributed significantly to the curriculum reform at Fudan. For undergraduate students, he taught "quantum mechanics", "thermodynamics", "statistical physics", "classical mechanics", "modern physics", "methods of mathematical physics", among others. For graduate students, he gave "advanced quantum mechanics", "advanced statistical physics", "many body theory", "frontiers in nuclear and particle physics", "quantum field theory", "thermal field theory", "general relativity and cosmology", "soliton and instanton", "quantum field theory in curved spacetime". In 1999, his textbook "quantum mechanics" was awarded the first prize in Shanghai Excellent Teaching Materials for Universities. He also played a major role in tutoring students to prepare for CUSPEA, which contributed significantly to the excellent scores won by the students from Fudan. Su was highly acclaimed for his humorous, passionate, and modern teaching methodology. He was a prominent educator who had a remarkably constructive impact on his students. Students overwhelmingly adored him for his profound insights and clear explanations and rated him as the most popular teacher at Fudan for many successive years. Over the decades, tens of thousands of students have listened to his lectures in person, while countless others have studied from his textbooks and online video recordings. In 1993, Su was awarded the second prize in the Shanghai Excellent Teaching Achievement Award for "constantly reforming teaching content and methods to cultivate potential talents". In 1999, he won the Baosteel Excellent Teacher Award. In 2001, he won the second prize in the Shanghai Excellent Education Achievement Award for "merging the frontier of science into didactic materials and profound development of the pedagogy in teaching quantum mechanics". In 2003, he won the Shanghai Teaching Master Award. As part of a unique collection of college physics materials (in collaboration with Qimin Jia and Guangjiong Ni), his textbook won the first prize in the Shanghai Excellent Education Achievement Award and the second prize in the National Teaching Achievement Award; "Soil" won the second prize of the Shanghai Teaching Achievement Award. 
In 2004, "quantum mechanics" was selected as a national premier course for its excellence. In 2016, the Ministry of Education included it as part of the first batch of "Public Courses of National Excellent Resource". The corresponding online course accumulated at least several hundred thousand views. Many viewers said that Su's energetic and charming teaching style inspired their interest, helped them understand esoteric quantum mechanics, and even helped them pass the graduate school entrance exam. Su trained eleven Ph.D. students and dozens of master's students and supervised hundreds of undergraduate students. The Ph.D. students are Rong-Gen Cai (1995), Song Gao (1995), Yi-Jun Zhang (1997), Bin Wang (1998), Ping Wang (1999), Yun Zhang (2003), Wei-Liang Qian (2003), Wei-Gang Qiu (2005), Jian-Yong Shen (2008), Chen Wu (2009), and Shaoyu Yin (2010). There were also three postdoctoral fellows: Shuqian Ying (1993-1997), Cheng-Gang Shao (2004-2006), and Songbai Chen (2006-2008). The students who benefited from Su's teaching are engaged in various fields worldwide, and many are working at the front line of scientific research and education. Su's knowledge and spirit will continue to be passed on and carried forward from generation to generation.
2303.12305
flippy: User friendly and open source framework for lipid membrane simulations
Animal cells are both encapsulated and subdivided by lipid bilayer membranes. Beyond just acting as boundaries, these membranes' shapes influence the function of cells and their compartments. Physically, membranes are two-dimensional fluids with complex elastic behavior, which makes it impossible, for all but a few simple cases, to predict membrane shapes analytically. Instead, the shape and behavior of biological membranes can be determined by simulations. However, the setup and use of such simulations require a significant programming background. The availability of open-source and user-friendly packages for simulating biological membranes needs improvement. Here, we present flippy, an open-source package for simulating lipid membrane shapes, their interaction with proteins or external particles, and the effect of external forces. Our goal is to provide a tool that is easy to use without sacrificing performance or versatility. flippy is an implementation of a dynamically triangulated membrane. We use a precise yet fast algorithm for calculating the geometric properties of membranes and can also account for local spontaneous curvature, a feature not all discretizations allow. Finally, in flippy we can also include regions of purely elastic (non-fluid) membranes and thus explore various shapes encountered in living systems.
George Dadunashvili, Timon Idema
2023-03-22T04:33:10Z
http://arxiv.org/abs/2303.12305v1
# _flippy_: User friendly and open source framework for lipid membrane simulations ###### Abstract Animal cells are both encapsulated and subdivided by lipid bilayer membranes. Beyond just acting as boundaries, these membranes' shapes influence the function of cells and their compartments. Physically, membranes are two-dimensional fluids with complex elastic behavior, which makes it impossible, for all but a few simple cases, to predict membrane shapes analytically. Instead, the shape and behavior of biological membranes can be determined by simulations. However, the setup and use of such simulations require a significant programming background. The availability of open-source and user-friendly packages for simulating biological membranes needs improvement. Here, we present _flippy_, an open-source package for simulating lipid membrane shapes, their interaction with proteins or external particles, and the effect of external forces. Our goal is to provide a tool that is easy to use without sacrificing performance or versatility. _flippy_ is an implementation of a dynamically triangulated membrane. We use a precise yet fast algorithm for calculating the geometric properties of membranes and can also account for local spontaneous curvature, a feature not all discretizations allow. Finally, in _flippy_ we can also include regions of purely elastic (non-fluid) membranes and thus explore various shapes encountered in living systems. ## Background Lipid bilayer membranes form the envelopes of animal cells and of many organelles contained in them. These biological membranes are highly flexible materials, capable of adopting many nontrivial shapes corresponding to specific cell functions and responding to environmental circumstances. Therefore, we can infer which processes occur inside a cell or organelle from the shapes of their membranes [1]. Conversely, in synthetic biology, membranes can be manipulated to mimic growth and division processes of living cells [2, 3, 4, 5]. Achieving symmetric, stable division over many generations is a significant challenge in the bottom-up assembly of living cells. A crucial part of the problem is understanding how external mechanical and chemical cues drive membrane reshaping. Predicting the shapes of membranes analytically is very difficult and therefore limited to cases of membranes with few constraints and high symmetry. Even numeric solutions to analytic equations are usually only possible in highly symmetric cases. In order to predict membrane shapes for generic problems, we need to use simulations. Full atomistic simulations are out of the question due to computational constraints when we are interested in large-scale membrane reshaping. Luckily, several types of simulations describe the membrane behavior on a large scale, such as self-assembled membranes [6], phase-field-based methods [7], and dynamically triangulated membrane Monte Carlo (DTMMC) simulations [8]. The latter method is based on minimizing membrane surface energy, which makes interpreting results easy and leaves an opportunity to connect the findings to an analytical model [9]. Thus, it is not surprising that the method of dynamically triangulated membrane simulations has found broad adoption in the field and has been used to model a diverse set of experimental systems, from membranes responding to osmotic conditions [9] and shear flow [10], to interactions of membranes with colloidal particles [11, 12, 13, 14, 15] and with proteins [16, 17, 18]. 
## Implementation While DTMMC simulations are widely used, simulation codes are rarely published. We therefore wrote the _flippy_ software as an open-source package. At the same time, DTMMC simulations are hard to write and even harder to optimize. Even with only basic functionality, a DTMMC code quickly becomes large and hard to maintain. Therefore, to make further progress in the development of DTMMC simulations, we need an open-source library with a vibrant community and developer base around it. We want our package to help biophysicists to get straight to implementing the specifics of their system without needing to reinvent the wheel and code the whole dynamically triangulated membrane from scratch. _flippy_ is designed with this objective in mind. An ideal simulation framework for membranes would not involve programming at all on the end user's side. Simulating a membrane under a specific physical constraint would be like conducting an experiment. A fully interactive framework would drastically reduce the barrier to simulations, only require understanding the experimental setup, and enable researchers to directly compare their results to simulations. However, this ideal case of an interactive framework is hard to implement in a vacuum. While keeping it in mind as an end goal, we decided to start with a more manageable task. Even though using a c++ library requires much more knowledge than just using interactive software, we keep user-friendliness and a high level of abstraction as our primary goals. The implementation of _flippy_ as a c++ library instead of a domain-specific scripting language allows users the flexibility to incorporate it into existing code bases and easily extend it, thus contributing to our goals of community-based growth. Having an implementation in a compiled language additionally allows the users to create fast simulations, increasing the range of systems that can be simulated by _flippy_ in a reasonable time. The fact that c++ does not have a centralized package manager usually makes it hard to obtain or use external libraries. In our experience, this is the most significant inconvenience related to using the c++ language. To minimize this friction as much as possible, we opted to implement a header-only library and eliminate almost all external dependencies. Our code only relies on an external JSON parser to easily save simulation data. This parser is itself licensed under the same open source license as _flippy_, which enabled us to bundle it with _flippy_ [19]. This independence from external dependencies makes using our software package as easy as it gets for c++ libraries. ### Code quality control Every large code base is prone to hidden bugs and unexpected behaviors in new use cases. To minimize errors, we implemented an extensive unit testing framework, and we are happy to report that our code base has over 95% coverage. Thus, almost every function implemented in our code base is covered by at least one test case, and we intend our library of unit tests to grow continuously. We are aware that unit tests cannot guarantee that the code is bug-free, and we intend to use the bug reporting facilities of the GitHub repository to enable our users to report bugs and help improve the package. ### Mathematical details of the implementation Since we aspire to a user-friendly framework, _flippy_ must implement commonly needed utilities that almost every DTMMC simulation will require. 
The triangulation provided by _flippy_ needs to do proper bookkeeping of several important geometric quantities during the update of the triangulation, such as the local curvature vector, local area, and local unit bending energy of each node. We also keep track of the global counterparts of these quantities, i.e., the total area and total unit bending energy of the triangulated shape. These quantities are defined on continuous shapes, requiring a mathematically rigorous discretization on a triangulated lattice. By this, we mean that the discretized quantities should converge to their continuous counterparts for finer triangulations, and the simulation should become more precise with an increasing number of triangles. Of all the above quantities, the local curvature is the most challenging to discretize and can lead to triangulation-dependent curvature energies, as demonstrated by Gompper and Kroll [20]. We use an extension of the method proposed in [20] for calculating local mean curvature. This extension was introduced by Meyer et al. [21] to calculate the local area associated with a node more precisely and to sidestep numerical problems that occur for triangulations containing obtuse triangles. Finally, we use the same expression for the node-associated volume as Gueguen et al. [9], which is fast to calculate since it only relies on already computed quantities. However, this node-associated volume does not have a physical meaning; it only sums to the correct total volume enveloped by a closed triangulation. ## Results In this section, we want to demonstrate _flippy's_ ability to abstract away the implementation details of a dynamic triangulation and Monte Carlo updating scheme. To this end, we go through the process of simulating a simple experimental system of a deflated giant unilamellar vesicle (GUV) and use _flippy_ to predict the equilibrium shape of the vesicle. In the following, we will only present the key elements of the code. The complete version of this simulation is provided on GitHub; for more details, please see the _Summary and outlook_ section. The system of a deflated GUV can be modeled by the following surface energy \[E_{\text{surf}}=\frac{\kappa}{2}\int\mathrm{d}A(2H)^{2}+K_{A}\frac{(A-A_{t})^{2} }{A_{t}}+K_{V}\frac{(V-V_{t})^{2}}{V_{t}}, \tag{1}\] where \(\kappa\) is the bending rigidity and \(H\) is the local mean curvature of the membrane. The integral \(\int\mathrm{d}A\) ranges over the surface area of the vesicle. This part of the energy describes the tendency of biological membranes to minimize their local square mean curvature [22, 23]. The Lagrange multipliers \(K_{A}\) and \(K_{V}\) fix the area \(A\) and volume \(V\) to their target values \(A_{t}=A_{0}=4\pi R_{0}^{2}\) and \(V_{t}=0.6V_{0}=0.6\frac{4\pi}{3}R_{0}^{3}\), where \(R_{0}\) is the radius of the initial (pre-deflation) spherical GUV. Since the deflation of the vesicle does not change its area, we keep it fixed to the initial value. However, the target value of the volume is fixed to 60% of the initial volume to account for deflation. We picked 60% of the initial volume because, for this value, we expect the equilibrium configuration to be a biconcave shape, providing an easy visual way to judge the success of the simulation. However, the prediction of a biconcave shape is only precise for zero temperature, i.e., for \(k_{B}T=0\). For simplicity, the following example describes a simulation performed at \(k_{B}T=1\). Thus, the resulting shapes will be noisy and not perfectly biconcave. 
All the complexity of creating and maintaining a dynamic triangulation is hidden in three conceptual steps: define the energy, initiate a triangulation, and initiate an updater that will use the energy to update the triangulation. We can start with the definition of the energy function that implements eq. (1). Since the MonteCarloUpdater will use this energy, its signature needs to follow a specific convention that the updater will recognize,
```
double surface_energy(fp::Node<double, unsigned int> const& node,
                      fp::Triangulation<double, unsigned int> const&,
                      EnergyParameters const&)
```
where the first argument needs to be a _flippy_ Node type, representing the node that is being updated, and the second argument needs to be a Triangulation type representing the triangulation that is being updated. The third argument can be any type and is intended to be a user-defined data struct containing all the parameters of the energy function. The actual function body is then a straightforward implementation of eq. (1):
```
double surface_energy([[maybe_unused]] fp::Node<double, unsigned int> const& node,
                      fp::Triangulation<double, unsigned int> const& trg,
                      EnergyParameters const& prms){
    double V = trg.global_geometry().volume;
    double A = trg.global_geometry().area;
    double dV = V - prms.V_t; // deviation from the target volume
    double dA = A - prms.A_t; // deviation from the target area
    double energy = prms.K_V*dV*dV/prms.V_t + prms.K_A*dA*dA/prms.A_t;
    return energy;
}
```
Here the first variable in the function signature is designated [[maybe_unused]] since in this particular implementation of the energy, we are not interested in the local properties of any given node and thus do not use this variable. The second step in the implementation of the model is to declare a triangulation:
```
fp::Triangulation<double, unsigned int> tr(n_triang, R_0, r_Verlet);
```
where the template parameters double and unsigned int specify which internal representation of floating point and integer numbers the Triangulation class is supposed to use. The first argument of the instantiation, n_triang, specifies the level of triangulation, which sets the fineness of the mesh. The second argument, R_0, sets the initial radius of the triangulated sphere, and the last argument, r_Verlet, relates to the implementation of membrane self-intersection avoidance. _flippy_ implements a Verlet list to efficiently check spatial closeness of the nodes [24]. The third step is to declare a Monte Carlo updater that will use the energy function to update the triangulation according to a Metropolis algorithm [25]:
```
fp::MonteCarloUpdater<double, unsigned int, EnergyParameters,
                      std::mt19937, fp::SPHERICAL_TRIANGULATION>
    mc_updater(tr, prms, surface_energy, rng, l_min, l_max);
```
The signature of this class instantiation is quite large since the updater needs to know the energy function, all necessary update parameters, and the triangulation. The first two template parameters specify the internal representation of numbers, just like in the case of the Triangulation class. These parameters must be the same in both cases. EnergyParameters specifies the user-defined struct type name that contains the parameters used inside the energy function. std::mt19937 specifies the type of the random number generator that we will provide to the updater for generating random numbers for the Metropolis algorithm. The last parameter specifies the triangulation type (currently, spherical and planar triangulations are possible). The instance of the updater itself has six arguments. 
The first four provide the updater with references to the already declared instances of the triangulation class, the energy parameters struct, the energy function, and a random number generator. The last two arguments specify the minimum and maximum allowed distances between the triangulation nodes. All that is left to do is to create an update loop that specifies in what order and how often we want to update the triangulation:
```
for(unsigned int mc_step = 0; mc_step < max_mc_steps; ++mc_step){
    for(unsigned int node_id : shuffled_ids){
        displ = {displ_distr(rng), displ_distr(rng), displ_distr(rng)};
        mc_updater.move_MC_updater(tr[node_id], displ);
    }
    std::shuffle(shuffled_ids.begin(), shuffled_ids.end(), rng);
    for(unsigned int node_id : shuffled_ids){
        mc_updater.flip_MC_updater(tr[node_id]);
    }
}
```
where in every update step, we loop over each node and use the methods of the MonteCarloUpdater class to move nodes and flip bonds. Between the loops where we _move_ and _flip_ the nodes, we shuffle the shuffled_ids vector, which was defined before the loop and contains the ids of the nodes. The shuffling step ensures that we iterate randomly through the nodes at each Monte Carlo step and do not introduce unwanted correlations between node updates. This loop represents the logic of the experiment that we want to model. In this case, the experiment is simple: we are equilibrating a vesicle at a constant temperature (starting from the slightly unphysical initial condition of a mismatching volume). This loop will become more complex as the needs of the simulation grow. However, this corresponds to the true complexity that arises from the system itself and not from the implementation. Some higher stages of complexity might require a more sophisticated updater. The MonteCarloUpdater class is provided by _flippy_ because the Metropolis updating scheme is a popular one. However, the Triangulation class itself is completely agnostic towards the updating scheme that is used on it. Users are free to implement another updating scheme if the Metropolis algorithm is not suited to their problem, and they will still be able to use the triangulation provided by _flippy_. This also enables us to easily extend _flippy_ with new updaters. Finally, to make saving the state of the simulation easy, _flippy_'s Triangulation class has a method that saves the representation of the data as a _JSON_ object, which is a text-based, human-readable data format [26]. _flippy_ comes bundled with an open-source _JSON_ parser [19]. A single statement is sufficient to create _JSON_ data of the current state of the triangulation
```
fp::Json data = tr.make_egg_data();
```
and a utility function in _flippy_ allows the saving of this data to a text file as follows:
```
fp::json_dump("test_run_final", data);
```
The name of the make_egg_data method refers to the fact that this _JSON_ data contains the necessary information to reinitialize the triangulation (like an egg contains all the nutrients for the chicken that will hatch from it), thus allowing the user to continue a simulation from a save file. If we use this simple code [27] (which is comfortably below 100 lines, including all imports, variable definitions, and comments), we will obtain (in a few minutes) the expected biconcave shape (see fig. 1 B and C). 
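For completeness: the EnergyParameters type used throughout the snippets above is a user-defined struct. A minimal sketch consistent with the code shown follows; only the members actually referenced in the snippets (K_V, V_t, K_A, A_t) are certain, while the kappa member is our assumption, needed only if the bending term of eq. (1) is added to the energy function.
```
// Sketch of a user-defined parameter struct matching the snippets above.
struct EnergyParameters{
    double kappa; // bending rigidity (assumed member; unused in the snippet above)
    double K_V;   // volume-constraint strength (Lagrange multiplier)
    double V_t;   // target volume, here 0.6 * V_0
    double K_A;   // area-constraint strength (Lagrange multiplier)
    double A_t;   // target area, here A_0 = 4 * pi * R_0^2
};
```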
This example clearly shows that _flippy_ is capable of simulating a simple physical system in a few lines of code: all unnecessary complexity is abstracted away in the library, and the complexity that remains in the user-written code encodes the specific characteristics of the simulated system. Importantly, this abstraction and simplicity do not come at the cost of an unreasonable runtime of the simulation. ## Summary and Outlook The source code of _flippy_ is available on GitHub [28], together with full documentation and further demonstrations. The code used for the simulation in the _Results_ section is also part of _flippy's_ GitHub repository, and the most up-to-date version of it can be found in the demo/biconcave_shapes_MC folder of the repository [28]. The version of the code that was most up to date at the time of writing this paper, and was used to generate the code snippets in the _Results_
2303.05679
Clustering with minimum spanning trees: How good can it be?
Minimum spanning trees (MSTs) provide a convenient representation of datasets in numerous pattern recognition activities. Moreover, they are relatively fast to compute. In this paper, we quantify the extent to which they are meaningful in low-dimensional partitional data clustering tasks. By identifying the upper bounds for the agreement between the best (oracle) algorithm and the expert labels from a large battery of benchmark data, we discover that MST methods can be very competitive. Next, we review, study, extend, and generalise a few existing, state-of-the-art MST-based partitioning schemes. This leads to some new noteworthy approaches. Overall, the Genie and the information-theoretic methods often outperform the non-MST algorithms such as K-means, Gaussian mixtures, spectral clustering, Birch, density-based, and classical hierarchical agglomerative procedures. Nevertheless, we identify that there is still some room for improvement, and thus the development of novel algorithms is encouraged.
Marek Gagolewski, Anna Cena, Maciej Bartoszuk, Łukasz Brzozowski
2023-03-10T03:18:03Z
http://arxiv.org/abs/2303.05679v3
# Clustering with minimum spanning trees: How good can it be? ###### Abstract Minimum spanning trees (MSTs) provide a convenient representation of datasets in numerous pattern recognition activities. Moreover, they are relatively fast to compute. In this paper, we quantify the extent to which they can be meaningful in data clustering tasks. By identifying the upper bounds for the agreement between the best (oracle) algorithm and the expert labels from a large battery of benchmark data, we discover that MST methods can overall be very competitive. Next, instead of proposing yet another algorithm that performs well on a limited set of examples, we review, study, extend, and generalise existing, state-of-the-art MST-based partitioning schemes, which leads to a few new and interesting approaches. It turns out that the Genie method and the information-theoretic approaches often outperform the non-MST algorithms such as k-means, Gaussian mixtures, spectral clustering, BIRCH, and classical hierarchical agglomerative procedures. **Keywords:** hierarchical clustering, minimum spanning tree, MST, cluster validity measure, single linkage, Genie algorithm ## 1 Introduction Clustering (segmentation) aims to find some _meaningful_ partitions of a given dataset in a purely unsupervised manner. Such partitions are useful in many practical applications; see, e.g., (Guo, Yang, Li, Xiong, & Ma, 2023; Hwang et al., 2023; Zhao et al., 2023; Zhou et al., 2023). Up to this date, many clustering approaches have been proposed (see, e.g., (Wierzchon & Klopotek, 2018) for an overview), together with methods to assess their usefulness: internal (Arbelaitz, Gurrutxaga, Muguerza, Perez, & Perona, 2013; Gagolewski, Bartoszuk, & Cena, 2021; Halkidi, Batistakis, & Vazirgiannis, 2001; Jaskowiak, Costa, & Campello, 2022; Maulik & Bandyopadhyay, 2002; Milligan & Cooper, 1985; Q. Xu, Zhang, Liu, & Luo, 2020) and external cluster validity measures (Gagolewski, 2022a; Horta & Campello, 2015; Rezaei & Franti, 2016; Wagner & Wagner, 2006) on various kinds of benchmark data (Dua & Graff, 2021; Franti & Sieranoja, 2018; Gagolewski, 2022b; Graves & Pedrycz, 2010; Thrun & Ultsch, 2020). Given a dataset \(\mathbf{X}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) with \(n\) points in \(\mathbb{R}^{d}\), the space of all its possible \(k\)-partitions, \(\mathcal{X}_{k}\), is very large. Namely, the number of possible divisions of \(\mathbf{X}\) into \(k\geq 2\) nonempty, mutually disjoint clusters is equal to the Stirling number of the second kind, \(\left\{\begin{smallmatrix}n\\ k\end{smallmatrix}\right\}=O(k^{n})\). Thus, in practice, clustering algorithms tend to construct a simpler representation of the search space to make their job easier. For instance, in the well-known \(k\)-means algorithm (Lloyd, 1957 (1982)), \(k\) (continuous) cluster centroids are sought, and a point's belongingness to a subset is determined by means of the proximity thereto. In hierarchical agglomerative algorithms, we start with \(n\) singletons, and then keep merging pairs of clusters (based on different criteria, e.g., average or complete linkage; see (Mullner, 2011)) until we obtain \(k\) of them. In divisive schemes, on the other hand, we start with one cluster consisting of all the points, and then we try to split it into smaller and smaller chunks iteratively. From this perspective, different spanning trees of a given dataset offer a very attractive representation. 
In particular, the minimum spanning tree¹ (MST; the shortest dendrite) with respect to the Euclidean metric² minimises the sum of pairwise distances. Footnote 1: We will assume in this paper that an MST is always unique. This can be assured by adding, e.g., a tiny amount of noise to the points’ coordinates. Footnote 2: How to perform the appropriate feature engineering is an independent problem (e.g., selection of relevant features, normalisation of columns, noise point removal, etc.), which we are not concerned with in our paper for simplicity of presentation. More formally, given an undirected weighted graph representing our dataset, \(G=(V,E,W)\) with \(V=\{1,\ldots,n\}\), \(E=\{\{u,v\}:u<v\}\), and \(W(\{u,v\})=\|\mathbf{x}_{u}-\mathbf{x}_{v}\|\), the minimum spanning tree \(T=\mathrm{MST}(G)=(V,E^{\prime},W^{\prime})\), \(E^{\prime}\subset E\), \(W^{\prime}=W|_{E^{\prime}}\), is a connected tree spanning \(V\) with \(E^{\prime}\) minimising \(\sum_{\{u,v\}\in E^{\prime}}W(\{u,v\})\). Any spanning tree representing a dataset with \(n\) points has \(n-1\) edges. If we remove \(k-1\) of them, we obtain \(k\) connected components which can be interpreted as clusters; compare Figure 1. This reduces the search space to \(\binom{n-1}{k-1}=O(n^{k-1})\). While still large, some heuristics (e.g., greedy approaches) allow for further simplifications. MSTs are fast to compute: in \(O(n^{2})\) time for general metrics; see the classic algorithms by Boruvka (1926), Jarnik (1930) (more widely known as the method by Prim (1957); see (Olson, 1995) for its parallelised version), and Kruskal (1956); see (Gower & Ross, 1969; Graham & Hell, 1985; Zhong, Malinen, Miao, & Franti, 2015) for some historical notes. In small-dimensional Euclidean spaces, further speed-ups are possible (e.g., (March, Ram, & Gray, 2010): \(\Omega(n\log n)\) for \(d=2\)). Approximate MSTs can be computed as well (e.g., (Naidan, Boytsov, Malkov, & Novak, 2019; Zhong et al., 2015)). Applications of MST-based algorithms are plentiful (e.g., gene expression analysis discussed in (Y. Xu, Olman, & Xu, 2002), pattern recognition in images (Yin & Liu, 2009), etc.). Overall, in our case, they allow for detecting well-separated clusters of arbitrary shapes (e.g., spirals, connected line segments, blobs; see Figure 2). The clusters do not necessarily have to be convex like in the \(k\)-means algorithm (via its connection to Voronoi diagrams). This paper aims to review, unify, and extend a large number of existing approaches to clustering based on MSTs (that yield a given-in-advance number of clusters, \(k\)) and determine which of them works best on an extensive battery of benchmark data. Furthermore, we quantify how well the particular MST-based methods perform in general: are they comparable with state-of-the-art clustering procedures? This paper is set out as follows. Section 2 reviews existing MST-based methods and introduces some noteworthy generalisations thereof, in particular: divisive and agglomerative schemes optimising different cluster validity measures (with or without additional constraints). Figure 1: Removing three edges from a spanning tree gives four connected components, which we can treat as separate clusters. In Section 3, we answer the question of whether MSTs can provide us with a meaningful representation of the benchmark datasets studied for the purpose of data clustering tasks. Then, we pinpoint the best-performing algorithms and compare them with non-MST-based approaches. Section 4 concludes the paper and suggests some topics for further research. 
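To make the above cost model concrete, here is a minimal sketch (ours, for illustration only; it is not taken from any of the implementations cited in this paper) of the classic \(O(n^{2})\) Jarnik-Prim construction of the Euclidean MST:
```
// O(n^2) Jarnik-Prim MST for points in R^d under the Euclidean metric.
// Returns the n-1 tree edges as (u, v, weight) tuples.
#include <cmath>
#include <cstddef>
#include <limits>
#include <tuple>
#include <vector>

using Point = std::vector<double>;

double dist(const Point& a, const Point& b) {
    double s = 0.0;
    for (std::size_t j = 0; j < a.size(); ++j) s += (a[j] - b[j]) * (a[j] - b[j]);
    return std::sqrt(s);
}

std::vector<std::tuple<std::size_t, std::size_t, double>>
euclidean_mst(const std::vector<Point>& x) {
    const std::size_t n = x.size();
    std::vector<bool> in_tree(n, false);
    std::vector<double> best_w(n, std::numeric_limits<double>::infinity());
    std::vector<std::size_t> best_u(n, 0);
    std::vector<std::tuple<std::size_t, std::size_t, double>> edges;
    std::size_t cur = 0;           // grow the tree starting from point 0
    in_tree[0] = true;
    for (std::size_t step = 1; step < n; ++step) {
        // relax candidate edges from the vertex added most recently
        for (std::size_t v = 0; v < n; ++v) {
            if (in_tree[v]) continue;
            double w = dist(x[cur], x[v]);
            if (w < best_w[v]) { best_w[v] = w; best_u[v] = cur; }
        }
        // attach the closest outside vertex to the tree
        std::size_t nxt = 0;
        double wmin = std::numeric_limits<double>::infinity();
        for (std::size_t v = 0; v < n; ++v)
            if (!in_tree[v] && best_w[v] < wmin) { wmin = best_w[v]; nxt = v; }
        edges.emplace_back(best_u[nxt], nxt, wmin);
        in_tree[nxt] = true;
        cur = nxt;
    }
    return edges;
}
```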
Figure 2: Example benchmark datasets (see Tables 2 and 3 and (Chang and Yeung, 2008; Fränti and Sieranoja, 2018; Gagolewski, 2022b; Zahn, 1971)). Minimum spanning trees often lead to a meaningful representation of well-separable clusters of arbitrary shapes. \begin{table} \begin{tabular}{l l} \hline \hline & method \\ \hline 1 & Genie\_G0.1 (Gagolewski, 2021; Gagolewski, Bartoszuk, \& Cena, 2016) \\ 2 & Genie\_G0.3 \\ 3 & Genie\_G0.5 \\ 4 & Genie\_G0.7 \\ 5 & Genie+Ic (\(k+0\)) (information criterion – agglomerative from a partial partition) \\ & (Cena, 2018; Gagolewski, 2021) \\ 6 & Genie+Ic (\(k+5\)) \\ 7 & Genie+Ic (\(k+10\)) \\ 8 & IcA (Gagolewski, 2021) (information criterion – agglomerative strategy) \\ 9 & ITM (information criterion – divisive strategy) (Müller, Nowozin, \& Lampert, 2012) \\ 10 & Single \\ 11 & HEMST (Grygorash, Zhou, \& Jorgensen, 2006) \\ 12 & CTCEHC (Ma, Lin, Wang, Huang, \& He, 2021) \\ 13 & MST/D\_BallHall (Ball \& Hall, 1965) (optimising the cluster validity index – a divisive strategy over MSTs) \\ 14 & MST/D\_CalinskiHarabasz (Calinski \& Harabasz, 1974) \\ 15 & MST/D\_DaviesBouldin (Davies \& Bouldin, 1979) \\ 33 & MST/D\_WCNN\_25 (Gagolewski et al., 2021) \\ 34 & MST/D\_DuNN\_25\_Min\_Max (Gagolewski et al., 2021) \\ 35 & MST/D\_DuNN\_25\_Mean\_Mean \\ 36 & MST/D\_DuNN\_25\_Max\_Min \\ \hline 37\({}^{*}\) & Average \\ 38\({}^{*}\) & Complete \\ 39\({}^{*}\) & Ward \\ 40\({}^{*}\) & GaussMix \\ 41\({}^{*}\) & KMeans \\ 42\({}^{*}\) & Birch (T=0.01, BF=50) (Zhang, Ramakrishnan, \& Livny, 1996) \\ 43\({}^{*}\) & Spectral (RBF, G=1) \\ \hline 44–95\({}^{*}\) & Minima of 52 different cluster validity measures (Gagolewski et al., 2021) \\ 96–97\({}^{*}\) & Other hierarchical methods (centroid, median, weighted/McQuitty linkage) \\ 98–125\({}^{*}\) & Birch with 23 other parameter settings \\ 126–140\({}^{*}\) & Spectral with 19 other parameter settings \\ \hline \hline \end{tabular} \end{table} Table 1: Clustering methods studied (\({}^{*}\) denotes an algorithm not based on MSTs) ## 2 Methods Table 1 lists all the methods we consider in this study. Let us describe them in detail. ### Divisive algorithms Perhaps the most widely known MST-based method is the classic single linkage scheme (Wroclaw Taxonomy, dendrite method, nearest neighbour clustering). It was proposed by the Polish mathematicians Florek, Lukasiewicz, Perkal, Steinhaus, and Zubrzycki in (1951). That the single linkage clustering can be computed using the following divisive scheme over MSTs was noted in (Gower & Ross, 1969). **Algorithm 1** (Single Linkage - Divisively). To obtain the single linkage \(k\)-partition of a given dataset \(\mathbf{X}\) represented by a complete graph \(G\) whose weights correspond to pairwise distances between all point pairs, proceed as follows: 1. Let \(T=\text{MST}(G)=(V,E^{\prime},W^{\prime})\); 2. Let \(\{\{1,\ldots,n\}\}\) be an initial 1-partition consisting of the single cluster comprising all the points; 3. For \(i=1,\ldots,k-1\): split the cluster containing the \(u\)-th and the \(v\)-th point (so that they no longer belong to the same connected component), where \(\{u,v\}\in E^{\prime}\) is the edge of the MST with the \(i\)-th greatest weight; 4. Return the current \(k\)-partition as a result. In other words, we remove the \(k-1\) edges of the greatest lengths³ from \(E^{\prime}\) and study the resulting connected components. Footnote 3: Which usually leads to outliers being classified as singleton clusters. 
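A compact way to see Algorithm 1 in code (our sketch, not the authors' implementation): given a precomputed MST, drop the \(k-1\) heaviest edges and label the resulting components, e.g., with a union-find structure.
```
// Single-linkage k-partition from a precomputed MST (Algorithm 1 above):
// keep the n-k lightest edges, so the k-1 heaviest ones stay removed,
// and label the resulting connected components.
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

struct Edge { std::size_t u, v; double w; };

struct UnionFind {
    std::vector<std::size_t> parent;
    explicit UnionFind(std::size_t n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    std::size_t find(std::size_t x) {
        while (parent[x] != x) x = parent[x] = parent[parent[x]]; // path halving
        return x;
    }
    void unite(std::size_t a, std::size_t b) { parent[find(a)] = find(b); }
};

// mst_edges: the n-1 edges of the MST; k: requested number of clusters.
// Returns a cluster label in {0, ..., k-1} for each of the n points.
std::vector<std::size_t> single_linkage(std::vector<Edge> mst_edges,
                                        std::size_t n, std::size_t k) {
    std::sort(mst_edges.begin(), mst_edges.end(),
              [](const Edge& a, const Edge& b) { return a.w < b.w; });
    UnionFind uf(n);
    for (std::size_t i = 0; i + k < n; ++i)      // consume the n-k lightest edges
        uf.unite(mst_edges[i].u, mst_edges[i].v);
    std::vector<std::size_t> label(n, n);        // n acts as an "unassigned" sentinel
    std::size_t next = 0;
    for (std::size_t i = 0; i < n; ++i) {
        std::size_t r = uf.find(i);
        if (label[r] == n) label[r] = next++;
        label[i] = label[r];
    }
    return label;
}
```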
Another divisive algorithm over MSTs was studied by Calinski and Harabasz in (1974). They minimised the total within-cluster sum of squares (the same objective as in the \(k\)-means algorithm); they provided it as an alternative to the agglomerative (but non-MST) Ward (1963) algorithm and to the one by Edwards and Cavalli-Sforza (1965), who employed an _exhaustive_ divisive procedure. More generally, let \(F:\mathcal{X}_{l}\to\mathbb{R}\) be some objective function that we would like to maximise over the set of possible partitionings of any cardinality \(l\) (not just \(k\), which we treat as fixed). We will refer to it as a _cluster validity measure_. Moreover, let \(C(V,E^{\prime\prime})=(X_{1},\ldots,X_{l})\in\mathcal{X}_{l}\) be a partition corresponding to the connected components (with no loss in generality, assuming that there are \(l\) of them) in a subgraph \((V,E^{\prime\prime})\) of \((V,E)\). **Algorithm 2** (Maximising \(F\) over an MST - Divisively). A general divisive scheme over an MST is a greedy optimisation algorithm that goes as follows: 1. Let \(T=\text{MST}(G)=(V,E^{\prime},W^{\prime})\); 2. Let \(E^{\prime\prime}=E^{\prime}\); 3. For \(i=1,\ldots,k-1\): (a) find \(\{u,v\}\in E^{\prime\prime}\) which is a solution to: \[\max_{\{u,v\}}F(C(V,E^{\prime\prime}\setminus\{\{u,v\}\}))\] (b) remove \(\{u,v\}\) from \(E^{\prime\prime}\); 4. Return \(C(V,E^{\prime\prime})\) as a result. Overall, a divisive scheme is slightly more time-intensive than the agglomerative approach which we mention below (although the partition refinement data structure can be used). However, it is still significantly more feasible than in the case where the dataset is represented by a more complicated graph (nearest neighbours, complete, etc.). Thus, in the case of the single linkage scheme, the objective is such that we simply maximise the sum of weights of the omitted MST edges, and in the setting of the Calinski and Harabasz (1974) paper, we maximise (note the minus): \(-\text{WCSS}(X_{1},\ldots,X_{l})=-\sum_{i=1}^{l}\sum_{\boldsymbol{x}_{j}\in X _{i}}\|\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\|^{2}\), where \(\boldsymbol{\mu}_{i}\) is the centroid (componentwise arithmetic mean) of the \(i\)-th cluster. Naturally, other objective functions can be studied. For instance, Muller, Nowozin, and Lampert in (2012) considered the information-theoretic criterion based on entropy which takes into account cluster sizes and average within-cluster MST edge weights: \(\text{IC}(X_{1},\ldots,X_{l})=-d\sum_{i=1}^{l}\frac{n_{i}}{n}\log\frac{L_{i}} {n_{i}}-\sum_{i=1}^{l}\frac{n_{i}}{n}\log\frac{n_{i}}{n}\), where \(L_{i}\) denotes the sum of the weights of the edges in the subtree of the MST representing the \(i\)-th cluster and \(n_{i}\) denotes its size. Interestingly, this estimator can be derived from the Renyi entropy estimated on various graph representations of data, including MSTs; see, e.g., (Eggels and Crommelin, 2019; Hero III and Michel, 1998; Pal, Poczos, and Szepesvari, 2010). This leads to an algorithm called ITM⁴. Footnote 4: Python implementation available at [https://github.com/amueller/information-theoretic-mst](https://github.com/amueller/information-theoretic-mst). Other popular internal cluster validity indices can be optimised as well. Here, we shall consider the most notable measures that we reviewed in our previous paper (Gagolewski et al., 2021) and implemented in (Gagolewski, 2021) (leading to the clustering methods denoted with MST/D\_ in Table 1): the indices by Ball-Hall (1965), Calinski-Harabasz (1974, Eq. 
(3)) (equivalent to minimising the above WCSS), Davies-Bouldin (1979, Def. 5), Silhouette, SilhouetteW (Rousseeuw, 1987), generalisations of the Dunn index (1974) proposed in (Bezdek and Pal, 1998) (GDunn_dX_dY) and (Gagolewski et al., 2021) (DuNN_M_X_Y), and the nearest-neighbour count (Gagolewski et al., 2021) (WCNN_M). As a byproduct, we will be able to assess the meaningfulness of the cluster validity measures, just like in (Gagolewski et al., 2021), where we did so in the space of _all_ possible clusterings (leading to the conclusion that many measures are actually _invalid_). On a side note, as the size of the space of all possible \(k\)-partitions of MSTs is \(O(n^{k-1})\), for small \(k\), it is technically possible to find the true maximum of \(F\) (note that for \(k=2\) the divisive strategy gives exactly the global maximum). We leave this topic for further research. ### Agglomerative algorithms Single linkage was rediscovered by Sneath in (1957), who introduced it as a general agglomerative scheme. Its resemblance to the Kruskal MST algorithm (and hence the fact that an MST is sufficient to compute it) was noted in, amongst others, (Gower & Ross, 1969). Thus, we can formulate it also as follows. **Algorithm 3** (Single Linkage - Agglomeratively). To obtain the single linkage \(k\)-clustering: 1. Let \(T=\mathrm{MST}(G)=(V,E^{\prime},W^{\prime})\); 2. Let \(\{\{1\},\ldots,\{n\}\}\) be an initial \(n\)-partition consisting of \(n\) singletons; 3. For \(i=1,\ldots,n-k\): merge the two clusters containing the \(u\)-th and the \(v\)-th point, where \(\{u,v\}\in E^{\prime}\) is the edge of the MST with the \(i\)-th smallest weight; 4. Return the current \(k\)-partition as a result. For a given MST with edges sorted increasingly, the disjoint sets (union-find) data structure can be used to implement the above so that the total run-time is only \(O(n-k)\). Given a cluster validity measure \(F\), the above agglomerative approach can be generalised as below. **Algorithm 4** (Maximising \(F\) over an MST - Agglomeratively). A general agglomerative scheme over an MST is a greedy optimisation algorithm that consists of the following steps: 1. Let \(T=\mathrm{MST}(G)=(V,E^{\prime},W^{\prime})\); 2. Let \(E^{\prime\prime}=\emptyset\); 3. For \(i=1,\ldots,n-k\): (a) find \(\{u,v\}\in E^{\prime}\setminus E^{\prime\prime}\) which is a solution to: \[\max_{\{u,v\}}F(C(V,E^{\prime\prime}\cup\{\{u,v\}\}))\] (b) add \(\{u,v\}\) to \(E^{\prime\prime}\); 4. Return \(C(V,E^{\prime\prime})\) as a result. In the single linkage case, \(F\) is simply the sum of the MST edges left unconsumed (or minus the weight of the edge to be omitted). Unfortunately, many cluster validity measures are not only inherently slow to compute, but they also might not be well-defined for singleton clusters (and this is the starting point of the agglomerative algorithm). Due to the already large number of procedures in our study, we will consider the agglomerative maximisation of only the aforementioned information criterion, leading to the algorithm which we denote as IcA in Table 1. Its implementation is available in (Gagolewski, 2021). 
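For reference, the information criterion that IcA and ITM greedily optimise (as defined in Section 2.1) is cheap to evaluate from per-cluster aggregates. A minimal sketch of ours (not the _genieclust_ implementation):
```
// Information criterion IC(X_1, ..., X_l) from Mueller et al. (2012), as
// given above: n_i are cluster sizes, L_i the sums of within-cluster MST
// edge weights, n the total point count, and d the data dimensionality.
#include <cmath>
#include <cstddef>
#include <vector>

double information_criterion(const std::vector<double>& n_i,
                             const std::vector<double>& L_i,
                             double n, double d) {
    double ic = 0.0;
    for (std::size_t i = 0; i < n_i.size(); ++i) {
        ic += -d * (n_i[i] / n) * std::log(L_i[i] / n_i[i])  // edge-density term
              - (n_i[i] / n) * std::log(n_i[i] / n);         // entropy of cluster sizes
    }
    return ic;
}
```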
### Variations on the agglomerative scheme Genie (Gagolewski, Bartoszuk, & Cena, 2016) is an example of a variation on the agglomerative single linkage theme, where we greedily optimise the total edge lengths, but under the _constraint_ that if the Gini index of the cluster sizes⁵ grows above a given threshold \(g\), only the smallest clusters can take part in the merging. Footnote 5: Let \((c_{1},\ldots,c_{l})\) be a sequence such that \(c_{i}\) denotes the cardinality of the \(i\)-th cluster in a given \(l\)-partition. The Gini index is given by \(\mathrm{G}(c_{1},\ldots,c_{l})=\frac{\sum_{i=1}^{l}(l-2i+1)c_{(i)}}{(l-1)\sum_{i=1}^{l}c_{i}}\in[0,1]\), where \(c_{(i)}\) denotes the \(i\)-th greatest value. It is a measure of inequality of the cluster sizes. **Algorithm 5** (Genie). Given \(g\in(0,1]\): 1. Let \(T=\mathrm{MST}(G)=(V,E^{\prime},W^{\prime})\); 2. Let \(E^{\prime\prime}=\emptyset\); 3. For \(i=1,\ldots,n-k\): (a) if the Gini index of the sizes of the clusters in \(C(V,E^{\prime\prime})\) is below \(g\), pick \(\{u,v\}\in E^{\prime}\setminus E^{\prime\prime}\) as the edge with the smallest weight (equivalently, such that the sum of weights of edges in \(E^{\prime}\setminus(E^{\prime\prime}\cup\{\{u,v\}\})\) is the largest); (b) otherwise, pick \(\{u,v\}\in E^{\prime}\setminus E^{\prime\prime}\) as the edge with the smallest weight _provided that_ the size of the connected component containing \(u\) (or \(v\)) is the smallest of them all; (c) add \(\{u,v\}\) to \(E^{\prime\prime}\); 4. Return \(C(V,E^{\prime\prime})\) as a result. Here, we will rely on the implementation of Genie included in the _genieclust_ package for Python (Gagolewski, 2021). Given a precomputed MST, the procedure runs in \(O(n\sqrt{n})\) time. The algorithm depends on the threshold parameter \(g\). In this study, we will only compare the results obtained for \(g\in\{0.1,0.3,0.5,0.7\}\) (for a comprehensive sensitivity analysis of Genie's parameters, see (Gagolewski, Cena, & Bartoszuk, 2016)). In (Gagolewski, Bartoszuk, & Cena, 2016), the use of \(g=0.3\) is recommended. Cena in (2018) noted that Genie gives very good results, but sometimes other thresholds might work better than the default one. She thus proposed an agglomerative scheme optimising the information criterion, which does not start from a set of \(n\) singletons, but from the intersection of the clusters obtained by multiple runs of Genie. We have implemented an extended version of this algorithm in the _genieclust_ (Gagolewski, 2021) package. Namely, what we denote with _Genie+Ic (\(k+l\))_ in Table 1 is a variation of Algorithm 4 that starts at \(E^{\prime\prime}=E^{\prime}\setminus(E_{0.1}\cup E_{0.3}\cup E_{0.5}\cup E_{0.7})\), where \(E_{g}\) is the final \(E^{\prime\prime}\) from the run of Algorithm 5 seeking \(k+l\) clusters using a given threshold \(g\) (i.e., an intersection of possibly more fine-grained clusterings returned by the Genie algorithm with different parameters). We shall only consider \(l\in\{0,5,10\}\), as we observed that other choices of \(g\) and \(l\) led to similar results. 
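For illustration, the Gini index of cluster sizes that drives Genie's merge constraint (see the footnote above) can be computed directly from its definition; a minimal sketch of ours, assuming at least two clusters:
```
// Gini index of a sequence of cluster sizes, G(c_1, ..., c_l), with the
// sizes sorted non-increasingly, as defined in the footnote above.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

double gini_index(std::vector<double> c) {  // assumes c.size() >= 2
    std::sort(c.begin(), c.end(), std::greater<>());
    const double l = static_cast<double>(c.size());
    double num = 0.0;
    for (std::size_t i = 0; i < c.size(); ++i)
        num += (l - 2.0 * (i + 1) + 1.0) * c[i];  // (l - 2i + 1) * c_(i), 1-based i
    const double denom = (l - 1.0) * std::accumulate(c.begin(), c.end(), 0.0);
    return num / denom;  // 0 for perfectly balanced sizes, close to 1 otherwise
}
```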
### Other methods Other MST-based methods that we consider in this study⁶ include: Footnote 6: Their Python implementation is available at [https://github.com/lukaszbrzozowski/msts](https://github.com/lukaszbrzozowski/msts). * HEMST (Grygorash, Zhou, & Jorgensen, 2006), which deletes edges from the MST to achieve the best possible reduction of the standard deviation of the edge weights; * CTCEHC (Ma, Lin, Wang, Huang, & He, 2021), which constructs a preliminary partition based on the vertex degrees and then merges clusters based on the geodesic distances between the cluster centroids. There are a few other MST-based methods in the literature, but usually they do not result in a given-in-advance number of clusters, \(k\) (which we require for benchmarking purposes, as described in the next section). For instance, Zahn in (Zahn, 1971) constructs an MST and deletes “inconsistent” edges (with weights significantly (\(\pm c\sigma\)) larger than the average weight of the nearby edges), but the number thereof cannot be easily controlled. We also do not include the methods whose search space is not solely based on the information from MSTs (e.g., (Gonzalez-Barrios and Quiroz, 2003; Karypis et al., 1999; Mishra and Mohanty, 2019; Zhong et al., 2011; Zhong et al., 2010)), methods which construct the MST based on transformed distances (Campello et al., 2015; Chaudhuri and Dasgupta, 2010), or methods which use an MST for very different purposes, such as auxiliary density estimation (e.g., (Peter, 2013)) or refinement thereof (e.g., (Wang et al., 2009)). We also do not include a few of the methods which we found so badly described that we could not implement them ourselves. \begin{table} \begin{tabular}{|c c c c c c|} \hline \hline & battery & dataset & \(n\) & \(d\) & \(k\)s & \\ \hline 1 & FCPS & atom & 800 & 3 & 2 & \\ 2 & & chainlink & 1000 & 3 & 2 & \\ 3 & & engytime & 4096 & 2 & \(2^{\times}2\) & \\ 4 & & hepta & 212 & 3 & 7 & \\ 5 & & lsun & 400 & 2 & 3 & \\ 6 & & target & 770 & 2 & 2, 6 & \\ 7 & & tetra & 400 & 3 & 4 & \\ 8 & & twodiamonds & 800 & 2 & 2 & \\ 9 & & wingnut & 1016 & 2 & 2 & \\ 10 & Graves & dense & 200 & 2 & 2 & \\ 11 & & fuzzyx & 1000 & 2 & \(2^{\times}3\), 4, 5 & \\ 12 & & line & 250 & 2 & 2 & \\ 13 & & parabolic & 1000 & 2 & \(2!\), \(4!\) & \\ 14 & & ring & 1000 & 2 & 2 & \\ 15 & & ring\_noisy & 1050 & 2 & 2 & \\ 16 & & ring\_outliers & 1030 & 2 & 2, 5 & \\ 17 & & zigzag & 250 & 2 & 3, 5 & \\ 18 & & zigzag\_noisy & 300 & 2 & 3, 5 & \\ 19 & & zigzag\_outliers & 280 & 2 & 3, 5 & \\ 20 & Other & chameleon\_t4\_8k & 8000 & 2 & 6 & \\ 21 & & chameleon\_t5\_8k & 8000 & 2 & 6 & \\ 22 & & chameleon\_t8\_8k & 8000 & 2 & 8 & \\ 23 & & hdbscan & 2309 & 2 & 6 & \\ 24 & & iris & 150 & 4 & 3 & \\ 25 & & square & 1000 & 2 & 2 & \\ 26 & SIPU & a1 & 3000 & 2 & 20 & \\ 27 & & a2 & 5250 & 2 & 35 & \\ 28 & & a3 & 7500 & 2 & 50 & \\ 29 & & aggregation & 788 & 2 & 7 & \\ 30 & & compound & 399 & 2 & \(4^{\times}2\), \(5^{\times}2\), \(6!\) & \\ 31 & & d31 & 3100 & 2 & 31 & \\ 32 & & flame & 240 & 2 & \(2^{\times}2\) & \\ 33 & & jain & 373 & 2 & 2 & \\ 34 & & pathbased & 300 & 2 & 3, 4 & \\ \hline \hline \end{tabular} \end{table} Table 2: Benchmark datasets studied (part I; see (Gagolewski, 2022b); database (Gagolewski et al., 2022) v.1.1.0). Exclamation marks denote “difficult” labellings: !!! – maximal obtained AAA (Eq. (1)) was \(<0.5\), !! – max AAA \(<0.8\), ! – max AAA \(<0.95\). Asterisks mark cases where the performance of MST-based methods is subpar (* – maximal AAA for MST relative to the maximal overall AAA was \(<0.95\)). Also, e.g., \(2^{\times}3\) means that there are three reference label vectors with \(k=2\). \begin{table} \begin{tabular}{l l l r r l} \hline & battery & dataset & \(n\) & \(d\) & \(k\)s \\ 35 & SIPU & r15 & 600 & 2 & 8, 9, 15 \\ 36 & & s1 & 5000 & 2 & 15 \\ 37 & & s2 & 5000 & 2 & 15 \\ 38 & & s3 & 5000 & 2 & 15! 
\\ 39 & & s4 & 5000 & 2 & 15!! \\ 40 & & spiral & 312 & 2 & 3 \\ 41 & & unbalance & 6500 & 2 & 8 \\ 42 & UCI & ecoli & 336 & 7 & 8!! \\ 43 & & ionosphere & 351 & 33 & 2!! \\ 44 & & sonar & 208 & 60 & 2!!! \\ 45 & & statlog & 2310 & 18 & 7!! \\ 46 & & wdbc & 569 & 30 & 2! \\ 47 & & wine & 178 & 13 & 3*! \\ 48 & & yeast & 1484 & 8 & 10!!! \\ 49 & WUT & circles & 4000 & 2 & 4 \\ 50 & & cross & 2000 & 2 & 4 \\ 51 & & graph & 2500 & 2 & 10*! \\ 52 & & isolation & 9000 & 2 & 3 \\ 53 & & labirynth & 3546 & 2 & 6 \\ 54 & & mk1 & 300 & 2 & 3 \\ 55 & & mk2 & 1000 & 2 & 2 \\ 56 & & mk3 & 600 & 3 & 3! \\ 57 & & mk4 & 1500 & 3 & 3 \\ 58 & & olympic & 5000 & 2 & 5* \\ 59 & & smile & 1000 & 2 & 4, 6 \\ 60 & & stripes & 5000 & 2 & 2 \\ 61 & & trapped\_lovers & 5000 & 3 & 3 \\ 62 & & twosplashes & 400 & 2 & 2! \\ 63 & & windows & 2977 & 2 & 5 \\ 64 & & x1 & 120 & 2 & 3 \\ 65 & & x3 & 185 & 2 & 3, 4 \\ 66 & & z1 & 192 & 2 & 3* \\ 67 & & z2 & 900 & 2 & 5 \\ 68 & & z3 & 1000 & 2 & 4 \\ \hline \end{tabular} \end{table} Table 3: Benchmark datasets studied (part II) ## 3 Experiments ### Clustering datasets, reference labels, and assessing the similarity thereto We test the discussed methods against the benchmark suite for clustering algorithms introduced in (Gagolewski, 2022b). We use version 1.1.0 of the open-access database (Gagolewski et al., 2022) (which features datasets discussed in, amongst others, (Bezdek, Keller, Krishnapuram, Kuncheva, & Pal, 1999; Dua & Graff, 2021; Franti & Sieranoja, 2018; Franti & Virmajoki, 2006; Graves & Pedrycz, 2010; Jain & Law, 2005; Karypis et al., 1999; McInnes, Healy, & Astels, 2017; Rezaei & Franti, 2016; Sieranoja & Franti, 2019; Thrun & Stier, 2021; Thrun & Ultsch, 2020; Ultsch, 2005)). We have taken into account all the datasets with \(n<\)10,000 except UCI/glass, WUT/x2, and Other/iris5, whose 25-near-neighbour graphs had connected components that were too small, causing some of the algorithms to fail (e.g., MST/D_WCNN_25 and MST/D_DuNN_25_Min_Max). This gives 68 datasets⁷ in total; see Tables 2 and 3. Footnote 7: Note that the website of the clustering-benchmarks project (Gagolewski, 2022b) features an interactive datasets’ explorer; see [https://clustering-benchmarks.gagolewski.com](https://clustering-benchmarks.gagolewski.com). Each dataset comes with one or more reference label vectors created by experts. Each of them defines a specific number of clusters, \(k\). We run each algorithm in a purely unsupervised manner: they are only given the data matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\) and \(k\) on input, not the true labels. To enable a fair comparison (ceteris paribus), no kind of data preprocessing (e.g., standardisation of variables, removal of noise points, etc.) is applied. However, let us note that the spectral method and Gaussian mixtures can be thought of as algorithms that have some built-in feature engineering capabilities. In other cases, the methods are asked to rely only on the “raw” Euclidean distance. 
As a measure of clustering quality, we consider the adjusted asymmetric accuracy (AAA; (Gagolewski, 2022a)) given by: \[\text{AAA}(\mathbf{C}) = \frac{\max_{\sigma:\text{permutation of }\{1,\ldots,k\}}\frac{1}{k} \sum_{i=1}^{k}\frac{c_{i,\sigma(i)}}{c_{i,\cdot}}-\frac{1}{k}}{1-\frac{1}{k}}\] \[= 1-\min_{\sigma}\left(\frac{1}{k}\sum_{i=1}^{k}\frac{c_{i,1}+ \cdots+c_{i,k}-c_{i,\sigma(i)}}{\frac{k-1}{k}(c_{i,1}+\cdots+c_{i,k})}\right),\] where the confusion matrix \(\mathbf{C}\) is such that \(c_{i,j}\) denotes the number of points in the \(i\)-th reference cluster that a given algorithm assigned to the \(j\)-th cluster. AAA is a measure of the overall percentage of correctly classified points in each cluster (one minus the average classification error) that uses the optimal matching of cluster labels between the partitions (just like PSI (Rezaei & Franti, 2016), which is additionally symmetric and hence less interpretable). It is corrected for chance and cluster size imbalancedness. The total number of unique reference labels was 89. Let us note that some label vectors might define the same number of clusters \(k\). Thus, only 83 unique partitions needed to be generated, and in the case of tied \(k\)s, the maximal AAA was considered. This is in line with the recommendation from (Gagolewski, 2022b), where it was noted that there could be many equally valid partitions and the algorithm should be rewarded for finding _any_ of them (note that unlike in (Gagolewski et al., 2021), we consider the maximum over datasets _and_ \(k\)s, not just datasets); see also (Dasgupta and Ng, 2009; Luxburg, 2012) for further discussion. Also, following the aforementioned guidelines, if a reference partitioning marks some points as noise, the actual way they are allocated to particular clusters by the clustering methods studied is irrelevant (they are omitted when computing the confusion matrix). ### Some benchmark cases are difficult for all the methods Overall, \(68/83\simeq 82\%\) of cases can be considered “easy” for at least one of the methods (maximal AAA \(\geq\) 0.95). In other words, for each of them, there exists an approach that reproduces the reference partition relatively well. On the other hand, 6 benchmark cases turned out very “difficult” for all of the methods studied (AAA \(<\) 0.80). We marked them with two and three exclamation marks in Tables 2 and 3. The said sextet includes most datasets that we sourced from the UCI repository, which are all high-dimensional, and it is hard to verify if the reference clusters are meaningful. Originally, these datasets were suggested for benchmarking classification, not clustering problems. This might mean that there is something wrong with these reference label vectors themselves (and not the algorithms tested; e.g., the clusters are overlapping), or that some further data preprocessing must be applied in order to reveal the cluster structure (this is, e.g., the case for the WUT/twosplashes dataset, which normally requires the features to be standardised beforehand; here we got a max AAA of 0.86). Therefore, we exclude these 6 datasets from further analysis, as it does not make sense to compare an algorithm against what is potentially noise. The topmost box-and-whisker in Figure 3 (“Max All” on the lefthand side) depicts the distribution of the highest observed cluster validity scores across all the remaining 77 benchmark cases. ### Are MST-based methods any good? 
Recall that the number of possible partitions of an MST of \(n\) points into \(k\) subtrees is equal to \((n-1)(n-2)\cdots(n-k+1)\). For all datasets with \(k=2,3,4\), and those with \(n\leq 2500\) for \(k=5\), we were able to identify the true maximum of AAA easily using the brute-force approach (considering all the possible partitions of the MST). The remaining cases were too time-consuming to examine exhaustively. Therefore, we applied a tabu-like steepest ascent search strategy with at least 10 random restarts to find the lower bound for the maximum (similarly as in (Gagolewski et al., 2021)). Studying the “Max MST” box-and-whisker on the righthand side of Figure 3, which denotes these theoretically achievable maxima of AAA (a hypothetical “oracle” MST-based algorithm), we note that for only \(4/77\simeq 5\%\) of the datasets, the minimum spanning tree (with respect to the Euclidean distance between unpreprocessed points) is not a good representation of the feature space. Namely, the accuracy scores relative to “Max All” are significantly smaller than 0.95. We marked them with asterisks in Tables 2 and 3 (WUT/olympic, WUT/z1, UCI/wine, and WUT/graph). In terms of absolute AAA for “Max MST”, \(3/77\simeq 4\%\) and \(12/77\simeq 16\%\) of the cases gave scores \(<0.8\) and \(<0.95\), respectively. On the other hand, 6 cases turned out difficult for the non-MST methods (relative “Max Obs. Non-MST” AAA less than 0.95). This includes Graves/parabolic, SIPU/pathbased with \(k=3\) and \(k=4\), SIPU/compound for \(k=6\), WUT/cross, and Other/chameleon_t8_8k. Still, they can be successfully tackled with MSTs. Figure 3: The distribution of the adjusted asymmetric accuracies across the 77 benchmark cases (absolute AAA on the left and AAA relative to “Max All” on the righthand side). “Max Obs.” gives the maximal observed AAA based on the outputs of all the 140 methods, and their counterparts for MST and non-MST algorithms only are denoted with “Max Obs. MST” and “Max Obs. Non-MST”. “Max MST” gives the theoretically achievable maxima of the accuracy scores for the MST-based methods. Moreover, “Max All” is the maximum of “Max MST” and “Max Obs.”. Apart from a few “hard” datasets, the MST-based methods are potentially very competitive, despite their simplicity. They can be improved further by appropriate feature engineering. ### Which MST-based algorithm then? The above observation does not mean that we are in possession of an algorithm that gets the most out of the information conveyed by the minimum spanning trees, nor that a single strategy is always best. We should thus inspect which strategies and/or objective functions are more useful than others. Figure 4 depicts the adjusted accuracies relative to “Max MST” for each method, i.e., how well each algorithm compares to the best possible solution. We note that the agglomerative Genie (Gagolewski, 2021; Gagolewski, Bartoszuk, & Cena, 2016) algorithm outperforms other approaches. The agglomerative and divisive approaches optimising the information criterion (Genie+Ic, IcA (Cena, 2018), ITM (Muller et al., 2012)) also give high average relative AAA, and the ones optimising the new near-neighbour-based criteria (DuNN_25_Min_Max, WCNN_25, etc.) yield high median relative scores. As far as other “standalone” algorithms are concerned, HEMST and Single linkage exhibit inferior performance, and CTCEHC is comparable with the divisive Calinski-Harabasz criterion optimiser. 
Quite strikingly, some well-established internal cluster validity measures promote clusterings of very poor agreeableness with the reference labels (Davies-Bouldin, SilhouetteW, some generalised Dunn indices). This is in line with our observation in (Gagolewski et al., 2021), where we performed a similar study over the space of _all_ possible partitionings. This puts their actual meaningfulness into question: are they really good indicators of clustering quality? ### How MST-based methods compare against other clustering approaches? Figure 5 compares the MST and non-MST approaches in terms of absolute AAAs. As far as the current (large) benchmark battery is concerned, the MST-based methods outperform the popular "parametric" approaches (Gaussian Mixtures, K-means) and other algorithms (Birch, Ward, Average, Complete linkage, and spectral clustering with the best-identified parameters) implemented in the _scikit-learn_ package (Pedregosa et al., 2011) for Python. We also notice that choosing the wrong objective function to optimise over MST can also lead to very poor results. This is particularly the case if the Davies-Bouldin and SilhouetteW indices are considered. Figure 4: The distribution of the adjusted asymmetric accuracies for different MST-based algorithms relative to the “Max MST” AAA score. The agglomerative Genie (Gagolewski, 2021; Gagolewski, Bartoszuk, & Cena, 2016) and the information criterion-based methods (Genie+Ic, IcA (Cena, 2018; Gagolewski, 2021), ITM (Müller et al., 2012)) outperform other approaches. Also, the new divisive near-neighbour-based schemes give a high median performance. We also note that many well-established cluster validity measures provide poor guidance for the selection of an informative partitioning. Figure 5: The distribution of the adjusted asymmetric accuracies for different algorithms. The MST-based algorithms Genie (Gagolewski, 2021; Gagolewski, Bartoszuk, & Cena, 2016), Genie+IC (Cena, 2018; Gagolewski, 2021), and ITM (Müller et al., 2012) outperform other methods. However, we also see that an invalid objective function to be optimised over an MST can lead to meaningless clusterings. ## 4 Conclusion Apart from a few "difficult" label vectors, the minimum spanning tree-based methods have been shown to be potentially very competitive clustering approaches. Furthermore, they can be improved by appropriate feature engineering (scaling of data columns, noise point and outlier removal, modifying the distance matrix, etc.; see, e.g., (Campello et al., 2015; Yin and Liu, 2009)). They are quite simple and easy to compute: once the minimum spanning tree is considered (which takes up to \(O(n^{2})\) time, but approximate methods exist as well; e.g., (Naidan et al., 2019)), we can potentially get a whole hierarchy of clusters of any cardinality. For instance, our top performer, the Genie algorithm as implemented in (Gagolewski, 2021), needs \(O(n\sqrt{n})\) to generate all possible partitions given a prebuilt MST. Unlike, e.g., the well-known \(k\)-means algorithm, which is fast for small fixed \(k\)s, this property makes them suitable for solving extreme clustering tasks (compare (Kobren, Monath, Krishnamurthy, and McCallum, 2017)). Just like in our previous contribution (Gagolewski et al., 2021) (where we tried to find an optimal clustering over the _whole_ space of all possible partitions), we note that many internal cluster validity indices actually promote clusterings that agree poorly with the reference ones. 
This puts their validity/meaningfulness into question. Overall, no single best MST-based method probably exists, but there is still some room for improvement, and thus the development of new algorithms is encouraged. In particular, the new divisive and agglomerative approaches we have proposed in this paper perform well on certain dataset types. Therefore, it might be promising to explore the many possible combinations of parameters/objective functions we have left out due to the obvious space constraints in this paper. Future work should involve the testing of clustering methods based on near-neighbour graphs and more complex MST-inspired data structures (compare (Franti, Virmajoki, and Hautamaki, 2006; Gonzalez-Barrios and Quiroz, 2003; Karypis et al., 1999; Zhong et al., 2011, 2010)). It would also be interesting to inspect the stability of the results when different random subsets of benchmark data are selected or study the problem of overlapping clusters (e.g., (Campagner, Ciucci, and Denoeux, 2023)). Also, the application of the MST-based algorithms could be examined in the problem of community detection in graphs (e.g., (Gerald, Zaatiti, Hajri, et al., 2023)). Finally, let us recall that we have only focused on methods that guarantee to return a fixed-in-advance number of clusters \(k\). In the future, it would be interesting to allow for the relaxation of this constraint. ## Acknowledgements This research was supported by the Australian Research Council Discovery Project ARC DP210100227 (MG). ## CRediT author statement MG: Conceptualisation, Methodology, Data Curation, Software, Visualisation, Investigation, Formal analysis, Writing - Original Draft AC: Methodology, Data Curation, Investigation MB: Software, Data Curation, Investigation LB: Software, Investigation ## Data Availability All benchmark data are publicly available from [https://clustering-benchmarks.gagolewski.com](https://clustering-benchmarks.gagolewski.com) (Gagolewski, 2022b). In particular, a snapshot of the test battery (Gagolewski et al., 2022) can be fetched from [https://github.com/gagolews/clustering-data-v1/releases/tag/v1.1.0](https://github.com/gagolews/clustering-data-v1/releases/tag/v1.1.0). All computed partitions can be downloaded from [https://github.com/gagolews/clustering-results-v1/](https://github.com/gagolews/clustering-results-v1/). ## Conflict of interest All authors certify that they have no affiliations with or involvement in any organisation or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
2305.01791
Monolayer WS$_2$ electro- and photo-luminescence enhancement by TFSI treatment
Layered material heterostructures (LMHs) can be used to fabricate electroluminescent devices operating in the visible spectral region. A major advantage of LMH-light emitting diodes (LEDs) is that electroluminescence (EL) emission can be tuned across that of different exciton complexes (e.g. biexcitons, trions, quintons) by controlling the charge density. However, these devices have an EL quantum efficiency as low as $\sim$10$^{-4}$\%. Here, we show that the superacid bis-(trifluoromethane)sulfonimide (TFSI) treatment of monolayer WS$_2$-LEDs boosts EL quantum efficiency by over one order of magnitude at room temperature. Non-treated devices emit light mainly from negatively charged excitons, while the emission in treated ones predominantly involves radiative recombination of neutral excitons. This paves the way to tunable and efficient LMH-LEDs.
A. R. Cadore, B. L. T. Rosa, I. Paradisanos, S. Mignuzzi, D. De Fazio, E. M. Alexeev, J. E. Muench, G. Kakavelakis, S. M. Shinde, D. Yoon, S. Tongay, K. Watanabe, T. Taniguchi, E. Lidorikis, I. Goykhman, G. Soavi, A. C. Ferrari
2023-05-02T21:41:48Z
http://arxiv.org/abs/2305.01791v1
# Monolayer WS\({}_{2}\) electro- and photo-luminescence enhancement by TFSI treatment ###### Abstract Layered material heterostructures (LMHs) can be used to fabricate electroluminescent devices operating in the visible spectral region. A major advantage of LMH-light emitting diodes (LEDs) is that electroluminescence (EL) emission can be tuned across that of different exciton complexes (e.g. biexcitons, trions, quintons) by controlling the charge density. However, these devices have an EL quantum efficiency as low as \(\sim\)\(10^{-4}\)%. Here, we show that the superacid bis-(trifluoromethane)sulfonimide (TFSI) treatment of monolayer WS\({}_{2}\)-LEDs boosts EL quantum efficiency by over one order of magnitude at room temperature. Non-treated devices emit light mainly from negatively charged excitons, while the emission in treated ones predominantly involves radiative recombination of neutral excitons. This paves the way to tunable and efficient LMH-LEDs. Transition metal dichalcogenide monolayers (1L-TMDs) are ideal to study light-matter interactions and many-body effects at the atomic scale[1; 2; 3]. Compared to bulk semiconductors[2], the reduced dielectric screening combined with the spatial confinement of charge carriers[1] favours the formation of various excitonic complexes, which can be controlled by modulation of the carrier density[1; 2; 3; 4; 5; 6; 7; 8]. Thus, 1L-TMD photoluminescence (PL) spectra host features arising from the formation of charged[4; 5; 6; 7; 8] and neutral[9; 10; 11; 12] exciton complexes. Layered material heterostructures (LMHs) combining single layer graphene (SLG), 1L-TMDs, and hexagonal boron nitride (hBN), from 1L-hBN to hundreds of layers, are promising for electronics[13; 14], photonics[15], and optoelectronics[16; 17]. Direct bandgap 1L-TMDs and LMHs can be used to make light-emitting diodes (LEDs)[18; 19; 20; 21; 22; 23; 24; 25; 26; 27], with fast modulation speed (up to GHz)[25; 7; 28] and emission wavelength tunability[7; 25; 6; 25], besides multi-spectral (visible \(\sim\)618nm[21; 22; 23] to near-infrared \(\sim\)1160nm[29; 30]) emission. In 1L-TMD-based LEDs, the electroluminescence (EL) efficiency (\(\eta_{EL}\)), i.e. the ratio between emitted photons and injected electrons (\(e\))[19; 20], depends on the optical emission of the material[30; 31; 32; 33; 34; 35; 36; 37], as well as on its doping level[38; 39; 40; 41; 6]. In doped 1L-TMDs, the PL and EL emission originates from either negative (X\({}^{-}\))[33; 34; 38; 28] or positive (X\({}^{+}\))[6; 19; 20] trions, depending on the type of doping. However, 1L-TMD-LEDs based on trionic emission show low \(\eta_{EL}\) (typically \(<\)0.05%[19; 20]) with respect to neutral exciton (X\({}^{0}\)) emission (typically \(\eta_{EL}<\)1%[6; 7; 31; 32; 38; 39]). This difference in \(\eta_{EL}\) occurs due to the small (\(\sim\)30meV) binding energy of trions[42]. Since the X\({}^{-}\) binding energy is close to the lattice thermal energy at room temperature (RT=300K, \(\sim\)25.2meV), trions dissociate[2]. An excess of free carriers decreases the available phase-space filling for exciton complexes, due to Pauli blocking, with a reduction of trion and exciton binding energies[43] and oscillator strengths[44] (i.e. the probability of absorption/emission of electromagnetic radiation[45]). In 1L-TMDs, low light-emission efficiency is observed in both EL (\(\eta_{EL}\sim\)\(10^{-4}\)[33; 34] to \(\sim\)1%[6; 7; 31; 32; 38; 39]) and PL (\(\eta_{PL}\sim\)\(10^{-3}\)[36; 40] to \(\sim\)5%[1; 2; 3]). 
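As a back-of-the-envelope illustration of the \(\eta_{EL}\) definition above (a sketch of ours with assumed example values, not measurements from this work), the emitted-photon rate follows from the collected optical power and the photon energy, and the injected-electron rate from the drive current:
```
// eta_EL = (emitted photons per second) / (injected electrons per second)
//        = (P / (h*c/lambda)) / (I / e)
#include <cstdio>

int main() {
    const double h = 6.626e-34;  // Planck constant [J s]
    const double c = 2.998e8;    // speed of light [m/s]
    const double e = 1.602e-19;  // elementary charge [C]
    // Assumed example values (not data from this paper):
    double P = 1e-9;             // collected optical power [W]
    double lambda = 618e-9;      // 1L-WS2 emission wavelength [m], cited above
    double I = 1e-6;             // injection current [A]
    double photon_rate = P / (h * c / lambda);  // emitted photons per second
    double electron_rate = I / e;               // injected electrons per second
    std::printf("eta_EL = %.3g %%\n", 100.0 * photon_rate / electron_rate);
    return 0;  // prints about 0.05 % for these example values
}
```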
\(\eta_{PL}\) is defined as the ratio between emitted and absorbed photons[19; 20]. Thus, several chemical approaches were suggested to enhance \(\eta_{PL}\), such as treatment with 2,3,5,6-tetrafluoro 7,7,8,8-tetracyanoquinodimethane[46], hydrogen peroxide[47], titanyl phthalocyanine[48], sulfuric acid[49], oleic acid[50; 51; 52], and the superacid (i.e. with acidity greater than that of 100% pure sulfuric acid[53]) bis-(trifluoromethane)sulfonimide (TFSI)[54; 55; 56]. TFSI treatment increased the PL intensity of 1L-WS\({}_{2}\) up to\(\sim\)10-times[52; 54; 55; 56] due to depletion of excess \(e\), promoting X\({}^{0}\) recombination. The effect of chemical passivation of 1L-TMDs on \(\eta_{EL}\) combined with gated-PL emission in 1L-TMD-based LEDs was not reported to date, to the best of our knowledge. Refs.[54; 55; 56; 57; 58; 59; 60; 61] reported PL measurements on 1L-TMDs and focused on non-gated samples, thus limiting the modulation of charge density in 1L-TMDs. Ref.[8] performed gated-PL measurements in 1L-WS\({}_{2}\), finding that both TFSI treatment and electrical gating increase \(\eta_{PL}\) by a factor of up to\(\sim\)10 (at\(\sim\)\(10^{19}\)cm\({}^{-2}\)s\({}^{-1}\) photocarrier generation rate), because both processes reduce the \(n\)-type behaviour of 1L-WS\({}_{2}\) and suppress X\({}^{-}\) formation, thus enhancing X\({}^{0}\) radiative recombination. However, gated-PL measurements after TFSI passivation were not provided. The activation of trapping states on TFSI-treated 1L-TMDs was not discussed. Ref.[67] carried out EL experiments with TFSI passivation for high-speed (MHz) modulation, but did not report PL nor EL emission tunability. Therefore, an investigation on how TFSI affects EL emission and modifies gated-PL of 1L-TMD-based devices is required. Here, we fabricate LEDs with 1L-WS\({}_{2}\) as active material on a metal-insulator-semiconductor (MIS) structure. We measure EL and gated-PL before and after TFSI treatment. We find that TFSI increases \(\eta_{EL}\) by over one order of magnitude at RT, and PL intensity by a factor\(\sim\)5. We find that X\({}^{-}\) and X\({}^{0}\) are present in both EL and PL before TFSI treatment, whereas X\({}^{0}\) dominates after. We attribute this to depletion of excess \(e\) and changes in the relaxation pathway, induced by the treatment. This paves the way to more efficient 1L-TMDs-based LEDs and excitonic devices. ## II Results and Discussion We use 1L-WS\({}_{2}\) as the active light-emitting layer since it has a direct bandgap[68; 69; 70; 71], its PL emission is\(\sim\)60 times stronger than 1L-MoS\({}_{2}\)[39; 69] at RT, \(\eta_{EL}\) can be up to\(\sim\)50 times larger than 1L-MoS\({}_{2}\)[19; 20] at RT, while Refs.[54; 55; 56; 52; 57; 58; 59; 60; 61; 62; 63; 64; 65] demonstrated that TFSI treatment increases up to\(\sim\)10-times its PL intensity. Fig.1a shows the 1L-WS\({}_{2}\)/hBN/SLG tunnel junction configuration used here, where the metallic electrodes provide contacts to apply a voltage (_V_) between SLG and 1L-WS\({}_{2}\). This is prepared as follows. WS\({}_{2}\) crystals are synthesized using a two-step self-flux technique[72] using 99.9999% purity W and S powders without any transporting agents. Commercial (Alfa Asar) sources of powders contain a number of defects and impurities (Li, O, Na, and other metals as determined by secondary ion mass spectroscopy). Before growth, W and S powders are thus purified using electrolytic[73] and H\({}_{2}\)[73] based techniques to reach 99.995% purity. 
WS\({}_{2}\) polycrystalline powders are created by annealing a stoichiometric ratio of powders at 900\({}^{\circ}\)C for 3 weeks in a quartz ampoule sealed at 10\({}^{-7}\) Torr. The resulting powders are re-sealed in a different quartz ampoule under similar pressures and further annealed at 870-910\({}^{\circ}\)C with thermodynamic temperature differential (hot to cold zone difference)\(\sim\)40\({}^{\circ}\)C. The growth process takes 5 weeks. At the end of the growth, ampoules are cooled to RT slowly (\(\sim\)40\({}^{\circ}\)C/hour)[74]. We use this material as bulk source because our previous work[74] demonstrated that this has a point defect density\(\sim\)10\({}^{9}\)-10\({}^{10}\) cm\({}^{-2}\), on par or better than previous reports[75]. Bulk WS\({}_{2}\), hBN (grown by the temperature-gradient method[76]), and graphite (sourced from HQ Graphene) crystals are then exfoliated by micromechanical cleavage using Nitto-tape[77] on 285nm SiO\({}_{2}\)/Si. Optical contrast[78] is first used to identify 1L-WS\({}_{2}\), SLG, FLG (3-10nm), and hBN(\(<\)5nm). The LMs are then characterized by Raman spectroscopy as discussed in Methods. After Raman characterization of all individual LMs on SiO\({}_{2}\)/Si, the FLG/1L-WS\({}_{2}\)/hBN/SLG LMH is assembled using dry-transfer as for Refs.[79; 80]. FLG is picked-up from SiO\({}_{2}\)/Si using a polycarbonate (PC) membrane on a polydimethylsiloxane (PDMS) stamp (as mechanical support) at 40\({}^{\circ}\)C. We use 40\({}^{\circ}\)C because this is sufficient to increase the adhesion of the PC film[81], to pick all LMs from SiO\({}_{2}\)/Si. Then, FLG is aligned to one edge of 1L-WS\({}_{2}\) on SiO\({}_{2}\)/Si and brought into contact using \(xyz\) micromanipulators at 40\({}^{\circ}\)C, leaving the majority of 1L-WS\({}_{2}\) without FLG cover to be used as active area (AA). AA is the region from where light emission is expected, and it is the overlap area between 1L-WS\({}_{2}\) and SLG (green-shaded part in Fig.1b). Next, FLG/1L-WS\({}_{2}\) is aligned to a hBN flake deposited onto SiO\({}_{2}\)/Si and brought into contact using \(xyz\) micromanipulators at 40\({}^{\circ}\)C. Finally, FLG/1L-WS\({}_{2}\)/hBN is aligned to a SLG on SiO\({}_{2}\)/Si and brought into contact using \(xyz\) micromanipulators at 180\({}^{\circ}\)C, whereby PC preferentially adheres to SiO\({}_{2}\)[79], allowing PDMS to be peeled away, leaving PC/FLG/1L-WS\({}_{2}\)/hBN/SLG on SiO\({}_{2}\)/Si. PC is then dissolved in chloroform for\(\sim\)15mins at RT, leaving the FLG/1L-WS\({}_{2}\)/hBN/SLG LMH on SiO\({}_{2}\)/Si[79; 80]. After LMH assembly, Cr/Au electrodes are fabricated by electron beam lithography (EBPG 5200, Raith GMBH), followed by metallization (1:50nm) and lift-off. The tunnel junction based on a MIS structure consists of a LMH with 1L-WS\({}_{2}\) as the light emitter, FL-hBN (typically from 2 to 4nm) acting as tunnel barrier, and Figure 1: a) Schematic of LED. Cr/Au electrodes, SLG, FLG, hBN, and 1L-WS\({}_{2}\) are indicated. b) Optical image of device. Scale bar 4\(\mu\)m. The dotted lines highlight the footprint of SLG, FLG, hBN, 1L-WS\({}_{2}\). The green-shaded part corresponds to the active area\(\sim\)23\(\mu\)m\({}^{2}\). Cr/Au contacts the bottom SLG; FLG contacts the top 1L-WS\({}_{2}\). Band diagram for (c) _V_=0V and (d) _V_\(>\)0V. 
Tuning the SLG E\({}_{F}\) (gray dotted line) across the 1L-WS\({}_{2}\) valence band edge, E\({}_{V}\), allows \(h\) tunneling from SLG to 1L-WS\({}_{2}\), resulting in current onset and light emission via radiative recombination with \(e\) from the _n_-type 1L-WS\({}_{2}\). The blue circles represent \(e\) accumulated on 1L-WS\({}_{2}\) due to the MIS structure, while the red circles are \(h\) injected into 1L-WS\({}_{2}\) through the hBN barrier. a SLG electrode to inject holes (_h_) into 1L-WS\({}_{2}\). We use FL-hBN\(<\)5nm so that a low (typically\(<\)5V) driving voltage is sufficient for charge injection into the 1L-WS\({}_{2}\)[82; 83]. We employ FLG (\(\sim\)3-10nm) to contact 1L-WS\({}_{2}\), because FLG reduces the contact resistance[84], while Cr/Au electrodes give Ohmic contacts to SLG and FLG[84]. SLG could also be used to contact 1L-WS\({}_{2}\); however, as the optical contrast is higher in FLG than in SLG[78; 85], using FLG makes it easier to align it to 1L-WS\({}_{2}\) during transfer. Since TFSI treatment requires direct exposure of 1L-TMDs[54], we place 1L-WS\({}_{2}\) on top of the stack to compare the device performance before and after treatment. We TFSI-treat 4 samples for EL and gated-PL measurements. These are immersed in a TFSI solution (0.2 mg/mL) in a closed vial for 10mins at 100\({}^{\circ}\)C[54; 55; 56], then removed, dried by a N\({}_{2}\) gun, and annealed on a hot plate at 100\({}^{\circ}\)C for 5mins[54; 55; 56]. Fig.1b is an image of the 1L-WS\({}_{2}\)-LEDs. The FLG electrode is placed on the side of the SLG to avoid direct tunneling of carriers from SLG to FLG, hence keeping as AA the LMH region extended over SLG and 1L-WS\({}_{2}\), green-shaded in Fig.1b. If there is a FLG/SLG overlap, tunneling through FLG-SLG may be possible, not resulting in _e-h_ recombination into 1L-WS\({}_{2}\), hence no EL[6; 25; 38]. Figs.1c,d sketch the band diagram of our LEDs for _V_=0V and _V_\(>\)0V, respectively. For _V_=0V (at thermodynamic equilibrium, as indicated in Fig.1c), the Fermi level, E\({}_{F}\), is constant across the junction, and the net current (_I_) is zero[6; 21; 25; 28; 38]. For _V_\(>\)0V (positive potential on SLG), the SLG E\({}_{F}\) is shifted below the 1L-WS\({}_{2}\) valence band energy E\({}_{V}\) (Fig.1d), and \(h\) from SLG tunnel across the hBN barrier into 1L-WS\({}_{2}\), promoting EL emission by radiative recombination between the injected excess \(h\) and intrinsic \(e\)[21; 22; 23; 24; 28; 35; 38]. The EL emission is expected to increase as a function of tunneling current because of the increasing number of \(h\) injected into 1L-WS\({}_{2}\) available for _e-h_ recombination. The LMs are characterized by Raman, PL, and EL spectroscopy using a Horiba LabRam HR Evolution. The Raman spectra are collected using a 100x objective with numerical aperture (NA)=0.9, and a 514.5nm laser with a power\(\sim\)5\(\mu\)W to avoid damage or heating. The voltage-bias-dependent PL and EL are collected using a long-working-distance 50x objective (NA=0.45). For the PL spectra, we use a 532nm (2.33eV) laser in order to excite above the X\({}^{0}\) emission (\(\sim\)2eV)[9; 10]. The power is kept\(\sim\)80nW to avoid laser-induced thermal effects[2; 9; 10; 11]. The voltage (_V_) and current (_I_) between source (SLG) and drain (1L-WS\({}_{2}\)) electrodes are set (_V_) and measured (_I_) by a Keithley 2400. Fig.2 shows the Raman spectrum of 1L-WS\({}_{2}\)/hBN/SLG on Si/SiO\({}_{2}\) after device fabrication and before current-voltage (_I-V_) measurements. 
The Raman modes of each LM can be identified. For 1L-WS\({}_{2}\), Pos(A\({}_{1}^{{}^{\prime}}\)) and its full width at half maximum, FWHM(A\({}_{1}^{{}^{\prime}}\)), change from\(\sim\)418.9\(\pm\)0.2cm\({}^{-1}\); 3.9\(\pm\)0.2cm\({}^{-1}\), before assembly, to\(\sim\)419.8\(\pm\)0.2cm\({}^{-1}\); 3.4\(\pm\)0.2cm\({}^{-1}\), after. All the changes in the other modes are close to our spectral resolution and errors, as for Ref.[86]. Pos(A\({}_{1}^{{}^{\prime}}\)) and FWHM(A\({}_{1}^{{}^{\prime}}\)) are sensitive to changes in _n_-doping[87; 88]. The mechanism responsible for this effect is an enhancement of electron-phonon (e-ph) coupling when \(e\) populate the valleys at K and Q simultaneously[88]. The energy of the K and Q valleys is modulated by the A\({}_{1}^{{}^{\prime}}\) ph[88]. Since the K and Q energies are modulated out-of-phase, charge transfer between the two valleys occurs in the presence of the A\({}_{1}^{{}^{\prime}}\) ph[87; 88]. When the K and Q valleys are populated by \(e\), these are transferred back and forth from one valley to the other[88; 89]. This increases the e-ph coupling of out-of-plane modes, such as A\({}_{1}^{{}^{\prime}}\)[88]. The same process does not occur for _p_-doping[88]. The reason for this asymmetry between _n_- and _p_-doping is a much larger energy separation (\(\sim\)230meV[88]) between the VB \(\Gamma\) and K valleys than that (\(\sim\)100meV[88]) of the CB K and Q valleys. From the changes in Pos(A\({}_{1}^{{}^{\prime}}\)) and FWHM(A\({}_{1}^{{}^{\prime}}\)), and by comparison with Ref.[88], we estimate a reduction in _n_-doping\(\sim 5\times 10^{12}\)cm\({}^{-2}\). For hBN in Fig.2, Pos(E\({}_{2g}\))\(\sim\)1366.4\(\pm\)0.2cm\({}^{-1}\) and FWHM(E\({}_{2g}\))\(\sim\)9.2\(\pm\)0.2cm\({}^{-1}\). Although FWHM(E\({}_{2g}\)) changes within the error, Pos(E\({}_{2g}\)) downshifts\(\sim\)2.1cm\({}^{-1}\) after assembly, suggesting a contribution from strain (see Methods for a comparison between FL- and bulk-hBN Raman). Uniaxial strain lifts the degeneracy of the E\({}_{2g}\) mode and results in a splitting into two subpeaks E\({}_{2g}^{+}\) and E\({}_{2g}^{-}\), with shift rates\(\sim\)-8.4 and -25.2cm\({}^{-1}\)/%[90; 91]. Figure 2: 514.5nm Raman spectrum of 1L-WS\({}_{2}\)/hBN/SLG LMH after device fabrication. The SLG and hBN Raman modes are labelled, and the modes for 1L-WS\({}_{2}\) are as for Table 1. The 1300-2900cm\({}^{-1}\) spectral window was multiplied by a factor of 10 for better visualization. For small levels of uniaxial strain (\(<\)0.5%) the splitting cannot be observed and the shift rate is\(\sim\)-16.8cm\({}^{-1}\)/%[90; 91]. For biaxial strain, splitting does not occur and E\({}_{2g}\) shifts with rate\(\sim\)-39.1cm\({}^{-1}\)/%[90]. Since we do not observe splitting, the E\({}_{2g}\) shift can be attributed to uniaxial or biaxial tensile strain\(\sim\)0.13% or\(\sim\)0.06%, respectively. For SLG in Fig.2, no D peak is observed after LMH assembly, indicating negligible defects[92; 93; 94]. In Fig.2, Pos(G)\(\sim\)1585.1\(\pm\)0.2cm\({}^{-1}\), FWHM(G)\(\sim\)9.0\(\pm\)0.2cm\({}^{-1}\), Pos(2D)\(\sim\)2692.3\(\pm\)0.2cm\({}^{-1}\), FWHM(2D)\(\sim\)20.9\(\pm\)0.2cm\({}^{-1}\), I(2D)/I(G)\(\sim\)2.4, and A(2D)/A(G)\(\sim\)5.6. These indicate that the SLG is _p_-doped, with E\({}_{F}\sim\)150\(\pm\)50meV[93; 94; 95], by taking into account the average dielectric constant (\(\sim\)3.85) of the environment (\(\varepsilon_{SiO_{2}}\sim\)3.8[96] and \(\varepsilon_{hBN}\sim\)3.9[97]). E\({}_{F}\sim\)150meV should correspond to Pos(G)\(\sim\)1584.1cm\({}^{-1}\) for unstrained SLG[98]. However, Pos(G)\(\sim\)1585.1\(\pm\)0.2cm\({}^{-1}\), which implies a contribution from compressive uniaxial (biaxial) strain\(\sim\)0.04% (\(\sim\)0.01%). The strain levels for SLG and hBN are different, most likely because the SLG is directly exfoliated onto SiO\({}_{2}\)/Si, while hBN is picked up and transferred by PDMS stamps, which could induce a larger amount of strain on hBN. 
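A minimal sketch of how the strain values quoted above follow from the measured Raman shifts. The shift rates are those given in the text (the SLG G-peak rates, \(\approx\)23 and 60cm\({}^{-1}\)/% for uniaxial and biaxial strain, are quoted in Methods); the sign convention assumes tensile strain downshifts the modes, so a compressive contribution comes out negative.

```python
# Shift rates (cm^-1 per % tensile strain) quoted in the text
RATES = {
    "hBN_E2g": {"uniaxial": -16.8, "biaxial": -39.1},
    "SLG_G":   {"uniaxial": -23.0, "biaxial": -60.0},
}

def strain_from_shift(delta_omega, rate):
    """Strain (%) inferred from a Raman shift delta_omega (cm^-1),
    given a linear shift rate (cm^-1/%). A downshift (negative
    delta_omega) with a negative rate gives positive (tensile) strain."""
    return delta_omega / rate

# hBN: E2g downshifts ~2.1 cm^-1 after assembly -> tensile strain
print(strain_from_shift(-2.1, RATES["hBN_E2g"]["uniaxial"]))  # ~0.13 %
print(strain_from_shift(-2.1, RATES["hBN_E2g"]["biaxial"]))   # ~0.05 %
# SLG: G peak ~1.0 cm^-1 above the doping-only expectation -> compressive
print(strain_from_shift(+1.0, RATES["SLG_G"]["uniaxial"]))    # ~-0.04 %
print(strain_from_shift(+1.0, RATES["SLG_G"]["biaxial"]))     # ~-0.02 %
```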
Fig.3a plots the _I-V_ characteristics. For _V_=0V the current is zero (Fig.1c). When \(V\) is applied, an electrical rectification (i.e. diode behavior) with negligible leakage current (_I_\(<\)10\({}^{-11}\)A) for _V_\(<\)0 is seen. A tunneling onset (i.e. an exponential increase of _I_) is seen at _V\({}_{ON}\)_\(\sim\)4.1V, Fig.3a. _V\({}_{ON}\)_ is related to the breakdown electric field (E\({}_{bd}\)) across the junction, which depends on the voltage drop on the hBN tunnel barrier and on the hBN thickness (_d_) according to E\({}_{bd}\)=V\({}_{bd}\)/_d_\(\sim\)0.7-1V/nm[82; 83], where V\({}_{bd}\) is the breakdown voltage V\({}_{bd}\)=_qnd\({}^{2}\)_/(\(\varepsilon_{0}\varepsilon_{hBN}\)), \(q\) is the \(e\) charge, \(n\) is the total charge concentration, \(\varepsilon_{0}\)=8.854\(\times\)10\({}^{-12}\) F/m and \(\varepsilon_{hBN}\sim\)3.9[82; 83], so that _V\({}_{ON}\)_ can vary between different devices. When _V_\(>\)_V\({}_{ON}\)_, \(h\) from SLG tunnel across the hBN barrier into 1L-WS\({}_{2}\), promoting EL emission by radiative recombination between the injected \(h\) and majority \(e\) in 1L-WS\({}_{2}\) (Fig.1c)[21; 22; 23; 24; 35; 38]. The EL intensity\(\sim\)634nm (\(\sim\)1.956eV) increases with tunneling current, as in Fig.3b. No light emission is observed for reverse (_V_\(<\)0V) and small positive (0\(<\)_V_\(<\)_V\({}_{ON}\)_) biases, below the tunneling condition (_V\({}_{ON}\)_\(\sim\)4.1V). A red-shift\(\sim\)48meV is observed in the EL emission\(\sim\)634nm (\(\sim\)1.956eV) with respect to the PL X\({}^{0}\) emission of the unbiased device (dashed black line, Fig.3b). Fig.3b shows an EL peak position close to the X\({}^{-}\) of the unbiased PL (dashed black line, Fig.3b), implying a trionic EL emission, due to excess \(e\) in 1L-WS\({}_{2}\)[28; 38]. To further understand the EL emission origin, we perform EL and PL spectroscopy at the same \(V\). Fig.4a plots PL spectra at different \(V\). At _V_=0V, the PL peak is\(\sim\)619.2nm (\(\sim\)2.002eV), assigned to X\({}^{0}\)[9; 69]. By increasing \(V\) (i.e. increasing the \(e\) density in 1L-WS\({}_{2}\)), a second peak appears at longer wavelengths (\(\sim\)630nm,\(\sim\)1.968eV), due to X\({}^{-}\)[9; 10; 11; 99]. For _V_\(>\)0V, the X\({}^{0}\) intensity gradually decreases and nearly vanishes, while X\({}^{-}\) shifts to longer wavelengths, Fig.4a. This is expected for trionic emission, due to _e_-doping induced by _V_[9; 10; 11; 12; 38]. Similar effects were observed in 1L-MoS\({}_{2}\)/SiO\({}_{2}\)/Si[101], hBN/1L-WSe\({}_{2}\)/hBN/SiO\({}_{2}\)/Si[6], and hBN/1L-WS\({}_{2}\)/hBN/SiO\({}_{2}\)/Si[28]. Therefore, for similar tunneling current, EL agrees in energy and shape with the PL emission (see, e.g., the PL and EL spectra at the bottom of Fig.4a). This is confirmed by Fig.4b, where EL and PL peak positions are plotted for 4 devices, showing EL and PL emission at very similar wavelengths. Thus, EL predominantly originates from X\({}^{-}\)[6; 9; 10; 21; 38]. 
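To see that the observed onset is of the right order, a short sketch of the breakdown-field relation quoted above. The barrier thicknesses span the FL-hBN range stated in the text (typically 2-4nm, always \(<\)5nm), and the estimate deliberately ignores the voltage partition between the barrier and the semiconductor.

```python
E_BD_RANGE = (0.7, 1.0)  # hBN breakdown field, V/nm (values from the text)

def onset_voltage(d_nm, e_bd=E_BD_RANGE):
    """Rough tunnelling-onset estimate V_bd = E_bd * d for an hBN
    barrier of thickness d_nm, using the breakdown-field range above."""
    return tuple(e * d_nm for e in e_bd)

for d in (2, 3, 4, 5):
    lo, hi = onset_voltage(d)
    print(f"d = {d} nm -> V_bd ~ {lo:.1f}-{hi:.1f} V")
# A ~4 nm barrier gives ~2.8-4.0 V, of the order of the observed V_ON ~ 4.1 V.
```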
The variations in X\({}^{-}\) energy for different LEDs are due to changes in charge carriers density across different samples. E.g., the charge density variation in 1L-WS\({}_{2}\) can be due to the number of vacancies in 1L-WS\({}_{2}\)[41] and external impurities (PC residues and Figure 3: a) \(I\) as a function of \(V\) for 1L-WS\({}_{2}\)-LED. b) EL spectra for different tunneling currents without TFSI treatment. The dashed black line is the PL spectrum collected at _V_=0 and normalized to the maximum EL intensity. adsorbed water) after LED fabrication, which may vary from sample to sample. We now consider the origin and consequences of excess \(e\) in 1L-WS\({}_{2}\) for EL emission induced by _V._ Besides the intrinsic charge carriers in 1L-WS\({}_{2}\) (typically _n_-type due to S vacancies[41]), there is also an electrostatically induced charge in 1L-WS\({}_{2}\) when _V_\(>\)0V. A SLG/hBN/1L-WS\({}_{2}\) tunneling junction acts as a MIS capacitor[6; 28; 38]. When _V_\(>\)0 is applied to SLG, inducing positive charges in SLG, there is an opposite (negative) charge induced in 1L-WS\({}_{2}\)[6; 28; 38], thus making the charge density on 1L-WS\({}_{2}\) larger than for _V_=0. When _V_\(>\)_V\({}_{ON}\), \(h\) will be injected by tunneling into 1L-WS\({}_{2}\) (Fig.1d), hence, \(h\) will recombine with \(e\). Consequently, the EL emission originates from X\({}^{-}\) states. However, the radiative recombination efficiency (defined as the number of _e-h_ pairs that recombine by emission of a photon divided by total number of _e-h_ pairs) of X\({}^{-}\) is lower than X\({}^{0}\) because of the small (\(\sim\)30meV) binding energy of trions[42]. Thus, to gain higher \(\eta_{EL}\) one should favor X\({}^{0}\) EL emission by lowering the unbalanced free-carriers concentration in 1L-TMDs by either gate modulation[6; 12; 28; 31; 36; 38], physical[102; 103], or chemical doping[11; 25]. We thus treat 1L-WS\({}_{2}\) using TFSI to reduce doping and favor X\({}^{0}\) emission under bias and investigate the effects on EL emission and gated-PL. Fig.5 plots representative Raman spectra before (black) and after (red) TFSI treatment. By comparing the spectra before and after TFSI treatment, and the fits for the 1L-WS\({}_{2}\) in Table 1, we do not observe significant changes in peak position and FHWM. However, there is an overall intensity increase of the Raman modes of\(\sim\)50%, compared to the Si peak. This indicates a reduction of _n_-doping induced by TFSI treatment, because S vacancies in 1L-TMDs are commonly associated to _n_-type behaviour and the reduction of these defects will reflect in _p_-type doping fingerprint[54; 55; 56; 66]. Pos(A\({}_{1}^{\prime}\)) is unaffected by TFSI treatment, which suggests that the reduction in the intrinsic 1L-WS\({}_{2}\)_n_-doping induced by TFSI is\(<<\)10\({}^{12}\)cm\({}^{-2}\)[88]. Although TFSI is able to _p_-dope SLG when it is in contact with the TFSI solution[104], Fig.5 shows negligible (within the errors[86]) changes in the SLG (e.g. before (after): Pos(G)\(\sim\)1585.1 (1585.0)\(\pm\)0.2cm\({}^{-1}\), FWHM(G)\(\sim\)9.0 (9.1)\(\pm\)0.2cm\({}^{-1}\), Pos(2D)\(\sim\)2692.3 (2692.2)\(\pm\)0.2cm\({}^{-1}\), FWHM(2D)\(\sim\)20.9 (20.8)\(\pm\)0.2cm\({}^{-1}\), I(2D)/I(G)\(\sim\)2.4 (2.4), and A(2D)/A(G)\(\sim\)5.6 (5.6)) and hBN (e.g. 
before (after): Pos(E\({}_{2g}\))\(\sim\)1366.4 (1366.5)\(\pm\)0.2cm\({}^{-1}\) and FWHM(E\({}_{2g}\))\(\sim\)9.2 (9.1)\(\pm\)0.2cm\({}^{-1}\)) Raman spectra after treatment, as both are protected by the top 1L-W\({}_{2}\). Figure 5: 514.5nm Raman spectra of pristine (black line) and TFSI-treated (red line) 1L-WS\({}_{2}\)/hBN/SLG LMH. The SLG and hBN Raman modes are labelled, as well as the modes for 1L-WS\({}_{2}\), as for Table 1. The 150-450cm\({}^{-1}\) (1300-2800cm\({}^{-1}\)) ranges are normalized to the Si (2D) peaks, respectively. The E\({}_{2G}\) peak is multiplied by 10 for better visualization Figure 4: a) Evolution of PL as a function of _V._ For comparison, an EL spectrum for I\(\sim\)16nA is shown (red). The dashed lines are guides to the eye for the X\({}^{0}\) and X\({}^{-}\) positions. In all PL measurements up to 3V, _I_\(<\)10\({}^{-11}\)A. At 4V, _I_\(\sim\)10nA, indicating \(h\) tunneling through hBN into 1L-WS\({}_{2}\). b) EL and PL positions from 4 different devices. The dashed line plots the unbiased PL position of X\({}^{0}\) measured in Fig.3b Fig.6a plots a representative PL spectrum of 1L-WS\({}_{2}\) embedded in the LMH before TFSI treatment, and Fig.6b after treatment. For the pristine case, there are two components, fitted by two Lorentzians\(\sim\)618.7nm (\(\sim\)2.004eV) and\(\sim\)629.1nm (\(\sim\)1.971eV) corresponding to X\({}^{0}\)[69; 71] and X\({}^{-}\) emission[99; 100]. For non-biased devices, the spectral weight (defined as the area of each peak) of the PL emission indicates a majority emission due to X\({}^{0}\). After treatment, the PL emission evolves to a main single peak\(\sim\)618.1nm (\(\sim\)2.006eV), accompanied by a\(\sim\)4-fold increase in PL intensity. The changes in spectral weight of X\({}^{0}\) and X\({}^{-}\) emission after treatment can be assigned to a reduction in the _e_-density in 1L-WS\({}_{2}\)[54; 55; 56], in agreement with our Raman analysis. Refs.[50; 51; 52; 54; 55; 56; 50] reported that PL enhancement depends on sample quality (defects) and may vary 1 to 10 times. In our samples we observe a PL increase\(\sim\)5\(\pm\)1-times, consistent with Refs.[50; 51; 52; 54; 55; 56; 50]. Fig.7a plots typical _I_-_V_ characteristics of 3 devices before (solid black lines) and after (dashed red lines) TFSI treatment. \(I\) is not affected by the treatment. _V\({}_{ON}\)_ is mostly influenced by the hBN thickness[82; 83]. Figs.8a,b show EL collected before and after TFSI, respectively, for different \(I\). In both cases, EL is triggered for similar current levels (_I_\(<\)5nA), and the intensity increases linearly with \(I\), Fig.8c. The EL intensity slope as a function of current density (_I_ divided by AA) is affected by TFSI. For pristine-LEDs we get an average slope \(\alpha\sim\)1.4\(\pm\)0.3, while after TFSI \(\alpha\sim\)13.5\(\pm\)1.1, with 1 order of magnitude \(\eta_{EL}\) increase, Fig.8c. The red-shifts in the EL emission with \(I\) increase in pristine (\(<\)6nm) and TFSI treated LEDs (\(<\)5nm), Figs.8a,b, can be assigned to E\({}_{F}\) shift induced by the MIS structure[31; 6; 33]. Next, we estimate the external quantum efficiency (EQE) of our LEDs. 
This is defined as the ratio between the number of emitted photons (_N\({}_{ph}\)_) and that of injected \(h\) per second (_N\({}_{h}\)_)[105]: \[EQE=\frac{N_{ph}}{N_{h}}=\frac{\sum_{\lambda}N_{ph-counts}}{N_{h}}\times\frac{ A_{eff}}{\eta_{sys}}, \tag{1}\] where \(\sum_{\lambda}N_{ph-counts}\) is the sum of the total photons collected by the spectrometer over the measured spectral range, A\({}_{eff}\)=AA/A\({}_{spot}\), where A\({}_{spot}\) is the microscope objective spot size (A\({}_{spot}\)=\(\pi\)[1.22\(\lambda\)/2NA]\({}^{2}\sim\)2.2\(\mu\)m\({}^{2}\), with \(\lambda\)=618nm and NA=0.45), and \(N_{h}\)=_I\(\times\)t/q_, where \(t\) is the acquisition time, and \(q\) the \(e\) charge. The efficiency factor (defined as the ratio between the photons collected by the detector and the emitted photons by EL at the sample position) of our setup, including all optical components and spectrometer, is \(\eta_{sys}\sim\)0.0051, see Methods. From Eq.1 we get EQE\(\sim\)0.025\(\%\pm\)0.021% \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Peak & Bulk-WS\({}_{2}\) Assignment & Bulk-WS\({}_{2}\) & 1L-WS\({}_{2}\) Assignment & 1L-WS\({}_{2}\)-SiO\({}_{2}\) & 1L-WS\({}_{2}\)-LMH & TFSI + 1L-WS\({}_{2}\)-LMH \\ \hline 1 & LA(M) & 174.5 (11.1) & LA(M) & 175.6 (14.5) & 175.6 (14.6) & 174.9 (14.4) \\ 2 & LA(K) & 194.8 (3.3) & LA(K) & 193.3 (4.5) & 193.8 (3.3) & 193.3 (4.7) \\ 3 & A\({}_{1g}\)(K)-LA(K) & 213.7 (4.2) & A\({}_{1}^{{}^{\prime}}\)(K)-LA(K) & 214.5 (5.7) & 214.5 (5.2) & 213.5 (6.0) \\ 4 & A\({}_{1g}\)(M)-LA(M) & 232.8 (5.7) & A\({}_{1}^{{}^{\prime}}\)(M)-LA(M) & 231.5 (6.7) & 231.9 (7.1) & 231.4 (5.9) \\ 5 & A\({}_{1g}\)(M)-ZA(M) & 266.8 (6.9) & A\({}_{1}^{{}^{\prime}}\)(M)-ZA(M) & 265.3 (6.9) & 265.9 (7.2) & 265.4 (7.0) \\ 6 & E\({}_{2g}^{{}^{\prime}}\)(I) & 297.6 (4.2) & E\({}^{{}^{\prime\prime}}\)(I) & 297.7 (2.8) & 298.5 (3.1) & 298.7 (2.6) \\ 7 & LA(M)+TA(M) & 311.2(2.4) & LA(M)+TA(M) & 311.2 (2.5) & 311.8 (2.3) & 311.2 (2.4) \\ 8 & E\({}_{2g}^{{}^{\prime}}\)(M) & 324.6 (17.5) & E\({}^{{}^{\prime\prime}}\)(M) & 326.7 (25.5) & 325.9 (24.7) & 327.7 (25.7) \\ & 2LA(M) & 350.6 (8.3) & 2LA(M) & 352.4 (9.3) & 352.7 (9.2) & 352.7 (8.0) \\ & E\({}_{2g}^{{}^{1}}\)(I) & 356.9 (1.5) & E\({}^{{}^{\prime}}\)(I) & 357.2 (3.3) & 357.4 (3.1) & 357.2 (2.9) \\ & A\({}_{1g}\)(I) & 420.8 (2.1) & A\({}_{1}^{{}^{\prime}}\)(I) & 418.9 (3.9) & 419.8 (3.4) & 419.9 (3.4) \\ \hline \end{tabular} \end{table} Table 1: Pos and (FWHM) in cm\({}^{-1}\) of WS\({}_{2}\) Raman peaks, before and after LMH assembly, and TFSI treatment Figure 6: Fitting of PL spectra for (a) pristine and (b) TFSI-treated 1L-WS\({}_{2}\) on SiO\({}_{2}\)/Si, for 532nm excitation and\(\sim\)0.195%\(\pm\)0.324% for pristine- and TFSI treated-LEDs, respectively, corresponding to a\(\sim\)8.7\(\pm\)1.5-fold increase, thus demonstrating that TFSI can boost EQE by almost one order of magnitude. It was reported that, using pulsed (AC) bias, EL emission can be enhanced a factor\(\sim\)4[33] and up to\(\sim\)100 in a double optical cavity (distributed Bragg reflector (DBR) with an optical mirror)[106]. Therefore, AC bias and photonic cavities could be combined with TFSI treatment to achieve EQE\(>\)10% in 1L-TMDs. We now consider the EL emission features induced by TFSI treatment. By comparing EL before and after TFSI (Figs.8a,b), a blue-shift in EL is observed. In pristine-LEDs, the EL emission is\(\sim\)641.8nm (\(\sim\)1.931eV), Fig.9a, whereas after treatment it is\(\sim\)625.6nm (\(\sim\)1.982eV), Fig.9b. 
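Eq. (1) above can be evaluated with a short sketch. The collection geometry, active area and \(\eta_{sys}\) are the values given in the text, while the photon counts, current and integration time below are hypothetical placeholders, not measured values.

```python
import numpy as np

Q = 1.602e-19      # electron charge (C)
ETA_SYS = 0.0051   # setup efficiency factor (from Methods)
AA = 23.0          # active area in um^2 (device of Fig. 1b)

def spot_area_um2(wavelength_nm=618.0, na=0.45):
    """Diffraction-limited spot area A_spot = pi*(1.22*lambda/(2*NA))^2."""
    r_um = 1.22 * wavelength_nm / (2 * na) * 1e-3
    return np.pi * r_um**2

def eqe(photon_counts, current_A, t_s):
    """Eq. (1): EQE = (photon counts / N_h) * (A_eff / eta_sys),
    with N_h = I*t/q and A_eff = AA / A_spot."""
    n_h = current_A * t_s / Q
    a_eff = AA / spot_area_um2()
    return photon_counts / n_h * a_eff / ETA_SYS

print(f"A_spot ~ {spot_area_um2():.2f} um^2")          # ~2.2 um^2
# Hypothetical counts, current and acquisition time, for illustration:
print(f"EQE ~ {100 * eqe(4.0e4, 12e-9, 10.0):.3f} %")  # ~0.01 %, same order
                                                       # as the measured values
```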
Fig.9c plots the EL peak position before and after treatment in 4 devices. After treatment, the EL emission shifts to shorter wavelengths, where X\({}^{0}\) is expected[68; 69] (dashed line in Fig.9c). In non-biased S-based TMD devices, this shift could be due to the depletion of excess \(e\) in _n_-doped 1L-WS\({}_{2}\) induced by TFSI[54; 55; 56; 57; 58; 59; 62; 63; 64]. Nevertheless, we cannot neglect the additional charge density induced by \(V\) on the MIS capacitor. E.g., the _I-V_ characteristics in Fig.7 show that \(I\) and _V\({}_{ON}\)_ do not change before and after TFSI, suggesting that the same tunneling condition is maintained across the 1L-WS\({}_{2}\)/hBN/SLG junction. In both cases a comparable electric field (and electric charge) is developed across the junction for a given \(V\). Fig.7 implies that, independent of TFSI treatment, the same amount of negative charge is electrostatically induced in 1L-WS\({}_{2}\) at _V_\(>\)0. However, taking into account the EL spectral shift towards X\({}^{0}\) emission upon bias, the expected depletion of excess \(e\) in 1L-WS\({}_{2}\) cannot explain the electrical behaviour of Figs.9b,c. Consequently, the emission profile is not compatible with the _I-V_ curves before and after TFSI in Fig.7, given that the electric field across the junction should be modified by the \(e\) density change in 1L-WS\({}_{2}\). To get a better insight into the effects of TFSI on 1L-WS\({}_{2}\) based LEDs, Figs.9d,e plot normalized PL spectra as a function of \(V\) before and after TFSI. Figure 8: EL spectra from (a) pristine and (b) TFSI-treated 1L-WS\({}_{2}\)-LEDs for different tunneling currents. AA\(\sim\)21\(\mu\)m\({}^{2}\). c) EL intensity as a function of tunneling current divided by AA for pristine (black) and TFSI-treated (red) 1L-WS\({}_{2}\)-LEDs (3 devices). The dashed lines are a linear fit to the data. Figure 7: _I-V_ curves of 3 LEDs before (solid black lines) and after (dashed red lines) TFSI treatment. In the pristine case (Fig.9d), the PL map shows an evolution in emission spectra from \(\sim\)620nm (\(\sim\)2.000eV) to \(\sim\)638nm (\(\sim\)1.943eV), corresponding to a spectral shift from X\({}^{0}\) to X\({}^{-}\) due to excess \(e\) in 1L-WS\({}_{2}\) induced by \(V\). After TFSI treatment (Fig.9e), the PL exhibits only a minor shift from\(\sim\)618nm (\(\sim\)2.006eV) to\(\sim\)622nm (\(\sim\)1.993eV), implying that the induced _e_-charge in 1L-WS\({}_{2}\) does not contribute to the X\({}^{-}\) emission pathway. Therefore, similar to Figs.9a,b, PL also indicates that the emission after TFSI treatment predominantly originates from radiative recombination of X\({}^{0}\), independent of \(V\). Refs.[54; 56; 57; 58; 59; 60; 61] claimed that TFSI treatment reduces the extent of _n_-type behavior in S-based 1L-TMDs due to S-vacancy passivation, consistent with the suppression of X\({}^{-}\) formation in Refs.[57; 62; 63; 64; 65]. Ref.[8] reported that TFSI acts as a Lewis acid, i.e. it can accept an \(e\) pair from a donor[53], suppressing X\({}^{-}\) formation. Refs.[50; 51; 52], on the other hand, claimed that TFSI may activate sub-gap states and reduce the _n_-type behavior in S-based TMDs, as well as reducing X\({}^{-}\) formation. Our _I-V_, EL and gated-PL results suggest that TFSI treatment i) depletes the excess \(e\) in 1L-WS\({}_{2}\), acting as a Lewis acid[8], and ii) favours the radiative recombination of X\({}^{0}\) independent of bias, due to the activation of trapping states[50; 52] in 1L-WS\({}_{2}\) caused by the treatment. One would expect changes in the excitonic emission at such trapping states at RT, where the thermal energy can assist carrier de-trapping and radiative recombination from excitons[64]. Therefore, the modification from non-radiative to radiative recombination by activation of trapping states could be further engineered to achieve more efficient optoelectronic devices. 
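The X\({}^{0}\)/X\({}^{-}\) spectral weights discussed throughout (Figs. 4, 6, 9) follow from a two-Lorentzian decomposition of the emission spectra. A minimal sketch on synthetic data: the peak positions are those quoted in the text, while the amplitudes, widths and noise level are placeholders, and SciPy is assumed for the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    """Single Lorentzian with amplitude amp, centre x0, FWHM gamma (nm)."""
    return amp * (gamma / 2)**2 / ((x - x0)**2 + (gamma / 2)**2)

def two_lorentzians(x, a0, x00, g0, a1, x01, g1):
    return lorentzian(x, a0, x00, g0) + lorentzian(x, a1, x01, g1)

# Synthetic spectrum mimicking pristine 1L-WS2: X0 ~618.7 nm, X- ~629.1 nm
wl = np.linspace(600, 660, 600)
rng = np.random.default_rng(0)
data = two_lorentzians(wl, 1.0, 618.7, 6.0, 0.45, 629.1, 9.0)
data += rng.normal(0, 0.01, wl.size)

p0 = [1.0, 619, 5, 0.5, 629, 8]  # initial guesses near the expected peaks
popt, _ = curve_fit(two_lorentzians, wl, data, p0=p0)

# Spectral weight = area of each component; the integral of a
# Lorentzian written this way is amp * pi * gamma / 2
w_x0 = popt[0] * np.pi * popt[2] / 2
w_xm = popt[3] * np.pi * popt[5] / 2
print(f"X0 fraction of the spectral weight ~ {w_x0 / (w_x0 + w_xm):.2f}")
```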
## Conclusions We demonstrated a one order of magnitude enhancement in the EL emission of 1L-WS\({}_{2}\)-LEDs by performing TFSI treatment. EL predominantly originates from trions in pristine devices, while neutral excitons dominate in treated ones. The neutral excitonic emission is also restored in 1L-WS\({}_{2}\) gated-PL measurements. We attribute these changes to a reduction of the _n_-doping of 1L-WS\({}_{2}\), as well as to changes in the relaxation and recombination pathways within 1L-WS\({}_{2}\). This paves the way to more efficient 1L-TMD-based LEDs, and sheds light on the tunability of the excitonic emission of these devices. ## Methods ### Raman characterization of LMH individual constituents Raman spectroscopy allows us to monitor LMs at every step of device fabrication. This should always be performed on individual LMs before and after assembly in LMHs and devices. This is an essential step to ensure reproducibility of the results, but, unfortunately, it is often neglected in the literature. Figure 9: EL spectra of (a) pristine and (b) TFSI-treated LEDs at similar tunneling current\(\sim\)12nA, fitted with Lorentzians. c) Position of EL emission for different LEDs before (black) and after (red) TFSI. Color-plot of the gated-PL of (d) pristine and (e) TFSI-treated LED at similar laser excitation power and integration time. Ultralow-frequency (ULF) Raman spectra in the range\(\sim\)10-50cm\({}^{-1}\) probe the shear (C) modes, corresponding to layer motion parallel to the planes, and the layer breathing modes (LBMs), corresponding to the motion perpendicular to them[93; 107; 108; 109]. Pos(C)\({}_{N}\) can be used to determine the number of layers[107; 108; 109] as N=\(\pi(2\cos^{-1}[\frac{Pos(C)_{N}}{Pos(C)_{\infty}}])^{-1}\), with Pos(C)\({}_{\infty}\) the bulk Pos(C). Fig.10 plots the Raman spectra of non-treated 1L-WS\({}_{2}\) and bulk-WS\({}_{2}\). In Fig.10a, the C mode and LBM are not observed for 1L-WS\({}_{2}\), as expected[107; 108; 109]. In bulk-WS\({}_{2}\), Pos(C)\(\sim 26.9\pm 0.14\)cm\({}^{-1}\). The spectral resolution\(\pm\)0.14cm\({}^{-1}\) for the ULF region is obtained as for Ref.[86]. We observe two additional peaks\(\sim\)28.7\(\pm\)0.14cm\({}^{-1}\) and 46.4\(\pm\)0.14cm\({}^{-1}\), respectively, in agreement with Refs.[110; 111; 112]. These do not depend on N[110; 111] and are seen because 514.5nm (\(\sim\)2.4eV) is nearly resonant with the B exciton (\(\sim\)2.4eV) of 1L-WS\({}_{2}\)[113; 114; 115; 116; 117], and \(\sim\)20meV above the bulk-WS\({}_{2}\) B exciton (\(\sim\)2.38eV)[114; 115]. This gives rise to a resonant process[113; 114; 115; 116; 117], which occurs because the laser energy matches the electronic transition of the B exciton, revealing features associated with intervalley scattering mediated by acoustic ph[118; 119; 120]. A similar process also happens in 1L-MoS\({}_{2}\)[110; 111] and other 1L-TMDs[118; 119; 120]. Although our ULF filters cut\(\sim\)5cm\({}^{-1}\), the LBM is not detected in bulk-WS\({}_{2}\), as its frequency is expected to be\(<\)10cm\({}^{-1}\)[109], because this resonant process with a 514.5nm laser reduces the signal-to-noise ratio in this spectral region[110]. 
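A numerical sketch of the layer-number relation quoted above. The hBN shear-mode positions are those reported later in this section; the graphite bulk value (\(\sim\)43.5cm\({}^{-1}\)) used for the FLG check is an assumption from the literature, not a number quoted in the text.

```python
import numpy as np

def n_layers(pos_c_N, pos_c_bulk):
    """N = pi / (2 * arccos(Pos(C)_N / Pos(C)_inf)), inverting the
    relation Pos(C)_N = Pos(C)_inf * cos(pi / (2N)) quoted in the text."""
    return np.pi / (2 * np.arccos(pos_c_N / pos_c_bulk))

def pos_c(N, pos_c_bulk):
    """Forward relation: shear-mode position for an N-layer flake."""
    return pos_c_bulk * np.cos(np.pi / (2 * N))

# hBN flake measured in Methods: Pos(C)_N = 50.4, bulk 52.3 cm^-1
print(f"hBN: N ~ {n_layers(50.4, 52.3):.1f}")        # ~5.8, i.e. N = 6 +/- 1
print(f"Pos(C) for N=6: {pos_c(6, 52.3):.1f} cm^-1") # ~50.5, sanity check
# FLG electrode: Pos(C)_N = 41.4 cm^-1; 43.5 cm^-1 assumed for graphite
print(f"FLG: N ~ {n_layers(41.4, 43.5):.1f}")        # ~5.0
```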
The high-frequency (HF) Raman spectra of non-treated 1L-WS\({}_{2}\) and bulk-WS\({}_{2}\) (Fig.10b) show various peaks, Table 1. The first order Raman modes, i.e. E\({}^{{}^{\prime}}\), A\({}^{{}^{\prime}}_{1}\) in 1L-WS\({}_{2}\)[68, 69, 70, 71] and E\({}^{1}_{2g}\), A\({}_{1g}\) in bulk-WS\({}_{2}\)[68, 69, 70, 71]. E\({}^{{}^{\prime}}\) (E1\({}_{2g}\)) and A\({}^{{}^{\prime}}_{1}\) (A\({}_{1g}\)) correspond to in-plane and out-of-plane optical ph for 1L(bulk)-WS\({}_{2}\). Their nomenclature for 1L and bulk differs due to the different crystal symmetry[68, 69, 70, 71]. In 1L-WS\({}_{2}\) we get Pos(E\({}^{{}^{\prime}}\))\(\sim\)356.8\(\pm\)0.2cm\({}^{-1}\), FWHM(E\({}^{{}^{\prime}}\)) \(\sim\)3.2\(\pm\)0.2cm\({}^{-1}\), Pos(A\({}^{{}^{\prime}}_{1}\))\(\sim\)418.5\(\pm\)0.2cm\({}^{-1}\), FWHM(A\({}^{{}^{\prime}}_{1}\))\(\sim\)4.3\(\pm\)0.2cm\({}^{-1}\). In bulk-WS\({}_{2}\) we have Pos(E\({}^{{}^{\prime}}_{2g}\))\(\sim\)356.8\(\pm\)0.2cm\({}^{-1}\), FWHM(E\({}^{{}^{\prime}}_{2g}\))\(\sim\)1.5\(\pm\)0.2cm\({}^{-1}\), Pos(A\({}^{{}^{\prime}}_{1}\))\(\sim\)420.8\(\pm\)0.2cm\({}^{-1}\), FWHM(A\({}_{1}\))\(\sim\)2.1\(\pm\)0.2cm\({}^{-1}\). In 1L-WS\({}_{2}\) the difference in peaks' position [Pos(E\({}^{{}^{\prime}}\))-Pos(A\({}^{{}^{\prime}}_{1}\))] is\(\sim\)61.7cm\({}^{-1}\) while this is\(\sim\)64.0cm\({}^{-1}\) in bulk-WS\({}_{2}\), further corroborating the identification of 1L[68]. In the HF spectra of 1L- and bulk-WS\({}_{2}\) we also observe the 2LA(M) mode, involving two longitudinal acoustic (LA) ph close to the M point[68, 69, 70]. For 1L-WS\({}_{2}\) Pos(2LA(M))\(\sim\)351.9\(\pm\)0.2cm\({}^{-1}\) and FWHM(2LA(M))\(\sim\)9.2\(\pm\)0.2cm\({}^{-1}\), whereas for bulk-WS\({}_{2}\) Pos(2LA(M))\(\sim\)350.6\(\pm\)0.2cm\({}^{-1}\) and FWHM(2LA(M))\(\sim\)8.3\(\pm\)0.2cm\({}^{-1}\). The 2LA(M) mode originates from a second-order double resonant process[118, 119, 120], where momentum conservation is satisfied by two LA ph with opposite momenta around K- and M-points[119], therefore sensitive to differences in band structure between bulk and 1L-WS\({}_{2}\)[68, 121]. I(A\({}_{1g}\))/I(E\({}_{1g}\))\(\sim\)3.2 in bulk-WS\({}_{2}\), where I is the peak height, is higher than I(A\({}^{{}^{\prime}}_{1}\))/I(E\({}^{{}^{\prime}}\))\(\sim\)0.8 in 1L-WS\({}_{2}\). I(2LA)/I(E\({}_{1g}\))\(\sim\)1 in bulk-WS\({}_{2}\) is lower than I(2LA(M))/I(E\({}^{{}^{\prime}}\))\(\sim\)1.7 in 1L-WS\({}_{2}\). This can be explained considering that the main first-order (E\({}^{{}^{\prime}}\), A\({}^{{}^{\prime}}_{1}\)) and second-order (2LA(M)) Raman modes are enhanced for 2.41eV excitation, due to exciton-ph coupling effects involving B exciton transitions[116, 122]. These depend on mode symmetry (i.e. differ between out-of-plane and in-plane modes) as well as N[118]. In bulk-WS\({}_{2}\), the out-of-plane A\({}_{1g}\) is resonant with the B exciton, unlike E\({}^{1}_{2g}\)[118]. The enhancement of A\({}_{1g}\) decreases with decreasing N due to the dependence of the lifetime of the intermediate excitonic states on N[118]. The difference between I(2LA)/I(E\({}^{{}^{\prime}}_{1}\)) in 1L-WS\({}_{2}\) and I(2LA)/I(E\({}^{1}_{2g}\)) in bulk-WS\({}_{2}\) is due to a change in band structure from direct bandgap in 1L to indirect in bulk-WS\({}_{2}\)[68, 69, 70, 71], which changes the double resonance conditions[118, 119, 120]. The Raman spectrum of 1L-WS\({}_{2}\) also shows 8 peaks in the range 170-350cm\({}^{-1}\) (Fig.10b and Table 1). 
LA(M) and LA(K) correspond to one-ph processes originating from the LA branch at the M- and the K-points, respectively[68; 69; 70; 71]. Since LA(M), LA(K) and E\({}_{2g}^{2}\)(M) are one-ph processes from the edge of the BZ (q\(\neq\)0)[68; 70; 71], they should not be seen in the Raman spectra: due to the Raman fundamental selection rule[123], one-ph processes are Raman active only for ph with q\(\sim\)0, whereas for multi-ph scattering the sum of the ph momenta needs to be\(\sim\)0[118; 119; 120; 121]. However, these modes can be activated in the presence of defects, as these can exchange momentum with ph such that the sum of the momenta in the process is\(\sim\)0[68; 69; 70; 71]. Figure 10: (a) Low- and (b) high-frequency 514.5nm Raman spectra of 1L-WS\({}_{2}\) (red) and bulk-WS\({}_{2}\) (black) on Si/SiO\({}_{2}\), normalized to the Si peak, with labels as for Table 1. A\({}_{1g}\)(K)-LA(K), A\({}_{1g}\)(M)-LA(M), A\({}_{1g}\)(M)-ZA(M), LA(M)+TA(M) in bulk-WS\({}_{2}\) and A\({}_{1}^{{}^{\prime}}\)(K)-LA(K), A\({}_{1}^{{}^{\prime}}\)(M)-LA(M), A\({}_{1}^{{}^{\prime}}\)(M)-ZA(M), LA(M)+TA(M) in 1L-WS\({}_{2}\) are combinational modes, and Raman allowed[68; 70; 71]. E\({}_{2g}^{2}\)(M) corresponds to a one-ph process originating from the transverse optical (TO) branch at the M-point[68; 70; 71]. E\({}_{2g}^{2}\)(\(\Gamma\)) is a degenerate mode originating from the LO and TO branches at \(\Gamma\)[68; 70; 71]. Fig.11 plots the Raman spectra of a\(\sim\)3nm hBN flake (black curves) and bulk-hBN (red curves). The latter has 2 Raman-active modes[124; 125], C and E\({}_{2g}\). In Fig.11a, Pos(C)\({}_{\infty}\)=52.3\(\pm\)0.14cm\({}^{-1}\) with FWHM\(\sim\)0.7\(\pm\)0.2cm\({}^{-1}\) for bulk-hBN, and Pos(C)\({}_{N}\)=50.4\(\pm\)0.14cm\({}^{-1}\) with FWHM\(\sim\)0.8\(\pm\)0.2cm\({}^{-1}\) for the hBN flake. In bulk-hBN, Pos(C)\({}_{\infty}\)=\(\frac{1}{\pi c}\sqrt{\frac{\alpha}{\mu}}\)=52.3cm\({}^{-1}\), with \(\mu\)=6.9\(\times\)10\({}^{-27}\)kg\(\AA^{-2}\) the mass of one layer per unit area, \(c\) the speed of light in cm s\({}^{-1}\), and \(\alpha\) the spring constant associated with the coupling between adjacent layers[86; 109]. From this, we get \(\alpha=16.9\times 10^{18}\)Nm\({}^{-3}\). From N=\(\pi(2\cos^{-1}[\frac{Pos(C)_{N}}{Pos(C)_{\infty}}])^{-1}\), we get N=6\(\pm\)1 for the 3nm thick flake (measured with a Dimension Icon Bruker AFM in tapping mode), as shown in the inset of Fig.11b. In Fig.11b, Pos(E\({}_{2g}\))\(\sim\)1368.5\(\pm\)0.2cm\({}^{-1}\) and FWHM(E\({}_{2g}\))\(\sim\)9.1\(\pm\)0.2cm\({}^{-1}\) for FL-hBN, and Pos(E\({}_{2g}\))\(\sim\)1367\(\pm\)0.2cm\({}^{-1}\) with FWHM(E\({}_{2g}\))\(\sim\)7.6\(\pm\)0.2cm\({}^{-1}\) for bulk-hBN. The peak broadening\(\sim\)1.5cm\({}^{-1}\) in FL-hBN can be attributed to strain variations within the laser spot, as thinner flakes conform more closely to the roughness of the underlying SiO\({}_{2}\)[86]. This is consistent with the fact that thicker hBN flakes have lower root mean square (RMS) roughness[79; 83; 86; 126]: e.g. 300nm SiO\({}_{2}\) has RMS roughness\(\sim\)1nm[83], 2-8nm hBN has RMS roughness\(\sim\)0.2-0.6nm[86], while hBN\(>\)10nm thick presents RMS roughness\(\sim\)0.1nm[79; 83]. The red curves in Figs.12a,b are the Raman spectra of SLG on SiO\({}_{2}\)/Si before LMH assembly. Pos(G)=1586.9\(\pm\)0.2cm\({}^{-1}\) with FWHM(G)=7.7\(\pm\)0.2cm\({}^{-1}\), Pos(2D)=2685.2\(\pm\)0.2cm\({}^{-1}\) with FWHM(2D)\(\sim\)29.3\(\pm\)0.2cm\({}^{-1}\), I(2D)/I(G)\(\sim\)0.85, A(2D)/A(G)\(\sim\)3.3. These indicate a _p_-doping[93; 94; 95] with E\({}_{F}\sim\) 200\(\pm\)50meV. No D peak is observed, thus negligible defects[92; 93; 94]. Pos(G) and Pos(2D) are affected by the presence of strain[93; 94]. Biaxial strain can be differentiated from uniaxial by the absence of G-peak splitting with increasing \(\epsilon\)[127; 128]; however, at low (\(\leq\)0.5%) \(\epsilon\) the splitting cannot be resolved[127; 128]. Thus, the presence (or coexistence) of biaxial strain cannot be ruled out. For uniaxial (biaxial) strain, Pos(G) shifts by \(\Delta\)Pos(G)/\(\Delta\epsilon\)\(\approx\)23(60)cm\({}^{-1}\)/%[127; 128]. Pos(G) also depends on doping[95; 98]. E\({}_{F}\sim\) 200\(\pm\)50meV should correspond to Pos(G)\(\sim\)1584.3cm\({}^{-1}\) for unstrained SLG[98]. However, in our experiment Pos(G)\(\sim\)1586.9\(\pm\)0.2cm\({}^{-1}\), which implies a contribution from compressive uniaxial (biaxial) strain\(\sim\)0.1% (\(\sim\)0.04%). The black curves in Figs.12a,b show the Raman spectrum of the FLG electrode on SiO\({}_{2}\)/Si. Pos(G)\(\sim\) 1581.2\(\pm\)0.2cm\({}^{-1}\) with FWHM\(\sim\)12\(\pm\)0.2cm\({}^{-1}\), Pos(2D\({}_{1}\))\(\sim\) 2694.0\(\pm\)0.2cm\({}^{-1}\) with FWHM\(\sim\)48\(\pm\)0.2cm\({}^{-1}\), and Pos(2D\({}_{2}\))\(\sim\) 2725\(\pm\)0.2cm\({}^{-1}\) with FWHM\(\sim\)33\(\pm\)0.2cm\({}^{-1}\). Pos(C)\({}_{N}\sim\)41.4\(\pm\)0.14cm\({}^{-1}\), corresponding to N=5. 
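The interlayer-coupling relation above can be checked numerically; a minimal sketch, with the constants as quoted in the text (note the one-layer mass per unit area is 6.9\(\times\)10\({}^{-27}\)kg\(\AA^{-2}\)).

```python
import numpy as np

C_LIGHT = 2.998e10        # speed of light in cm/s
MU_HBN = 6.9e-27 / 1e-20  # one-layer mass per unit area: kg/A^2 -> kg/m^2

def alpha_from_pos(pos_c_bulk_cm1, mu=MU_HBN):
    """Invert Pos(C)_inf = (1/(pi*c)) * sqrt(alpha/mu) for the
    interlayer spring constant alpha (N/m^3)."""
    return (np.pi * C_LIGHT * pos_c_bulk_cm1)**2 * mu

alpha = alpha_from_pos(52.3)
print(f"alpha ~ {alpha:.3g} N/m^3")  # ~1.67e19, i.e. ~16.9e18 N m^-3
```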
### Spectrometer efficiency The \(\eta_{sys}\) of our spectrometer is derived as follows. We use a 50x objective (NA=0.45). Hence, the collection solid angle is \(\Omega\)=2\(\pi\)(1-cos\(\theta\)), where \(\theta\)=arcsin(NA/n) and \(n\) is the refractive index. Assuming \(n\)=1, we get \(\Omega\)=0.672 sr. Thus, M\({}_{50x-eff}\)=\(\Omega\)/(4\(\pi\))\(\times\)100%\(\sim\)5.4%. In our Horiba system, the optical path from M\({}_{50x}\) to the CCD includes 7 mirrors (M\({}_{eff}\sim\)83%), a slit (S\({}_{eff}\sim\)90%), a grating (G\({}_{eff}\sim\)60%) and a CCD detector (CCD\({}_{eff}\sim\)85%). Therefore, the calculated overall collection+Horiba efficiency is: \(\rm M_{50x-eff}\times\)(\(\rm M_{eff}\))\({}^{7}\times\)S\({}_{eff}\times\)G\({}_{eff}\times\)CCD\({}_{eff}\)\(\sim\)0.0067. Figure 11: (a) ULF and (b) HF 514.5nm Raman spectra of\(\sim\)3nm hBN on Si/SiO\({}_{2}\) normalized to the Si peak. Inset: AFM height profile of the \(\sim\)3nm hBN on Si/SiO\({}_{2}\). Figure 12: (a) ULF and (b) HF 514.5nm Raman spectra of SLG and FLG on Si/SiO\({}_{2}\) normalized to the Si peak. To experimentally validate the calculation, we use a 0.5pW laser at 632.8nm and measure the counts at the CCD detector, N\({}_{counts}\)=149748. The photon energy at 632.8nm is E\({}_{ph}\)=(1.24/0.6328)\(\times\)1.6e\({}^{-19}\)=3.13e\({}^{-19}\)J. The laser power is P\({}_{opt}\)=0.5e\({}^{-12}\) J/s. As a result, if the system efficiency were 100% we would expect to get 0.5e\({}^{-12}\)/3.13e\({}^{-19}\)=1597444 counts. Therefore, the Horiba system efficiency is \(\rm Syst_{eff}\)=149748/1597444=0.094. Considering M\({}_{50x-eff}\), we get an overall collection + Horiba efficiency \(\rm M_{50x-eff}\times\)\(\rm Syst_{eff}\)=0.054\(\times\)0.094=0.0051, consistent with the theoretical estimation. ###### Acknowledgements. We acknowledge funding from the EU Graphene and Quantum Flagships, EU grant Graph-X, ERC Grants Hetero2D, GSYNCOR and GIPT, and EPSRC Grants EP/K01711X/1, EP/K017144/1, EP/N010345/1, EP/L016087/1, EP/V000055/1, EP/X015742/1
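As a closing numerical note to the Methods, the efficiency budget described above can be reproduced as follows; a sketch, with all factors being the values quoted in the text.

```python
import numpy as np

def collection_efficiency(na=0.45, n=1.0):
    """Fraction of 4*pi sr collected by the objective:
    Omega = 2*pi*(1 - cos(theta)), theta = arcsin(NA/n)."""
    theta = np.arcsin(na / n)
    omega = 2 * np.pi * (1 - np.cos(theta))
    return omega / (4 * np.pi)

m50x = collection_efficiency()                 # ~0.054 (5.4%)
chain = m50x * 0.83**7 * 0.90 * 0.60 * 0.85    # mirrors, slit, grating, CCD
print(f"M_50x-eff ~ {m50x:.3f}, calculated total ~ {chain:.4f}")  # ~0.0067

# Experimental check with the 632.8 nm, 0.5 pW laser:
e_ph = 1.24 / 0.6328 * 1.602e-19  # photon energy (J)
expected = 0.5e-12 / e_ph         # photons/s if the efficiency were 100%
syst_eff = 149748 / expected      # measured counts / expected counts
print(f"Syst_eff ~ {syst_eff:.3f}, total ~ {m50x * syst_eff:.4f}")  # ~0.0051
```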
2306.09845
Exploring the origin of the extended main sequence turn off in M37 through the white dwarf cooling sequence
We use new observations from the Canada-France-Hawaii Telescope to study the white dwarf cooling sequence of the open cluster M37, a cluster that displays an extended main sequence turn-off and, according to a recent photometric analysis, also a spread of initial chemical composition. By taking advantage of a first epoch collected in 1999 with the same telescope, we have been able to calculate proper motions for sources as faint as g ~ 26 (about ~ 6 magnitudes fainter than the Gaia limit), allowing us to separate cluster members from field stars. This has enabled us to isolate a sample of the white dwarf population of M37, reaching the end of the cooling sequence (at g ~ 23.5). The atlas and calibrated catalogue of the sources in the field of view derived here are publicly released as supplementary on-line material. Finally, we present an exhaustive comparison of the white dwarf luminosity function with theoretical models, which has allowed us to exclude the age-spread scenario as the main cause of the extended turnoff seen in the cluster colour-magnitude diagram.
M. Griggio, M. Salaris, D. Nardiello, L. R. Bedin, S. Cassisi, J. Anderson
2023-06-16T13:42:58Z
http://arxiv.org/abs/2306.09845v1
Exploring the origin of the extended main sequence turn off in M37 through the white dwarf cooling sequence ###### Abstract We use new observations from the Canada-France-Hawaii Telescope to study the white dwarf cooling sequence of the open cluster M37, a cluster that displays an extended main sequence turn-off and, according to a recent photometric analysis, also a spread of initial chemical composition. By taking advantage of a first epoch collected in 1999 with the same telescope, we have been able to calculate proper motions for sources as faint as \(g\sim 26\) (about \(\sim 6\) magnitudes fainter than the _Gaia_ limit), allowing us to separate cluster members from field stars. This has enabled us to isolate a sample of the white dwarf population of M37, reaching the end of the cooling sequence (at \(g\sim 23.5\)). The atlas and calibrated catalogue of the sources in the field of view derived here are publicly released as supplementary on-line material. Finally, we present an exhaustive comparison of the white dwarf luminosity function with theoretical models, which has allowed us to exclude the age-spread scenario as the main cause of the extended turnoff seen in the cluster colour-magnitude diagram. keywords: astrometry - Hertzsprung-Russell and colour-magnitude diagrams - open clusters and associations: individual: M37 (NGC 2099) - techniques: photometric - white dwarfs ## 1 Introduction During the last few years the unprecedented quality of the photometric and astrometric data obtained with the _Gaia_ spacecraft has greatly refined our knowledge of the Milky Way open clusters (OCs). The OC census has improved through the rejection of thousands of misidentified OCs in the literature and the discovery of several hundred new confirmed OCs (see, e.g., Cantat-Gaudin et al., 2018; Castro-Ginard et al., 2018); moreover, the improved determination of stellar memberships and orbital parameters has provided us with a better characterisation of individual clusters. In this respect, the analysis of the exquisite, high-precision _Gaia_ colour-magnitude diagrams (CMDs) of _bona fide_ members of selected OCs has recently revealed the presence of extended main sequence (MS) turn off (TO) regions and broadened MSs that cannot be produced by field contamination, binaries and differential reddening alone (see, e.g., Bastian et al., 2018; Marino et al., 2018; Cordoni et al., 2018; Griggio et al., 2022, and references therein). These features are similar to what is observed in the Magellanic Clouds, where star clusters younger than about 2 Gyr display extended TO regions (see, e.g., Mackey et al., 2008; Mackey and Broby Nielsen, 2007; Goudfrooij et al., 2014; Piatti and Bastian, 2016, and references therein), and clusters younger than \(\sim\) 600-700 Myr also display split MSs (see, e.g., Li et al., 2017; Correnti et al., 2017; Marino et al., 2018, and references therein). 
Whilst there is mounting evidence that rotation, as opposed to an age range among the cluster population, is the main culprit to explain these features in the CMDs of both OCs and Magellanic Cloud clusters (see, e.g., Bastian et al., 2018; Kamann et al., 2018, 2020, 2023, and references therein), our photometric analysis of the \(\sim\) 500 Myr old OC M37 (NGC 2099), with an extended TO and no split MS, has targeted a magnitude range populated by stars with convective envelopes, hence predicted to be in any case slow rotators, disclosing the presence of a sizeable initial chemical abundance spread, which may or may not be somehow related to the extended TO (Griggio et al., 2022). We made use of synthetic stellar populations and differential colour-colour diagrams, using a combination of _Gaia_ and _Sloan_ photometry, to show that the observed MS colour spread in the high-precision _Gaia_ Early Data Release 3 (EDR3; Gaia Collaboration et al., 2021) CMD can only be reproduced by differential reddening and unresolved binaries plus either a metallicity spread \(\Delta\)[Fe/H] \(\sim\) 0.15, or a range of initial helium mass fractions \(\Delta Y\sim\) 0.10. As discussed in Griggio et al. (2022), the existing spectroscopic (high- and medium-resolution) measurements of the cluster stars' metallicity provide indications both in favour of and against the existence of a [Fe/H] spread (in which case our results would point to a sizeable helium abundance spread), but a high-precision differential abundance analysis of a consistent sample of cluster stars is needed to address this issue spectroscopically. It is worth noticing that the existence of chemical abundance spreads in low-mass clusters like OCs (M37 has an estimated mass of just 1 000-1 500 \(M_{\odot}\), see Piskunov et al. 2008) is unexpected and hard to explain, and has important implications not only for models of cluster formation and the test of stellar models on CMDs of OCs, but also for the technique of chemical tagging (Freeman and Bland-Hawthorn 2002), based on the idea that clustering in chemical space can in principle associate individual field stars with their birth clusters, assumed chemically homogeneous. If OCs are commonly born with a sizeable internal [Fe/H] range, the suitability of this technique for field stars in the disk of the Milky Way is challenged. In this paper, we present a new photometric analysis of M37's white dwarf (WD) cooling sequence (CS), which improves upon earlier results by Kalirai et al. (2001b) in several ways. The area covered by our observations is over three times larger than that of Kalirai et al. (2001b), who also used the outer regions of their mosaic to estimate the field-star contamination; those regions are however now known to host several member stars (Griggio et al. 2022a). For our field decontamination we have used a safer region much further away from the cluster core, and in addition we exploited their data to obtain proper motions with a time baseline of 23 years, which allowed us to determine a sample of WD members. Taking advantage of these new data, we have performed a theoretical analysis of the observed CS to seek additional constraints on the origin of the cluster extended TO and its chemical abundance spread. The _present-day_ low total mass of M37 seems to preclude the presence of multiple generations of stars, and hence of an age spread, according to the scenario presented by Goudfrooij et al. 
(2014), because the cluster should not be able to retain the ejecta of those first-generation stars that can provide material for further episodes of star formation (asymptotic giant branch stars, supernovae). However, the chemical composition spread we detected photometrically seems to suggest otherwise, hence it is important to derive independent constraints on the origin of the observed extended TO. The study of the WD cooling sequence and its consistency, or lack thereof, with ages inferred from the TO can provide us with these independent clues. We also publicly release the catalogue with magnitudes and proper motions of the covered region, containing more than 120 000 sources. The outline of the paper is as follows. Section 2 presents our new observations, the data reduction process, and the artificial star tests; Sections 3 and 4 present the observed WD CS and its theoretical analysis, respectively, and are followed by Section 5 with the conclusions. ## 2 Observations The main data employed in this article were obtained with the MegaPrime camera at _CFHT_, between September 27th and 29th, 2022 (PI: Nardiello). The MegaPrime camera is composed of forty \(2048\times 4612\) pixel CCDs, with a pixel scale of \(\sim 0.187\) arcsec/px. We collected a set of three images with an exposure time of 300 s, and three images of 5 s, both in the _Sloan_ filters \(g\) and \(r\). The observations in \(g\) were repeated a second time, for a total of eighteen images, twelve in \(g\) and six in \(r\). The data were dithered enough to cover the CCDs' gaps, with a total field of view of about \(1.2\times 1.0\) sq. degrees; a three-colour stacked image of the data is shown in Fig. 1. Since the brightest members of the M37 MS and all the red clump stars were saturated even in the short exposures, we collected a set of 50 dithered images with exposure times of 10 s in both \(g\) and \(r\) with the Asiago Schmidt telescope, to complete the photometry of the brighter part of the CMD. The Asiago Schmidt telescope has a \(\sim\) 1 sq. degree field of view, and similar data collected with this instrument were described in Griggio et al. (2022a). We also took advantage of an early epoch collected at _CFHT_ (with the pioneering CFH12K camera, 12 CCDs, \(\sim 0.206\) arcsec/px, \(42\times 28\) sq. arcmin) in 1999 (PI: Fahlman; Kalirai et al. 2001a), to obtain proper motions. The CFH12K was one of the first wide-field CCD cameras to become operative, and these images were collected in the Johnson \(B\) and \(V\) filters. We used three images per filter, with an exposure time of 300 s. A log of the observations is reported in Table 1. \begin{table} \begin{tabular}{l l l l} \hline \hline Filter & Exp. time & N. of images & Avg. seeing \\ \hline **Megaprime** & & & \\ \hline \(g\) & 300 s & 6 & 0.55 arcsec \\ \(g\) & 5 s & 6 & 0.58 arcsec \\ \(r\) & 300 s & 3 & 0.56 arcsec \\ \(r\) & 5 s & 3 & 0.70 arcsec \\ \hline **Schmidt** & & & \\ \hline \(g\) & 10 s & 50 & 1.86 arcsec \\ \(r\) & 10 s & 50 & 1.97 arcsec \\ \hline **CFH12K** & & & \\ \hline \(B\) & 300 s & 3 & 0.79 arcsec \\ \(V\) & 300 s & 3 & 0.81 arcsec \\ \hline \end{tabular} \end{table} Table 1: Summary of the observations. ### Preliminary photometry As a first step, we derived a 'preliminary photometry', i.e. we measured the flux and position of the brighter sources, which are then used as a starting point to correct for the geometric distortion and to compute the transformations between the different exposures. We treated each CCD of each exposure as an independent image; in the following we will use the terms 'exposure' and 'image' to refer to the image associated with a single CCD. 
Using a version of the software by Anderson et al. (2006) adapted to the _CFHT_ data, we computed a \(5\times 9\) grid of empirical point spread functions (PSFs) for each image: one grid per image accounts for time variations, while the grid itself accounts for the spatial variation of the PSF across the CCD. Each PSF is derived empirically from bright, unsaturated and isolated stars, and to each point on the image we associated a local PSF by a bilinear interpolation of the four closest PSFs in the grid. We then used the software described in Anderson et al. (2006) to find and measure the position and flux of the sources in the images by using the local PSF. The software outputs a catalogue with positions and instrumental magnitudes of the sources for each exposure. Figure 1: Three-colour view of the field of view. We used \(g\) as blue, \(r\) as red, and a combination of \(gr\) for the green colour. ### Geometric distortion Given that one of our goals was to measure proper motions, we needed accurate positions in both epochs. For this purpose, we corrected the geometric distortion following the same approach for both detectors, CFH12K and MegaPrime (the procedure is similar to the one adopted in Griggio et al. 2022a). We selected bright (\(g_{\rm instr}<-10\)), unsaturated sources from each catalogue derived from the preliminary photometry. We cross-identified the sources in our catalogues with the sources in the _Gaia_ DR3 catalogue, projected onto the tangent plane of each image in its central pixel, after transforming the positions to the epoch of each observation. We then fitted the residuals between the _Gaia_ positions and the positions measured in our images with a third-order polynomial, and applied 75% of the correction. We then repeated the process, starting each iteration from the corrected positions of the previous one, reaching convergence after 30 iterations. After the correction, the residuals' dispersion for bright sources is smaller than 0.05 pixels in both detectors, corresponding to \(\sim 10\) mas for the 1999 data and to \(\sim 9\) mas for the 2022 data; summing these residuals in quadrature we obtain a positional dispersion of \(\sim 14\) mas, to be diluted over a time baseline of \(\sim 23\) years, i.e. about \(0.6\) mas yr\({}^{-1}\). Given the absolute proper motion of M37, which is about \(6\) mas yr\({}^{-1}\) (Griggio & Bedin, 2022), this will allow for a proper-motion-based separation between field objects and cluster members (see Sec. 2.4). 
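A minimal sketch of the iterative distortion correction described above, on synthetic data: a smooth third-order distortion is fitted against reference (_Gaia_-like) positions, and 75% of the correction is applied at each of 30 iterations. The distortion coefficients and star counts below are placeholders, and the real correction is of course done per CCD and per exposure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "true" positions (Gaia-like reference, in pixels) and
# measured positions affected by a smooth polynomial distortion.
true_xy = rng.uniform(0, 2048, size=(5000, 2))

def distort(xy):
    x, y = xy[:, 0] / 2048, xy[:, 1] / 2048
    dx = 0.8 * x**2 - 0.5 * x * y + 0.3 * y**3
    dy = -0.6 * y**2 + 0.4 * x * y**2
    return xy + np.column_stack([dx, dy])

meas_xy = distort(true_xy)

def poly_terms(xy):
    """Third-order bivariate polynomial design matrix."""
    x, y = xy[:, 0] / 2048, xy[:, 1] / 2048
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y,
                            x**3, x * x * y, x * y * y, y**3])

corr = meas_xy.copy()
for it in range(30):                 # iterate as in the text
    A = poly_terms(corr)
    resid = true_xy - corr           # reference minus (corrected) measured
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    corr += 0.75 * A @ coef          # apply 75% of the correction
print(f"rms residual: {np.sqrt(np.mean((true_xy - corr)**2)):.4f} px")
```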
We used the catalogues of each image to derive the six-parameter transformations that bring the positions measured in the detector reference frame of each exposure onto the corresponding master frame. The MegaPrime exposures were also dithered enough to allow us to measure the CCDs' relative photometric zero points, which we found to be of the order of 0.01 mag. Our derived \(BV\) photometry for the CFH12K dataset, however, was not usable, in part because we could not access the calibration files, and in part because of the non-ideal dither pattern, which did not allow us to register the CCD zero points to a common photometric reference system. Therefore, the 1999 CFH12K images were used only to derive positions in this first epoch, which were in turn employed to derive the proper motions necessary to separate cluster members from field objects.

### Photometry and astrometry

To extract the positions and fluxes of all the sources in the field of view we used the code K52, an evolution of the code developed by Anderson et al. (2008) for Hubble Space Telescope data, adapted to deal with the _CFHT_ data. The program goes through several iterations, finding and measuring progressively fainter stars, using all the images simultaneously to find the sources, thus increasing the signal-to-noise ratio. This makes it possible to find even the faintest sources, which are lost in the noise of a single exposure. The software uses a list of bright stars (derived from the preliminary photometry) to construct weighted masks that help to avoid PSF-related artefacts. The flux is measured by PSF fitting of the inner \(5\times 5\) pixels of the source, with the appropriate local PSF, and averaged over all the images, with a local sky computed from the surrounding pixels. Measured stars are subtracted from the image before proceeding with the next iteration. The program also outputs some quality flags (see, e.g., Bedin et al., 2009), which we used to discard sources with galaxy-like shapes and diffraction spikes. The \(gr\) instrumental magnitudes were then calibrated using the deep photometric catalogue by Hartman et al. (2008) by means of a relation of the form \(m_{\rm cal}=m_{\rm instr}+a(g_{\rm instr}-r_{\rm instr})+b\), with the parameters \(a\) and \(b\) determined from a linear fit, as shown in Fig. 2. The calibrated CMD of all the sources in the field of view is shown in Fig. 3.

Figure 2: Calibration of the _CFHT_ \(gr\) filters: the coloured lines denote the linear fit to the data. We display the difference in the \(g\) and \(r\) filters between Hartman et al. (2008) and our instrumental magnitudes, as a function of the instrumental \((g-r)\).

Figure 3: CMD of all the sources that passed the quality cuts in the \(gr\) filters (\(\sim\) 120 000).

We extracted the photometry from the Asiago data as described in Sec. 3.1 of Griggio et al. (2022a). We did not perform the second-pass photometry, as we needed only the bright sources. We employed the same procedure outlined for the MegaPrime data to calibrate the Asiago photometry. The flux and position of the sources in the 1999 exposures were extracted with the software K52. However, due to the issues described in the previous section, we did not carry out the photometric calibration for this epoch.
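As an illustration of the zero-point calibration relation above, the coefficients \(a\) and \(b\) can be obtained with a simple least-squares fit to the stars in common with the reference catalogue; the snippet below is a sketch with illustrative array names, not the code actually used.

```python
# Sketch: fit m_cal = m_instr + a*(g_instr - r_instr) + b against a
# reference catalogue (here assumed to be Hartman et al. 2008 magnitudes).
import numpy as np

def fit_zero_point(m_ref, m_instr, g_instr, r_instr):
    """Return (a, b) such that m_ref ~ m_instr + a*(g_instr - r_instr) + b."""
    colour = g_instr - r_instr
    dm = m_ref - m_instr                  # offset the colour term must absorb
    A = np.vstack([colour, np.ones_like(colour)]).T
    (a, b), *_ = np.linalg.lstsq(A, dm, rcond=None)
    return a, b

# usage (one fit per filter): g_cal = g_instr + a_g * (g_instr - r_instr) + b_g
```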
Proper motions were calculated from the displacements \(\mathrm{d}x\) and \(\mathrm{d}y\) between the two epochs, divided by the time baseline of \(\sim 23\) years, and are shown in Fig. 4 (where we used the cluster's mean proper motion as the origin); the displacements were measured by transforming the positions of the stars in the first epoch into the reference system of the second epoch with a six-parameter transformation, and cross-identifying the common sources. The bottom panel of Fig. 4 shows the member selection: we plotted the distance \(\mathrm{d}r\) from the origin as a function of the \(g\) magnitude, and we drew by hand the red line following the distribution of cluster stars, with a sharp cut where cluster and field cannot be well separated by eye. In addition, we estimated the field median \(\mathrm{d}x\) and \(\mathrm{d}y\) and the intrinsic dispersions \(\sigma_{\mathrm{x,y}}\) as 1.5 times the 68.27\({}^{\mathrm{th}}\) percentile of the distributions of \(\mathrm{d}x\) and \(\mathrm{d}y\) around their medians, and excluded the sources with proper motion inside a circle centred on the field motion, with radius given by the sum in quadrature of \(\sigma_{\mathrm{x}}\) and \(\sigma_{\mathrm{y}}\) (dashed black circle in Fig. 4). For sources that are present in the catalogue by Griggio et al. (2022), we adopted their member flag, which, at brighter magnitudes, is more reliable than the selection based on our measured proper motions, as it is based on the _Gaia_ astrometry.

Figure 4: _Top panel_: proper motions for all the sources (grey), with selected members highlighted in blue. The dashed black circle is the cut described in the text. The origin is set to the cluster's mean proper motion. _Bottom panel_: \(\mathrm{d}r\) vs \(g\) for all the sources (grey) and cluster members (blue).

This selection leads to Fig. 5, where we show in light grey all the sources with proper motions (which are fewer than those in Fig. 3, as the 2022 data are deeper and cover a larger area than the 1999 ones) and in blue the selected cluster members. We plotted the _CFHT_ photometry for \(g\geq 12.5\), and the Schmidt data for \(g<12.5\), to complete the TO and red clump regions, which are saturated in the _CFHT_ short exposures.

Figure 5: CMD for all the sources with proper motions (light grey, \(\sim\) 24 000) and for those selected as cluster members (blue, \(\sim\) 3 200).

Our derived proper motions represent an extension of the _Gaia_ astrometry down to \(g\sim 26\), and the deepest astro-photometric catalogue of M37 available to date. Unfortunately, given the large errors on the positions of faint sources in the first epoch, we cannot discriminate very well between members and field stars for \(g\gtrsim 22.5\). Nonetheless, we have demonstrated the capability of ground-based wide-field imagers to provide useful astrometry even in the _Gaia_ era. Finally, we confirm WD1, WD2 and WD3 of Griggio et al. (2022) as member candidates according to the proper motions obtained in this work, while the proper motion of WD5 is not compatible with that of the cluster. The other WDs, namely WD4, WD6 and WD7, fall outside the field of view, and we could not measure their motion.
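The field-motion cut described above can be summarised by the following minimal sketch (the hand-drawn \(\mathrm{d}r\) versus \(g\) line is not reproduced here); variable names are illustrative.

```python
# Sketch of the field-circle exclusion: the field centroid and its intrinsic
# dispersion are estimated from medians and 68.27th percentiles of the
# proper-motion distribution; sources inside the circle are flagged as field.
import numpy as np

def field_circle_cut(dx, dy):
    """dx, dy: proper motions in mas/yr (cluster mean at the origin).
    Returns a boolean mask, True for sources OUTSIDE the field circle."""
    mx, my = np.median(dx), np.median(dy)
    sx = 1.5 * np.percentile(np.abs(dx - mx), 68.27)
    sy = 1.5 * np.percentile(np.abs(dy - my), 68.27)
    r_cut = np.hypot(sx, sy)                    # sum in quadrature
    return np.hypot(dx - mx, dy - my) > r_cut
```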
### Artificial stars test

To assess the completeness of our data set, we performed the artificial star (AS) test with the K52 program (see, e.g., Bedin et al., 2009). Briefly, we injected 100 000 synthetic stars into the images (one at a time, in order not to create false over-crowding), generated with random positions and random \(g\) magnitudes, both sampled from uniform distributions, and with \(r\) magnitudes such that they lie on the WD CS fiducial drawn by hand on the CMD (Fig. 6, left panel). The software then operates blindly, finding and measuring all the sources in the images. We then compared the list of measured stars with the AS input list. We considered an AS as recovered if its measured position is within 1 pixel in \(x\) and \(y\) of the injected position and its magnitude within 0.1 mag of the injected magnitudes in both filters. In the right panel of Fig. 6 we show the CMD of the recovered stars, which guided the choice of the region we adopted to derive the WD differential luminosity function (LF). We divided the recovered ASs into 0.5 \(g\)-magnitude bins, and for each bin we computed the median colour and the \(\sigma=68.27^{\rm th}\) percentile of the colour residuals around the median. The orange error bars in the right panel of Fig. 6 represent the \(3\sigma\) interval, and the blue and red curves connecting the edges of the error bars define the region that we will use for our analysis.

Figure 6: Artificial stars test. _Left panel_: blue points denote the observed white dwarfs, the dark grey line represents the fiducial along which we generated the artificial stars. _Right panel_: recovered artificial stars. The orange error bars are calculated as three times the 68.27\({}^{th}\) percentile of the colour residuals around the median, in each 0.5 magnitude bin. The dashed lines connecting the edges of the error bars define the region in which we will count the white dwarfs.

The AS test allowed us to infer the completeness of our data set, defined as the ratio between the number of recovered and injected stars, which varies across the magnitude range covered by our observations. We computed this ratio in each 0.25 \(g\)-magnitude interval, and interpolated the values with a spline. The derived completeness curves are plotted in Fig. 7: the two horizontal lines mark the 80% and 50% completeness levels. Notice that the completeness drops below 50% at about \(g\sim 24\), and reaches zero at \(g\sim 26\). The completeness has been computed for both the 'cluster' and 'field' regions, shown in Fig. 8 in blue and red, respectively. The two regions have roughly the same area of about 0.2 deg\({}^{2}\), and will be employed in Sec. 3 in the study of the WD CS.

Figure 7: Completeness of our data in the 'cluster' and 'field' regions. See text and Fig. 8.

### Astro-photometric catalogue

Together with this work we publicly release an astro-photometric catalogue of the sources that we measured in the _CFHT_ field of view. Proper motions are available only for sources in the region in common with the 1999 data, which is about one-third of the new dataset (as shown in Fig. 8). The catalogue contains \(x\) and \(y\) positions on the master frame in MegaPrime pixels (187 mas px\({}^{-1}\)), the \(gr\) photometry, and proper motions along the \(x\) and \(y\) axes in mas yr\({}^{-1}\). In addition, the quality flag denotes sources that passed our quality cuts, and the member flag those selected as member candidates in this work (blue points in Fig. 4).

## 3 The White Dwarf Cooling Sequence

The 1999 dataset allowed us to measure proper motions for sources well beyond the _Gaia_ magnitude limit, down to \(g\sim 26\); however, given the large errors, we could not discriminate well between cluster and field stars at magnitudes fainter than \(g\sim 22.5\) (see Fig. 4, bottom panel).
Most faint sources that have a clear point-like shape in the 2022 data are heavily affected by the noise in the 1999 data, making their position (and, consequently, their proper motion) measurements very uncertain. For this reason, we did not employ proper motions to remove field objects in the derivation of the LF; we have instead performed a statistical decontamination (cfr. Bedin et al., 2023) using the regions defined in Fig. 8 to obtain the WD LF that we will compare to theoretical predictions in the next section. Fig. 9 shows the CMD of the WD CS for both the 'cluster' and 'field' regions defined in Fig. 8. The red and blue lines in these CMDs are those defined by the AS test (Fig. 6) and mark the boundaries of the region within which we count WD candidates. The final LF is given by the difference between the completeness-corrected 'cluster' and 'field' LFs, and is shown in sea-green in the right panel of Fig. 9 (and reported in Table 2), with error bars corresponding to Poisson errors. The dashed dark-grey line represents the LF of WD member candidates selected by proper motions: we note that the two LFs have similar features, and in particular they terminate at the same magnitude \(g\sim 23.5\), where the completeness level is still greater than 50% (cfr. Fig. 7). This cut-off of the LF is well defined and can be used as an age indicator for the cluster; for an increasing age of the cluster's population, the oldest (earlier-forming) WDs have more time to cool down, thus shifting the LF cut-off towards fainter magnitudes.

\begin{table} \begin{tabular}{l c c} \hline \hline \(g\) & N & \(\sigma_{\rm N}\) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 2: Our derived, completeness-corrected, WD differential LF. Negative values have been set to zero.

Figure 8: Total field of view of the MegaPrime data. The black lines delimit the observed region. The blue filled area is the "cluster" region, while the red filled area is the "field" region. The two regions have the same area of \(\sim 0.2\) deg\({}^{2}\). The number of stars in each region is annotated in the lower corners. The green dashed line shows the area covered in 1999 by the CFH12K detector.

Figure 9: CMD of the WD CS in the 'cluster' (_left panel_) and 'field' (_middle panel_) regions. Blue points with solid black edge in the left panel denote the sources that are member candidates according to their proper motions. The red and blue dashed lines are those defined by the AS test (see Fig. 6). The _right panel_ shows the completeness-corrected LF after field decontamination (sea green). The dark-grey dashed line represents the LF of the proper-motion-selected WD members. See text for details.
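To make the decontamination step concrete, a minimal sketch is given below; input names are illustrative, and the two regions are assumed to have equal area, as in our setup.

```python
# Sketch: completeness-corrected 'cluster' minus 'field' LF, with Poisson
# errors propagated in quadrature; negative counts are set to zero.
import numpy as np

def decontaminated_lf(g_cluster, g_field, bins, compl_cluster, compl_field):
    """g_*: magnitudes of WD candidates in the two equal-area regions;
    compl_*: completeness (0-1) evaluated at the bin centres."""
    n_c, _ = np.histogram(g_cluster, bins=bins)
    n_f, _ = np.histogram(g_field, bins=bins)
    lf = n_c / compl_cluster - n_f / compl_field
    err = np.sqrt(n_c / compl_cluster**2 + n_f / compl_field**2)
    return np.clip(lf, 0, None), err
```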
## 4 Comparison with theory

In this section we discuss the comparison of the WD LF of Table 2 with theoretical WD models, which enabled us to derive important constraints on the origin of the extended TO observed in the cluster CMD. Due to the issue highlighted below, we have only compared the LF in the \(g\) band with theory, and not the CS in the CMD. As already mentioned, we found in Griggio et al. (2022b) that the stellar population hosted by this cluster displays either a range of metallicity \(\Delta\)[Fe/H] \(\sim 0.15\) together with a range of differential reddening \(\Delta E(B-V)=0.06\) ([Fe/H] spread scenario), or a spread of helium abundance \(\Delta Y\sim 0.06\) together with a range \(\Delta E(B-V)=0.03\) (helium spread scenario).

For the distance (1450 pc, consistent with the range \(1500\pm 100\) pc determined by Griggio & Bedin (2022) from _Gaia_ EDR3 parallaxes) and reference [Fe/H] = 0.06 (consistent with the few existing high-resolution spectroscopic measurements, Pancino et al. 2010) used in the Griggio et al. (2022b) analysis, in the [Fe/H] spread scenario the reference \(E(B-V)\) ranges from 0.28 mag to 0.34 mag and the metallicity ranges from \(\rm[Fe/H]=0.06\) to \(\rm[Fe/H]=0.21\). The lowest-metallicity isochrone (from the BaSTI-IAC database, Hidalgo et al., 2018) matches the blue envelope of the unevolved MS in the _Gaia_ CMD (\(G\) magnitudes between \(\sim 15\) and \(\sim 17\)) for the lowest value of the reddening, \(E(B-V)=0.28\) (see, e.g., Figs. 1 and 9 in Griggio et al., 2022b). In the \(Y\) spread scenario, we found \(Y\) ranging from \(Y=0.269\) (the standard value of \(Y\) at \(\rm[Fe/H]=0.06\) in the BaSTI-IAC isochrones) to \(Y=0.369\), for \(E(B-V)\) between 0.33 mag and 0.36 mag. In this case, the blue envelope of the unevolved MS in the _Gaia_ CMD is matched by the most helium-rich, \(Y=0.369\) (hence bluer), isochrones and \(E(B-V)=0.33\).

The leftmost panel of Fig. 10 shows that, in the [Fe/H] spread scenario, when we match a 400 Myr (the exact age is irrelevant to this discussion) \(\rm[Fe/H]=0.06\) BaSTI-IAC isochrone (from the same sets adopted in Griggio et al., 2022b) to the MS in the _Sloan_ \(g\) versus \((g-r)\) CMD using \(E(B-V)=0.28\) and the extinction ratios by Zhang & Yuan (2023), the models are redder than the blue edge of the unevolved MS over a wide magnitude range. This includes the interval between \(g\sim 16\) and \(\sim 19\), which approximately corresponds to the \(G\) magnitude range of the _Gaia_ CMD where the isochrones match the blue edge of the MS (Griggio et al., 2022b), as shown in the second panel from the left of the same figure. We also show a 400 Myr WD BaSTI-IAC isochrone calculated from hydrogen-envelope (DA) carbon-oxygen (CO) core WD cooling tracks (computed with the Cassisi et al., 2007, electron conduction opacities) with \(\rm[Fe/H]=0.06\) progenitors by Salaris et al. (2022), the initial-final-mass relation (IFMR) by Cummings et al. (2018), and progenitor lifetimes from Hidalgo et al. (2018), compared to the observed CS for the same choice of distance and reddening. The WD isochrone also appears redder than the observations. To investigate the cause(s) of this inconsistency with the fit to the MS in the _Gaia_ CMD, the third and fourth panels from the left in Fig. 10 display CMDs with the _Gaia_ \(G\) magnitude on the vertical axis, and colours calculated using one _Gaia_ and one _Sloan_ magnitude. The same isochrone of the left panel is compared to the data in these two CMDs. We can see that the models in the \(G-(g-G_{\rm RP})\) CMD match the blue edge of the MS, whilst the isochrones are redder than the observed MS in the \(G-(G_{\rm BP}-r)\) CMD. This suggests that the inconsistency between the fits in the two photometric systems arises from a mismatch between the theoretical and observed \(r\) magnitudes\({}^{1}\). For this reason, in our study of the WD cooling sequence, we will consider only the \(g\) magnitudes.

Footnote 1: This mismatch exists also in comparison with the Hartman et al. (2008) photometry, which has been used to calibrate our magnitudes, as shown in Fig. 2.

To compare the WD \(g\)-band LF with models we computed grids of CO-core DA WD isochrones with the same inputs as the one in Fig.
10 (from progenitors with \(\rm[Fe/H]=0.06\)), for ages between 150 and 450 Myr in steps of 25 Myr, and calculated synthetic LFs using Monte-Carlo techniques. As shown in Fig. 10, at these ages the WD isochrones are sequences of continuously increasing magnitude in the \(g\) band, and the WD mass evolving at a given brightness increases monotonically with increasing \(g\). Due to the younger ages, the ranges of progenitor and WD masses along the isochrones are narrower than in the case of globular clusters. At 150 Myr the brightest part of the isochrones is populated by \(\sim\) 0.95 \(M_{\odot}\) WDs with progenitor masses equal to \(\sim\) 4.6 \(M_{\odot}\), whilst at 450 Myr the WDs have a mass equal to \(\sim\) 0.75 \(M_{\odot}\) with progenitors of \(\sim\) 2.9 \(M_{\odot}\). The bottom end of the isochrones is populated by 1.1 \(M_{\odot}\) WDs with \(\sim\) 6.4 \(M_{\odot}\) progenitors. For each isochrone, we have produced a sample of \(g\) magnitudes of synthetic WDs (20 000 for each age, to minimise statistical fluctuations of their magnitude distribution), by randomly drawing progenitor masses according to a Salpeter mass function (power law with exponent \(x=-2.3\)) and interpolating along the isochrone to determine the \(g\) magnitude of their WD progeny. We then corrected the magnitudes for the assumed cluster distance and applied a random extinction (using the extinction law by Zhang & Yuan, 2023) from values of \(E(B-V)\) drawn with uniform probability within the range appropriate to the explored scenario ([Fe/H] or \(Y\) spread). Each synthetic \(g\) was then perturbed by a random Gaussian photometric error with \(\sigma\) estimated from the observations (see Sec. 2.5). For each of these samples (corresponding to a given WD isochrone age) we finally calculated the differential LF with the same binning as the observed one, and rescaled the total number of objects in the LF to the observed (completeness-corrected) one, before comparing it with the observations. These sets of synthetic WD samples and the corresponding LFs have been computed for both the [Fe/H] spread and \(Y\) spread scenarios, considering two distances \(d\) equal to 1400 and 1600 pc, respectively the lower and upper limits of the distance determination from _Gaia_ parallaxes by Griggio & Bedin (2022). For the assumed reference metallicity \(\rm[Fe/H]=0.06\), the minimum \(E(B-V)\) values (determined as described above) for \(d=1400\) pc are 0.26 mag for the \(\Delta\)[Fe/H] scenario and 0.31 mag for the \(\Delta Y\) scenario. At \(d=1600\) pc the minimum reddenings are \(E(B-V)=0.31\) for the \(\Delta\)[Fe/H] scenario and 0.36 mag for the \(\Delta Y\) scenario. It is important to mention that for the \(\Delta\)[Fe/H] scenario we have calculated the WD isochrone for just one value of [Fe/H] (\(\rm[Fe/H]=0.06\)), because we have found that changing the [Fe/H] of the progenitors by \(\pm 0.20\) dex produces isochrones virtually indistinguishable at these ages. The same is true for the \(\Delta Y\) scenario, with isochrones calculated considering just the minimum value of \(Y\). We have determined the oldest cluster age compatible with the observed WD cooling sequence by finding the theoretical LFs that match the magnitude of the cut-off of the WD LF. Fig. 11 shows the oldest ages compatible with the observed LF (between 200 and 350 Myr, summarised in Table 3) for the two distances and the two scenarios discussed here.
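A schematic implementation of this Monte-Carlo procedure is sketched below; the isochrone table, the \(g\)-band extinction coefficient, and all names are illustrative assumptions, not our actual implementation.

```python
# Sketch: synthetic WD g-band LF at a given age. `iso_mass`/`iso_g` are a
# hypothetical isochrone table (progenitor mass -> WD g magnitude), sorted
# by increasing mass; a_g is an assumed A_g/E(B-V) extinction coefficient.
import numpy as np
rng = np.random.default_rng(1)

def synthetic_lf(iso_mass, iso_g, n_wd, dist_pc, ebv_range, sigma_g, bins,
                 alpha=-2.3, a_g=3.3):
    # Salpeter IMF draws, dN/dm ~ m**alpha, via inverse-transform sampling
    m1, m2 = iso_mass.min(), iso_mass.max()
    k = alpha + 1.0
    u = rng.random(n_wd)
    m = (u * (m2**k - m1**k) + m1**k) ** (1.0 / k)
    g = np.interp(m, iso_mass, iso_g)              # g magnitude of WD progeny
    g += 5.0 * np.log10(dist_pc) - 5.0             # distance modulus
    g += a_g * rng.uniform(*ebv_range, n_wd)       # differential reddening
    g += rng.normal(0.0, sigma_g, n_wd)            # photometric error
    lf, _ = np.histogram(g, bins=bins)             # rescale to data elsewhere
    return lf
```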
The derived ages are typically older (by 100-150 Myr) for shorter distances, as expected, and at a fixed distance they are very similar in both scenarios. At these ages, none of the WDs along the cluster CS has yet started crystallization in its CO core. It is important to stress that, in case the extended TO of this cluster is due to an age range, the WD LF tells us that the maximum age of the cluster stars cannot be older than the values given above; otherwise we should find WDs fainter than the observed LF cut-off.

\begin{table} \begin{tabular}{l l l} \hline \hline \(d\) (pc) & age (Myr) & scenario \\ \hline 1400 & 350 & \(\Delta\)[Fe/H] \\ 1400 & 300 & \(\Delta Y\) \\ 1600 & 200 & \(\Delta\)[Fe/H] \\ 1600 & 200 & \(\Delta Y\) \\ \hline \end{tabular} \end{table} Table 3: Maximum ages compatible with the WD LF cut-off magnitude, for the two distances and scenarios discussed in the text.

We have then repeated the same procedure by employing isochrones derived from WD cooling models (again from Salaris et al., 2022) calculated using the alternative Blouin et al. (2020) electron conduction opacities, and found results consistent with those previously obtained from calculations with the Cassisi et al. (2007) opacities. As an example, Fig. 12 shows how the 350 Myr theoretical LF in the \(\Delta\)[Fe/H] scenario, calculated using Blouin et al. (2020) opacities and a distance of 1400 pc, has the same cut-off magnitude as our reference calculations. We have also explored the possibility that the cluster hosts not just DA WDs, but also a 20% fraction of WDs with He-dominated atmospheres (this fraction is typical of the Galactic disc field WD population, see, e.g., Koester & Kepler, 2015). In this case, for each age, we have computed isochrones and synthetic samples of \(g\) magnitudes from the helium-envelope WD models by Salaris et al. (2022), and merged them with the corresponding DA samples in a proportion 20/80, before calculating the corresponding LF. The results for the WD-based cluster ages are again unchanged (see Fig. 12 for an example), because in this luminosity regime H- and He-envelope WD models cool down at very similar rates. Finally, we have explored the role played by the adopted IFMR. For all isochrones employed in our analysis we have adopted the semiempirical Cummings et al. (2018) IFMR, more specifically the one determined using the Bressan et al. (2012) stellar evolution models (see Cummings et al., 2018, for details) for the determination of the progenitor lifetimes, because they are very close to the evolutionary lifetimes of the Hidalgo et al. (2018) progenitor models used for the calculation of the WD isochrones. As a test, we have calculated some DA WD isochrones and LFs (in the \(g\) band) in the age range between 200 and 350 Myr for \(\rm[Fe/H]=0.06\), employing the Cummings et al. (2018) IFMR calculated using MIST (Choi et al., 2016) non-rotating stellar models for the progenitor lifetimes.

Figure 11: Completeness-corrected differential WD LF of the WDs in M37 (sea green) compared to theoretical LFs calculated for the labelled ages and chemical compositions (see text for details). The errors in the number counts of the observed LF are also displayed.

Figure 12: As the upper panel of Fig. 11. The theoretical LFs are for an age of 350 Myr and correspond to the reference DA calculations of Fig. 11, a population of 20% DB (helium envelope) and 80% DA WDs, and a DA population from models calculated using the Blouin et al. (2020) electron conduction opacities, respectively (see text for details).
Figure 10: CMDs of M37 stars in several magnitude and colour combinations. Theoretical isochrones (including the WD sequence in the left panel) are compared to the observations using \(E(B-V)=0.28\) and a distance \(d=1450\) pc (see text for details). The extinction law is taken from Zhang & Yuan (2023) for the _Sloan_ filters and from the _Gaia_ website ([https://www.cosmos.esa.int/web/gaia/edr3-extinction-law](https://www.cosmos.esa.int/web/gaia/edr3-extinction-law)) for the _Gaia_ magnitudes.

The effect of this alternative IFMR on the magnitude of the LF cut-off at fixed age is only on the order of 0.01 mag, with a negligible impact on the results of our analysis. We have repeated this same test using the independent IFMR determined by El-Badry et al. (2018), and found again a negligible impact on the magnitude of the theoretical LF cut-off.

### Constraints on the origin of the extended TO

The impact of these results on the interpretation of the cluster extended TO is shown by Fig. 13, which is analogous to Fig. 9 in Griggio et al. (2022). For each scenario and the same two distances of the WD analysis, we show here the cluster _Gaia_ CMD (from Griggio et al. 2022) together with pairs of isochrones for the combinations of [Fe/H] (or \(Y\)) and reddening that match the blue and red limits of the single-star sequence in the magnitude range studied by Griggio et al. (2022), and ages equal to the corresponding maximum ages determined from the WD LF. According to the WD-based ages, no single star along the upper MS and TO can be redder than the metal-richer isochrone in the \(\Delta\)[Fe/H] scenario, or redder than the helium-poorer one in the \(\Delta Y\) scenario. This is clearly contradicted by the observed CMD, which displays large fractions (if not the whole cluster population) of objects redder than the reddest isochrone around the TO region. This leads to the conclusion that, even considering the metallicity or the helium spread derived from the unevolved MS, the ages determined from the WD LF exclude the presence of an age spread as the reason for the observed extended TO.

### The role played by oxygen-neon core WDs

In our analysis, we have considered the CS to be populated by CO-core WDs, which are by far the most common type of WDs. However, according to stellar model calculations, stellar progenitors in a fairly narrow mass range between very approximately 6.5-7 and 9-10 \(M_{\odot}\) are expected to produce WDs with an oxygen-neon core and masses between \(\sim\) 1.1 and \(\sim\) 1.3 \(M_{\odot}\), originating from the electron-degenerate cores formed at the end of core carbon burning (see, e.g., Siess 2006; Poelarends et al. 2008; Doherty et al. 2017, and references therein). Predictions, both empirical and theoretical, for the IFMR of these WDs are very uncertain; however, it is still possible to make an informed assessment of their impact on the WD ages determined in our analysis. To this purpose, we have considered the ONe-core hydrogen-envelope WD models by Camisassa et al. (2019) and the CO-core DA models from the same group (Camisassa et al. 2017) - both from progenitors with roughly solar metallicity - for a strictly differential analysis using models calculated with the same code and physics inputs.
We have considered the 1.1 \(M_{\odot}\) CO-core cooling model (corresponding to the mass of the most massive model used in our WD isochrones), and the 1.2 \(M_{\odot}\) and 1.3 \(M_{\odot}\) ONe-core models, and calculated WD isochrones and luminosity functions in both the \(\Delta\)[Fe/H] and \(\Delta Y\) scenarios for ages between 200 and 400 Myr, using progenitor lifetimes from Hidalgo et al. (2018) and the IFMR by Cummings et al. (2018) for WD masses up to 1.1 \(M_{\odot}\), as in our calculations. For the initial masses of the two ONe WD models we have made various assumptions, with values between 7 and 9-9.5 \(M_{\odot}\), and always obtained the same results in terms of the LF cut-off magnitudes. We found that the ONe-core WDs are located at fainter magnitudes with respect to the 1.1 \(M_{\odot}\) CO-core objects, because of their slightly faster cooling in the relevant luminosity range; the difference (for the 1.3 \(M_{\odot}\) models) in the \(g\)-band LF cut-off is on the order of 0.2-0.3 mag. This implies that including massive ONe-core WDs in the calculation of the isochrones would in principle reduce the age necessary to match the observed cut-off by \(\sim\) 100 Myr, thus exacerbating the inconsistency between the WD ages and the ages required to explain the extended TO in terms of an age spread.

## 5 Conclusions

We have presented new _Sloan_ photometry of the OC M37, from the very low-mass star regime to the main sequence TO and red clump, including the WD cooling sequence down to its termination. We make this catalogue (positions, photometry, proper motions and flags) and the atlases publicly available as online supplementary material of this article. We have focused our analysis on the WD CS, and determined a new, improved WD LF that we have exploited to set constraints on the origin of the cluster extended TO. We have found that, irrespective of whether the chemical abundance spread revealed by the Griggio et al. (2022) photometric analysis is due to variations of [Fe/H] or \(Y\), for the distance range determined using _Gaia_ EDR3 parallaxes the ages determined from the WD LF are incompatible with the ages required to match the observed extended TO region. The maximum age allowed by the analysis of the WD LF is much too young compared to the age required to match the redder and fainter TO region. This is especially true for the \(Y\)-spread scenario, and also for the [Fe/H]-spread scenario when considering the upper limit of the parallax-based distance.

Figure 13: Cluster's _Gaia_ CMD compared to isochrones with the labelled parameters (see text for details).

Our results indirectly support the notion that stellar rotation is needed to explain the origin of the cluster extended TO, as in the case of the OC NGC 2818 (Bastian et al., 2018), where spectroscopic observations have confirmed the presence of a range of rotation rates among TO stars, with redder TO objects being faster rotators. A comprehensive analysis of the MS extended TO and WD cooling sequence of M37 using models including the effect of rotation\({}^{2}\) is now needed, together with spectroscopic measurements of the rotation velocities of TO stars, and also spectroscopic metallicities, to determine whether the abundance spread revealed by the photometric analysis of Griggio et al. (2022) is due to a metal abundance or a helium spread.

Footnote 2: Cordoni et al. (2018) have presented a first preliminary comparison of the cluster extended TO with models including rotation.
## Acknowledgements We thank our referee for comments that have helped improve the presentation of our results. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. The observations at the Canada-France-Hawaii Telescope were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This work has made use of observations collected at Schmidt telescopes (Asiago, Italy) of INAF. MG, DN and LRB acknowledge support by MIUR under PRIN program #2017Z2HSMF and by PRIN-INAF 2019 under program #10-Bedin. MS acknowledges support from The Science and Technology Facilities Council Consolidated Grant ST/V00087X/1. ## Data Availability The catalogue is available as electronic material with this paper. The image stacks are available at [https://web.oapd.inaf.it/bedin/files/PAPERs_eMATERIALs/CFHT/M37_WDCS/](https://web.oapd.inaf.it/bedin/files/PAPERs_eMATERIALs/CFHT/M37_WDCS/). The isochrones for the MS and TO, and the WD models are available at the BaSTI-IAC model repository [http://basti-iac.oa-abruzzo.inaf.it/](http://basti-iac.oa-abruzzo.inaf.it/). The WD models by Camisassa et al. (2017) and Camisassa et al. (2019) are available at the La Plata group model repository [http://evolgroup.fcaglp.unlp.edu.ar/TRACKS/tracks.html](http://evolgroup.fcaglp.unlp.edu.ar/TRACKS/tracks.html).
2305.10844
Dissipative light bullets in a doped and weakly nonlocal optical fiber
The letter introduces an extended (3+1)-dimensional [(3+1)D] nonlocal cubic complex Ginzburg-Landau equation describing the dynamics of dissipative light bullets in optical fiber amplifiers under the interplay between dopants and a spatially nonlocal nonlinear response. The model equation includes the effects of fiber dispersion, linear gain, nonlinear loss, fiber nonlinearity, atomic detuning, linear and nonlinear diffractive transverse effects, and nonlocal nonlinear response. A system of coupled ordinary differential equations for the amplitude, temporal, and spatial pulse widths and position of the pulse maximum, unequal wavefront curvatures, chirp parameters, and phase shift is derived using the variational technique. A stability criterion is established, where a domain of dissipative parameters for stable steady-state solutions is found. Direct integration of the proposed nonlocal evolution equation is performed, which allows us to investigate the evolution of the Gaussian beam along a doped nonlocal optical fiber, showing stable self-organized dissipative spatiotemporal light bullets.
Ghislaine Flore Kabadiang Ngon, Conrad Bertrand Tabi, Timoléon Crépin Kofané
2023-05-18T09:58:02Z
http://arxiv.org/abs/2305.10844v1
# Dissipative light bullets in a doped and weakly nonlocal optical fiber

###### Abstract

The letter introduces an extended (3+1)-dimensional [(3+1)D] nonlocal cubic complex Ginzburg-Landau equation describing the dynamics of dissipative light bullets in optical fiber amplifiers under the interplay between dopants and a spatially nonlocal nonlinear response. The model equation includes the effects of fiber dispersion, linear gain, nonlinear loss, fiber nonlinearity, atomic detuning, linear and nonlinear diffractive transverse effects, and nonlocal nonlinear response. A system of coupled ordinary differential equations for the amplitude, temporal, and spatial pulse widths and position of the pulse maximum, unequal wavefront curvatures, chirp parameters, and phase shift is derived using the variational technique. A stability criterion is established, where a domain of dissipative parameters for stable steady-state solutions is found. Direct integration of the proposed nonlocal evolution equation is performed, which allows us to investigate the evolution of the Gaussian beam along a doped nonlocal optical fiber, showing stable self-organized dissipative spatiotemporal light bullets.

## I Introduction

Optical solitons have promising potential to become principal information carriers in telecommunication due to their capability to propagate signals over long distances without attenuation or change of shape. One of the major goals in the field of soliton physics is the production of light fields that are localized in all three dimensions of space and time, which we will refer to as 3D spatiotemporal solitons or light bullets. Such localization results from the simultaneous balance of diffraction by transverse self-focusing and of group-velocity dispersion (GVD) by nonlinear phase modulation in the longitudinal direction [1]. Considerable attention is being paid to theoretically and experimentally analyzing the dynamics of spatial optical solitons in materials with local and nonlocal nonlinear responses [2; 3; 4]. The local nonlinearity of optical media is usually approximated by a local function of the light intensity, under the assumption that the refractive-index change at a given spatial location depends solely on the light intensity at that location; nonlocality instead means that the change of the refractive index at a particular point is determined by the light intensity not only at the same point but also in its vicinity. Recently, it has been revealed that nonlocality can provide new effects such as strong modification of modulational instability [5; 6], suppression of beam collapse [7], dramatic change of the soliton interaction [8; 9], formation of multi-soliton bound states [10], stabilization of spatially localized vortex solitons [11], symmetry-breaking azimuthal instability [12; 13], as well as stabilization of different nonlinear structures such as ring-like clusters of many solitons [14] and modulated localized vortex beams, or azimuthons [15]. In long-distance soliton propagation, the energy of the soliton decreases because of fiber losses. This would produce soliton broadening, because a reduced peak power weakens the self-phase modulation (SPM) effect necessary to counteract the effect of GVD. Therefore, solitons must be amplified periodically, using either the lumped or the distributed amplification scheme, to overcome the effect of fiber losses. It was demonstrated that, by doping the optical fiber with rare-earth ions such as neodymium, erbium, praseodymium, and ytterbium, to name just a few, optical gain can be added to the fiber.
Thus, the light signal could be amplified each time it weakened, and the loss of information could be avoided [16]. On the other hand, there are interesting studies on the pulse propagation problem in doped fiber amplifiers within the rate-equation approximation, the governing equation being the cubic complex Ginzburg-Landau (CGL) equation and its variants [17; 18; 19; 20; 21; 22]. Dissipative solitons can propagate in such active fibers over long distances. In some previous studies [23; 24; 25; 26; 27; 28; 29], multidimensional spatiotemporal optical solitons, i.e., both (2+1)-dimensional [(2+1)D] and (3+1)-dimensional [(3+1)D] dissipative optical bullets, were considered within the (2+1)D and (3+1)D families of CGL equations. Stable spatiotemporal dissipative solitons have been reported, among them stationary stable and pulsating solutions; double, quadruple, six-fold, eightfold, and tenfold bullet complexes; self-trapped necklace-ring and ring-vortex solitons; uniform ring beams; spherical and rhombic distributions of light bullets; and fundamental and cluster solitons. Recently, the stability diagram obtained from the Lenz transformation and the linear stability analysis has revealed that higher values of the quintic nonlocality contribute to reducing the modulational instability (MI) in weakly cubic-quintic nonlocal nonlinear media [30]. Very recently, it has been shown that instability regions from the pure-quartic MI gain in weakly nonlocal birefringent fibers are broadened by the nonlocality, which was confirmed via direct numerical simulations showing the emergence of Akhmediev breathers [31; 32]. For a saturable nonlinear medium with competing nonlocal nonlinearity, it was reported that the quenching effect of the nonlocal nonlinearity on the MI is corrected, especially when the saturable index and the nonlocality range are well balanced [33]. The main purpose of the present work is to investigate (3+1)D dissipative light bullets in fiber amplifiers under the interplay between dopants and a spatially nonlocal nonlinear response, which, to the best of our knowledge, has not yet been proposed in the literature. The dopant is modelled as a two-level system whose dynamic response is governed by the population and dipole relaxation times. For incident optical pulses, in the case of weak nonlocality and within the paraxial wave approximation, the (3+1)D cubic complex Ginzburg-Landau equation is derived, which includes the effects of fiber dispersion, linear gain, nonlinear loss, fiber nonlinearity, atomic detuning, linear and nonlinear diffractive transverse effects, and the nonlocal nonlinear response. We mainly focus on the localized Gaussian solution in the form of three-dimensional traveling waves, and we assess the role played by a weak spatial nonlocality term in the shape formation of a dissipative light bullet. A system of eight coupled first-order differential equations for the solution's parameters of interest is derived on the basis of variational equations resulting from the Euler-Lagrange equations. The remaining part of this paper is organized as follows. In Sec. II, we derive the (3+1)D nonlocal cubic complex Ginzburg-Landau (CGL) equation governing the dynamics of the dissipative light bullets in a doped and weakly nonlocal nonlinear medium. In Sec.
III, the dynamic characteristics of the dissipative light bullets, such as the amplitude, the temporal and spatial pulse widths, the position of the pulse maximum, the unequal wavefront curvatures, the chirp parameters, and the phase shift in specially designed media, are studied using the variational technique. In Sec. IV, a stability criterion for steady-state solutions of the (3+1)D nonlocal cubic CGL equation is established, fixing a domain of dissipative light-bullet parameters. In Sec. V, the direct integration of the (3+1)D nonlocal cubic CGL equation with the Runge-Kutta and split-step Fourier methods is carried out, showing stable self-organized dissipative spatiotemporal light bullets. Some concluding remarks are given in Sec. VI.

## II Derivation of the nonlinear evolution equation for electromagnetic pulse propagation in doped and weakly nonlocal nonlinear media

The light pulse propagation problem in doped optical fiber can be solved by defining a complex dielectric constant as follows [17]:

\[\epsilon(\omega)=n_{f}^{2}+2in_{f}\frac{\alpha_{f}}{k_{0}}+\chi_{a}(\omega), \tag{1}\]

where \(\omega\) is the optical frequency, with \(\alpha_{f}\) being the fiber loss defined by

\[\alpha_{f}=\frac{10}{L}\log\frac{P_{out}}{P_{in}}\ (\text{dB/km}), \tag{2}\]

where \(P_{in}\) is the input light pulse power, \(P_{out}\) is the output light pulse power, and \(L\) the length of the optical fiber. \(k_{0}\) is the wavenumber given by \(k_{0}=\omega_{0}/c\), with \(c\) being the speed of light and \(\omega_{0}\) the carrier frequency. \(\chi_{a}(\omega)\) is the atomic susceptibility governing the response of the dopant in the optical fiber, which is determined by [17]

\[\chi_{a}(\omega)=\frac{g_{p}}{k_{0}}\frac{(\omega-\omega_{a})T_{2}-i}{1+(\omega-\omega_{a})^{2}T_{2}^{2}}, \tag{3}\]

with the peak gain \(g_{p}=\sigma(N_{2}-N_{1})\). The parameter \(\sigma\) is a transition cross-section, while \(N_{1}\) and \(N_{2}\) are the atomic densities of the lower and upper energy levels of the two-level system. \(\omega_{a}\) is the atomic resonance frequency, and \(T_{2}\) is the dipole relaxation time. Moreover, one can expand the function \(\chi_{a}(\omega)\) in a Taylor series up to second order in the vicinity of the carrier frequency \(\omega_{0}\) and find [17]

\[\chi_{a}(\omega)=\frac{g_{p}}{k_{0}}\left[\frac{\delta-i}{1+\delta^{2}}+\frac{1-\delta^{2}+2i\delta}{(1+\delta^{2})^{2}}(\omega-\omega_{0})T_{2}+\frac{\delta(\delta^{2}-3)+i(1-3\delta^{2})}{(1+\delta^{2})^{3}}(\omega-\omega_{0})^{2}T_{2}^{2}\right], \tag{4}\]

where \(\delta=(\omega_{0}-\omega_{a})T_{2}\) is the detuning parameter. The term \(n_{f}\) in Eq. (1) is the refractive index of the fiber, including linear, nonlinear, doping, and spatial-nonlocality phenomena. Its expression is [34]

\[n_{f}=n_{0}(\omega)+n_{2}\frac{\int R(x-x^{\prime},y-y^{\prime})I(x^{\prime},y^{\prime})dx^{\prime}dy^{\prime}}{\int R(x-x^{\prime},y-y^{\prime})dx^{\prime}dy^{\prime}}. \tag{5}\]

Here, \(n_{0}(\omega)\) is the linear refractive index, \(n_{2}\) represents the nonlinear change in the refractive index, and \(R(x-x^{\prime},y-y^{\prime})\) is the nonlocal response function, which determines the spatial extent of the nonlocality.
\(I(x^{\prime},y^{\prime})\) is the nonlocal intensity, which acts not only at the local point \((x,y)\) but also at the neighboring points, and can be evaluated as [34]

\[\begin{split} I(x^{\prime},y^{\prime})&=I(x,y)+\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}I(x,y)(x-x^{\prime})^{2}\\ &+\frac{1}{2}\frac{\partial^{2}}{\partial y^{2}}I(x,y)(y-y^{\prime})^{2}\\ &+\frac{\partial^{2}}{\partial x\partial y}I(x,y)(x-x^{\prime})(y-y^{\prime}),\end{split} \tag{6}\]

where \(I(x,y)\) is the intensity at the point \((x,y)\). Using this assumption, Eq. (5) can be rewritten as

\[\begin{split} n_{f}&=n_{0}(\omega)+n_{2}|\vec{E}|^{2}+\frac{n_{2}\gamma_{xx}}{2}\frac{\partial^{2}}{\partial x^{2}}(|\vec{E}|^{2})\\ &+\frac{n_{2}\gamma_{yy}}{2}\frac{\partial^{2}}{\partial y^{2}}(|\vec{E}|^{2})+\frac{n_{2}\gamma_{xy}}{2}\frac{\partial^{2}}{\partial x\partial y}(|\vec{E}|^{2}).\end{split} \tag{7}\]

In Eq. (7), we have considered \(I(x,y)=|\vec{E}|^{2}\), where \(\vec{E}\) is the electric-field vector. The coefficients \(\gamma_{xx}\), \(\gamma_{yy}\) and \(\gamma_{xy}\) measure the degree of weak nonlocality along the transverse coordinates \(x\) and \(y\). These nonlocality-degree coefficients are determined by the relations

\[\begin{split}\gamma_{\xi\xi}&=\frac{\int_{-\infty}^{\infty}R(\xi-\xi^{\prime},\eta-\eta^{\prime})(\xi-\xi^{\prime})^{2}d\xi^{\prime}d\eta^{\prime}}{\int_{-\infty}^{\infty}R(\xi^{\prime},\eta^{\prime})d\xi^{\prime}d\eta^{\prime}},\\ \gamma_{\eta\eta}&=\frac{\int_{-\infty}^{\infty}R(\eta-\eta^{\prime},\xi-\xi^{\prime})(\eta-\eta^{\prime})^{2}d\eta^{\prime}d\xi^{\prime}}{\int_{-\infty}^{\infty}R(\eta^{\prime},\xi^{\prime})d\eta^{\prime}d\xi^{\prime}},\end{split} \tag{8}\]

and

\[\gamma_{\xi\eta}=\frac{\int_{-\infty}^{\infty}R(\xi-\xi^{\prime},\eta-\eta^{\prime})(\xi-\xi^{\prime})(\eta-\eta^{\prime})d\xi^{\prime}d\eta^{\prime}}{\int_{-\infty}^{\infty}R(\xi^{\prime},\eta^{\prime})d\xi^{\prime}d\eta^{\prime}}, \tag{9}\]

for \((\xi,\eta)=(x,y)\). The term \(\gamma_{\xi\eta}\) can be neglected in a suitably chosen and rotated coordinate system. Here, we consider the case of the so-called Gaussian nonlocal response function [35]

\[R(r-r^{\prime})=\frac{1}{\pi\sigma^{2}}e^{-\frac{(r-r^{\prime})^{2}}{\sigma^{2}}}, \tag{10}\]

with a characteristic width \(\sigma\) defining the degree of nonlocality. Indeed, it has been shown that, for a Gaussian response, \(\gamma_{\xi\xi}=w_{{}_{R}}^{2}/2\), with \(w_{{}_{R}}=0.1\sigma/\sqrt{2}\). We recall that Eq. (6) is justified [34] when the nonlocality is weak, that is, \(\gamma_{\xi\xi}\ll 1\).
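As a quick numerical sanity check of this weak-nonlocality expansion (a sketch of ours, not taken from Refs. [34; 35]), one can compare the full convolution of Eq. (5) with the second-order approximation of Eq. (6), here in one transverse dimension for simplicity:

```python
# Sketch: for a Gaussian kernel of width sigma much narrower than the beam,
# the exact nonlocal intensity (convolution) is well approximated by
# I + (1/2) * gamma_xx * I'' with gamma_xx = sigma**2 / 2 (kernel 2nd moment).
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
sigma, w0 = 0.3, 2.0                        # kernel width << beam width
I = np.exp(-2 * x**2 / w0**2)               # Gaussian beam intensity
R = np.exp(-x**2 / sigma**2)
R /= R.sum() * dx                           # normalised response function
I_nonlocal = np.convolve(I, R, mode="same") * dx      # exact (discrete)
gamma_xx = sigma**2 / 2
I_expanded = I + 0.5 * gamma_xx * np.gradient(np.gradient(I, dx), dx)
print(np.max(np.abs(I_nonlocal - I_expanded)))        # small for sigma << w0
```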
Another necessary parameter to consider, on the same footing as the dielectric constant, in the propagation of light in a doped optical fiber is the propagation constant around the frequency \(\omega\), given as follows [17]:

\[\beta(\omega)=\frac{\omega}{c}\sqrt{\epsilon(\omega)}. \tag{11}\]

From Eqs. (1)-(7), we can rewrite the propagation constant expressed in Eq. (11). Given that the term \(n_{0}(\omega)\frac{\omega}{c}\) corresponds to the propagation constant of the undoped fiber, denoted \(\beta_{f}(\omega)\), we then obtain the following equation:

\[\begin{split}\beta(\omega)&=\beta_{f}(\omega)+\frac{\omega}{c}n_{2}|\vec{E}|^{2}+\frac{\omega}{c}\frac{n_{2}\gamma_{xx}}{2}\frac{\partial^{2}}{\partial x^{2}}(|\vec{E}|^{2})\\ &+\frac{\omega}{c}\frac{n_{2}\gamma_{yy}}{2}\frac{\partial^{2}}{\partial y^{2}}(|\vec{E}|^{2})+i\alpha_{f}+i\alpha_{f}\frac{n_{2}}{n_{0}}|\vec{E}|^{2}\\ &+i\alpha_{f}\frac{n_{2}}{n_{0}}\frac{\gamma_{xx}}{2}\frac{\partial^{2}}{\partial x^{2}}(|\vec{E}|^{2})+i\alpha_{f}\frac{n_{2}}{n_{0}}\frac{\gamma_{yy}}{2}\frac{\partial^{2}}{\partial y^{2}}(|\vec{E}|^{2})\\ &+\frac{1}{2}\frac{\omega}{c}\frac{\chi_{a}}{n_{0}}.\end{split} \tag{12}\]

In the framework of Maxwell's equations, we analyze the propagation of optical fields, and we get the following wave equation for each component of the field vector \(\vec{E}\):

\[\Delta\vec{E}-\mu_{0}\frac{\partial^{2}\vec{D}}{\partial t^{2}}=0. \tag{13}\]

The operator \(\Delta\) is the Laplacian, whose expression in Cartesian coordinates is \(\Delta=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}\). The Fourier transform of the quantity \(\vec{D}\), known as the electric displacement vector, is related to that of the electric field via the constitutive relation

\[\tilde{D}(r,\omega-\omega_{0})=\epsilon_{0}\epsilon(\omega)\tilde{E}(r,\omega-\omega_{0}), \tag{14}\]

where \(\epsilon_{0}\) is the vacuum permittivity and \(\tilde{E}\) denotes the Fourier transform of the electric field \(\vec{E}\), defined as

\[\tilde{E}(r,\omega-\omega_{0})=\int_{-\infty}^{\infty}\vec{E}(r,t)\exp(i(\omega-\omega_{0})t)dt. \tag{15}\]

In the frequency domain, Eq. (13) takes the form of the Helmholtz equation

\[\Delta\tilde{E}+\beta^{2}(\omega)\tilde{E}=0. \tag{16}\]

The electric field vector \(\vec{E}(r,t)\) is written as

\[\vec{E}(r,t)=\frac{1}{2}(\vec{e}\phi(x,y,z,t)\exp[i(\beta_{0}z-\omega_{0}t)]+c.c.). \tag{17}\]

Here, \(\phi(x,y,z,t)\) is the slowly varying envelope function which represents the light pulse carrying the information, \(c.c.\) is the complex conjugate, \(\beta_{0}=n_{0}(\omega_{0})k_{0}\) is the propagation constant at the carrier frequency \(\omega_{0}\), and \(\vec{e}\) is the polarization unit vector. The optical fiber has a cylindrical shape with axis \((oz)\); therefore, the radius \(r\) is defined as \(r=\sqrt{x^{2}+y^{2}}\). For optical beams, the paraxial and quasi-monochromatic approximations correspond to neglecting the \(\frac{\partial^{2}}{\partial z^{2}}\) derivative of the slowly varying amplitude on the grounds that \(\left|\frac{\partial^{2}\phi}{\partial z^{2}}\right|\ll\beta_{0}\left|\frac{\partial\phi}{\partial z}\right|\). This procedure leads to

\[i\frac{\partial\phi}{\partial z}+\frac{1}{2\beta_{0}}\left(\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi}{\partial y^{2}}\right)+(\beta(\omega)-\beta_{0}(\omega))\phi=0. \tag{18}\]

Fiber dispersion plays a critical role in the propagation of optical pulses because different spectral components travel at different velocities; the associated pulse broadening can be detrimental to optical communication systems.
Mathematically, the effects due to fiber dispersion are accounted for by expanding the mode-propagation coefficient \(\beta_{f}(\omega)\) in a Taylor series about the carrier frequency \(\omega_{0}\) at which the pulse spectrum is centred [20]:

\[\beta_{f}(\omega,|\phi|^{2})=\beta_{0}+i\beta_{1}\frac{\partial}{\partial t}-\frac{\beta_{2}}{2}\frac{\partial^{2}}{\partial t^{2}}+\left(\frac{\partial\beta_{f}}{\partial(|\phi|^{2})}\right)_{0}|\phi|^{2}, \tag{19}\]

where \((...)_{0}\) denotes the evaluation at \(\omega=\omega_{0}\) and \(|\phi|^{2}=0\), \(\beta_{1}=(\frac{\partial\beta}{\partial\omega})_{\omega=\omega_{0}}\) and \(\beta_{2}=(\frac{\partial^{2}\beta}{\partial\omega^{2}})_{\omega=\omega_{0}}\). Hence, the propagation constant \(\beta(\omega)\) given in Eq. (12) becomes

\[\begin{split}\beta(\omega)&=\beta_{0}+i\beta_{1}\frac{\partial}{\partial t}-\frac{\beta_{2}}{2}\frac{\partial^{2}}{\partial t^{2}}+\left(\frac{\partial\beta_{f}}{\partial(|\phi|^{2})}\right)_{0}|\phi|^{2}\\ &+\frac{\omega}{c}n_{2}|\phi|^{2}+\frac{\omega}{c}\frac{n_{2}\gamma_{xx}}{2}\frac{\partial^{2}}{\partial x^{2}}(|\phi|^{2})+\frac{\omega}{c}\frac{n_{2}\gamma_{yy}}{2}\frac{\partial^{2}}{\partial y^{2}}(|\phi|^{2})\\ &+i\alpha_{f}+i\alpha_{f}\frac{n_{2}}{n_{0}}|\phi|^{2}+i\alpha_{f}\frac{n_{2}}{n_{0}}\frac{\gamma_{xx}}{2}\frac{\partial^{2}}{\partial x^{2}}(|\phi|^{2})\\ &+i\alpha_{f}\frac{n_{2}}{n_{0}}\frac{\gamma_{yy}}{2}\frac{\partial^{2}}{\partial y^{2}}(|\phi|^{2})+\frac{1}{2}\frac{\omega}{c}\frac{\chi_{a}}{n_{0}}.\end{split} \tag{20}\]

Using Eq. (20) and substituting \(\chi_{a}\) defined in Eq. (4), we derive the following (3+1)D nonlocal cubic CGL equation:

\[\begin{split} i\frac{\partial\phi}{\partial z}&+(\beta_{reff}+i\beta_{ieff})\frac{\partial\phi}{\partial t}+\frac{1}{2\beta_{0}}\left(\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi}{\partial y^{2}}\right)\\ &+(p_{r}+ip_{i})\frac{\partial^{2}\phi}{\partial t^{2}}+(\gamma_{r}+i\gamma_{i})\phi+(q_{r}+iq_{i})|\phi|^{2}\phi\\ &+(\gamma_{xx,r}+i\gamma_{xx,i})\frac{\partial^{2}}{\partial x^{2}}(|\phi|^{2})\phi\\ &+(\gamma_{yy,r}+i\gamma_{yy,i})\frac{\partial^{2}}{\partial y^{2}}(|\phi|^{2})\phi=0.\end{split} \tag{21}\]

In the above, \(\frac{\partial\phi}{\partial z}\) represents the evolution of the light pulse along the propagation distance \(z\). The expression \(\left(\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi}{\partial y^{2}}\right)\) corresponds to the linear diffraction of the light pulse along the transverse directions \(x\) and \(y\). The terms \(\frac{\partial^{2}}{\partial x^{2}}(|\phi|^{2})\phi\) and \(\frac{\partial^{2}}{\partial y^{2}}(|\phi|^{2})\phi\) correspond to the nonlinear diffraction related to the spatial nonlocality. This nonlinear diffraction acts together with the linear diffraction to influence self-focusing and avoid collapse [34]. The parameters \(\beta_{reff}=-\frac{g_{p}}{2n_{0}}\frac{2\delta T_{2}}{(1+\delta^{2})^{2}}\) and \(\beta_{ieff}=\beta_{1}+\frac{g_{p}}{2n_{0}}\frac{(1-\delta^{2})T_{2}}{(1+\delta^{2})^{2}}\) are the real and imaginary parts of the inverse of the group velocity, respectively. The coefficient \(p_{r}=-\frac{\beta_{2}}{2}-\frac{g_{p}}{2n_{0}}\frac{\delta(\delta^{2}-3)T_{2}^{2}}{(1+\delta^{2})^{3}}\) measures the wave dispersion, and \(p_{i}=-\frac{g_{p}}{2n_{0}}\frac{(1-3\delta^{2})T_{2}^{2}}{(1+\delta^{2})^{3}}\) is the spectral filtering.
The term \(\gamma_{r}=\frac{g_{p}}{2n_{0}}\frac{\delta}{1+\delta^{2}}\) produces a linear frequency (phase) shift, while \(\gamma_{i}=\alpha_{f}-\frac{g_{p}}{2n_{0}}\frac{1}{(1+\delta^{2})}\) accounts for the net linear loss/gain. The parameter \(q_{r}=n_{2}\frac{\omega}{c}+\left(\frac{\partial\beta_{f}}{\partial(|\phi|^{2})}\right)_{0}\) represents the nonlinear coefficient. The case \(q_{r}>0\) corresponds to the self-focusing Kerr nonlinearity, while the case \(q_{r}<0\) corresponds to the self-defocusing Kerr nonlinearity. The parameter \(q_{i}=n_{2}\frac{\alpha_{f}}{n_{0}}\) accounts for nonlinear gain (loss) and/or other amplification (absorption) processes. The expressions \(\gamma_{xx,r}=\frac{1}{2}\frac{n_{2}\omega}{c}\gamma_{xx}\), \(\gamma_{yy,r}=\frac{1}{2}\frac{n_{2}\omega}{c}\gamma_{yy}\), \(\gamma_{xx,i}=\frac{1}{2}\alpha_{f}\frac{n_{2}}{n_{0}}\gamma_{xx}\), and \(\gamma_{yy,i}=\frac{1}{2}\alpha_{f}\frac{n_{2}}{n_{0}}\gamma_{yy}\) represent the real and imaginary parts of the nonlocality degrees along the transverse coordinates \(x\) and \(y\), respectively. To scale Eq. (21), we introduce the following physical parameters, namely, the diffraction (Rayleigh) length in the homogeneous medium, \(L_{Diff}=\beta_{0}r_{0}^{2}\), where \(r_{0}\) is the beam radius; the dispersion length \(L_{Disp}=\frac{T_{0}^{2}}{|p_{r}|}\), with \(T_{0}\) representing the typical initial pulse width; and the effective length characterizing the influence of the nonlinearity, \(L_{NL}=\frac{1}{q_{r}P_{0}}\), where \(P_{0}\) is the peak power of the incident pulse. The transverse coordinates are scaled as \(X=\frac{x}{r_{0}}\) and \(Y=\frac{y}{r_{0}}\), the longitudinal coordinate as \(Z=\frac{z}{L_{Disp}}\), and the temporal coordinate as \(\tau=\frac{T}{T_{0}}\). Here, \(T=t-\frac{z}{v_{g}}\) is the time in the moving coordinate system. The normalized field amplitude is \(\Phi(X,Y,Z,\tau)=\sqrt{P_{0}}N\phi(x,y,z,t)\), where \(N^{2}=\frac{L_{Disp}}{L_{NL}}\), with \(N=\sqrt{\frac{q_{r}P_{0}T_{0}^{2}}{|p_{r}|}}\). We also set \(a_{1}=\frac{p_{i}}{p_{r}}\), \(b_{1}=\frac{\gamma_{i}}{\gamma_{r}}\), \(c_{0}=\frac{q_{i}}{q_{r}}\), \(c_{{}_{1,XX}}=\frac{\gamma_{xx,i}}{\gamma_{xx,r}}\), and \(c_{{}_{2,YY}}=\frac{\gamma_{yy,i}}{\gamma_{yy,r}}\). Taking into account these scaling transformations, Eq. (21) then takes the form

\[\begin{split} i\frac{\partial\Phi}{\partial Z}&+\zeta_{1}\left(\frac{\partial^{2}\Phi}{\partial X^{2}}+\frac{\partial^{2}\Phi}{\partial Y^{2}}\right)+\left(1+ia_{1}\right)\zeta_{2}\frac{\partial^{2}\Phi}{\partial\tau^{2}}\\ &+\left(1+ib_{1}\right)\zeta_{3}\Phi+\left(1+ic_{0}\right)\zeta_{4}|\Phi|^{2}\Phi\\ &+\left(1+ic_{{}_{1,XX}}\right)\zeta_{5}\frac{\partial^{2}}{\partial X^{2}}(|\Phi|^{2})\Phi\\ &+\left(1+ic_{{}_{2,YY}}\right)\zeta_{6}\frac{\partial^{2}}{\partial Y^{2}}(|\Phi|^{2})\Phi=0,\end{split} \tag{22}\]

where \(\zeta_{1}=\frac{L_{Disp}}{\beta_{0}r_{0}^{2}}\), \(\zeta_{2}=\frac{p_{r}L_{Disp}}{T_{0}^{2}N\sqrt{P_{0}}}\), \(\zeta_{3}=\gamma_{r}L_{Disp}\), \(\zeta_{4}=\frac{q_{r}L_{NL}}{P_{0}}\), \(\zeta_{5}=\frac{\gamma_{xx,r}L_{NL}}{P_{0}r_{0}^{2}}\), and \(\zeta_{6}=\frac{\gamma_{yy,r}L_{NL}}{P_{0}r_{0}^{2}}\). Considering Eq. (22), for \(a_{1}=b_{1}=c_{0}=c_{{}_{1,XX}}=c_{{}_{2,YY}}=0\), and neglecting the second-order dispersion term \(\frac{\partial^{2}\Phi}{\partial\tau^{2}}\), we recover the (2+1)D nonlocal nonlinear Schrödinger (NLS) equation that was derived by Bezuhanov et al. [34] in the limit of weak nonlocality.
## III Analytical treatment using variational approach

In this section, we employ the variational method [37; 38; 39] for dissipative systems to look for approximate solutions to Eq. (22) and obtain physical insight in terms of a few relevant parameters, which will then be used in numerical simulations to confirm the analytic predictions qualitatively. In order to describe the dynamics of pulse evolution, various treatments have been developed to extract approximate soliton solutions to integrable and non-integrable nonlinear partial differential equations; they have received many different names depending on the field of application, namely the method of moments [40], the method of collective coordinates [20; 41; 27], the time-dependent variational method [38], the effective-particle method [42; 43], and the averaged Lagrangian description [39; 44], to name a few. Lagrangian methods have become widely accepted as the preferred approach to account for the dynamics of the light pulse in an optical fiber. Thus, in order to use the variational approach, the (3+1)D nonlocal cubic CGL equation can be rewritten in the form
\[\begin{split} i\frac{\partial\Phi}{\partial Z}&+\zeta_{1}\left(\frac{\partial^{2}\Phi}{\partial X^{2}}+\frac{\partial^{2}\Phi}{\partial Y^{2}}\right)+\zeta_{2}\frac{\partial^{2}\Phi}{\partial\tau^{2}}+\zeta_{3}\Phi\\ &+\zeta_{4}|\Phi|^{2}\Phi+\zeta_{5}\frac{\partial^{2}}{\partial X^{2}}(|\Phi|^{2})\Phi\\ &+\zeta_{6}\frac{\partial^{2}}{\partial Y^{2}}(|\Phi|^{2})\Phi=\mathcal{Q},\end{split} \tag{23}\]
in which the right-hand side contains the dissipative terms
\[\begin{split}\mathcal{Q}&=-ia_{1}\zeta_{2}\frac{\partial^{2}\Phi}{\partial\tau^{2}}-ib_{1}\zeta_{3}\Phi-ic_{0}\zeta_{4}|\Phi|^{2}\Phi\\ &-ic_{1,XX}\zeta_{5}\frac{\partial^{2}}{\partial X^{2}}(|\Phi|^{2})\Phi-ic_{2,YY}\zeta_{6}\frac{\partial^{2}}{\partial Y^{2}}(|\Phi|^{2})\Phi.\end{split} \tag{24}\]
It should be noted that defining an appropriate ansatz function is essential for using the variational approach.
In doing so, let us consider the trial function of the Gaussian shape
\[\begin{split}\Phi(X,Y,Z,\tau)&=A(Z)\exp\Big{(}-\frac{X^{2}}{\sigma_{X}^{2}(Z)}-\frac{Y^{2}}{\sigma_{Y}^{2}(Z)}\\ &-\frac{\tau^{2}}{\sigma_{\tau}^{2}(Z)}+\frac{ik_{0}}{2}(\vartheta_{X}(Z)X^{2}\\ &+\vartheta_{Y}(Z)Y^{2}+\vartheta_{\tau}(Z)\tau^{2})+i\psi(Z)\Big{)},\end{split} \tag{25}\]
where \(A(Z)\) is the amplitude, \(\sigma_{X}(Z)\) and \(\sigma_{Y}(Z)\) are the beamwidths in the transverse coordinates \((X,Y)\), \(\sigma_{\tau}(Z)\) is the temporal beamwidth, \(\vartheta_{X}(Z)\) and \(\vartheta_{Y}(Z)\) are the wave-front curvatures along the transverse coordinates \((X,Y)\), \(\vartheta_{\tau}(Z)\) is the temporal wave-front curvature, \(\psi(Z)\) is the phase, and \(k_{0}\) is the wavenumber. The variables \(A(Z)\), \(\sigma_{X}(Z)\), \(\sigma_{Y}(Z)\), \(\sigma_{\tau}(Z)\), \(\vartheta_{X}(Z)\), \(\vartheta_{Y}(Z)\), \(\vartheta_{\tau}(Z)\) and \(\psi(Z)\) are the parameters of the light pulse. Then, we obtain the first-order differential equations (FODEs) of these parameters, which describe the evolution of the light pulse in the optical fiber. The variational approach used in this study is based on the Euler-Lagrange equation [45; 46]
\[\begin{split}\frac{d}{dZ}\left(\frac{\partial\langle L_{c}\rangle}{\partial q^{\prime}}\right)&-\frac{\partial\langle L_{c}\rangle}{\partial q}\\ &=2\,\text{Re}\left\{\int\int\int\mathcal{Q}\frac{\partial\Phi^{*}}{\partial q}\,dX\,dY\,d\tau\right\},\end{split} \tag{26}\]
where \(\Phi^{*}\) is the complex conjugate of the ansatz function \(\Phi\), \(\text{Re}\{\cdot\}\) denotes the real part, and \(q\) runs over the parameters of the light pulse \((A(Z)\), \(\sigma_{X}(Z)\), \(\sigma_{Y}(Z)\), \(\sigma_{\tau}(Z)\), \(\vartheta_{X}(Z)\), \(\vartheta_{Y}(Z)\), \(\vartheta_{\tau}(Z)\), \(\psi(Z))\), with \(q^{\prime}=\frac{dq}{dZ}\). The left-hand side of Eq. (23) represents the conservative part of the equation, with the conservative Lagrangian \(L_{c}\) given by
\[\begin{split} L_{c}&=i\frac{\partial\Phi}{\partial Z}+\zeta_{1}\left(\frac{\partial^{2}\Phi}{\partial X^{2}}+\frac{\partial^{2}\Phi}{\partial Y^{2}}\right)+\zeta_{2}\frac{\partial^{2}\Phi}{\partial\tau^{2}}\\ &+\zeta_{3}\Phi+\zeta_{4}|\Phi|^{2}\Phi+\zeta_{5}\frac{\partial^{2}}{\partial X^{2}}(|\Phi|^{2})\Phi\\ &+\zeta_{6}\frac{\partial^{2}}{\partial Y^{2}}(|\Phi|^{2})\Phi.\end{split} \tag{27}\]
Moreover, \(\langle L_{c}\rangle\) is the integrated Lagrangian density, written as
\[\langle L_{c}\rangle=\int\int\int L_{c}\,dX\,dY\,d\tau, \tag{28}\]
and the right-hand side \(\mathcal{Q}\) of Eq. (23) is the dissipative term. Using Eqs.
(24)-(28), we get the following set of coupled first-order differential equations resulting from the variation with respect to the light pulse parameters:
\[\begin{split}\frac{dA}{dZ}&=-A\zeta_{1}k_{0}\vartheta_{X}-A\zeta_{1}k_{0}\vartheta_{Y}-A\zeta_{2}\vartheta_{\tau}-\frac{7}{2}Ab_{1}\zeta_{3}\\ &-\frac{77}{16}A^{3}\sqrt{2}c_{0}\zeta_{4}+\frac{1}{8}Aa_{1}\zeta_{2}\left(35k_{0}^{2}\vartheta_{\tau}^{2}\sigma_{\tau}^{2}+\frac{156}{\sigma_{\tau}^{2}}\right)\\ &+\frac{1}{128}A^{3}\sqrt{2}c_{1,XX}\zeta_{5}\left(75k_{0}^{2}\vartheta_{X}^{2}\sigma_{X}^{2}+\frac{996}{\sigma_{X}^{2}}\right)\\ &+\frac{1}{128}A^{3}\sqrt{2}c_{2,YY}\zeta_{6}\left(75k_{0}^{2}\vartheta_{Y}^{2}\sigma_{Y}^{2}+\frac{996}{\sigma_{Y}^{2}}\right),\end{split} \tag{29a}\]
\[\begin{split}\frac{d\sigma_{X}}{dZ}&=2\zeta_{1}k_{0}\vartheta_{X}\sigma_{X}+b_{1}\zeta_{3}\sigma_{X}+\frac{15}{8}A^{2}\sqrt{2}c_{0}\zeta_{4}\sigma_{X}\\ &+\frac{1}{4}a_{1}\zeta_{2}\left(-7k_{0}^{2}\vartheta_{\tau}^{2}\sigma_{\tau}^{2}-\frac{28}{\sigma_{\tau}^{2}}\right)\sigma_{X}\\ &+\frac{1}{64}A^{2}\sqrt{2}c_{1,XX}\zeta_{5}\left(13k_{0}^{2}\vartheta_{X}^{2}\sigma_{X}^{2}-\frac{252}{\sigma_{X}^{2}}\right)\sigma_{X}\\ &+\frac{15}{64}A^{2}\sqrt{2}c_{2,YY}\zeta_{6}\left(-k_{0}^{2}\vartheta_{Y}^{2}\sigma_{Y}^{2}-\frac{12}{\sigma_{Y}^{2}}\right)\sigma_{X},\end{split} \tag{29b}\]
\[\begin{split}\frac{d\sigma_{Y}}{dZ}&=2\zeta_{1}k_{0}\vartheta_{Y}\sigma_{Y}+b_{1}\zeta_{3}\sigma_{Y}+\frac{15}{8}A^{2}\sqrt{2}c_{0}\zeta_{4}\sigma_{Y}\\ &+\frac{1}{4}a_{1}\zeta_{2}\left(-7k_{0}^{2}\vartheta_{\tau}^{2}\sigma_{\tau}^{2}-\frac{28}{\sigma_{\tau}^{2}}\right)\sigma_{Y}\\ &+\frac{15}{64}A^{2}\sqrt{2}c_{1,XX}\zeta_{5}\left(-k_{0}^{2}\vartheta_{X}^{2}\sigma_{X}^{2}-\frac{12}{\sigma_{X}^{2}}\right)\sigma_{Y}\\ &+\frac{1}{64}A^{2}\sqrt{2}c_{2,YY}\zeta_{6}\left(13k_{0}^{2}\vartheta_{Y}^{2}\sigma_{Y}^{2}-\frac{252}{\sigma_{Y}^{2}}\right)\sigma_{Y},\end{split} \tag{29c}\]
\[\begin{split}\frac{d\sigma_{\tau}}{dZ}&=2\zeta_{2}\vartheta_{\tau}\sigma_{\tau}+b_{1}\zeta_{3}\sigma_{\tau}+\frac{15}{8}A^{2}\sqrt{2}c_{0}\zeta_{4}\sigma_{\tau}\\ &+\frac{15}{64}A^{2}\sqrt{2}c_{1,XX}\zeta_{5}\left(-k_{0}^{2}\vartheta_{X}^{2}\sigma_{X}^{2}-\frac{12}{\sigma_{X}^{2}}\right)\sigma_{\tau}\\ &+\frac{15}{64}A^{2}\sqrt{2}c_{2,YY}\zeta_{6}\left(-k_{0}^{2}\vartheta_{Y}^{2}\sigma_{Y}^{2}-\frac{12}{\sigma_{Y}^{2}}\right)\sigma_{\tau},\end{split} \tag{29d}\]
\[\begin{split}\frac{d\vartheta_{X}}{dZ}&=A^{2}\sqrt{2}\zeta_{4}\frac{1}{k_{0}\sigma_{X}^{2}}+8\zeta_{1}\frac{1}{\sigma_{X}^{4}}-2\zeta_{1}k_{0}\vartheta_{X}^{2}\\ &+\frac{1}{36}\frac{A^{2}\sqrt{2}c_{1,XX}\zeta_{5}}{k_{0}\sigma_{X}^{2}}\left(\frac{216}{\sigma_{X}^{2}}+127k_{0}\vartheta_{X}\right)\\ &+\frac{1}{36}\frac{A^{2}\sqrt{2}c_{2,YY}\zeta_{6}}{k_{0}\sigma_{X}^{2}}\left(\frac{72}{\sigma_{Y}^{2}}+73k_{0}\vartheta_{Y}\right),\end{split} \tag{29e}\]
\[\begin{split}\frac{d\vartheta_{Y}}{dZ}&=A^{2}\sqrt{2}\zeta_{4}\frac{1}{k_{0}\sigma_{Y}^{2}}+8\zeta_{1}\frac{1}{\sigma_{Y}^{4}}-2\zeta_{1}k_{0}\vartheta_{Y}^{2}\\ &+\frac{1}{36}\frac{A^{2}\sqrt{2}c_{1,XX}\zeta_{5}}{k_{0}\sigma_{Y}^{2}}\left(\frac{72}{\sigma_{X}^{2}}+73k_{0}\vartheta_{X}\right)\\ &+\frac{1}{36}\frac{A^{2}\sqrt{2}c_{2,YY}\zeta_{6}}{k_{0}\sigma_{Y}^{2}}\left(\frac{216}{\sigma_{Y}^{2}}+127k_{0}\vartheta_{Y}\right),\end{split} \tag{29f}\]
\[\begin{split}\frac{d\psi}{dZ}&=-2\frac{\zeta_{1}}{\sigma_{X}^{2}}-2\frac{\zeta_{1}}{\sigma_{Y}^{2}}-2\frac{\zeta_{2}}{\sigma_{\tau}^{2}}\\ &-\frac{7}{8}A^{2}\sqrt{2}\zeta_{4}-\zeta_{3}-a_{1}\zeta_{2}k_{0}\vartheta_{\tau}\\ &+\frac{1}{288}A^{2}\sqrt{2}c_{1,XX}\zeta_{5}\left(-\frac{648}{\sigma_{X}^{2}}-401k_{0}\vartheta_{X}\right)\\ &+\frac{1}{288}A^{2}\sqrt{2}c_{2,YY}\zeta_{6}\left(-\frac{648}{\sigma_{Y}^{2}}-401k_{0}\vartheta_{Y}\right).\end{split} \tag{29h}\]
Eqs. (29a)-(29h) constitute a complete set of evolution equations for the variable parameters characterizing dissipative light bullets in optical fiber amplifiers under the suitable competition between dopants and a spatially nonlocal nonlinear response. To reach the steady-state solutions of the system of Eqs. (29a)-(29g), we set the \(Z\) derivatives of the light pulse parameters to zero. The problem of light pulse instabilities that depend on the number of space dimensions and nonlinearity strength has recently attracted considerable attention. In fact, the existence and stability of multidimensional optical dissipative soliton solutions of the cubic-quintic CGL equation were addressed and comprehensively analyzed [39]. In the dissipative case, it was already remarked that the family of solutions reduces to a fixed double solution for a given set of dissipative parameters [47]. Indeed, only symmetric steady-state solutions with equal spatial widths, curvatures, and nonlocality degrees can exist, which means \(\sigma_{X}=\sigma_{Y}=\sigma_{X=Y}=\sigma\), \(\vartheta_{X}=\vartheta_{Y}=\vartheta_{X=Y}=\vartheta\), \(\gamma_{XX}=\gamma_{YY}=\gamma_{X=Y}\), \(\zeta_{5}=\zeta_{6}=\zeta_{X=Y}\) and \(c_{1,XX}=c_{2,YY}=c_{X=Y}\). Under such conditions, the amplitude as a steady-state solution of Eqs. (29a)-(29g) has two discrete values \(A_{+}\) and \(A_{-}\) given by
\[A_{\pm}=\sqrt{\frac{-D\pm\sqrt{D^{2}-4BC}}{2B}}+O(\theta^{2}), \tag{30}\]
where
\[\begin{split} B&=\sqrt{2}\zeta_{4}\zeta_{X=Y}c_{X=Y}(240a_{1}-496c_{0}-249),\\ D&=-320b_{1}\zeta_{3}\zeta_{X=Y}c_{X=Y}+\zeta_{1}\zeta_{4}(128a_{1}-256c_{0}),\\ C&=-128\sqrt{2}b_{1}\zeta_{1}\zeta_{3},\end{split} \tag{31}\]
and \(\theta=\max(|a_{1}\zeta_{2}|,|b_{1}\zeta_{3}|,|c_{0}\zeta_{4}|,|c_{X=Y}\zeta_{X=Y}|)\). The other relevant stationary solutions are explicitly given by the following four-parameter family of the dissipative light bullets:

_(i) The beamwidth in the transverse coordinates_
\[\sigma_{X=Y}=\frac{2\sqrt{-\zeta_{4}\left(2\zeta_{X=Y}c_{X=Y}A^{2}+\sqrt{2}\zeta_{1}\right)}}{A\zeta_{4}}+O(\theta^{2}), \tag{32}\]

_(ii) The temporal beamwidth_
\[\sigma_{\tau}=\frac{2\sqrt{\zeta_{4}\sqrt{2}\zeta_{2}}}{A\zeta_{4}}+O(\theta^{2}), \tag{33}\]

_(iii) The wave-front curvature along the transverse coordinates_
\[\begin{split}\vartheta_{X=Y}&=\frac{1}{32k_{0}\zeta_{1}\left(2A^{2}\zeta_{X=Y}c_{X=Y}+\sqrt{2}\zeta_{1}\right)}\\ &\times\left(28A^{2}a_{1}\zeta_{1}\zeta_{4}-60A^{2}c_{0}\zeta_{1}\zeta_{4}\right.\\ &\left.-32A^{2}b_{1}\zeta_{3}\zeta_{X=Y}c_{X=Y}-16\sqrt{2}b_{1}\zeta_{1}\zeta_{3}\right)+O(\theta^{2}),\end{split} \tag{34}\]

_(iv) The temporal wave-front curvature_
\[\vartheta_{\tau}=\frac{\left(9A^{2}\sqrt{2}a_{1}\zeta_{4}-15A^{2}\sqrt{2}c_{0}\zeta_{4}-8b_{1}\zeta_{3}\right)}{16\zeta_{2}}+O(\theta^{2}). \tag{35}\]
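Since the coefficients in Eq. (31) are explicit, the two amplitude branches can be evaluated directly. The sketch below is a minimal implementation; the function name, the coefficient values, and the handling of non-physical roots are illustrative choices, not taken from the paper.

```python
import numpy as np

def steady_amplitudes(a1, b1, c0, cXY, z1, z3, z4, zXY):
    """Evaluate A_+ and A_- from Eqs. (30)-(31), to leading order in theta."""
    B = np.sqrt(2.0) * z4 * zXY * cXY * (240*a1 - 496*c0 - 249)
    D = -320*b1*z3*zXY*cXY + z1*z4*(128*a1 - 256*c0)
    C = -128*np.sqrt(2.0) * b1 * z1 * z3
    disc = D**2 - 4*B*C
    if disc < 0:                       # complex roots: no real steady state
        return np.nan, np.nan
    rp = (-D + np.sqrt(disc)) / (2*B)
    rm = (-D - np.sqrt(disc)) / (2*B)
    to_amp = lambda r: np.sqrt(r) if r > 0 else np.nan
    return to_amp(rp), to_amp(rm)      # (A_+, A_-)

# Placeholder normalized coefficients, purely illustrative
print(steady_amplitudes(a1=0.3, b1=-0.5, c0=0.2, cXY=0.1,
                        z1=0.5, z3=1.0, z4=1.0, zXY=0.05))
```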
To further proceed, we first make a remark about conservative systems: there, the total energy or beam power is conserved. For dissipative systems, the total energy or beam power \(P=\left(\frac{\pi}{2}\right)^{3/2}A^{2}\sigma_{X=Y}^{2}\sigma_{\tau}\) is no longer conserved, contrary to the case of conservative systems, but evolves according to the so-called balance equation with nonzero curvatures. Therefore, using the steady-state solutions given in Eqs. (32) and (33), the total power corresponding to ansatz (25) for the non-conservative (3+1)D nonlocal cubic CGL equation (23) is given by [47]
\[P=\frac{8\left(2\zeta_{X=Y}c_{X=Y}A^{2}+\sqrt{2}\zeta_{1}\right)\sqrt{\zeta_{4}\sqrt{2}\zeta_{2}}}{\left(\frac{\pi}{2}\right)^{-3/2}A\zeta_{4}^{2}}. \tag{36}\]
It is also worth mentioning that the simultaneous balance between linear diffraction, dispersion, and the nonlinear diffraction induced by the spatial nonlocality, as well as between gain and loss, selects fixed steady-state solutions with nonzero spatial and temporal curvatures.

## IV Stability criterion for steady-state solutions

In order to picture the stability zone, we use the Routh-Hurwitz stability criterion described by a necessary condition and a sufficient condition. Applying this stability criterion requires that we construct a Jacobi determinant. From the latter, we get the polynomial characteristic equation, and we are then able to verify the necessary and sufficient condition of the Routh-Hurwitz stability criterion. In the following, we introduce the notations \(F_{A}\equiv\frac{dA}{dZ}\), \(F_{\sigma}\equiv\frac{d\sigma}{dZ}\), \(F_{\vartheta}\equiv\frac{d\vartheta}{dZ}\), \(F_{\sigma_{\tau}}\equiv\frac{d\sigma_{\tau}}{dZ}\), and \(F_{\vartheta_{\tau}}\equiv\frac{d\vartheta_{\tau}}{dZ}\), resulting from Eqs. (29a)-(29g). Recall that in the case of a symmetric input, the set of Eqs. (29a)-(29g) is reduced to only five equations: (29a), (29b), (29d), (29e) and (29g). The Jacobi determinant is constructed from the derivatives of the terms \(F_{A}\), \(F_{\sigma}\), \(F_{\vartheta}\), \(F_{\sigma_{\tau}}\), and \(F_{\vartheta_{\tau}}\) with respect to the amplitude, the spatial and temporal widths, and the spatial and temporal wave-front curvatures, evaluated at the equilibrium state:
\[\det(J-\lambda I)=\begin{vmatrix}\frac{\partial F_{A}}{\partial A}-\lambda&\frac{\partial F_{A}}{\partial\sigma}&\frac{\partial F_{A}}{\partial\vartheta}&\frac{\partial F_{A}}{\partial\sigma_{\tau}}&\frac{\partial F_{A}}{\partial\vartheta_{\tau}}\\ \frac{\partial F_{\sigma}}{\partial A}&\frac{\partial F_{\sigma}}{\partial\sigma}-\lambda&\frac{\partial F_{\sigma}}{\partial\vartheta}&\frac{\partial F_{\sigma}}{\partial\sigma_{\tau}}&\frac{\partial F_{\sigma}}{\partial\vartheta_{\tau}}\\ \frac{\partial F_{\vartheta}}{\partial A}&\frac{\partial F_{\vartheta}}{\partial\sigma}&\frac{\partial F_{\vartheta}}{\partial\vartheta}-\lambda&\frac{\partial F_{\vartheta}}{\partial\sigma_{\tau}}&\frac{\partial F_{\vartheta}}{\partial\vartheta_{\tau}}\\ \frac{\partial F_{\sigma_{\tau}}}{\partial A}&\frac{\partial F_{\sigma_{\tau}}}{\partial\sigma}&\frac{\partial F_{\sigma_{\tau}}}{\partial\vartheta}&\frac{\partial F_{\sigma_{\tau}}}{\partial\sigma_{\tau}}-\lambda&\frac{\partial F_{\sigma_{\tau}}}{\partial\vartheta_{\tau}}\\ \frac{\partial F_{\vartheta_{\tau}}}{\partial A}&\frac{\partial F_{\vartheta_{\tau}}}{\partial\sigma}&\frac{\partial F_{\vartheta_{\tau}}}{\partial\vartheta}&\frac{\partial F_{\vartheta_{\tau}}}{\partial\sigma_{\tau}}&\frac{\partial F_{\vartheta_{\tau}}}{\partial\vartheta_{\tau}}-\lambda\end{vmatrix}, \tag{37}\]
where \(I\) is the identity matrix.
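In practice the Jacobian entries need not be written out by hand to test the necessary condition; the minimal sketch below builds \(J\) by finite differences and checks \(\mathrm{Re}(\lambda_{i})<0\). The flow \(F\) used here is a toy stand-in; in an actual computation it would be the reduced system (29a), (29b), (29d), (29e), (29g).

```python
import numpy as np

def jacobian(F, x0, eps=1e-7):
    """Finite-difference Jacobian of a flow F: R^5 -> R^5 at the fixed point x0."""
    x0 = np.asarray(x0, float)
    f0 = F(x0)
    J = np.zeros((len(x0), len(x0)))
    for j in range(len(x0)):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (F(x0 + dx) - f0) / eps
    return J

def linearly_stable(F, x0):
    """Necessary condition: Re(lambda_i) < 0 for all eigenvalues of J."""
    lam = np.linalg.eigvals(jacobian(F, x0))
    return bool(np.all(lam.real < 0)), lam

# Toy stand-in flow with a fixed point at the origin
F = lambda x: -x + 0.1 * np.roll(x, 1)**2
stable, lam = linearly_stable(F, np.zeros(5))
print(stable, np.round(lam, 3))
```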
The fifth-order characteristic polynomial obtained from the Jacobi determinant is given by
\[\lambda^{5}+a_{1}\lambda^{4}+a_{2}\lambda^{3}+a_{3}\lambda^{2}+a_{4}\lambda+a_{5}=0. \tag{38}\]
The coefficients \(a_{1},...,a_{5}\) depend on the partial derivatives of the functions \(F_{A}\), \(F_{\sigma}\), \(F_{\vartheta}\), \(F_{\sigma_{\tau}}\), and \(F_{\vartheta_{\tau}}\) given in the Appendix. However, the analytical expressions of those coefficients are not presented here due to their complicated forms. For stability to be reached, the necessary condition implies that all the roots of the characteristic Eq. (38) should have negative real parts, i.e., \(\mathrm{Re}(\lambda_{i})<0\), with \(i=1,...,5\). The sufficient condition based on the Routh-Hurwitz criterion is such that the coefficients \(a_{1},...,a_{5}\) and their combinations should be positive [44; 48], i.e.,
\[a_{i}>0,\ \ \mathrm{with}\ \ \ i=1,...,5, \tag{39}\]
and
\[\begin{split}& b_{1}=\frac{a_{1}a_{2}-a_{3}}{a_{1}},\ \ b_{2}=\frac{a_{1}a_{4}-a_{5}}{a_{1}},\\ & c_{1}=\frac{b_{1}a_{3}-a_{1}b_{2}}{b_{1}},\ \ d_{1}=\frac{c_{1}b_{2}-b_{1}c_{2}}{c_{1}},\end{split} \tag{40}\]
with \(c_{2}=a_{5}\) the remaining Routh-array entry. In Fig. 1, we represent the double solutions \(A_{+}\) and \(A_{-}\) of the steady state amplitudes versus \(q_{i}\) and \(\gamma_{X=Y}\), where the dissipative parameter \(q_{i}\) represents the nonlinear loss related to the self-defocusing Kerr nonlinearity (\(q_{r}<0\)), and the parameter \(\gamma_{X=Y}\) denotes the symmetric nonlocality degree along the transverse directions. By using the normalized coefficient \(N_{r}=2^{1/2}(4/3)^{3/4}\) [49], assuming a linear gain \(\gamma_{r}\) under anomalous dispersion, the following parameter values have been used for illustration: \(p_{r}=0.579/N_{r}\), \(p_{i}=0.201/N_{r}\), \(q_{r}=-110/N_{r}\), \(\gamma_{r}=150/N_{r}\), \(\gamma_{i}=-530/N_{r}\), \(k_{0}=2\pi/1.55\). As stated before [47], the stability of the solution \(A_{-}\) is only a prerequisite to obtaining optical light bullets after a spatiotemporal self-organized evolution. It can clearly be seen in Fig. 1 that when the spatial nonlocal self-defocusing Kerr nonlinearity response (\(\gamma_{X=Y}\)) and the nonlinear loss (\(q_{i}\)) increase, the stability zone (yellow area representing \(A_{-}\)) gets expanded. Hence, the nonlocality and the nonlinear loss enhance optical light bullet localization in this regime. In Fig. 2(a) and (b), the stability zones are shown for the spatial width versus \(q_{i}\) and \(\gamma_{X=Y}\), respectively, while the same procedure is repeated for the temporal width \(\sigma_{\tau}\) in Fig. 2(c) and (d) for the stable solution \(A_{-}\) under anomalous dispersion. The set of spatial and temporal width values derived from these diagrams will be chosen from this stability range for the numerical simulations of the input spatiotemporal pulses. Zones of stability for the spatial and temporal chirps, \(\vartheta_{X=Y}\) and \(\vartheta_{\tau}\), respectively, versus \(q_{i}\) and \(\gamma_{X=Y}\), for the stable solution \(A_{-}\) and the anomalous dispersion are depicted in Fig. 3. The set of spatial and temporal chirps derived from these curves will be chosen from this stability range for the numerical simulations of the input spatiotemporal pulses. The stability zones for the phase \(\psi\) are depicted in Fig. 4(a) and (b) versus \(q_{i}\) and \(\gamma_{X=Y}\), respectively, for the stable solution \(A_{-}\) under the anomalous dispersion. The set of phase values obtained from these diagrams will be chosen from this stability range for the numerical implementation of the input spatiotemporal pulses.
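The sufficient condition of Eqs. (39)-(40) can be checked mechanically. Below is a minimal sketch for the quintic (38); it uses the standard Routh-array entry \(c_{2}=a_{5}\) noted above, and the test polynomial is only an illustration.

```python
def routh_hurwitz_quintic(a1, a2, a3, a4, a5):
    """Routh-Hurwitz test for l^5 + a1 l^4 + a2 l^3 + a3 l^2 + a4 l + a5.

    Returns True when all first-column Routh-array entries are positive,
    i.e. when all roots have negative real parts.
    """
    if min(a1, a2, a3, a4, a5) <= 0:
        return False                       # necessary: all a_i > 0
    b1 = (a1*a2 - a3) / a1                 # Eq. (40)
    b2 = (a1*a4 - a5) / a1
    if b1 <= 0:
        return False
    c1 = (b1*a3 - a1*b2) / b1
    c2 = a5                                # standard Routh-array entry
    if c1 <= 0:
        return False
    d1 = (c1*b2 - b1*c2) / c1
    return d1 > 0

# (lambda + 1)^5 has all roots at -1, hence stable:
print(routh_hurwitz_quintic(5, 10, 10, 5, 1))  # True
```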
Figure 1: Double solutions \(A_{+}\) and \(A_{-}\) of the steady state amplitudes versus \(q_{i}\) and \(\gamma_{X=Y}\). The solution \(A_{-}\) (yellow sheet) is stable. The solution \(A_{+}\) (blue sheet) is rather unstable. The following parameter values have been used: \(N_{r}=2^{1/2}(4/3)^{3/4}\) (the normalized coefficient), \(p_{r}=0.579/N_{r}\) (the anomalous dispersion), \(p_{i}=0.201/N_{r}\) (the spectral filtering), \(q_{r}=-110/N_{r}\) (the self-defocusing Kerr nonlinearity), \(\gamma_{r}=150/N_{r}\) (the linear gain), \(\gamma_{i}=-530/N_{r}\) (the frequency shift), and \(k_{0}=2\pi/1.55\) (the wavenumber).

Figure 2: Diagrams of stability in the \((q_{i},\sigma_{X=Y})-\)plane [panel (a)], the \((\gamma_{X=Y},\sigma_{X=Y})-\)plane [panel (b)], the \((q_{i},\sigma_{\tau})-\)plane [panel (c)] and the \((\gamma_{X=Y},\sigma_{\tau})-\)plane [panel (d)], where the dark zone stands for the stable solution \(A_{-}\) under anomalous dispersion. The parameter values used are the same as in Fig. 1.

Figure 4: Panel (a) shows the stability zone (dark area) for the solution \(A_{-}\) in the \((q_{i},\psi)-\)plane, while panel (b) displays the stability zone in the \((q_{i},\gamma_{X=Y})-\)plane, under anomalous dispersion. Parameter values are those used in Fig. 1.

## V Numerical experiments

Numerical studies of the evolution of the dissipative light bullet along a doped and weakly nonlocal optical fiber are carried out by means of the fourth-order Runge-Kutta computational method and the split-step Fourier method. The accuracy of the numerical experiments is examined by testing different time and space steps. The mesh sizes are chosen as \(\Delta X\)=\(\Delta Y\)=0.002 and \(\Delta\tau\)=0.003. Then, we solve the original (3+1)D nonlocal cubic CGL equation given in Eq. (23) via the split-step Fourier method with the longitudinal step size \(\Delta Z\)=\(0.063\times 10^{-6}\). The following typical optical pulse parameters used in fiber-optic communication systems are adopted [17; 50; 51]: the wavelength \(\lambda=1.55~{}\mu\)m, the linear refractive index \(n_{0}=1.45\), the nonlinear refractive index \(n_{2}=2.7\times 10^{-13}~{}\)cm\({}^{2}\)/W, the group velocity dispersion \(\beta_{2}=50~{}\)ps\({}^{2}\)/km, the nonlinear gain \(g_{p}=6.8~{}\)W\({}^{-1}\)km\({}^{-1}\), the pulse width \(1.763T_{0}=400\) fs, the peak power of the incident pulse \(P_{0}=9.43\) MW, and the nonlinear parameter \(q_{r}\) defined above. From the results of Fig. 5, we notice that the obtained analytical features corresponding to the steady-state solutions of the amplitudes \(A_{-}\) and \(A_{+}\) as functions of the spatial nonlocality parameter \(\gamma_{X=Y}\), respectively, are a good approximation of the numerically obtained curves. Along the same line, the analytical and numerical solutions of the steady-state \(A_{-}\) highlight the upper stable branches. On the contrary, the curves describing the analytical and numerical solutions of the steady-state \(A_{+}\) are on the lower unstable branches. For the stable evolution of the self-organized dissipative light bullets represented in numerical simulations, we choose the stable solution \(A_{-}\) as an input spatiotemporal pulse. Fig. 6 shows clearly that the light pulse remains practically constant during its evolution in the spatial domain.
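For concreteness, one split-step Fourier update of the kind used in such simulations can be sketched as follows, reduced here to the temporal 1D part of Eq. (22) with the transverse and nonlocal terms omitted for brevity; all coefficient values are placeholders rather than those of the reported runs.

```python
import numpy as np

def split_step_cgl_1d(phi, dz, dtau, zeta2, a1, zeta3, b1, zeta4, c0):
    """One symmetric split-step for i dPhi/dZ + (1+i a1) zeta2 d^2Phi/dtau^2
    + (1+i b1) zeta3 Phi + (1+i c0) zeta4 |Phi|^2 Phi = 0 (temporal 1D)."""
    n = phi.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dtau)
    # Linear half-step in Fourier space (d^2/dtau^2 -> -w^2)
    lin = 1j * (-(1 + 1j*a1) * zeta2 * w**2 + (1 + 1j*b1) * zeta3)
    phi = np.fft.ifft(np.exp(lin * dz / 2) * np.fft.fft(phi))
    # Full nonlinear step in real space
    phi = phi * np.exp(1j * (1 + 1j*c0) * zeta4 * np.abs(phi)**2 * dz)
    # Second linear half-step
    return np.fft.ifft(np.exp(lin * dz / 2) * np.fft.fft(phi))

# Toy propagation of a Gaussian input with placeholder coefficients
tau = np.linspace(-10, 10, 512)
phi = np.exp(-tau**2).astype(complex)
for _ in range(100):
    phi = split_step_cgl_1d(phi, 1e-3, tau[1] - tau[0],
                            zeta2=0.5, a1=0.01, zeta3=0.1, b1=-0.05,
                            zeta4=1.0, c0=0.02)
print(np.max(np.abs(phi)))
```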
We can then notice that the anomalous dispersion, the linear diffraction, the linear gain, the nonlinear diffraction, the spatial nonlocal self-defocusing Kerr nonlinearity response, and the nonlinear loss are well balanced. In Fig. 7, the temporal input and output are identical, showing that no loss has been observed; in the same way, the above effects remain well balanced. From the dynamical behaviors depicted in Fig. 8, we show the evolution of the temporal field profile of the dissipative light bullet intensity distribution in the propagation regime of anomalous dispersion, where the input and output pulses are similar, further confirming our stability predictions under well-balanced competition from the various involved effects in addition to nonlocality.

## VI Concluding remarks

In summary, we have predicted dissipative light bullets in optical fiber amplifiers, with the following main results: (i) we have rigorously derived a (3+1)D nonlocal cubic CGL equation valid for the dynamics of dissipative light bullets in optical fiber amplifiers under the effects of fiber dispersion, linear gain, nonlinear loss, fiber nonlinearity, atomic detuning, linear and nonlinear diffractive transverse effects, and a nonlocal nonlinear response. (ii) We have also derived eight coupled first-order differential equations of motion for the dissipative light bullet parameters of the nonlocal (3+1)D CGL equation under the interplay between dopants and a spatially weakly nonlocal nonlinear response, with the help of the variational technique using a Gaussian ansatz function. (iii) We have established a Routh-Hurwitz stability criterion for dissipative spatiotemporal light bullets, where a domain of dissipative parameters for stable steady-state solutions has been found. (iv) We have carried out the direct integration of the proposed nonlocal evolution equation, which allowed us to investigate the evolution of the Gaussian beam along a doped nonlocal optical fiber, showing stable self-organized dissipative spatiotemporal light bullets. Considering the nonlocal CGL equation that we have derived, there are undoubtedly systems in other fields to which it would also apply. For example, Kuramoto [52] has proposed the nonlocal CGL equation for populations of biologically oscillating cells secreting substances whose rapid diffusion mediates the cell-cell interaction. It was found that under certain conditions, the correlations and fluctuations obey a power law similar to the one in fully developed Navier-Stokes turbulence. Also, effective nonlocality in coupling may become relevant when the reaction-diffusion system involves three or more chemical components. Thus, Tanaka and Kuramoto [53] have proposed the nonlocal CGL equation as a reduced form of a universal class of reaction-diffusion systems near the Hopf bifurcation. In this context, novel dynamical states have been predicted, such as multi-affine chemical turbulence [54] and chimera states [55]. In addition, the nonlocal CGL equation has been used extensively to study electrochemical turbulence for electrochemical systems with migration coupling [56; 57]. Indeed, oscillatory electrochemical systems can be considered active distributed media and are mathematically described by a set of coupled partial differential equations.
They only differ from the reaction-diffusion system in the spatial coupling term of the electric potential drop across the electrode/electrolyte interface. The spatial coupling in electrochemical systems is nonlocal. Some coherent structures have been found in the nonlocal CGL equation in the turbulent regime, which include standing waves and robust heteroclinic orbits between fixed points or limit cycles [58]. We believe that such studies can be extended to the proposed model by studying their azimuthal manifestation on spherical surfaces, for example. Investigations in that direction are ongoing and will be published elsewhere.

###### Acknowledgements.

The work by CBT is supported by the Botswana International University of Science and Technology under the grant **DVC/RDI/2/1/16I (25)**. CBT thanks the Kavli Institute for Theoretical Physics (KITP), the University of California Santa Barbara (USA), where this work was supported in part by the National Science Foundation Grant no. **NSF PHY-1748958**, NIH Grant no. **R25GM067110**, and the Gordon and Betty Moore Foundation Grant no. **2919.01**.

Appendix: Partial derivatives of the functions \(F_{A}\), \(F_{\sigma}\), \(F_{\vartheta}\), \(F_{\sigma_{\tau}}\), \(F_{\vartheta_{\tau}}\)

\[\begin{split}\frac{\partial F_{A}}{\partial A}&=-\zeta_{1}k_{0}\vartheta-\zeta_{1}k_{0}\vartheta-\zeta_{2}\vartheta_{\tau}-(7\zeta_{3}b_{1})/2-(231A^{2}\sqrt{2}c_{0}\zeta_{4})/16+a_{1}\zeta_{2}(35k_{0}^{2}\vartheta_{\tau}^{2}\sigma_{\tau}^{2}+156/\sigma_{\tau}^{2})/8\\ &+(3A^{2}c_{X=Y}\zeta_{X=Y}\sqrt{2}(75k_{0}^{2}\vartheta^{2}\sigma^{2}+996/\sigma^{2}))/128+(3A^{2}c_{X=Y}\zeta_{X=Y}\sqrt{2}(75k_{0}^{2}\vartheta^{2}\sigma^{2}+996/\sigma^{2}))/128,\end{split}\]
\[\begin{split}\frac{\partial F_{\sigma}}{\partial A}&=(15A\sqrt{2}c_{0}\zeta_{4}\sigma)/4+(A\sqrt{2}c_{X=Y}\zeta_{X=Y}(13k_{0}^{2}\vartheta^{2}\sigma^{2}-252/\sigma^{2})\sigma)/32\\ &+(15A\sqrt{2}c_{X=Y}\zeta_{X=Y}(-k_{0}^{2}\vartheta^{2}\sigma^{2}-12/\sigma^{2})\sigma)/32,\end{split}\]
\[\begin{split}\frac{\partial F_{\sigma_{\tau}}}{\partial A}&=(15A\sqrt{2}c_{0}\zeta_{4}\sigma_{\tau})/4+(15A\sqrt{2}c_{X=Y}\zeta_{X=Y}(-k_{0}^{2}\vartheta^{2}\sigma^{2}-12/\sigma^{2})\sigma_{\tau})/32\\ &+(15A\sqrt{2}c_{X=Y}\zeta_{X=Y}(-k_{0}^{2}\vartheta^{2}\sigma^{2}-12/\sigma^{2})\sigma_{\tau})/32,\end{split}\]
\[\frac{\partial F_{\vartheta}}{\partial\vartheta}=-4\zeta_{1}k_{0}\vartheta+(50\zeta_{X=Y}c_{X=Y}A^{2}\sqrt{2})/(9\sigma^{2}),\ \ \frac{\partial F_{\vartheta_{\tau}}}{\partial\vartheta}=-(\zeta_{X=Y}c_{X=Y}A^{2}\sqrt{2})/(4\sigma_{\tau}^{2}),\]
\[\frac{\partial F_{A}}{\partial\vartheta_{\tau}}=A\zeta_{2}+(35/4)Aa_{1}\zeta_{2}k_{0}^{2}\vartheta_{\tau}\sigma_{\tau}^{2},\]
\[\frac{\partial F_{\sigma}}{\partial\vartheta_{\tau}}=-(7a_{1}\zeta_{2}\sigma k_{0}^{2}\vartheta_{\tau}\sigma_{\tau}^{2})/2,\ \ \frac{\partial F_{\sigma_{\tau}}}{\partial\vartheta_{\tau}}=2\zeta_{2}\sigma_{\tau}-5/2\,a_{1}\zeta_{2}\sigma_{\tau}^{3}k_{0}^{2}\vartheta_{\tau},\]
\[\frac{\partial F_{\vartheta}}{\partial\vartheta_{\tau}}=0,\ \ \frac{\partial F_{\vartheta_{\tau}}}{\partial\vartheta_{\tau}}=8\zeta_{2}a_{1}/\sigma_{\tau}^{2}-4\zeta_{2}k_{0}\vartheta_{\tau}.\]
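As an aside, partial derivatives of this kind can be generated symbolically rather than transcribed by hand. The sketch below does so with SymPy for a truncated stand-in containing only the first terms of Eq. (29a); it illustrates the procedure and does not reproduce the full appendix.

```python
import sympy as sp

A, sig, th, sig_t, th_t = sp.symbols('A sigma vartheta sigma_tau vartheta_tau',
                                     positive=True)
z1, z2, z3, z4, k0, a1, b1, c0 = sp.symbols('zeta1 zeta2 zeta3 zeta4 k0 a1 b1 c0')

# Truncated stand-in for F_A = dA/dZ (first terms of Eq. (29a) only)
F_A = (-2*A*z1*k0*th - A*z2*th_t - sp.Rational(7, 2)*A*b1*z3
       - sp.Rational(77, 16)*A**3*sp.sqrt(2)*c0*z4
       + sp.Rational(1, 8)*A*a1*z2*(35*k0**2*th_t**2*sig_t**2 + 156/sig_t**2))

# Jacobian-row entries with respect to each light-pulse parameter
for var in (A, sig, th, sig_t, th_t):
    print(var, ':', sp.simplify(sp.diff(F_A, var)))
```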
2306.08583
Virtual Histology with Photon Absorption Remote Sensing using a Cycle-Consistent Generative Adversarial Network with Weakly Registered Pairs
Modern histopathology relies on the microscopic examination of thin tissue sections stained with histochemical techniques, typically using brightfield or fluorescence microscopy. However, the staining of samples can permanently alter their chemistry and structure, meaning an individual tissue section must be prepared for each desired staining contrast. This not only consumes valuable tissue samples but also introduces delays in essential diagnostic timelines. In this work, virtual histochemical staining is developed using label-free photon absorption remote sensing (PARS) microscopy. We present a method that generates virtually stained histology images that are indistinguishable from the gold standard hematoxylin and eosin (H&E) staining. First, PARS label-free ultraviolet absorption images are captured directly within unstained tissue specimens. The radiative and non-radiative absorption images are then preprocessed, and virtually stained through the presented pathway. The preprocessing pipeline features a self-supervised Noise2Void denoising convolutional neural network (CNN) as well as a novel algorithm for pixel-level mechanical scanning error correction. These developments significantly enhance the recovery of sub-micron tissue structures, such as nucleoli location and chromatin distribution. Finally, we used a cycle-consistent generative adversarial network CycleGAN architecture to virtually stain the preprocessed PARS data. Virtual staining is applied to thin unstained sections of malignant human skin and breast tissue samples. Clinically relevant details are revealed, with comparable contrast and quality to gold standard H&E-stained images. This work represents a crucial step to deploying label-free microscopy as an alternative to standard histopathology techniques.
James E. D. Tweel, Benjamin R. Ecclestone, Marian Boktor, James Alexander Tummon Simmons, Paul Fieguth, Parsin Haji Reza
2023-06-14T15:42:35Z
http://arxiv.org/abs/2306.08583v2
Virtual Histology with Photon Absorption Remote Sensing using a Cycle-Consistent Generative Adversarial Network with Weakly Registered Pairs

###### Abstract

Modern histopathology relies on the microscopic examination of thin tissue sections stained with histochemical techniques, typically using brightfield or fluorescence microscopy. However, the staining of samples can permanently alter their chemistry and structure, meaning an individual tissue section must be prepared for each desired staining contrast. This not only consumes valuable tissue samples but also introduces delays in essential diagnostic timelines. In this work, virtual histochemical staining is developed using label-free photon absorption remote sensing (PARS) microscopy. We present a method that generates virtually stained histology images that are indistinguishable from the gold standard hematoxylin and eosin (H&E) staining. First, PARS label-free ultraviolet absorption images are captured directly within unstained tissue specimens. The radiative and non-radiative absorption images are then preprocessed, and virtually stained through the presented pathway. The preprocessing pipeline features a self-supervised Noise2Void denoising convolutional neural network (CNN) as well as a novel algorithm for pixel-level mechanical scanning error correction. These developments significantly enhance the recovery of sub-micron tissue structures, such as nucleoli location and chromatin distribution. Finally, we used a cycle-consistent generative adversarial network (CycleGAN) architecture to virtually stain the preprocessed PARS data. Virtual staining is applied to thin unstained sections of malignant human skin and breast tissue samples. Clinically relevant details are revealed, with comparable contrast and quality to gold standard H&E-stained images. This work represents a crucial step to deploying label-free microscopy as an alternative to standard histopathology techniques.

## 1 Introduction

Modern pathologists study the microscopic anatomy of tissue specimens to understand the nature and progression of disease. To perform microscopic inspection using brightfield or fluorescence microscopes, tissue specimens are thinly sectioned and stained with histochemical dyes. These dyes chemically label the structures and biomolecules within the sample, facilitating the differentiation of key tissue elements such as lipids, proteins, and nucleic acids [1]. The most prevalent stain set used in histology and cancer diagnosis is hematoxylin and eosin (H&E). Hematoxylin stains the chromatin in the nuclei purple, while eosin stains the cytoplasm and extracellular structures pink [2]. These contrasts enable pathologists to identify both tissue and nuclear abnormalities which indicate the presence, nature, and extent of malignancy. Depending on the disease, other specialized stains may be used to assess targeted tissue features. For example, Grocott's methenamine silver stain (GMS) or periodic acid Schiff (PAS) stains may be used to highlight fungal cells if a fungal infection is suspected [3]. In certain cases, advanced labelling techniques may be employed to identify specific proteins or RNA/DNA sequences [4, 5]. These methods facilitate highly specific diagnostics, such as the identification of genetic subtypes in cancers. For example, immunohistochemical (IHC) staining or fluorescence _in situ_ hybridization (FISH) are used to identify HER2 positive breast tumors [6], where HER2 specific treatments have significantly improved patient outcomes [7].
In practice, simultaneous or sequential use of histochemical, IHC, and FISH agents is not possible on a single tissue section. The labelling process can introduce irreversible structural and chemical changes which render the specimen unacceptable for subsequent analysis [2, 8]. As such, a separate section must be independently cut, mounted, and stained for each test; a technically challenging, expensive, and time-consuming workflow [9]. A trained histotechnologist may spend several hours to prepare a section for testing [10], with some labeling protocols requiring overnight incubation and steps spaced out across multiple days [11]. Hence, repeated staining or producing additional stains in a stepwise fashion can delay diagnostics and treatment timelines, degrading patient outcomes. Moreover, performing multiple stained sections can rapidly expend invaluable diagnostic samples, particularly when the diagnostic material is derived from needle core biopsies. This increases the probability that the patient must undergo further procedures to collect additional biopsy samples, incurring diagnostic delays and significant patient stress. Label-free microscopy modalities offer an opportunity to revolutionize modern digital pathology by enhancing the diagnostic utility of valuable tissue specimens. Label-free microscopes leverage biomolecules' endogenous optical characteristics to capture chromophore specific visualizations without histochemical labeling [12]. This opens the possibility of directly imaging within unprocessed tissue specimens, potentially facilitating in-vivo histological imaging in the future. When combined with deep-learning image translation techniques, label-free microscopes enable virtual histochemical staining from unlabelled tissue specimens. Ideally, label-free microscopy could provide pathologists immediate access to numerous specialized stains, enhancing diagnostic confidence while reducing processing time and tissue requirements. Towards this end, several modalities have recently achieved some success in developing deep-learning based label-free virtual histochemical staining, including quantitative phase imaging (QPI) [13], optical coherence tomography (OCT) [14], photoacoustic microscopy [15], and autofluorescence microscopy [16, 17, 18]. Additionally, multimodal non-linear microscopy techniques, such as coherent anti-Stokes Raman scattering, two-photon excitation fluorescence and second-harmonic generation, have been used for virtual H&E staining of tissue specimens [19]. These techniques have all shown some success in emulating one or more histochemical stains; however, their effectiveness primarily relies on the raw initial label-free contrast they can capture. Ideally, a given modality is able to recover sufficient chromophore specific data to match the desired chemical contrast; however, this is not always the case. For example, while a sample's autofluorescence spectrum reveals significant information on its composition [20, 21, 22], some critical biomolecules may not possess distinct or measurable autofluorescence characteristics. While elastin, collagen, and other extranuclear constituents exhibit strong emissions, DNA and RNA have relatively low fluorescence quantum yield [23], which limits the measurement of nuclear contrast. As such, the staining network must predict nuclear contrast from surrounding structure as opposed to direct measurement. This, in turn, may limit the accuracy of histochemical staining emulation.
For recovery of direct nuclear contrast, a modality that utilizes non-radiative (e.g., photothermal and photoacoustic) relaxation of biomolecules can be used [24, 25, 26]. One such technique, known as Photon Absorption Remote Sensing (PARS) and previously called Total-Absorption Photoacoustic Remote Sensing, is able to concurrently measure both the non-radiative and radiative relaxation processes [27]. By capturing both absorption fractions simultaneously, PARS is able to recover rich biomolecule specific contrast, such as quantum efficiency ratio, not afforded by other independent modalities. In PARS, the optical relaxation processes (radiative and non-radiative) are observed following a targeted excitation pulse incident on the sample [27]. The radiative relaxation generates optical emissions from the sample which are then directly measured. The non-radiative relaxation causes localized thermal modulations and, if the excitation event is sufficiently rapid, pressure modulations within the excited region. These transients induce nano-second scale variations in the sample's local optical properties, which are captured with a co-focused detection laser. Additionally, the co-focused detection is able to measure the local optical scattering prior to excitation. Overall, PARS is able to simultaneously capture radiative and non-radiative absorption as well as optical scattering from a single excitation event. Label-free virtual histology has previously been explored using an ultraviolet (UV, 266nm) excitation PARS platform [28]. UV excitation aligns with the absorption peak of several relevant biomolecules such as DNA, RNA, collagen, and elastin [29]. Important nuclear contrast comes primarily from the non-radiative relaxation of DNA, while surrounding connective tissue contrast comes from the radiative relaxation of extranuclear proteins. These combined PARS label-free contrasts are highly analogous to traditional chemical H&E staining. Recent work employed a pix2pix image translation network on PARS data for H&E emulation [28]. This supervised approach requires exact pixel-to-pixel matched ground truth data for emulation [30]. Perfect alignment of the datasets is not only challenging but is often not possible due to the deformations and potential degradations of the tissue specimen caused by the staining process. Misalignment in training pairs can significantly compromise the quality of the pix2pix results. We present an improved virtual staining and image processing workflow for emulating histology images which are effectively indistinguishable from gold standard H&E pathology. The presented developments include a new staining network and an optimized image preprocessing pathway. Here, a cycle-consistent generative adversarial network (CycleGAN [31]) architecture is applied for virtual staining. CycleGAN virtual staining does not require pixel-to-pixel level registration for training data [31]. However, semi-registered data is used here to reduce hallucination artifacts [32], while improving virtual staining integrity. In addition, advances in image preprocessing reduce inter-measurement variability during signal acquisition. Improvements include pulse energy correction and image denoising using the self-supervised Noise2Void network [33]. Additionally, a novel algorithm is presented for removal of pixel-level mechanical scanning position artifacts, which blur subcellular features.
These enhancements afford marked improvements in the clarity of small tissue structures, such as nucleoli and chromatin distribution. Direct comparisons are made between the previous pix2pix, standard unpaired CycleGAN, and the proposed loosely registered CycleGAN virtual colourizations. The loosely registered CycleGAN facilitates precise virtual staining with the highest quality of any PARS virtual staining method explored to date. When applied to entire whole slide sections of resected human tissues, the proposed virtual staining provides detailed emulation of subcellular and subnuclear diagnostic features comparable to the gold standard H&E. This work represents a significant step towards the development of a label-free virtual staining microscope. The successful label-free virtual staining opens a pathway to the development of in-vivo virtual histology, which could allow pathologists to immediately access multiple specialized stains from a single slide, enhancing diagnostic confidence and improving timelines and patient outcomes.

## II Materials and Methods

### Sample Preparation

Tissue samples were first fixed in formalin solution for a period of 24 to 48 hours, within 20 minutes of excision. Samples were then dehydrated with ethanol and treated with xylene to eliminate residual ethanol and fats. The samples were subsequently embedded in paraffin wax, creating formalin-fixed paraffin-embedded (FFPE) blocks. A microtome was then used to cut thin tissue sections (\(\sim\)4-5\(\upmu\)m) from the FFPE blocks. Tissue sections were placed on glass microscope slides and baked at 60\({}^{\circ}\)C for approximately 60 minutes to evaporate excess paraffin. The unstained samples were first imaged at 40x with the PARS microscope and then directly stained with H&E. H&E-stained slides were then imaged at 40x (Morpholens 1, Morphle Digital Pathology). This process was performed on a variety of malignant human skin and breast tissue samples, and direct one-to-one whole slide images were acquired for model training. Tissues were provided by clinical collaborators at the Cross-Cancer Institute (Edmonton, Alberta, Canada) from anonymous patient donors with all patient identification removed from the samples. Patient consent was waived by the ethics committee because these archival tissues were no longer required for patient diagnostics. No information regarding patient identity was provided to the researchers. Samples were collected under protocols approved by the Research Ethics Board of Alberta (Protocol ID: HREBA.CC-18-0277) and the University of Waterloo Health Research Ethics Committee (Photoacoustic Remote Sensing (PARS) Microscopy of Surgical Resection, Needle Biopsy, and Pathology Specimens; Protocol ID: 40275). All human tissue experiments were conducted in accordance with the government of Canada guidelines and regulations, including "Ethical Conduct for Research Involving Humans (TCPS 2)".

### Description of PARS Tissue Imaging Process

The label-free images acquired for this study were captured using the whole slide scanning PARS system previously reported by Tweel _et al._[34]. In short, a 400ps pulsed 50kHz 266nm UV laser (Wedge XF 266, RPMC) is used to excite the sample, simultaneously inducing non-radiative and radiative relaxation processes. The non-radiative relaxation processes are sampled as time-resolved photothermal and photoacoustic signals probed with a continuous wave 405nm detection beam (OBIS-LS405, Coherent).
This detection beam is co-aligned and focused onto the sample with the excitation light using a 0.42 numerical aperture (NA) UV objective lens (NPAL-50-UV-YSTF, OptoSigma). The radiative emissions (\(>\)266nm) from the radiative relaxation process, as well as the transmitted detection light, are collected using a 0.7 NA objective lens (278-806-3, Mitutoyo). The 405nm detection wavelength and the radiative emissions are spectrally separated, and each directed toward an avalanche photodiode (APD130A2, Thorlabs). To form an image, mechanical stages move the sample in an "s"-like scanning pattern to laterally separate the excitation events on the sample (\(\sim\)250nm/pixel). At each excitation pulse, several hundred nanoseconds of time-resolved signal from each system photodiode is digitized at a 200MHz rate (CSE1442, RZE-004-200, Gage Applied). A portion of the collected signal is pre-excitation and is used to form the scattering image of the sample in its unperturbed state. The non-radiative image pixels are then derived as a percentage modulation in the detection scattering (post-excitation). Next, the radiative image pixels are obtained from the peak emission amplitude recorded after each excitation event. Pixels are then arranged in a cartesian grid based on the stage position feedback, forming a stack of three co-registered label-free image contrasts: non-radiative, radiative, and scattering. Finally, the excitation pulse energy and detection power, recorded throughout imaging, are used to correct image noise caused by laser power and pulse energy variability. Whole slide samples can be scanned using the automated workflow previously described by Tweel _et al._[34]. In brief, the entire tissue area is divided into subsections (500x500\(\upmu\)m), each individually scanned at their optimal focus position. Using their relative stage positions and small amount of overlap (\(\sim\)5%), these sections are stitched and blended into a single whole slide image. ### _PARS Data Preprocessing_ In addition to the correction of noise due to laser power and pulse variability, the Noise2Void (N2V) framework developed by Krull et al. [33] is used to further denoise the raw PARS images. Unlike many other traditional CNN-based denoising methods, N2V does not require paired training data with both a noisy and clean image target. It assumes that image noise is pixel-wise independent, while the underlying image signal contains statistical dependencies. As such, it facilitates a simple approach for denoising PARS images, and was used to train a denoising CNN for the radiative and non-radiative contrast channels, separately. Models were trained on a body of raw data taken from both human skin and breast whole slide images. A series of 125 PARS tiles was used to generate a model for each of the radiative and non-radiative images. Each model was trained over a series of 300 epochs, with 500 steps per epoch, using 96 pixel neighbourhoods. The final processing step before training the virtual staining model is to correct a scanning-related image artifact, which is uncovered after denoising the raw data. These artifacts are line-by-line distortions caused by slight inconsistencies in the mechanical scanning fast axis (x-axis) velocity, which results in uneven spatial sampling. As such, before colourization a custom jitter correction algorithm is used to fix these distortions (see more information in Supplemental Information Section A). 
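A minimal sketch of the per-pulse pixel extraction described in the imaging section above is given below; the array layout, the number of pre-excitation samples, and the exact definition of the percentage modulation are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def pars_pixels(det_signal, rad_signal, n_pre=64):
    """Extract scattering, non-radiative and radiative pixel values from
    time-resolved photodiode records (one row per excitation pulse).

    det_signal: (n_pulses, n_samples) transmitted 405 nm detection record
    rad_signal: (n_pulses, n_samples) radiative-emission record
    n_pre:      samples recorded before the excitation pulse (assumed)
    """
    # Scattering: unperturbed detection level before excitation
    scattering = det_signal[:, :n_pre].mean(axis=1)
    # Non-radiative: peak percentage modulation of the detection post-excitation
    post = det_signal[:, n_pre:]
    non_rad = 100.0 * np.abs(post - scattering[:, None]).max(axis=1) / scattering
    # Radiative: peak emission amplitude recorded after each excitation event
    radiative = rad_signal[:, n_pre:].max(axis=1)
    return scattering, non_rad, radiative

# Toy usage: 10 pulses, 400 samples each
det = 1.0 + 0.01 * np.random.randn(10, 400)
rad = 0.02 * np.random.rand(10, 400)
print([a.shape for a in pars_pixels(det, rad)])
```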
### _Dataset Preparation for Model Training_

In this work, a CycleGAN image translation model was used for virtual staining. While CycleGAN is able to learn an image domain mapping with unpaired data, it can be advantageous to provide the model with semi or loosely registered images, as a form of high-level labeling to better guide the training process and strengthen the model. As one-to-one H&E and PARS whole slide image pairs are obtainable, it seems most appropriate to prepare the dataset accordingly. However, the two datasets are not intrinsically registered, so a simple affine transform is used. Affine transforms allow for shearing and scaling, as well as rotation and translation [35]. In general, this is sufficient to account for the alterations of tissue layout on the slide which occur during the staining process. The affine transform is determined using the geometric relationship between three registration points. This relation, or transformation matrix, is then applied to the entire whole slide image for both the non-radiative and radiative channels [35]. After the whole slide PARS Total Absorption (TA) image and H&E image are registered, the entire image is sliced into small tiles (512x512) which are paired together as shown in Figure 1(a). The total absorption (TA) image shows the radiative (blue) and non-radiative (red) raw images in a combined single colored image. However, during training, the network uses inverted TA patches, in which the radiative and non-radiative image pixel intensities are inverted before they are stacked into a colored image. Inverting these channels provides a colored image where the white background in the PARS data maps to the white background in the H&E data. After training is complete, the model can be applied to larger images, such as entire whole slide images, by virtually staining 512x512 tiles in parts. This process is shown in Figure 1(b) for a smaller inverted TA image. When applying the model, the virtually stained tiles overlap, and these overlap regions are averaged together in the final virtually stained image. Here an overlap of 50% was used.

Figure 1: Visualization of data preparation process and inversion. (a) The registered total-absorption and H&E images are cut into matching tiles, to generate a loosely registered dataset. The pixel intensities of the total-absorption images are then inverted, to provide a better initialization for training. Finally, the datasets are used to train the virtual colorization model. (b) To form images, the model is repeatedly applied to overlapping tiles of the total absorption images. The overlapping tiles are subsequently averaged to form the final virtual colorization.

In this study two CycleGAN models were trained on loosely paired data using the registration and dataset preparation methods described earlier. One model was trained on human skin tissue and another on human breast tissue. For each model, the training sets were composed of 5000 training pairs of size 512x512px (128x128 \(\upmu\)m) sourced from standard 40x magnification (250nm/pixel) whole slide images of each tissue type. The model generators were trained for 500 epochs with an early stopping criterion to terminate training when losses stopped improving. The model was trained with a learning rate of 0.0002, batch size of 1 and an 80/20% split of training and validation pairs. For comparison purposes, a pix2pix model and standard unpaired CycleGAN model were also trained for each tissue type.
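Before turning to the comparison models, the preparation and inference steps just described (three-point affine registration, tiling with intensity inversion, and overlap-averaged application of the model) can be sketched as follows; the helper names and the use of plain NumPy are assumptions for illustration, with images taken to be normalized to [0, 1].

```python
import numpy as np

def affine_from_points(src, dst):
    """2x3 affine transform mapping three registration points src -> dst."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    M = np.hstack([src, np.ones((3, 1))])    # rows [x, y, 1]
    return np.linalg.solve(M, dst).T         # 2x3 matrix [A | t]

def tile_pairs(ta, he, size=512):
    """Slice registered whole-slide images into matching (inverted TA, H&E) tiles."""
    pairs = []
    h, w = he.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            pairs.append((1.0 - ta[y:y+size, x:x+size],   # invert TA intensities
                          he[y:y+size, x:x+size]))
    return pairs

def stain_whole_slide(ta_inv, model, size=512, overlap=0.5):
    """Apply a tile-wise model with overlap and average the overlapping outputs."""
    h, w = ta_inv.shape[:2]
    out = np.zeros((h, w, 3))
    weight = np.zeros((h, w, 1))
    step = int(size * (1 - overlap))
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            out[y:y+size, x:x+size] += model(ta_inv[y:y+size, x:x+size])
            weight[y:y+size, x:x+size] += 1.0
    return out / np.maximum(weight, 1.0)

# Toy usage with random stand-in images and an identity-like "model"
M = affine_from_points([(0, 0), (100, 0), (0, 100)], [(2, 3), (103, 2), (1, 104)])
fake_model = lambda tile: np.repeat(tile.mean(-1, keepdims=True), 3, axis=-1)
stained = stain_whole_slide(np.random.rand(1024, 1024, 2), fake_model)
print(M.shape, stained.shape)
```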
The pix2pix models were trained on the same dataset as the paired CycleGAN model, however with the more rigorous registration process and the same model parameters previously described by Boktor _et al._[28]. For the unpaired training of CycleGAN models, the same number of training pairs were used; however, the TA and H&E domains were sourced from different whole slide images of the same tissue type.

## III Results and Discussion

A current shortcoming of the PARS raw images is the presence of measurement noise. In a recent work by Tweel _et al._[34], significant improvements in PARS image quality were achieved by measuring detection power and excitation pulse energy. Image noise was then corrected based on the laser energy variability. Even with the energy reference correction, measurement noise is still present in the non-radiative signals. This additive noise disproportionately impacts signals which exhibit low non-radiative relaxation since they generate smaller non-radiative perturbations in the detection beam. Figure 2 shows an example of the raw non-radiative and radiative image channels after reconstruction and laser power reference correction. At high magnification, significant noise can be seen in the raw data channels. This motivates denoising as a preprocessing step. However, noiseless PARS image targets were not available for training a traditional denoising CNN. Hence, the N2V framework, described in Section II.C, is an ideal method as it allows effective denoising without a clean image target. Figure 2 shows results after denoising with clear improvements in image quality for both the non-radiative and radiative channels. After removing noise from the raw data, the jitter artifacts mentioned in Section II.C are uncovered and become the main source of noise in the images. While these sub-resolution shifts and distortions between the rows of the image can be seen embedded within the noise, they are difficult to resolve and correct. Denoising not only helps improve raw data quality but helps make the jitter correction possible. As shown in Figure 2, most of the artifacts are removed after applying the jitter correction algorithm (more information in Supplemental Information Section A). After denoising and jitter correcting the raw data, the whole slide radiative and non-radiative images are registered to the ground truth H&E image. As mentioned in Section II.D, a simple affine transform is used here to account for the tissue layout alterations accrued during the staining process. The three-point affine registration is less rigorous compared to the methods employed by Boktor _et al._[28] for pix2pix virtual staining. However, it is significantly faster and may generate upwards of 6000 closely registered 512x512 training pairs for a single 40x, 1cm\({}^{2}\), whole slide image. An example of a whole slide image before and after registration can be seen in Supplemental Information Section B (Figure S2) for a breast tissue sample. In some cases, alignment between these training pairs may be sufficient for error metrics specific to supervised learning, such as mean squared error (MSE) or the structural similarity index measure (SSIM). In the future this may enable hybrid training schemes which combine paired and unpaired data. Previous works in hybrid image-to-image translation have shown improved performance over purely unsupervised methods, even with a small amount of additional paired data [36, 37].
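Where the affine alignment is adequate, such supervised metrics can be screened per tile pair. The following minimal sketch uses scikit-image (assumed available); the SSIM threshold and the crude grayscale reduction are purely illustrative choices.

```python
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

def screen_pairs(pairs, ssim_thresh=0.5):
    """Keep (TA, H&E) tile pairs whose grayscale SSIM suggests usable registration."""
    kept = []
    for ta, he in pairs:
        g1 = ta.mean(axis=-1)            # crude grayscale reduction
        g2 = he.mean(axis=-1)
        s = structural_similarity(g1, g2, data_range=1.0)
        if s >= ssim_thresh:
            kept.append((ta, he, s, mean_squared_error(g1, g2)))
    return kept

# Toy run on one random pair (threshold disabled so it is kept)
pairs = [(np.random.rand(512, 512, 3), np.random.rand(512, 512, 3))]
print(len(screen_pairs(pairs, ssim_thresh=-1.0)))
```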
Although supervised error metrics were not explicitly employed on the loosely paired CycleGAN presented here, the results demonstrate a potential advantage over strictly supervised pix2pix, and over CycleGAN trained entirely on unpaired data. Next, a comparison between previous pix2pix based colorizations [28], unpaired CycleGAN, and the proposed paired CycleGAN was conducted. Figure 3 shows this comparison for unseen data from a variety of skin tissue structures. The pix2pix model was trained on the same PARS images used for the paired CycleGAN, however, registration was performed according to the previously reported process [28]. While the pix2pix model performs quite well, the transform tends to blur very fine structures. This artifact is observed in all four skin tissue examples. This blurring is likely caused by slight imperfections in alignment and registration between the training H&E and TA images, which severely weaken model performance.

Figure 2: Denoising results with the N2V-based denoising CNN and subsequent jitter correction algorithm applied to both the raw non-radiative and radiative image channels. Three example regions are shown at higher magnification to see the effect of the denoising and jitter correction algorithms. The structure imaged here shows a hair follicle captured from human skin tissue.

Achieving perfect alignment between the label-free images and H&E data is challenging as certain structures are very susceptible to staining artifacts. For example, lipid areas or regions with loose connective tissue may be severely altered, or even washed away during the deparaffinization and staining processes. This leads to inconsistencies between datasets which cannot be accounted for through registration. Subsequently, even if the label-free and H&E-stained images are aligned, there may still be variation or uncertainty in the data which affect translation quality. This highlights the importance of a more flexible model such as CycleGAN which can handle variation and uncertainty in the input data.

Figure 3: Comparison of pix2pix as well as paired and unpaired CycleGAN implementation with the gold standard H&E and PARS total absorption (TA) acquisitions of skin tissue.

As expected, both unpaired and paired CycleGAN implementations provide sharper virtual H&E (Figure 3). However, the model trained on an unpaired dataset shows some examples of mis-colorization. For example, Figure 3 Section 1 shows a cluster of cell nuclei which, in the unpaired model, have been tainted red and look to be colorized as red blood cells instead of cell nuclei. Conversely, Section 2 shows a crop of the skin's epidermis layer with some background whitespace. The unpaired CycleGAN adds texture to the outermost stratum corneum layer of the tissues. In addition, a hallucination structure is seen in the bottom right of the background, which is not seen in the TA image, or in the H&E. In Section 3, which shows part of a sebaceous gland, the unpaired CycleGAN overemphasizes the red colouration in certain regions. In Section 4, a similar overemphasis of red colors is observed. The red incorrectly implies the presence of red blood cells in the connective tissue. In unpaired scenarios, CycleGAN is a highly under-constrained model, which has been known to create hallucinations. Hallucinations occur when spurious structures are added or features are removed during the image domain transfer [32].
Such hallucination problems usually arise when the data provided in the target domain under- or over-represents, or is biased towards, certain image features. Providing the CycleGAN model with loosely paired training data can strengthen the model during training by ensuring an equal representation of features in both image domains (TA and H&E). Hence, in all four diverse tissue structure examples, the paired CycleGAN does not exhibit the same artifacts as the unpaired version. It is clear that the CycleGAN implementation trained with paired data has superior performance, avoiding the hallucination of structures seen with the unpaired CycleGAN model.

Figure 4: Comparison of pix2pix as well as paired and unpaired CycleGAN implementations with the gold standard H&E and PARS total absorption (TA) acquisitions of malignant breast tissue.

The same comparison between colorization models was also performed on unseen malignant breast tissue areas, showing mainly glandular structures and connective tissues. These comparisons can be seen in Figure 4. As with the skin tissue examples, the data used for training was taken from separate whole slide images of the same tissue type. Overall, similar trends in model performance are seen in the breast tissue structures. The pix2pix model applies a slight blur during the transformation, and important nuclear details appear smudged. The unpaired CycleGAN implementation again produces hallucinations in the translation process. In all four examples, these hallucinations cause mis-colourization of the hematoxylin stain. Emulated hematoxylin stain is incorrectly spattered across areas of the connective tissue, falsely indicating the potential presence of nuclei. Abnormal nuclear morphology, such as increased nuclear size and irregular shape, organization, and patterning, are all valuable details in cancer diagnosis and prognosis. Hence, these nuclear hallucinations are problematic. In contrast, the paired CycleGAN implementation avoids these hallucinations, and the virtual H&E closely resembles the ground truth. Whole slide images provide pathologists with critical access to both low and high magnifications of tissue structures, enabling informed diagnostic decisions. At low magnification, pathologists can overview the tissue structure, identify areas of concern, and contextualize high magnification analysis. High magnifications allow for in-depth examination of tissue structure and cellular morphology, which is vital for precise diagnoses. As such, demonstrating virtual staining on an entire whole slide image is an essential goal. In Figure 5, the semi-registered CycleGAN virtual staining model is applied to an entire malignant skin tissue sample. This tissue contains a diverse set of structures to assess the model's performance. Both low and high magnification images are shown and compared against the corresponding one-to-one H&E whole slide image. Figure 5(a) shows a low magnification depiction of the entire slide where a variety of structures can be identified, including an artery, smooth muscle structures, and a basaloid tumor extending from the epidermis. This indicates a primary diagnosis of basal cell carcinoma for the skin sample. Figure 5(b) shows virtually stained sebaceous gland and hair follicle structures, stemming from the epidermis layer of the tissue. The virtual stain closely resembles the gold standard H&E. However, in the ground truth H&E, a blue color can be seen at the edge of the epidermis which is not shown in the virtually stained image.
This blue inking is an artifact of the markings drawn on the tissue specimen by a surgeon during the resection process. Furthermore, part of the connective structures inside the void in the hair follicle bulb have been washed away during the deparaffinization and staining process. A similar removal of connective tissue can be seen in the upper right corner of the H&E image. As well, a few of the secretory cells in the sebaceous gland have been removed and appear fragmented. Notably, in Figure 5(d) the H&E image exhibits slash marks which are not present in the virtual H&E. In contrast, all these damaged structures are intact in the PARS virtual H&E. These artifacts illustrate why perfect registration is not always possible in certain parts of the tissue specimen. Additionally, they are prime examples of artifacts which may occur during histochemical processing. A potential advantage of implementing virtual staining is reducing such sample processing artifacts.

Figure 5: Comparison of the proposed paired CycleGAN virtual staining performance with the gold standard H&E on a sample of malignant skin tissue. (a) Whole slide visualization showing the epidermis, basaloid tumour, artery, and smooth muscles. (b) High magnification view of a sebaceous gland and hair follicle. (c) High magnification visualization of sweat glands and adipose tissue. (d) Higher magnification view of the basaloid cells and basaloid tumor. (e) High magnification view of the epidermis, highlighting lymphocytes as well as a detailed view of subcellular structures including nucleoli and intercellular junctions.

Figure 5(c) also shows the excellent performance of the semi-registered CycleGAN virtual staining model on a group of sweat glands. Here, the cytoplasmic membrane of the surrounding adipose tissue remains intact, and the lipid cell nuclei are recovered with clarity and color resembling the gold standard ground truth. Figure 5(d) shows a higher magnification view of the basaloid tumor which stems from the epidermis layer of the skin. The virtual H&E accurately mimics the staining colour of the basaloid cell nests of the tumour, which are important clinical features for diagnosis of basal cell carcinoma. Lastly, Figure 5(e) shows a close-up of the epidermal layers. Important diagnostic details can be seen in both the virtual and real H&E images, including subcellular structures such as nucleoli. Furthermore, within the stratum basale of the epidermis, a distinct network of thin lines and gaps is observed between the densely packed cells. These spaces are known as intercellular junctions, which form intricate and essential networks of connections within the epidermis. These junctions help promote cell adhesion, facilitate intercellular communication, and maintain structural integrity in the epidermis. Proper observation of these structures is crucial during diagnostic evaluations. Disruptions or disorganization in these junctions can be indicative of cancer and potential invasiveness. Therefore, preserving the sharpness and quality of the input data, along with accurate staining colour, is of utmost importance in the virtual staining model. The paired CycleGAN model properly colourizes these smaller features and retains the input resolution of the raw data. This ensures that pathologists can effectively examine and evaluate these crucial aspects, leading to more accurate and reliable diagnoses.
As initially reported by Ecclestone _et al._[27], the non-radiative and radiative label-free PARS contrasts tend to match the chemical staining contrast of hematoxylin and eosin, respectively. However, there can still be notable distinctions between the chemical H&E staining and PARS visualizations. PARS may recover additional details and emphasize structures which are not highlighted by traditional H&E stains. In an example shown in Figure 6, the raw PARS image highlights the inner wall of an artery. Specifically, the internal elastic membrane, which provides elasticity and support to the artery, is emphasized in blue (radiative channel). This structure can be visualized with UV excitation due to the presence of certain fluorophores contained in the elastin fibres, one of which is a cross-linking tricarboxylic amino acid with a pyridinium ring [38]. In contrast, H&E is unable to show as clear a distinction between the layers of the artery. As such, PARS provides greater specificity to this structure compared to H&E. However, to accurately reflect the staining patterns of H&E, the virtual staining model deliberately suppresses the intrinsic contrast to minimize the prominence of the internal elastic membrane. Traditionally, a stain such as Verhoeff-Van Gieson (VVG), which highlights normal or pathologic elastic fibers, would be required to visualize the internal elastic membrane of arteries [39]. In clinical applications, VVG stain is sometimes combined with Masson's trichrome stain [40] to differentiate collagen and muscle fibers within tissue samples. This is performed to visualize potential increases in collagen associated with diseases like cirrhosis and to assess muscle tissue morphology for pathological conditions affecting muscle fibers. In contrast, all these structures are well highlighted in the PARS raw data. Currently, the H&E virtual staining model flattens these structures during the image translation process. However, this highlights the potential use of the rich PARS raw data to replicate various clinically relevant contrasts beyond H&E staining. Currently, clinical studies are underway to explicitly validate PARS virtual H&E as diagnostically comparable to chemical H&E visualizations. To this end, additional staining contrasts will be explored in the near future. A primary goal for PARS virtual staining is to provide several emulated histochemical stains from a single acquisition. Moreover, there is potential to develop completely new histochemical-like contrasts based on the endogenous PARS contrast. PARS may be able to provide contrast to biomolecules which are inaccessible with current chemical staining methods.

## IV Conclusion

We present an optimized PARS data processing and virtual staining method. Specific signal processing advances are exhibited which help to reduce measurement variability. Here, measurement reference correction and Noise2Void based image denoising are successfully applied to improve image quality. Finally, a new algorithm is presented to reduce pixel-level mechanical scanning position artifacts, which blur submicron scale features. These enhancements afford marked improvements in the clarity of small tissue structures, such as nucleoli and chromatin distribution. In conjunction, a new virtual staining process is presented which uses a semi-registered CycleGAN.
While the semi-registered CycleGAN does not require rigorous registration like pix2pix, providing the semi-registered data may enhance the colorization quality by reducing the presence of hallucination artifacts. Presented here, emulated H&E images are produced from label-free PARS images with quality and contrast that compare favorably to traditional H&E staining. The colorization performance represents the current best PARS virtual staining implementation. Applied to entire sections of unstained human tissues, the presented method enables accurate recovery of subtle structural and subnuclear details. With these improvements, the PARS virtual H&E images may be effectively indistinguishable from gold standard chemically stained H&E scans. This represents an essential milestone in developing a new clinically ready label-free virtual staining microscope. In the near future, PARS label-free virtual staining has the potential to provide multiple histochemical stains from a single unlabelled sample, enhancing diagnostic confidence and greatly improving patient outcomes.

Figure 6: Example of the differences in the intrinsic PARS contrast and chemical H&E staining. The PARS total absorption image highlights the inner wall of the artery, or the internal elastic membrane. This feature is not highlighted in the H&E image.

### Funding

The authors thank the following sources for funding used during this project. Natural Sciences and Engineering Research Council of Canada (DGECR-2019-00143, RGPIN2019-06134); Canada Foundation for Innovation (JELF #38000); Mitacs Accelerate (IT13594); University of Waterloo Startup funds; Centre for Bioengineering and Biotechnology (CBB Seed fund); illumiSonics Inc (SRA #083181); New Frontiers in Research Fund - Exploration (NFRFE-2019-01012); The Canadian Institutes of Health Research (CIHR PJT 185984); NSERC Discovery Horizons DH-2023-00371.

## Acknowledgements

The authors would like to thank Dr. Ally-Khan Somani, Dr. Gilbert Bigras and the Cross-Cancer Institute in Edmonton, Alberta for providing human breast and skin tissue samples. The authors would like to thank Hager Gaouda for helping prepare and stain the tissue samples used in this study. The authors would also like to thank Dr. John Mackey and Dr. Deepak Dinakaran for their help in clinical consultation in the assessment of the results.

## Author Contribution Statement

Authors J.E.D.T and B.R.E contributed equally to this work.

## Competing Interests

Authors James Tweel, Benjamin Ecclestone, James Alexander Tummon Simmons and Parsin Haji Reza all have financial interests in IllumiSonics which has provided funding to the PhotoMedicine Labs. Authors Marian Boktor and Paul Fieguth do not have any competing interests.
2307.07599
JWST/CEERS sheds light on dusty star-forming galaxies: forming bulges, lopsidedness and outside-in quenching at cosmic noon
We investigate the morphology and resolved physical properties of a sample of 22 IR-selected DSFG at cosmic noon using the JWST/NIRCam images obtained in the EGS field for the CEERS survey. The resolution of the NIRCam images allowed us to spatially resolve these galaxies up to 4.4um and identify their bulge even when extinguished by dust. The goal of this study is to obtain a better understanding of the formation and evolution of FIR-bright galaxies by spatially resolving their properties using JWST in order to look through the dust and bridge the gap between the compact FIR sources and the larger optical SFG. Based on RGB images from the NIRCam filters, we divided each galaxy into several uniformly colored regions, fitted their respective SEDs, and measured physical properties. After classifying each region as SF or quiescent, we assigned galaxies to three classes, depending on whether active SF is located in the core, in the disk or in both. We find (i) that galaxies at a higher z tend to have a fragmented disk with a low core mass fraction. They are at an early stage of bulge formation. When moving toward a lower z, the core mass fraction increases, and the bulge growth is associated with a stabilization of the disk: the NIRCam data clearly point toward bulge formation in preexisting disks. (ii) Lopsidedness is a common feature of DSFGs. It could have a major impact on their evolution; (iii) 23% of galaxies have a SF core embedded in a quiescent disk. They seem to be undergoing outside-in quenching, often facilitated by their strong lopsidedness inducing instabilities. (iv) We show that half of our galaxies with SF concentrated in their core are good SMG counterpart candidates, demonstrating that compact SMGs are usually surrounded by a larger, less obscured disk. (v) Finally, we found surprising evidence for clump-like substructures being quiescent or residing in quiescent regions.
Aurelien Le Bail, Emanuele Daddi, David Elbaz, Mark Dickinson, Mauro Giavalisco, Benjamin Magnelli, Carlos Gomez-Guijarro, Boris S. Kalita, Anton M. Koekemoer, Benne W. Holwerda, Frederic Bournaud, Alexander de la Vega, Antonello Calabro, Avishai Dekel, Yingjie Cheng, Laura Bisigello, Maximilien Franco, Luca Costantin, Ray A. Lucas, Pablo G. Perez-Gonzalez, Shiying Lu, Stephen M. Wilkins, Pablo Arrabal Haro, Micaela B. Bagley, Steven L. Finkelstein, Jeyhan S. Kartaltepe, Casey Papovich, Nor Pirzkal, L. Y. Aaron Yung
2023-07-14T19:45:28Z
http://arxiv.org/abs/2307.07599v3
_JWST_/CEERS Sheds Light on Dusty Star-Forming Galaxies: Forming Bulges, Lopsidedness and Outside-In Quenching at Cosmic Noon

###### Abstract

Context: We investigate the morphology and physical properties of a sample of 22 IR-selected dusty star-forming galaxies at Cosmic Noon (\(z\sim 2\)), using _James Webb Space Telescope_ Near Infra-Red Camera images obtained in the Extended Groth Strip field for the Cosmic Evolution Early Release Science survey.

Aims: The exceptional resolution of the NIRCam images allows us to spatially resolve these galaxies up to 4.4\(\mu\)m and identify their bulge/core even when very extinguished by dust.

Methods: Based on red-green-blue images using the F115W, F200W and F444W filters, we divide each galaxy in several uniformly colored regions, fit their respective Spectral Energy Distribution and measure dust attenuations, stellar masses, star formation rates and ages. After classifying each region as star-forming or quiescent, we assign galaxies to three classes, depending on whether active star-formation is located in the core, in the disk or in both.

Results: (i) \(\sim\) 70% of our DSFGs have a compact highly dust attenuated star-forming core that can contain up to 80% of the star-formation of the galaxy but only 20-30% of its stellar mass, and is always surrounded by a larger, less attenuated massive disk (no blue nuggets); (ii) 64% (27%) of disks are significantly (strongly) lopsided, likely due to asymmetric cold gas accretion, major mergers and/or large scale instabilities; (iii) 23% of galaxies have a star-forming core embedded in a quiescent disk; they are undergoing outside-in quenching, often facilitated by their strong lopsidedness inducing small and large scale instabilities; (iv) some galaxies host highly heterogeneous disks in terms of RGB colors: these are driven by in-homogeneous dust attenuation; and (v) we find surprising evidence for clump-like substructures being quiescent and/or residing in quiescent regions.

Conclusions: This work demonstrates the major impact _JWST_/NIRCam has on understanding the complexity of the evolution of distant massive galaxies.

## 1 Introduction

Until recently, the existence of the so-called galaxy Main-Sequence, a correlation that the majority of star-forming galaxies follow in the stellar mass (\(M_{*}\)) versus star formation rate (SFR) plane up to redshift 3 (MS, e.g., Daddi et al. 2007; Elbaz et al. 2007; Noeske et al. 2007; Schreiber et al. 2015), together with its tight scatter, has been interpreted as evidence that star formation in most galaxies is a fairly ordered process (Schreiber & Wuyts 2020). The 'consensus' is that galaxies on the MS are forming stars in a quasi steady state inside gas-rich stellar disks (e.g., Sancisi et al. 2008; Dekel et al. 2009), whereas galaxies above the MS undergo a starburst, driven by stochastic processes such as major mergers, whose typical signature is compact star formation (e.g., Tacconi et al. 2008). However, recent studies at \(z\sim 1-3\) have shown that some massive (\(M_{*}\geq 10^{11}M_{\odot}\)) MS galaxies have a stellar distribution typical of late type galaxies but where the star formation only occurs in a compact nucleus (Elbaz et al. 2018; Puglisi et al. 2019, 2021; Tadaki et al. 2017, 2020; Franco et al. 2020; Gomez-Guijarro et al. 2022b; Jimenez-Andrade et al. 2019, 2021). The origin of these compact SF sub-mm galaxies (SMGs) observed with the _Atacama Large Millimeter Array_ (_ALMA_) is yet to be fully understood.
Three main scenarios to form the compact sub-mm nucleus are: (1) gas funneled to the core via violent disk instabilities (VDI) and clump migration, (2) a starburst induced by a major merger, or (3) accretion and/or minor mergers (e.g. Gomez-Guijarro et al. 2022a). These compact SF nuclei could be an indication of an early quenching phase (Puglisi et al. 2019; Franco et al. 2020; Puschnig et al. 2023). Besides the compact nucleus, high-\(z\) SF galaxies are observed to have giant SF clumps (radius \(\sim 1\)kpc). The origin of these clumps has been investigated by many studies (Puschnig et al. 2023; Fensch & Bournaud 2021; Hodge et al. 2019; Rujopakarn et al. 2019; Mandelker et al. 2014; Wuyts et al. 2012; Elmegreen 1994, 1989). Mandelker et al. (2014) suggest that they can either be _in-situ_ clumps, originating from VDI (e.g. Elmegreen 2011), in which case they are young and star-forming, or _ex-situ_ clumps, originating from minor mergers, in which case they are older, with a low gas fraction and a low specific star-formation rate (sSFR). A recent simulation showed that the formation of such long-lived giant clumps is only possible with a gas fraction of at least 50% (Fensch & Bournaud 2021). This large gas fraction is necessary to induce VDI that produce clumps that migrate toward the center, creating strong nuclear gas inflows and triggering an evolution of the structure of the galaxy, leading to a morphological evolution (Fensch & Bournaud 2021). This scenario is also favored by some observations (Förster Schreiber et al. 2011; Guo et al. 2012). More recently, Puschnig et al. (2023) studied a local galaxy as a proxy for high-\(z\) galaxies, confirming that the giant SF clumps mostly originate from a fragmentation of the disk induced by VDI, and not from accretion or minor mergers. With its high spatial resolution, the _James Webb Space Telescope_'s (_JWST_) near-IR Camera (NIRCam) is able to better resolve such giant SF clumps and could help constrain this scenario. It is thus becoming clear that the galaxies within the MS scatter are not all largely unperturbed gas-rich disks. The compact SF cores, as well as the giant clumps, independently of their formation history, imply a complex phenomenology at play, much different from that of local SF galaxies on the MS, which are typically well-behaved spirals. Recently, emphasis has been brought onto other kinds of asymmetries characterising high redshift SF galaxies. Kalita et al. (2022) discovered strong lopsidedness affecting the three massive SF galaxies in a \(z=2.91\) group core. They suggested a link between the lopsidedness of a galaxy in a dense environment and gas accretion and minor mergers. The lopsidedness would then be a marker of the point of impact of the accretion stream, following Bournaud et al. (2005), who investigated the origins of lopsidedness in simulated galaxies. Their conclusion is that it is very unlikely that the lopsidedness is the result of internal mechanisms; it is more likely to be linked to the assembly history and the environment of the galaxy, to asymmetric gas accretion, and to minor mergers and interactions with neighbouring galaxies. This is also the conclusion of studies on the lopsidedness of galaxies in the local universe (Jog & Combes 2009; Zaritsky et al. 2013). Rujopakarn et al. (2023) studied a galaxy in a dense environment with SF off-center substructures.
They interpreted it as either forming spiral arms following a minor merger, an interaction with a neighbouring galaxy, or a lopsided structure resulting from the point of impact of the cold gas accretion stream. Colina et al. (2023) reported _JWST_/MIRI observations of GN20, an extremely luminous sub-mm galaxy residing in a \(z=4.05\) protocluster (Daddi et al. 2009). They reveal a massive extended disk surrounding the sub-mm compact nucleus, displaying strong lopsidedness. As of today, lopsidedness has only been studied in dense environments and serendipitously. Observing lopsided disks in less crowded environments and inferring their prevalence in complete samples could shed further light on their presumed origin from interactions and accretion, and clarify whether a massive hosting dark matter halo is, or is not, required. By probing the rest-frame optical to near infrared (near-IR) at Cosmic Noon, _JWST_/NIRCam has a unique ability to fill the gap between the sub-mm compact nucleus observed with _ALMA_ and the larger galactic disk observed in the optical, and will help critically examine the competing scenarios. As an example, Rujopakarn et al. (2023) recently studied substructures within a dusty star forming galaxy (DSFG) at \(z\sim 3\) imaged with both _ALMA_ and _JWST_. From NIRCam images, they showed that the _ALMA_ substructures are also visible at 4\(\mu\)m, demonstrating the direct link that one can draw between near-IR and sub-mm emission. This suggests that the long wavelength channel of NIRCam might be a good tracer of compact obscured star formation in MS DSFGs. The present study is part of the Cosmic Evolution Early Release Science survey (CEERS1; ERS 1345, PI: S. Finkelstein), which is one of the Early Release Science (ERS) programs of the _JWST_ (Gardner et al. 2023) that observed a part of the Extended Groth Strip (EGS) _Hubble Space Telescope_ (_HST_) field with NIRCam (Rieke et al. 2023). EGS is too far North to be observed with _ALMA_, and there is no high resolution imaging with the _Northern Extended Millimeter Array_ (_NOEMA_) yet. However, the high sensitivity and exquisite spatial resolution of NIRCam towards 5\(\mu\)m can be used as a surrogate to identify the most obscured and massive regions within galaxies, hence those most likely vigorously star-forming. Footnote 1: [https://ceers.github.io](https://ceers.github.io) Understanding how DSFGs form and evolve is crucial to get the larger picture of galaxy formation and evolution, and it could be a key element to explain the quenching of galaxies at and after Cosmic Noon. To this aim, _JWST_/CEERS allows a major step forward. Indeed, Kartaltepe et al. (2023) already showed that _JWST_ reveals the diversity of morphologies of galaxies at high redshift. _JWST_'s high spatial resolution and sensitivity make it possible to detect faint disks that were previously undetectable with _HST_. Moreover, a recent study by Kamieneski et al. (2023) uses _JWST_/NIRCam to probe the dust attenuation and sSFR of a lensed DSFG at \(z=2.3\). They demonstrate the power of _JWST_/NIRCam to precisely measure these properties at sub-galactic scales, allowing them to conclude that, despite a more dust attenuated bulge, the color gradient of this galaxy is mainly driven by an early stage of inside-out quenching. This makes _JWST_/NIRCam the best instrument to investigate the morphological evolution of DSFGs around Cosmic Noon, in terms of compact star formation, giant clumps and galaxy structure. The paper is organized as follows. In Sect.
2 we present the data used in this study and the sample selection process. In Sect. 3, we detail the methods used to analyse each galaxy individually. In Sect. 4, we outline the main results of the analysis. Finally, in Sect. 5, we discuss the possible implications of the results in terms of formation and evolution of DSFGs at Cosmic Noon. In this work, we adopt \(H_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}\) = 0.3, \(\Lambda_{0}\) = 0.7, and a Chabrier IMF (Chabrier 2003). When necessary, we converted stellar masses and SFRs from a Salpeter IMF (Salpeter 1955) to a Chabrier IMF by subtracting 0.24 dex.

## 2 Data

### CEERS Imaging

For the purpose of this study, we used the NIRCam imaging of CEERS, reduced using a customized pipeline by the CEERS collaboration (Bagley et al. 2022). It includes images in 7 filters: F115W, F150W, F200W, F277W, F356W, F410M and F444W, for an average 5\(\sigma\) depth of 28.6 AB mag (see Table 3 of Bagley et al. (2022) for more details; each filter/pointing has a slightly different depth). The Point-Spread-Function (PSF) Full-Width at Half-Maximum (FWHM) of these filters ranges from 0.040" to 0.145" for F115W and F444W, respectively2. For this study, we used the CEERS imaging from the June 2022 pointings, which represent 40% of the total area covered by NIRCam for CEERS between June and December 2022. We used the background subtracted images as we wanted to measure precise photometry. As we needed to extract galaxy properties based on spectral energy distributions (SEDs), we decided to complement shorter wavelengths by taking advantage of the existing _HST_ imaging in the field. We used the publicly available _HST_ data products version 1.9, available through CEERS. These mosaics were derived from _HST_ archival data, but with improved calibration compared to the default pipeline products, and have astrometry tied to Gaia-EDR3 (Lindegren et al. 2021). As described in the accompanying data release, the mosaics were created from the combination of _HST_ programs 10134, 12063, 12099, 12167, 12177, 12547, 13063, and 13792, and the reduction and calibration followed a similar procedure to those described in Koekemoer et al. (2011). We used two filters, F606W and F814W, with a PSF FWHM of 0.115" and 0.110" respectively (Koekemoer et al. 2011). We did not use the _HST_/WFC3 images, as these bands are redundant for bright galaxies, being covered by the _JWST_/NIRCam images which are deeper and have better spatial resolution.

### The "Super-deblended" FIR catalog

The goal of this paper is to study the morphology and SF activity of DSFGs. We select galaxies based on their IR detection in the state-of-the-art super-deblended far-IR (FIR) catalog of the EGS field (Le Bail et al., in preparation). FIR emission is a secure tracer of star formation (once the AGN components are removed), while optical/near-IR classification of SF galaxies is subject to larger uncertainties, especially in the presence of dust. Hence, our FIR selection ensures the galaxies under scrutiny are truly highly SF. The super-deblending is based on a well-established technique (Liu et al. 2018; Jin et al. 2018). It is a multi-wavelength fitting technique meant to optimize the number of priors fitted at each band to extract the deepest reachable information. They used images from _Spitzer_ (24\(\mu\)m (FIDEL, Dickinson 2007)), _Herschel_ (100\(\mu\)m and 160\(\mu\)m (PEP, Lutz et al. 2011), 250\(\mu\)m, 350\(\mu\)m, 500\(\mu\)m (HerMES, Oliver et al. 2012)), SCUBA2 (850\(\mu\)m (S2CLS, Geach et al.
2017), 450\(\mu\)m and 850\(\mu\)m from Zavala et al. (2017)) and AzTEC (1.1mm from Aretxaga (2015)). The key was to obtain an adaptive balance as a function of wavelength between the density of priors fitted, the quality of the fit, and the achievable deblending given the PSF sizes. They started with the deepest images and fitted band after band toward shallower images. Extensive Monte-Carlo simulations ensured that the uncertainties associated with the flux measurements were "quasi-Gaussian" (see Liu et al. 2018; Jin et al. 2018; A. Le Bail et al. in preparation).

### Sample definition

We selected all sources securely detected in the FIR catalog (see Sect. 2.2) that fell in the CEERS/NIRCam regions observed in June 2022. Since the short wavelength channels have a slightly different field of view than the long wavelength channels, we checked that the sources are observed in all of them and that they were not too close to the edge of the images, so that they were not partially cut. In detail, we require the galaxies to have SNR\({}_{FIR}>5\), where SNR\({}_{FIR}\) is the signal-to-noise ratio (SNR) added in quadrature from 100\(\mu\)m to 1.1mm (Le Bail et al. in preparation), and to have at least one detection (SNR\(>3\)) in a _Herschel_/SPIRE band after deblending (required to reliably measure SF components in case of AGNs). The implication of the IR selection is that we do not have a stellar-mass-complete sample of SF galaxies (e.g., complete above some mass threshold); we have instead something closer to a (redshift-dependent) SFR limit. We are aware that we are missing SF galaxies below our IR detection threshold, as we wish to focus on highly (and securely) star-forming galaxies. We also limited the sample to galaxies within \(1.5<z<3.0\), as we wish to focus on galaxies at "Cosmic Noon", as recalled in the Introduction. To get accurate redshift estimates, we used the recent redshift compilation produced by Kodra et al. (2022), which includes photometric redshifts based on CANDELS (Grogin et al. 2011; Koekemoer et al. 2011) as well as grism-based redshifts from 3D-HST (Momcheva et al. 2016) and spectroscopic redshifts from the MOSDEF survey (Kriek et al. 2015). This sample comprised a total of 26 IR-detected sources. From these, 4 had to be rejected after a clean-up. After close inspection, three galaxies were in a blended region and/or close to a much brighter IR source, making the _Herschel_ measurements less reliable. The last rejected source hosted an AGN (clear radio excess, \(\sim 10\times\) brighter than what is expected for the radio continuum based on IR emission, and X-ray detected: ID15327, RA = 215.82825, Dec = 52.80844, \(z_{phot}=1.61\), \(log_{10}(L_{AGN}/L_{\odot})\gtrsim 11.3\)); hence the majority of its IR luminosity does not come from SF regions, which are the main objects of this study. This left us with a clean sample of 22 FIR-bright DSFGs around Cosmic Noon. We illustrate in Fig. 1 the distribution of the sample in terms of stellar mass estimated in the pre-_JWST_ era (Stefanon et al. 2017) and total IR luminosity (Le Bail et al. in preparation, calculated based on the equations in Press et al. (1992)) versus redshift (Kodra et al. 2022). We also show the distance from the MS (Schreiber et al. 2015), with a 0.6 dex total scatter (Rodighiero et al. 2011), defined as \(\Delta_{MS}=SFR_{IR}/SFR_{MS}\). Schreiber et al. (2015) uses a Salpeter IMF (Salpeter 1955); we converted stellar masses and SFRs from Salpeter IMF to Chabrier IMF by subtracting 0.24 dex.
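As a small numerical illustration, the quadrature SNR, the SPIRE-detection requirement, and the IMF and MS-offset conventions described above amount to a few lines of Python (the band values below are invented for illustration, not real catalog numbers):

```python
import numpy as np

# Per-band FIR SNRs after super-deblending for one source, ordered as
# 100, 160, 250, 350, 500, 850um and 1.1mm (illustrative values only).
band_snr = np.array([2.1, 3.4, 3.2, 1.8, 2.2, 1.1, 0.9])
spire = band_snr[2:5]  # 250, 350, 500um are the Herschel/SPIRE bands

snr_fir = np.sqrt(np.sum(band_snr**2))          # SNRs added in quadrature
selected = (snr_fir > 5) and np.any(spire > 3)  # selection of Sect. 2.3

# Salpeter -> Chabrier IMF conversion applied throughout the paper.
logmass_chabrier = 11.0 - 0.24

# Distance from the MS; log10(SFR_IR/SFR_MS) > 0.6 dex flags a starburst.
sfr_ir, sfr_ms = 300.0, 100.0                   # Msun/yr, illustrative
is_starburst = np.log10(sfr_ir / sfr_ms) > 0.6
```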
The red shaded region corresponds to the pure starburst regime as defined in Liu et al. (2018) (\(log_{10}(SFR_{IR}/SFR_{MS})>0.6\) dex); two galaxies in our sample are classified as pure starbursts. The rest mostly lie within the scatter of the MS but above its average trend, i.e. above the MS but below the starburst regime. In Figs. 2.1, 2.2 and 2.3, we show RGB cutouts of our sample of galaxies using the F115W, F200W and F444W filters of NIRCam. The galaxies are separated into three classes, as discussed in detail in the next Section.

## 3 Methods

In this Section, we detail the methods used to analyze each galaxy, taking one of the objects (ID15371) as an example to better clarify the procedure that we applied to all galaxies. For each galaxy, we started by creating cutouts in each band (_HST_/ACS F606W, F814W and _JWST_/NIRCam F115W, F150W, F200W, F277W, F356W, F410M, F444W). We show the cutouts of a DSFG in Fig. 3, where one can already see by eye a difference between the disk, visible in all bands, and the center of the galaxy, invisible in the _HST_ images but getting brighter at longer wavelengths, justifying the need to study each component individually rather than the galaxy as a whole. One of the first steps was to see if we could identify a bulge and a disk in each galaxy, just like for ID15371, as discussed below. _JWST_/NIRCam images have a spatial resolution ranging from 0.040\({}^{\prime\prime}\) at 1.15\(\mu\)m up to 0.145\({}^{\prime\prime}\) at 4.4\(\mu\)m. The larger 4.4\(\mu\)m PSF allows a resolution in physical size down to 1.23 (1.12) kpc for a galaxy at redshift 1.5 (3). This means that we were able to spatially resolve galaxy substructures down to a radius of \(\sim 0.6\)kpc. This makes the resolution of F444W well suited for this study, as the sizes of compact SF regions and giant clumps are known to be \(\sim 1\)kpc (Gomez-Guijarro et al. 2022b; Rujopakarn et al. 2019; Förster Schreiber et al. 2011).

### Measuring galaxy sizes

Several studies have shown that the regions of star formation, either traced by the dust emission at 1.1mm observed with _ALMA_ or by the radio continuum emission detected by the _Very Large Array_ (_VLA_), are more compact than the optical size of the galaxy (Puglisi et al. 2019; Gomez-Guijarro et al. 2022b; Fujimoto et al. 2017; Jimenez-Andrade et al. 2019, 2021). _JWST_, with its sensitivity in the near- and mid-IR, can detect both the obscured star-forming central part of each galaxy, invisible with _HST_, and the less obscured larger system, invisible with _ALMA_ or _VLA_, and thus bridge the gap. To investigate this, we measured the total near-IR half-light radius (\(R_{e,NIR}\)) of each galaxy in the band closest to 1.6\(\mu\)m rest-frame (F410M or F444W filter depending on the redshift). This rest-frame wavelength was chosen as it is a known tracer of the stellar mass of galaxies and is not affected by dust attenuation (Hainline et al. 2011; Casey et al. 2014). Moreover, a recent study using NIRCam/CEERS data showed the excellent agreement between the near-IR size and the stellar mass size of galaxies around Cosmic Noon (van der Wel et al. 2023). We measured \(R_{e,NIR}\) with a curve-of-growth method, given that in all cases the PSF has a negligible effect (much smaller than any \(R_{e,NIR}\)). The \(R_{e,NIR}\) was defined as the radius of a circular aperture, centered at the center of mass (barycenter) of the galaxy, which encompassed half of the total flux density of the galaxy at the considered wavelength.
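A minimal sketch of such a curve-of-growth measurement, using photutils aperture photometry (the function and variable names are ours for illustration, not from the authors' pipeline):

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def half_light_radius(image, barycenter, total_flux, max_r=60):
    """Smallest circular aperture centered on the barycenter that encloses
    half of the galaxy's total flux; radius returned in pixels (the CEERS
    mosaics are drizzled to 0.03 arcsec per pixel)."""
    for r in range(1, max_r):
        aper = CircularAperture(barycenter, r=r)
        flux = aperture_photometry(image, aper)["aperture_sum"][0]
        if flux >= 0.5 * total_flux:
            return r
    return np.nan

# The uncertainty can be estimated by perturbing total_flux by +/-10%, as
# described in the text, and recording the spread of the returned radii.
```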
To estimate the uncertainty, we used the fact that we typically have a 5% uncertainty on the measurement of the total flux of the galaxy (see Sect. 3.5 for more details on the photometry measurements). We also measured the bias introduced when using a circular aperture for edge-on galaxies (like ID23510 in Fig. 2.1) by comparing the fluxes encompassed in an elliptical aperture and a circular aperture. The difference is about 5%. Hence, by varying the total flux of the galaxy within 10%, we can estimate the uncertainty on the \(R_{e,NIR}\) for which 50% of the total flux is encompassed. We also measured the total optical half-light radius (\(R_{e,O}\)) of each galaxy in the band closest to 550nm rest-frame, following the same procedure, to compare it with \(R_{e,NIR}\).

### Identification of cores/bulges

Depending on the redshift, the F444W filter of NIRCam probes the rest-frame near-IR between 1.1\(\mu\)m and 1.8\(\mu\)m, which is a good tracer of stellar mass (van der Wel et al. 2023). Hence, inspection of galaxy morphologies in this filter allowed us to search for the center of mass of each galaxy in our sample, or lack thereof, as a well-defined peak in the F444W images. We were able to clearly identify a peak in the flux distribution of this filter for every galaxy. Depending on the galaxy, the peak was more or less pronounced, but always confidently there. We then defined a region in each galaxy encompassing the peak as the core or the bulge of the galaxy. The regions are defined by eye, as the peak is easily identifiable in every galaxy; the limit of the core is where the flux coming from the red F444W filter no longer dominates the RGB (F115W, F200W, F444W) color. Generally, a bulge is often defined in the literature as a quiescent central component with a high Sersic index (e.g., \(n\sim 4\)), and is a common component in local massive galaxies. In our study we did not attempt to obtain Sersic fits of separate components and, more importantly, we anticipated that in many cases the central concentrations would not be quiescent; indeed, most of them turned out to be highly SF and attenuated. We thus decided to call the central concentrations cores when they were SF and bulges when they were quiescent. They are represented by the regions delimited by the red dotted lines in all galaxies in Figs. 2.1, 2.2 and 2.3. We emphasize that for most of our sample it would not have been possible to identify the center of mass based only on _HST_ images (see e.g. ID15371 in Fig. 3 as an obvious example). This demonstrates once again the power of _JWST_ when it comes to studying high-\(z\) DSFGs.

### Lopsidedness

Having defined the core/bulge of each galaxy, we considered the rest to be the disk. Hence, we could obtain an evaluation of the lopsidedness of each galaxy. We considered it to be an important property to investigate because many galaxies in our sample are obviously highly lopsided already upon visual inspection (see for example ID11887, ID13776, ID18278, ID18694 in Figs. 2.1 and 2.2). To quantitatively study this phenomenon, we defined two parameters.

Figure 1: Stellar mass (upper-left panel), total IR luminosity (upper-right panel) (Le Bail et al. in preparation) and distance from the Main Sequence (MS, lower panel) of the galaxies in the selected sample versus their redshift. The colors on the upper-right panel delimit the luminous IR galaxy (LIRG, in yellow) and ultra-LIRG (ULIRG, in red) local regimes for information. On the lower panel, the yellow shaded region illustrates the MS from Schreiber et al. (2015), while the red shaded region illustrates the pure starburst regime (Liu et al. 2018).
The first parameter is the eccentricity, defined as:

\[E=\sqrt{\frac{(X_{core}-X_{disk})^{2}+(Y_{core}-Y_{disk})^{2}}{R_{disk}^{2}}}, \tag{1}\]

where \((X_{core},Y_{core})\) and \((X_{disk},Y_{disk})\) are the coordinates of the central core of the galaxy and of its disk respectively, while \(R_{disk}\) is the radius of the disk. The center of the core was simply defined as the pixel with the maximum flux density in the F444W filter. The center of the disk was defined as the barycenter of the disk measured in the rest-frame optical band (F150W or F200W depending on redshift). We measure it in the optical and not in the near-IR because the disk is less attenuated than the core, hence brighter than the core at these wavelengths. To not be biased by the core, we applied a circular mask centered on \((X_{core},Y_{core})\) with a radius defined by the closest pixel to the center that has a F444W flux density less than half the core center flux density. Finally, \(R_{disk}\) was calculated using a circular aperture centered on \((X_{disk},Y_{disk})\) encompassing half of the disk flux density. This quantifies the eccentricity of the disk with respect to the core/bulge compared to its size and is a-dimensional.

Figure 2: Type II: _Quenched disks with a SF core_ (see Sect. 3.7). Similar to Fig. 2.1.

Figure 3: Type III: _SF disks with a quenched bulge_ (see Sect. 3.7). Similar to Figs. 2.1 and 2.2.

The other quantity that we defined to probe the lopsidedness of the galaxies is the asymmetry. The asymmetry was calculated for the F444W NIRCam filter, as we are trying to probe the mass distribution asymmetries and, as previously mentioned, F444W is the best tracer of the stellar mass distribution. We calculated the asymmetry by rotating each image by 180deg and subtracting it from the original image; the center of rotation was \((X_{core},Y_{core})\) from Eq. 1. The asymmetry is defined as:

\[A=\frac{\sum_{i=0}^{N}|F_{i}-F_{i}^{180r}|}{F_{tot}}, \tag{2}\]

where \(F_{i}\) and \(F_{i}^{180r}\) are the flux of the \(i\)-th pixel and of its 180deg symmetric counterpart with respect to the center of the central core/bulge as defined in Equation 1, and \(F_{tot}\) is the total flux of the galaxy. Since we worked on background subtracted images, we considered the background asymmetry to be negligible. This quantity describes how smoothly and how symmetrically the stellar mass is distributed around the central core/bulge of the galaxy and is also a-dimensional.
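Both indices can be illustrated with a short NumPy sketch (our own illustrative implementation, not the authors' code; the 180deg rotation below assumes the image has been cropped so that the core pixel sits at the array center):

```python
import numpy as np

def eccentricity_and_asymmetry(f444w, disk_center, r_disk):
    """Eccentricity (Eq. 1) and asymmetry (Eq. 2) of one galaxy image."""
    # Core center: pixel with the maximum F444W flux density.
    yc, xc = np.unravel_index(np.argmax(f444w), f444w.shape)
    xd, yd = disk_center  # barycenter of the masked rest-frame optical disk
    ecc = np.hypot(xc - xd, yc - yd) / r_disk

    # 180deg rotation; exact only if (yc, xc) is the array center, which a
    # prior crop around the core is assumed to guarantee here.
    rotated = np.rot90(f444w, 2)
    asym = np.abs(f444w - rotated).sum() / f444w.sum()
    return ecc, asym
```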
### Clumpiness

After identifying the core or bulge of each galaxy, we investigated the surrounding disk-like structures. Some of the galaxies have a smooth disk, others have a much more perturbed/complex disk morphology showing a large number of clumps (see Figs. 2.1, 2.2 and 2.3). We did not embark on a physical study of the clumps in this work. Our goal for this paper is to assess the presence or not of clumps in the disks and to have an idea of how fragmented the disks are. Hence, we did not try to derive any physical properties of the individual clumps. We decided to measure a clumpiness index, defined as the number of clumps in the disk of each galaxy. We counted the number of clumps visually identifiable in the RGB (F115W, F200W, F444W) image, making sure that the bulge/central concentration was not counted as a clump. This number varies from 0 up to 7 for the clumpiest galaxy. To be counted as a clump, the feature had to be compact compared to the galaxy size, and either had to have a different RGB color from the surroundings and/or appear as a locally brighter spot. The clumps appear most clearly at the shortest wavelengths (F115W or F200W filters), as expected (Wuyts et al. 2012). For ID15371, we identify 4 clumps; they are shown by the white ellipses in the left panel of Fig. 4.

### Spatially resolved photometry

To quantitatively study our galaxies, we needed photometry measurements. We decided to divide our galaxies into several components. For the simplest cases we only had the core/bulge and the disk, and when the disk had several clumps/patches with different colors in the RGB image, we broke it down into several circular or elliptical regions. Each region was designed so that it had, qualitatively, a homogeneous (F115W, F200W, F444W) color. The division of the disk is once again done by visual inspection. We emphasize that we seek to study each region that has a different color; hence, if several clumps are close and with a similar RGB color, we consider them to be part of the same disk component. Moreover, due to the spatial resolution of the PSF-matched images, we did not want to design too small regions that could lead to biased flux measurements. We tried to respect a balance between the size of the component we defined (not too close to the PSF size) and the homogeneity of the RGB color inside it. We emphasize that the components are not necessarily concentric, as most of the galaxies are not radially symmetric, and are not limited in number. If we observed, for example, two blue disconnected patches in a galaxy, we defined them as two different components and fitted them individually. In the case of ID15371, we divided the galaxy into three regions: the red central core/bulge, the bluer disk and an intermediate region that is still part of the disk but close to the red core and with intermediate colors (see Fig. 4). In terms of rest-frame colors, since our sample of galaxies is distributed across \(z\sim 1.5\) to \(z\sim 3\), F115W probes the rest-frame near-UV/blue (\(300-460\)nm), F200W probes the rest-frame green/red (\(500-800\)nm) and F444W probes the rest-frame near-IR (\(1110-1780\)nm). The scatter in rest-frame wavelength is less than or equal to the band-width of each filter. This means that we globally probed consistent colors between galaxies.
By dividing each galaxy into sub-galactic regions, there was a risk that small regions get close to the PSF FWHM of some filters, leading to an underestimation of the flux at the longest wavelengths and an artificial deformation of the SED. To avoid this, we decided to work on PSF-matched images using the broader PSF of the F444W filter. In Fig. 4, we show RGB images of the DSFG ID15371 using (F115W, F200W, F444W) before and after PSF-matching. To make sure that we didn't underestimate stellar masses and SFR when running the SED fitting, we chose regions larger than the PSF FWHM (\(0.145^{\prime\prime}\)). In Figs. 2.1, 2.2 and 2.3, for each galaxy we overlay the delimitation of the different components we decided to study separately based on their color (those RGB images are shown before PSF-matching).

After having defined the regions to study, we measured the flux in each band for each region. To do so, we summed the value of each pixel in each region of the science image. The pixels were counted only once, meaning that the flux in the smaller regions (like the red ellipse for ID15371) was not included when calculating the flux of larger regions (like the green ellipse for ID15371, see Fig. 4).

Figure 3: Cutouts of _HST_/ACS and _JWST_/NIRCam images of the DSFG ID15371 at \(z_{spec}=1.921\). Cutout size: \(3.6^{\prime\prime}\times 3.6^{\prime\prime}\). We also indicate the rest-frame wavelength corresponding to each filter (white label). The filled circle in the white box illustrates the PSF size for each filter.

Our goal was to fit the SED of the different components of each galaxy. For the properties that we later extracted from these SEDs to be reliable, it was crucial that we had reliable uncertainties on the fluxes. To estimate the flux uncertainties, we re-normalized the errors propagated via the Root Mean Square (RMS) images. The uncertainty was defined as:

\[df=f_{J,N}\times\sqrt{\sum_{i=0}^{N}\sigma_{i}^{2}}, \tag{3}\]

where the sum runs over all pixels in the region, \(\sigma_{i}\) is the RMS of pixel \(i\), and \(N\) is the total number of pixels in the region. We defined \(f_{J,N}\) as a normalisation factor that takes into account extra noise, e.g. from the correlated signal between pixels, which is particularly important for the long-wavelength filters that were drizzled from a pixel size of 63mas to 30mas. To calculate this factor, we measured the flux dispersion in \(\sim 20\) empty regions of the science image for several apertures in each band. We then compared this value to the RMS calculated from the RMS image in apertures of the same size, and the normalisation factor is defined as their ratio. To be conservative, we never applied a factor leading to lower uncertainties. These factors are generally small (\(f_{J,N}\sim 1.5\) at most).
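A compact sketch of this per-region photometry, assuming boolean pixel masks ordered from innermost to outermost; the normalisation factor \(f_{J,N}\) is passed in directly, since it is calibrated empirically on empty apertures as described above.

```python
import numpy as np

def region_fluxes(sci_image, ordered_masks):
    """Sum pixel values per region, counting each pixel only once: inner
    regions (e.g. the red core ellipse) take precedence over the larger
    regions (e.g. the surrounding disk) that enclose them."""
    claimed = np.zeros(sci_image.shape, dtype=bool)
    fluxes = {}
    for name, mask in ordered_masks:      # innermost region first
        exclusive = mask & ~claimed
        fluxes[name] = sci_image[exclusive].sum()
        claimed |= mask
    return fluxes

def flux_uncertainty(rms_image, mask, f_norm=1.0):
    """Eq. 3: df = f_JN * sqrt(sum_i sigma_i^2) over the region pixels,
    with f_JN >= 1 the empirical correlated-noise renormalization."""
    return max(f_norm, 1.0) * np.sqrt(np.sum(rms_image[mask] ** 2))
```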
### SED Fitting

To characterize our sample of galaxies, we needed access to their resolved \(M_{*}\) and SFR. To this aim, we fitted each galaxy component SED using the Code Investigating GALaxy Emission (CIGALE, Boquien et al. 2019). We used a single declining exponential model, also known as the "\(\tau\) model", to model the star formation history of each galaxy. We adopted the Bruzual & Charlot (2003) model for computing the spectral evolution of single stellar populations with a fixed solar metallicity of Z = 0.02, which is reasonable for \(M_{*}\sim 10^{10-12}M_{\odot}\) DSFGs following the Mass-Metallicity relation (Ma et al. 2016).

After testing with and without including nebular emission, we decided not to include it as, for our sample, it led to higher \(\chi^{2}\) with no noticeable effect on the extracted properties (\(A_{V}\), \(SFR\), \(M_{*}\) and redshift). Some galaxies showed possible signatures of strong emission lines, visible as green patches/clumps in Figs. 2.1, 2.2 and 2.3. However, including them had a negligible effect on the estimation of the SFR, since it usually had a \(\sim 50\%\) uncertainty. We discuss this in more detail in Sect. 5.1. We used a modified Charlot & Fall (2000) dust attenuation law and the Draine et al. (2007) dust emission models (updated in 2014) to predict FIR flux densities. The idea behind the modification of the Charlot & Fall (2000) model is that young stars embedded in their birth cloud suffer from additional attenuation compared to stars that have broken out and escaped into the ISM, and that the attenuation curves associated with the birth cloud and the ISM must be different. In practice, this is modelled by assuming two different power-law attenuation curves of the form \(A(\lambda)\propto\lambda^{\delta}\): one for the birth cloud with a slope of \(\delta_{BC}=-1.3\), and one for the ISM with a slope of \(\delta_{ISM}=-0.7\). Because radiation from young stars has to travel through both the birth cloud and the ISM to escape the galaxy, the spectra of stars younger than 10Myr are attenuated by both the birth cloud and ISM curves. Stars older than 10Myr are only attenuated by the ISM curve (Boquien et al. 2019).

For the redshift, we used the Stefanon et al. (2017) catalog, as well as the latest redshift catalog published by Kodra et al. (2022). We encountered three different cases:

* If we had a high-quality spectroscopic redshift, then we used it and fixed it. We have 5 galaxies with a spectroscopic redshift.
* If we had a grism-based redshift from 3D-HST, we downloaded the spectrum and examined its quality, the actual features detected and the redshift probability distribution, and defined the redshift and its uncertainty accordingly. We have 10 galaxies for which we find a high-quality grism-based redshift.
* If we only had photometric data, we allowed \((1+z)\) to vary within \(\pm 10\%\). We have 7 galaxies with a photometric redshift.

Figure 4: RGB (F115W, F200W, F444W) image of the galaxy ID15371 (\(3.6^{\prime\prime}\times 3.6^{\prime\prime}\)) at \(z_{spec}=1.921\) before (left panel) and after (right panel) PSF-matching. In the left panel, the white ellipses show the features we identified and counted as clumps. In the right panel, the colored dotted lines correspond to the division of the galaxy into homogeneously colored regions and the white filled circle to the PSF size.

In Fig. 5, we show the best SED models corresponding to each region of our example galaxy defined in Fig. 4. To be able to extract reliable information from the SED fit, it was crucial to check the fit quality. To be conservative and have reasonable \(\chi^{2}\), we decided to cap the photometric accuracy of each band at \(S/N=20\). However, if the CIGALE fit returns high \(\chi^{2}\) values, there is a possibility that the input flux uncertainties are still underestimated. In that case, we increased the uncertainties by adding 10% of the flux to the error in each band. To consider the fit acceptable, we require the reduced \(\chi^{2}\) to satisfy \(\chi^{2}_{red}\leq 1.67\), which is the critical value corresponding to a significance level of 10% in the \(\chi^{2}\) test for 8 degrees of freedom.
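The quoted acceptance threshold can be reproduced directly from the \(\chi^{2}\) distribution; a one-line check with SciPy:

```python
from scipy.stats import chi2

dof = 8                                         # degrees of freedom quoted in the text
chi2_red_crit = chi2.ppf(1 - 0.10, dof) / dof   # critical value at 10% significance
print(round(chi2_red_crit, 2))                  # -> 1.67
```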
Figure 5: Best SED models computed by CIGALE (Boquien et al. 2019) for the red core (in red), the blue disk (in blue) and the intermediate region (in green) of the DSFG ID15371 at \(z_{spec}=1.921\) (the same example galaxy shown in Figs. 3 and 4). We show in the legend the value of the reduced \(\chi^{2}\) (\(\chi^{2}_{red}\)) for each SED fit.

To estimate the robustness of the best model, we studied the \(\chi^{2}\) distributions associated with the 3 main free input parameters: the dust attenuation, the age of the stellar population and the e-folding time. In Fig. 6, the upper-left panel shows the \(\chi^{2}\) distribution associated with the different values of the dust V-band attenuation \(A_{V}\) of the stellar continuum used to fit the SED of the red core of the DSFG ID15371. The upper-right panel shows the same information for \(t/\tau\), with \(t\) and \(\tau\) being the age of the oldest stars and the e-folding time of the stellar population used to define the star formation history of the galaxy. Taking the width of these distributions at \(\chi^{2}_{min}+1\) and \(\chi^{2}_{min}+2.7\) gives us the 68% and 95% confidence intervals respectively (Avni 1976), illustrated by the horizontal thick and thin dashed lines in Fig. 6. The fact that we see only a portion of the distribution for \(t/\tau\) comes from the fact that the age is getting close to the age of the Universe; allowing larger \(t\) would not make physical sense. We can use the same reasoning for the properties extracted from the SED, like the \(M_{*}\) or the SFR averaged over the last 10Myr. We show an example in the lower panels of Fig. 6. Just by looking at Fig. 6, we can already conclude that the red core of the DSFG ID15371 is dusty (\(A_{V}\sim 2.73\)) and weakly star-forming (SFR \(\sim\) 18 - 40 \(M_{\odot}/yr\) and \(t/\tau\gg 1\)).
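A sketch of how such confidence intervals can be read off a \(\chi^{2}\) profile (the Avni 1976 prescription used above); `param_grid` and `chi2_grid` are hypothetical arrays of tested parameter values and their best-fit \(\chi^{2}\):

```python
import numpy as np

def avni_interval(param_grid, chi2_grid, delta=1.0):
    """Interval of parameter values where chi^2 <= chi2_min + delta
    (Avni 1976): delta = 1.0 gives the 68% and delta = 2.7 the 95%
    confidence interval for one parameter of interest."""
    inside = chi2_grid <= chi2_grid.min() + delta
    return param_grid[inside].min(), param_grid[inside].max()
```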
As a sanity check, we estimated the SED of the whole galaxy by summing up the SEDs of all the components. We then compared this SED with the near-IR and FIR flux densities measured in the super-deblended catalog (Stefanon et al. 2017; Le Bail et al., in preparation) to make sure that they were consistent. If the FIR flux densities are brighter than predicted by the SED fitting, it can be a hint that this galaxy is in a starburst episode and/or that there is a deeply attenuated component that is not visible even at 4.44\(\mu\)m. It can also be due to the presence of an AGN that boosts the FIR flux; this can be confirmed by a radio excess or an X-ray detection (Le Bail et al., in preparation; Stefanon et al. 2017). We recall that we removed from the sample only one galaxy where we knew that the FIR luminosity was dominated by the AGN luminosity (see Sect. 2.3), but kept those where the AGN luminosity didn't dominate the FIR luminosity. On the contrary, if the SED predicts a FIR flux density brighter than the one measured, it means that there is a problem in the fitting, possibly linked to the grid of the input parameters. In Fig. 7, we show the comparison between the total SED of the galaxy ID15371 and the FIR flux densities. For this galaxy, the flux densities are consistent with the predicted FIR SED, meaning that there is no hidden component. This is the case for all the galaxies in our sample except one (ID13107, for which we have a FIR detection brighter than the SED model, pointing toward either a deeply attenuated component or an AGN, even though there is no AGN signature in X-ray or radio). However, for 3 galaxies (ID13098, ID13776 and ID31281), the measured 100\(\mu\)m flux is boosted compared to the SED-predicted flux, possibly a signature of a hot AGN; 2 of them have an X-ray detected AGN (Nandra et al. 2015).

By observing Fig. 7, one can notice that the predicted IRAC fluxes are fainter than the actual measurements. This observation is not true for every galaxy: for this galaxy, the fluxes we measure in the NIRCam F356W and F444W bands, which probe the same wavelengths as IRAC channels 1 and 2, are fainter than the IRAC measurements. This is mostly a sign of blending in the earlier IRAC imaging.

Figure 6: \(\chi^{2}\) distributions associated with the dust attenuation (upper-left panel), \(t/\tau\) (upper-right panel), the stellar mass (lower-left panel) and the SFR averaged over the last 10Myr (lower-right panel) produced by CIGALE (Boquien et al. 2019) for the fit of the red core SED of the galaxy ID15371 at \(z_{spec}=1.921\). The thick and thin black dashed lines correspond to the 68% and 95% confidence intervals respectively.

Figure 7: Total SED of the galaxy ID15371 at \(z_{spec}=1.921\) in red. It was calculated by adding up the CIGALE best SED model of each component. The black points are the near-IR and FIR fluxes with their uncertainties or upper limits (arrows) from the super-deblended catalog (Le Bail et al., in preparation). From the FIR data points, we have \(SFR_{IR}=(150\pm 15)\)M\({}_{\odot}\) yr\({}^{-1}\) (Le Bail et al., in preparation); the CIGALE fits give a consistent total \(SFR_{SED}=(197\pm 125)\)M\({}_{\odot}\) yr\({}^{-1}\). Given its stellar mass (\(M_{*}\sim 10^{11}\)M\({}_{\odot}\)), this galaxy is on the main sequence.

A caveat of this SED fitting method is that we used the same SFH and parameters for all regions, some with very different properties. We chose to use the simple \(\tau\) model because of the meaning of \(t/\tau\) regarding the star-forming activity of the galaxy. We decided to make a two-pass SED fitting: in the first pass, the goal was to separate the star-forming from the quiescent regions. In the second pass, we fitted the star-forming regions with a nearly constant SFR (by imposing \(\tau\gg t\)). This allowed us to have a good estimate of the recent SFR. Moreover, by comparing it to the far-IR SFR from the super-deblended catalog (Le Bail et al., in preparation) and to the relative position of each component with respect to the Main Sequence or the quiescent quadrant of the UVJ color-color diagram (see Sect. 3.7 for more detail on these last two points), we had a confirmation of the star-forming activity of each galaxy component. For the quiescent regions, there can be a degeneracy between the age and the dust attenuation; to tackle this, we imposed \(t\gg\tau\). We estimated that the good quality of the photometry in the rest-frame near-IR and the two-pass SED fitting procedure allowed us to get robust estimates of both the stellar mass and the SFR of each component. To tackle the great diversity of galaxies, we decided to divide them into several classes, as defined in the next Section.

### Classification

From the CIGALE SED fitting, we derived an estimation of the \(M_{*}\), the SFR and the dust attenuation (\(A_{V}\)) of each component of the galaxies. For the galaxy ID15371, in the upper panel of Fig. 8, one can see the three components' respective \(M_{*}\) and SFR plotted on the MS (Schreiber et al. 2015; Huang et al. 2023).
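For reference, a sketch of the MS locus used for this comparison, assuming the Schreiber et al. (2015) parametrization (their Eq. 9) with the best-fit coefficients quoted in that paper:

```python
import numpy as np

def log_sfr_ms(log_mstar, z):
    """Main-sequence log10(SFR / [Msun/yr]) at stellar mass 10**log_mstar
    and redshift z, assuming Schreiber et al. (2015), Eq. 9."""
    m, r = log_mstar - 9.0, np.log10(1.0 + z)   # m = log10(M*/1e9 Msun)
    m0, a0, a1, m1, a2 = 0.50, 1.50, 0.30, 0.36, 2.5
    return m - m0 + a0 * r - a1 * max(0.0, m - m1 - a2 * r) ** 2

# ID15371-like values: z = 1.921 and the sample mean <M*> = 10^10.92 Msun
print(f"{10 ** log_sfr_ms(10.92, 1.921):.0f} Msun/yr")  # ~120, so SFR_IR ~ 150 sits on the MS
```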
All the components of the DSFG ID15371 have some ongoing star-formation, with the red core being on its way to quenching but still slowly star-forming. Using the best SED models provided by CIGALE, we also estimated the rest-frame U, V and J AB magnitudes. We used the Maíz Apellániz (2006) U and V filters, and for the J band, we used the 2MASS J relative spectral response curve. In Fig. 9, we display all the regions of our galaxies on the (\(V-J\), \(U-V\)) plane. We recover the sSFR effect: when moving from the lower right corner to the upper left corner, the sSFR decreases (Wang et al. 2017). This makes the UVJ color-color diagram ideal to separate SF galaxies from quiescent galaxies. We note that the galaxies with sSFR \(\lesssim 0.1\)Gyr\({}^{-1}\) are all in the quiescent region defined by Whitaker et al. (2011) and delimited by the black dashed line in the Figure. The colored dotted lines delimit the regions defined by Zick et al. (2018). For the DSFG ID15371, we have confirmation in the UVJ diagram that all the components are star-forming (lower panel of Fig. 8). Moreover, the three components are aligned on the diagonal of the diagram, which is the signature of a gradient of dust attenuation from the center towards the outer parts (Calzetti et al. 2000). Indeed, from the SED fitting, we had \(A_{V,red}=2.70\pm 0.11>A_{V,green}=2.09\pm 0.23>A_{V,blue}=0.75\pm 0.11\).

Generally, to estimate whether a region was SF or quiescent, we used the UVJ color-color diagram (is the component in the quiescent quadrant or not?), the position relative to the MS (is the component on/above the MS or well below it? what is its position compared to the other regions of the same galaxy?) and, as we used a simple exponential model for the star-formation history, the value of \(t/\tau\), which is also an indicator of the star-formation activity. If \(t/\tau\gg 1\), then the peak of star-formation is firmly in the past, and the component is on its way to quenching. On the contrary, if \(t/\tau\lesssim 1\), the galaxy is still actively star-forming. Based on these three pieces of information, we were able to discriminate between SF and quiescent regions. Here, we defined a region as quiescent if it lies below the Main Sequence and the other galaxy components (by \(\sim 0.6\)dex). Hence, some regions that we classified as quiescent are not completely passively evolving and could still be slowly star-forming. Most of the time, the three indicators are in agreement; however, in some cases the results were ambiguous. The regions where all three indicators were not in agreement represent less than 5% of all the studied regions. In those cases, we first looked at their position relative to the MS to see if it was consistent with the \(t/\tau\) values from the best models, and it always was. The inconsistency of the UVJ color-color diagram can be explained in several ways: the UVJ diagram uses only a part of the information (3 rest-frame bands), contrary to the other probes that use the full SED. More importantly, real situations exist where the UVJ diagram correctly characterizes the presence of star formation, but this star formation is suppressed, as exemplified by the sub-MS location (suppressed with respect to the ensemble average given the mass) and by the \(t/\tau\) (suppressed with respect to the past star formation history of this galaxy). That is the case for the central region of ID15371 (see Figs. 6 and 8).
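The decision rule described here can be summarized in code (a sketch; the \(t/\tau>3\) cut standing in for \(t/\tau\gg 1\) is our own illustrative threshold, not a value quoted in the text):

```python
def classify_region(below_ms_dex, t_over_tau, uvj_quiescent):
    """Combine the three indicators: offset below the MS (and the other
    components of the same galaxy), t/tau from the best SED model, and
    the UVJ quadrant. MS offset and t/tau dominate; UVJ breaks ties."""
    ms_vote = below_ms_dex >= 0.6      # ~0.6 dex below the MS -> quiescent
    tau_vote = t_over_tau > 3.0        # illustrative stand-in for t/tau >> 1
    if ms_vote == tau_vote:
        return "quiescent" if ms_vote else "star-forming"
    return "quiescent" if uvj_quiescent else "star-forming"
```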
In the rare cases where the \(t/\tau\) value didn't allow any conclusion (\(t/\tau\sim 1\)), we decided based on the MS and the UVJ color-color diagram, which were consistently pointing either toward star-formation or quiescence. In all the cases, we were able to classify the regions as quiescent or star-forming.

As a result of this analysis, we had 22 vastly different galaxies with various morphologies, colors (see Figs. 2.1, 2.2 and 2.3) and star-formation activity. We found that the variety of features could be meaningfully grouped into three galaxy classes:

* Type I: _SF disks with a red SF core_, characterized by the fact that all their regions are SF. Some have a complex multi-color clumpy disk morphology in the RGB (F115W, F200W, F444W) image. They all have a dust-attenuated red SF core.
* Type II: _Quenched disks with a SF core_, characterized by a dust-attenuated red SF core and a quenched disk (in one case, partially quenched).
* Type III: _SF disks with a quenched bulge_, characterized by a quenched central bulge while the disk is still star-forming. These are similar to local spirals.

Figure 8: Upper panel: Galaxy ID15371 components plotted over the Main Sequence (Schreiber et al. 2015; Huang et al. 2023). Lower panel: Galaxy ID15371 components in the UVJ color-color diagram; the black dotted line delimits the quiescent region (Whitaker et al. 2011) and the colored ones, the regions defined by Zick et al. (2018). See Fig. 4 for the definition of the components.

For the disks with several components, they usually were all either SF or quiescent. There was only one galaxy (ID18278) where only a fraction of the disk was quiescent (green region in Fig. 2.2); we decided to include it in the Type II class, as the quiescent part encompasses 16% of the disk stellar mass and could be considered as an early stage of quenching. 4 galaxies host X-ray AGNs that do not dominate the FIR emission: 1 is a Type I galaxy (ID30186), 2 are Type II galaxies (ID13098 and ID13776) and the last one is a Type III galaxy (ID23205) (Nandra et al. 2015). After having classified our sample of 22 galaxies, we had 10 Type I galaxies, 5 Type II and 7 Type III. The RGB cutouts of our sample are separated following the three Types, with Figs. 2.1, 2.2 and 2.3 showing the Type I, II and III galaxies respectively. This is summarized in the top panel of Fig. 10, where each wedge size is proportional to the number of galaxies of the considered Type. We illustrate each Type with a pictogram, the color red representing quiescent regions and the color blue representing star-forming regions. The color of each wedge is linked to the Type; in all Figures in the rest of this paper, red markers will represent Type I galaxies, green markers Type II and blue markers Type III.

The lower panels of Fig. 10 summarize the properties of each Type by looking at the connection between the sSFR, \(A_{V}\) and color (in AB mag) gradients. The first observation is that cores/bulges are systematically redder than disks, and there is a strong correlation between the \(A_{V}\) gradient and the color gradient (Pearson coefficient = 0.83, p-value = 2e-6), while there is no correlation between the sSFR gradient and the color gradient (Pearson coefficient = 0.27, p-value = 0.23). This means that the color differences that we observe in Figs. 2.1, 2.2 and 2.3 trace dust density inhomogeneities and not older/younger stellar populations.
The Type I galaxies (in red) do not have a noticeable sSFR gradient (\(sSFR_{core}\sim 1.2\times sSFR_{disk}\)), but have a strong \(A_{V}\) gradient; hence, the fact that the cores of Type I galaxies appear much redder than the disks in Fig. 2.1 is due to their high dust density, and the blue regions are low-\(A_{V}\) regions. For the Type II galaxies, we observe the sSFR gradient we expected: the core is star-forming while the disk is quenched (\(sSFR_{core}\sim 6.5\times sSFR_{disk}\)). They have the strongest dust gradient because of their highly dust-attenuated core and their quenched disk, which has a low level of dust attenuation. We note that the sSFR gradient should make the core appear bluer than the disk (because of the younger stellar population in the core); however, we observe the exact opposite. The color gradients we observe in Fig. 2.2 are dominated by the dust attenuation gradient. Finally, Type III galaxies have low attenuation both in their quenched bulge and their star-forming disk, hence a weak \(A_{V}\) gradient. Their sSFR gradient is however strong and, as expected, of the opposite sign compared to Type II (quenched bulge and star-forming disk, \(sSFR_{core}\sim 0.2\times sSFR_{disk}\)). In Fig. 2.3 the color gradients mostly trace the age difference between the stellar populations of the (redder) bulge and the (bluer) disk. We note that the strong gradients we observe, both in sSFR and \(A_{V}\), justify the need to divide our galaxies into three Types, to illustrate the three possible sSFR gradients, and into several sub-galactic regions, because of the huge dust gradient. Moreover, as expected from the selection criteria detailed in Sect. 2.3, we did not have any fully quiescent galaxy in our sample.

## 4 Results

In this section, we present the results of the analysis of the 22 galaxies in our sample, distinguishing among the three classes we just defined in the previous Section. We first looked at the properties of the whole galaxies in Sect. 4.1 and then at the resolved properties at a sub-galactic level in Sect. 4.2. In Table 1, we give the main properties of our sample of 22 galaxies. In the following, we compared the behaviour of the different Types of galaxies. To assess the significance of the trends, we compared the difference between the means of a property for each Type with the error on the mean. We emphasize that we also checked the median values and that this does not affect the observed trends. In the Figures, each star-shaped marker is the mean and the error bar is the error on the mean (defined as \(err_{mean}=rms/\sqrt{N}\), with \(rms\) the root mean square of the distribution and \(N\) the number of galaxies in each Type).

### General properties

#### 4.1.1 Main Sequence galaxies

To characterize the different Types of galaxies, we first looked at their typical redshift, \(M_{*}\) and \(sSFR_{IR}\). The redshifts and \(M_{*}\) were extracted from the SED fitting procedure described in Sect. 3.6, while the \(sSFR_{IR}\) was computed by dividing the \(SFR_{IR}\) of each galaxy by the sum of the \(M_{*}\) of each component, with the \(SFR_{IR}\) taken from the super-deblended catalog (Le Bail et al., in preparation). In Fig. 11, a redshift trend appears: the Type I galaxies, with their SF core and SF disk, are on average at higher redshift (\(z=2.32\pm 0.15\)) than the Type II galaxies, with their SF core and quiescent disk (\(z=1.94\pm 0.11\)), which are themselves at a slightly higher redshift than the Type III galaxies (\(z=1.80\pm 0.09\)), analogs of the spiral galaxies we observe in the local universe with a quiescent bulge within a SF disk.

Figure 9: UVJ color-color diagram. Each marker represents a region of a galaxy. The star-shaped markers are cores/bulges and circular markers are disk components. The color of the marker depends on the sSFR of the considered region. The error bar in the lower right corner shows the average uncertainty. The black dotted line delimits the quiescent region (Whitaker et al. 2011) and the colored ones, the regions defined by Zick et al. (2018). The orange region corresponds to _Post-starbursts_, the yellow one to _Quiescent_, the green one to _low sSFR_ DSFGs, the magenta one to _DSFG_ and the blue one to _Star-forming galaxies_.
The difference in redshift is \(2\sigma\) between Types I and II and \(3\sigma\) between Types I and III. This suggests that this redshift trend is real and opens the possibility of an evolutionary link between class I and II/III. All of our galaxies have \(M_{*}>10^{10}\)M\({}_{\odot}\), with an average of \(M_{*}=8.2^{+2.2}_{-1.7}\times 10^{10}\)M\({}_{\odot}\) (left panel of Fig. 11). There is no correlation between the Types and the \(M_{*}\); all Types have a similar average \(M_{*}\). By comparing the \(sSFR_{IR}\) of our galaxies with the MS of Schreiber et al. (2015) (right panel of Fig. 11), we confirmed that these are typically MS galaxies, consistently with Fig. 1 and Sect. 2.3. The MS sSFR at a fixed redshift was calculated by taking the mean \(M_{*}\) of our sample, which is \(<M_{*}>\) = \(10^{10.92}\)M\({}_{\odot}\). Moreover, the typical \(sSFR_{IR}\) is observed to decrease at lower redshift, as expected from the cosmic trend. The Type III galaxies, which have a quenched bulge, have the weakest \(sSFR_{IR}\) on average (\(sSFR_{IR}=0.75^{+0.13}_{-0.14}\)Gyr\({}^{-1}\) for quenched bulges versus \(sSFR_{IR}=2.01^{+0.81}_{-0.36}\)Gyr\({}^{-1}\) for SF cores). They also are at lower redshift than the others. This suggests that they are more evolved than the other classes.

#### 4.1.2 Galaxy near-IR sizes

The presence of highly obscured cores at the center of galaxies, as for ID15371 (see Fig. 3), suggests that we may be studying the counterparts of the compact SF SMGs seen with _ALMA_. Indeed, SMGs are known to be compact, dust-obscured and with a high star formation efficiency. The galaxies hosting a SF region at their center (Types I and II) tend to be slightly more compact in the near-IR, with \(R_{e,NIR}=2.34\pm 0.37\)kpc, than the galaxies with a quenched bulge (Type III), with \(R_{e,NIR}=2.93\pm 0.42\)kpc (this is tentative, as there is only a \(1\sigma\) difference, see Fig. 12). The Type II galaxies with their quiescent disk are on average the most compact galaxies in the near-IR, with a typical size of \(2.19\pm 0.30\)kpc. In Fig. 12, we compare the \(R_{e,NIR}\) to the \(M_{*}-R_{e}\) relation from van der Wel et al. (2014) based on rest-frame optical measurements. Most of our 22 galaxies are more compact in the near-IR than in the optical, with \(\sim 40\%\) being below the \(M_{*}-R_{e}\) relation scatter. We also checked that the optical sizes of our galaxies are compatible with the \(M_{*}-R_{e}\) relation. This demonstrates that in our galaxies, the dust, traced by the near-IR emission, is more concentrated than the stellar light, traced by the optical emission.
This is a confirmation of an already well-established fact (van der Wel et al., 2023; Gomez-Guijarro et al., 2022b; Jimenez-Andrade et al., 2021; Puglisi et al., 2019; Jimenez-Andrade et al., 2019; Fujimoto et al., 2017). However, we note that the Type I galaxies have very comparable optical and near-IR sizes (\(\sim 15\%\) difference in size on average); their star-forming core is not as concentrated as for the other galaxies of the sample. We discuss in Sect. 5.4 how the Type I and II galaxies might relate to the _ALMA_ SMGs. However, studying the half-light radius is not enough, as a large fraction of the galaxies in our sample are not symmetric (see Figs. 2.1, 2.2 and 2.3).

Figure 11: Left panel: \(M_{*}\) versus redshift. Right panel: \(sSFR_{IR}\) versus redshift; the yellow shaded region is the MS (Leslie et al., 2020; Huang et al., 2023). We show the error bars for individual galaxies; when not visible, they are smaller than the marker. Circular markers are individual galaxies, star markers are the mean value for each Type of galaxy with their associated error bar indicating the error of the mean.

Figure 10: Upper panel: Distribution of the galaxy sample in the different groups based on their resolved SF activity. Each region size is proportional to the number of galaxies in the group, written in white. The pictograms illustrate the properties of each Type, the blue and red colors representing SF and quiescent regions respectively. We link each Type of galaxy to a color as defined by the wedge colors. Middle panel: \(A_{V}\) gradient vs color gradient (in AB mag); red, green and blue markers are linked to the Types defined in the top panel. Lower panel: sSFR gradient vs color gradient (in AB mag); red, green and blue markers are linked to the same Types as in the middle panel. Circular markers are individual galaxies, star markers are the mean value for each Type of galaxy with their associated error bar indicating the error of the mean.

#### 4.1.3 Widespread Lopsidedness

As one can see in Figs. 2.1, 2.2 and 2.3, some galaxies are strongly lopsided (marked with a '⟨D⟩'). They are asymmetric and/or their red central region is off-centered with respect to the disk. This lopsidedness appears to be quite common among Type I and II galaxies. In Figs. 2.1, 2.2 and 2.3, the marked galaxies are the 6 most lopsided galaxies: 3 are Type I (30% of that Type) and 3 are Type II (60% of that Type). The Type III galaxies look much more symmetric; these galaxies have a quenched bulge and are on average at lower redshift, so they presumably had more time to evolve and stabilize their disk. To verify this, we investigated the lopsidedness of each galaxy. As explained in Sect. 3.3, for each galaxy we calculated its asymmetry (\(A\)) and eccentricity (\(E\)). Type III galaxies appear to be much less lopsided: they have a low eccentricity (\(9.8\pm 2.5\%\)) and asymmetry (\(22.8\pm 3.0\%\)), while Type I and II galaxies, which show comparable lopsidedness, tend to be much more asymmetric (\(33.0\pm 3.5\%\)) and off-centered (\(30.3\pm 4.0\%\)) (see upper panel of Fig. 13). The difference has a \(4.3\sigma\) and \(2.2\sigma\) significance for the eccentricity and asymmetry respectively. In the upper panel of Fig. 13, we show the eccentricity vs the asymmetry. We considered the Type III galaxies as not lopsided, and used their typical eccentricity and asymmetry as a proxy for measurement errors and systematic effects.
The thin black dotted line shows the threshold to define a galaxy as weakly lopsided (\(A+E>0.37\); this value corresponds to the average \([A+E]\) plus \(1\sigma\) of Type III galaxies). We have 14 galaxies that are at least weakly lopsided, representing 64% of the sample. If the galaxies are above the thick black dashed line, meaning that \(A+E>0.70\) (this value corresponds to twice the average \([A+E]\) plus \(1\sigma\) of Type III galaxies), we consider them as strongly lopsided; we encircled them in Fig. 13 and they are visible in Figs. 2.1, 2.2 and 2.3 with a '⟨D⟩'. We have 6 strongly lopsided galaxies, representing 27% of the sample. Usually, a strong asymmetry is linked to a strong eccentricity; however, we have galaxies with a low level of asymmetry but with a highly off-centered disk. All the strongly lopsided galaxies (circled in black) have high eccentricity. In other words, we observe a lack of strong asymmetry with low eccentricity. The position of the average lopsidedness of Type I and II galaxies in Fig. 13 indicates that being lopsided might be a typical property of these galaxies. In the lower panels of Fig. 13, Type III galaxies, which are more evolved and have a quiescent bulge, have a low level of asymmetry. On the contrary, Type I and II galaxies have a higher level of asymmetry. We observe (1) a lack of galaxies with a compact disk and high asymmetry and vice-versa, (2) a lack of galaxies with a high core mass fraction and high asymmetry and vice-versa, and (3) that the galaxies with a high-mass-fraction quiescent bulge have low asymmetry. This is consistent with the observation of galaxies in the local universe: present-day late-type galaxies with more extended disks and lower central stellar mass density are typically more lopsided than early-type galaxies with smaller disks and higher central stellar mass density (Dolfi et al., 2023; Varela-Lavin et al., 2023). It seems that as the core grows in mass from accretion, the disk gets smaller and loses its lopsidedness, leading to Type III spiral-like galaxies.

Figure 12: Optical (\(R_{e,O}\), left panel) and near-IR (\(R_{e,NIR}\), right panel) half-light radius, measured in the closest band to 550nm and 1.6\(\mu\)m rest-frame respectively, versus the total \(M_{*}\) of the galaxy. Circular markers are individual galaxies, star markers are the mean value for each Type of galaxy with their associated error bar indicating the error of the mean. The yellow and orange shaded regions illustrate the Mass-Size relation derived by van der Wel et al. (2014) at the redshift of our sample.

Figure 13: Eccentricity and Asymmetry. Upper panel: Eccentricity versus Asymmetry; markers with a black circle are the strongly lopsided galaxies (see Figs. 2.1, 2.2 and 2.3), the thin black dotted line delimits weakly lopsided galaxies, the thick black dashed line delimits strongly lopsided galaxies. Lower left panel: Asymmetry versus Disk half-light radius as defined in Sect. 3.3. Lower right panel: Asymmetry versus mass fraction in the core/bulge of the galaxy; the asymmetry is calculated using the F444W filter. Circular markers are individual galaxies, star markers are the mean value for each Type of galaxy with their associated error bar indicating the error of the mean.
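In code form, the classification reads (thresholds as defined above):

```python
def lopsidedness_class(asymmetry, eccentricity):
    """A + E > 0.37 -> weakly lopsided (Type III mean [A+E] + 1 sigma);
    A + E > 0.70 -> strongly lopsided (twice the Type III mean + 1 sigma)."""
    total = asymmetry + eccentricity
    if total > 0.70:
        return "strongly lopsided"
    if total > 0.37:
        return "weakly lopsided"
    return "not lopsided"
```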
Thanks to the spatial resolution of _JWST_, we had access to sub-galactic scales, which is crucial to understand the morphology and evolution of DSFGs.

### Resolved properties

For each galaxy, each component has been classified either as star-forming or quiescent (see Sect. 3.7). In Fig. 14, we show that the quiescent regions are massive (\(M_{*}\gtrsim 10^{10}M_{\odot}\)) and have a relatively low dust attenuation, with an average of \(A_{V}\sim 1.6\) and a maximum at \(A_{V}\sim 3\), while SF regions have an average of \(A_{V}\sim 2.3\) and a maximum at \(A_{V}\sim 5.4\). The SF regions follow a correlation (with a Pearson coefficient of 0.62, p-value = 9e-8): the more massive components are more attenuated. This is consistent with the idea that the stellar mass is the main driver of dust attenuation in SF galaxies (Lorenz et al. 2023). In the following Sections, we present the results regarding the cores/bulges and disks of our galaxies.

#### 4.2.1 Cores and bulges properties

We first looked at the red central region of each galaxy, as defined in Figs. 2.1, 2.2 and 2.3. In the left panel of Fig. 15, we show the dust attenuation versus the \(M_{*}\) of the red star-forming cores (in red and green) and quiescent bulges (in blue). As mentioned above, the dust attenuation of SF cores (Types I and II) correlates with their \(M_{*}\): the more massive the core, the more dust attenuated (with a Pearson coefficient of 0.75, p-value = 0.001). Also, the bulges are less attenuated than the SF cores, consistent with the fact that they are quiescent and host an evolved stellar population where the dust might have been consumed/destroyed. Figure 15 also shows a trend in redshift. On average, the bulges are slightly more massive (\(M_{*}^{B}\)) than the SF cores (\(M_{*}^{C}\)), but with only a \(1.5\sigma\) significance. The SF cores of Type II galaxies (\(M_{*}^{II}\)) and those of Type I galaxies (\(M_{*}^{I}\)) are consistent within errors:

\[M_{*}^{B}=3.75^{+1.04}_{-0.81}\times 10^{10}M_{\odot}\gtrsim M_{*}^{C}=1.81^{+1.19}_{-0.65}\times 10^{10}M_{\odot} \tag{4}\]

\[M_{*}^{II}=2.60^{+2.19}_{-1.19}\times 10^{10}M_{\odot}\approx M_{*}^{I}=1.26^{+0.92}_{-0.53}\times 10^{10}M_{\odot} \tag{5}\]

The weak trend between the \(M_{*}\) of higher-\(z\) SF cores and lower-\(z\) bulges is consistent with the idea of a bulge that grows in mass with time, fed by accretion from the disk, clump migration or minor/major mergers. We compared the \(M_{*}\) and SFR fractions of the red cores and bulges with respect to the host galaxy (right panel of Fig. 15). For Type I galaxies, the red core \(M_{*}\) represents only \(21.6\pm 4.0\%\) of the \(M_{*}\) of the galaxy. This fraction is smaller than for the other galaxies of the sample, where the red core represents \(34.4\pm 6.2\%\) for Type II (\(\sim 2\sigma\) difference) and \(35.9\pm 3.6\%\) for Type III (\(\sim 3\sigma\) difference) of the total \(M_{*}\). This can be linked to the redshift trend: the Type I galaxies being at higher redshift, their core could still be at an early stage of growth. It also explains their low \(R_{e,NIR}/R_{e,O}=0.89\pm 0.14\), as their \(M_{*}\) is much less concentrated in the central region than for the other two Types. As expected from the definition of our Types, the Type II galaxies have a red core with a SFR fraction (\(64\pm 18\%\)) significantly greater than the \(M_{*}\) fraction (\(34.4\pm 6.2\%\)), since the disk is mostly quenched, while the Type III galaxies have a red bulge \(M_{*}\) fraction (\(35.9\pm 3.6\%\)) significantly more important than the SFR fraction (\(9.8\pm 3.4\%\)), as the bulge is quenched. Some of these cores/bulges appear to be compact; we investigate them further in the next Section.
#### 4.2.2 Compact cores and bulges

All of our galaxies have a central core/bulge appearing in the near-IR (filter F410M or F444W). For some galaxies, the core/bulge has a clear clump-like morphology, is much brighter than the surroundings and is clearly delimited (e.g. ID13776 and ID23205 in Fig. 16). We identified compact cores in 17 galaxies out of 22: 6 Type I galaxies have a compact core (60% of that Type), 4 Type II (80% of that Type), and all 7 Type III galaxies of our sample have a compact bulge. We decided to investigate these compact cores/bulges further by dividing them into two categories: the SF cores (from Types I and II) and the quiescent bulges (from Type III). To do so, we measured the half-light radius of the compact cores and bulges, defined as the radius of a circular aperture encompassing half of the flux of the core; we applied a technique similar to that described in Sect. 3.1.

Figure 14: \(A_{V}\) versus \(M_{*}\) for each region identified in Figs. 2.1, 2.2 and 2.3. Quiescent regions are in orange and star-forming regions in blue. At fixed stellar mass, star-forming regions are more attenuated, on average.

Figure 15: Red regions properties. Left panel: Dust attenuation versus Stellar Mass. Right panel: SFR fraction versus \(M_{*}\) fraction in the red region. The black dotted line is the 1:1 correlation. Circular markers are individual galaxies, star markers are the mean value for each Type of galaxy with their associated error bar indicating the error of the mean.

The SF cores tend to be slightly more compact than the quiescent bulges (\(0.76\pm 0.03\)kpc vs \(0.84\pm 0.04\)kpc, with a \(\sim 1.5\sigma\) significance, see Fig. 17). The markers with a black circle are the compact cores with an X-ray detection, possibly tracing an AGN. 3 of them are found in SF cores, and 2 are in the most massive galaxies with the largest SF cores. Even if the definition of the compact core is somewhat arbitrary, and there could be some level of contamination from the disk, this goes in the same direction as the Cochrane et al. (2023) simulations. They found that without AGN feedback, the SF core would undergo a compaction event, while the presence of AGN winds would prevent such compaction by evacuating the gas, precipitating the quenching of the core. We also note that the quiescent bulges tend to be larger in more massive galaxies. Ikarashi et al. (2017) found that the most compact cores of SMGs are those where there is both star formation and an AGN. This is not what we observe for two of the SF cores hosting an X-ray AGN (shown with the encircled markers in Fig. 17); it is possible that in these galaxies the AGN has strong feedback and the system is quite evolved and ready to quench. The third SF core hosting an X-ray AGN is however compact, and the presence of the AGN could facilitate this compaction. The sizes of the SF compact cores are compatible with those measured in the sub-mm (see Sect. 5.4 for more details).
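A curve-of-growth sketch of this half-light radius measurement (on background-subtracted cutouts; the 0.5-pixel step and the outer radius `r_max` are our own illustrative choices):

```python
import numpy as np

def half_light_radius(image, center, r_max):
    """Radius of the circular aperture centered on `center` (y, x) that
    encloses half of the flux found within r_max pixels."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    total = image[r <= r_max].sum()
    for radius in np.arange(0.5, r_max + 0.5, 0.5):
        if image[r <= radius].sum() >= 0.5 * total:
            return radius
    return float(r_max)
```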
After analyzing the cores of our galaxies, we decided to investigate their differences with respect to the disk, especially the reasons for the redness of the core compared to the surroundings.

#### 4.2.3 NIRCam color variations within the disks

In Sect. 3.7, we showed that the main driver of the color gradient between the cores and disks is the dust attenuation. When looking at Figs. 2.1, 2.2 and 2.3, we noticed that some disks are also highly inhomogeneous in terms of color. To investigate the physical processes responsible for the color variations we observed in the disks, we compared the color variations with the dust attenuation and sSFR variations, in a similar way as we did in Sect. 3.7 when we investigated the gradients between the cores and the disks. When measuring the variations, we always measured the differences between a redder part of the disk and a bluer part (in other words, \(\Delta(F115W-F444W)>0\) in AB mag). We compared all the components of the disks, meaning that if a disk was divided into 3 patches, there are 3 markers in Fig. 18, comparing the first and second, second and third, and first and third components respectively. In the two upper panels of Fig. 18, we first clearly identify a correlation between the color variations and the dust extinction variations (Pearson coefficient = 0.78, p-value = 6e-11), consistent with the expectation that the redder regions are those with the greatest \(A_{V}\) (Calzetti et al., 2000). However, we do not identify any correlation between color variations and sSFR variations (Pearson coefficient = 0.16, p-value = 0.29). Some color variations are even inconsistent: when \(sSFR_{redder}>sSFR_{bluer}\), we are comparing a red patch hosting a younger stellar population (more star-forming) with a bluer patch hosting an older stellar population (less star-forming), so the colors should be the other way around. These two observations demonstrate that the color variations we observe within the disks in Figs. 2.1, 2.2 and 2.3 are driven by dust. NIRCam colors at \(z\sim 2\) trace dust: red spots are highly extincted while blue spots are weakly dust attenuated. This is consistent with previous studies based on NIRCam images (e.g. Miller et al. (2022)). As the clumps could play an important role in the color variations, it is important to investigate their abundance.

#### 4.2.4 Clumpy disks

As one can see in Figs. 2.1, 2.2 and 2.3, some galaxies are very clumpy. The clumpiness does not seem to be linked to a particular Type of galaxy. Most of the clumps are observed at the shortest wavelengths, consistent with Wuyts et al. (2012), who state that the number of clumps decreases when moving toward longer wavelengths.

Figure 16: Cutouts of a galaxy of each Type with the F444W filter. While a central clump-like core is clearly apparent for ID13776 and ID23205, it is less clear in ID18278. We indicate the rest-frame central wavelength of the filter in parentheses.

Figure 17: Compact red cores and bulges half-light radius versus the stellar mass of the galaxy. Circular markers are individual galaxies, star markers are the mean value for each group of galaxies with their associated error bar indicating the error of the mean. Markers with a black circle are cores hosting an X-ray AGN (Nandra et al., 2015).

In Fig. 19, we investigate the possible link between the clumpiness and the disk and core properties of the galaxy. In the left panel, we show the distribution of the number of clumps observed in each disk versus the SFR of the disk (defined as the sum of the SFR of the regions delimited in Figs. 2.1, 2.2 and 2.3), separating the SF disks from the quiescent disks. There is no apparent correlation between the star-forming activity of the disk and the number of clumps. The fact that we observe clumps in quiescent disks is quite surprising, as they usually are supposed to be places of local starbursts (Wuyts et al. 2012). We discuss the implication of this result in Sect. 5.3.
In the right panel of Fig. 19, we study the impact of the fraction of stellar mass in the core (in blue) or bulge (in red) on the number of clumps. The galaxies with a quiescent bulge, which we know to be at lower redshifts (see Sect. 4.1.1 and Fig. 11), have a higher fraction of their mass in their bulge (\(35.9\%\pm 3.6\%\)) than the galaxies with a star-forming core have in their core (\(25.8\%\pm 3.7\%\)), with a \(\sim 2\sigma\) significance. They also tend to have a smaller number of clumps: \(1.7\pm 0.8\) clumps on average for a galaxy with a bulge and \(2.8\pm 0.6\) clumps on average for a galaxy with a star-forming core (\(1.1\sigma\) significance). The plot also shows that, among the galaxies with a star-forming core (in blue in Fig. 19), the ones with the smallest \(M_{*}\) fraction at their core are also the clumpiest. We see here both the effect of redshift, lower-redshift galaxies having fewer clumps, and of the central \(M_{*}\) fraction, a higher fraction leading to fewer clumps. One could argue that the fact that galaxies with a star-forming core are at higher redshift than those with a quiescent bulge (Type III) means that we probe shorter rest-frame wavelengths, hence we have a higher probability of observing clumps in their disk (Wuyts et al. 2012). However, the range of redshift that we are probing here is quite narrow, and the clumps that we count are the brightest and visible in several filters. These galaxies actually are clumpier.

## 5 Discussion

In this Section, we first discuss the green patches/clumps that are visible in the RGB cutouts in Figs. 2.1, 2.2 and 2.3 (Sect. 5.1). Then, we discuss the presence of blue clumps inside quiescent disks in Sect. 5.3. We investigate the possible link between the compact SMGs observed with _ALMA_ and our DSFGs in Sect. 5.4. In Sect. 5.6, we discuss the origin and consequences of lopsidedness and its abundance. Eventually, in Sect. 5.7, we discuss two possible evolutionary paths that could lead to the formation of Type II galaxies.

### Bright emission lines

When looking at Figs. 2.1, 2.2 and 2.3, one can notice that some of the disks have different colors, with a blue and a green part. The green clumps/patches are visible in all Types of galaxies. Considering their redshift, they probably are due to bright \(H_{\alpha}\) or [\(O_{III}\)] emission lines, which are known tracers of star-formation. The \(H_{\alpha}\) line falls in the green filter (F200W) for galaxies with a redshift between 1.67 and 2.39, and the [\(O_{III}\)] emission line falls in the green filter for galaxies with a redshift between 2.52 and 3.47. Of the 7 galaxies where we identify green patches, 2 are consistent with \(H_{\alpha}\) emission from a star-forming region (ID15371 and ID29608) and 3 are consistent with [\(O_{III}\)] emission from a star-forming region (ID18694, ID23510 and ID23581).

Figure 19: Number of clumps in the disk versus its SFR. Circular markers are individual galaxies, star markers are the mean value for each Type of galaxy with their associated error bar indicating the error of the mean.

Figure 18: Top panel: Comparison between the \(A_{V}\) and the F115W - F444W AB mag color between redder and bluer patches within the disks (if the disk has been divided in at least two components in Sect. 3.5). Middle panel: Comparison between the sSFR and the F115W - F444W AB mag color between redder and bluer patches within the disks. Lower panel: Comparison between the \(A_{V}\) and the sSFR between redder and bluer patches within the disks. Only variations are probed, not gradients; we do not look for radial effects, as our galaxies have highly asymmetrical disks.
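These redshift windows follow directly from requiring the observed line wavelength to fall inside the filter bandpass; a quick check (the F200W edges below are approximate, which is why the numbers differ marginally from those quoted above):

```python
# lambda_obs = lambda_rest * (1 + z) must fall inside the filter bandpass
lines_um = {"Halpha": 0.6563, "[OIII]5007": 0.5007}   # rest wavelengths, microns
f200w_lo, f200w_hi = 1.75, 2.23                        # assumed F200W edges, microns
for name, lam in lines_um.items():
    print(f"{name}: {f200w_lo / lam - 1:.2f} < z < {f200w_hi / lam - 1:.2f}")
# -> Halpha: 1.67 < z < 2.40 ; [OIII]5007: 2.50 < z < 3.45
```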
For the 2 remaining galaxies, it is more surprising, as the green patches/clumps are observed in the quiescent disks of Type II galaxies. For the ID13107 galaxy (\(z=2.21\pm 0.02\)), the green patch is close to the center of the galaxy; it is then possible that the \(H_{\alpha}\) line is produced by the accretion disk of an AGN sitting at the center of the galaxy, which becomes bright in this region because of a much weaker dust attenuation than in the core. Even though we have no radio or X-ray signature of an AGN in this galaxy, as mentioned before, the predicted SFR from the SED fitting is not enough to explain the FIR flux density observed with _Herschel_ for this galaxy. This convinced us that there could be an AGN at the core of this galaxy. For the ID18278 galaxy (\(z=1.805\)), the situation is different: the green patch is in the outer region and composed of clumps. These clumps could have actually been ionized by hot evolved low-mass stars (Cid Fernandes et al., 2011; Belli et al., 2017), with an enhanced \(H_{\alpha}\) line due to shocks from the minor merger. Indeed, these clumps are old (age of oldest stars \(=2.5\pm 0.5\)Gyr) and have a very low sSFR, consistent with the _ex-situ_ clumps defined in Mandelker et al. (2014).

### Origin of dusty patches within disks

In Sect. 3.7, we demonstrated that the color gradient is linked to the strong \(A_{V}\) gradient. The fact that the core is much more attenuated than the disk is expected, because the SFR surface density is higher in the core than in the disk, and hence so are the dust surface density and the dust column density. However, the patchy distribution of dust within the disks is more surprising. From the lower panel of Fig. 18, we observe a correlation between dust density and sSFR for Type II and Type III galaxies (Pearson coefficients = 0.62 and 0.83, with p-values = 0.04 and 0.01 respectively). This means that for these galaxies, the patches could be linked to not-yet-quenched regions in the disks of Type II galaxies and to a partly quenching disk for Type III. The patches could then find their origin in internal instabilities, or in interactions with the local environment. For Type I galaxies, we do not observe this correlation (Pearson coefficient = 0.35, p-value = 0.07). For these galaxies, the patches could be correlated either with metallicity, higher metallicity leading to a higher dust column, or with geometry. We investigated the origins of the patchy distribution by looking for correlations between the greatest difference in \(A_{V}\) in each disk and the redshift, the fraction of stellar mass in the core/bulge, the fraction of SFR in the core/bulge, the lopsidedness and the environment. We found no correlation (all p-values \(>\) 0.2). We then looked for a correlation between the number of patches/components of each disk (as defined in Figs. 2.1, 2.2 and 2.3) and the same parameters. The only correlation we found, visible in Fig. 20, is with the mass fraction in the core (Pearson coefficient of -0.60, p-value = 0.003): the number of patches/components gets smaller when the mass is more concentrated in the core of the galaxy. This is especially true for the galaxies with a star-forming core (Types I and II, with a Pearson coefficient of -0.67 and a p-value of 0.006, while Type III have a Pearson coefficient of 0.14 with a p-value of 0.76). This correlation is expected from Hopkins et al. (2023): when the central gravitational potential well is deep enough, it stabilizes and homogenizes the disk.
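A sketch of the correlation search described in this Section, assuming per-galaxy arrays for the tested quantities (the property names are placeholders):

```python
from scipy.stats import pearsonr

def correlation_search(target, candidate_drivers):
    """Pearson r and p-value of `target` (e.g. the number of patches per
    disk) against each candidate driver (e.g. redshift, core mass
    fraction, lopsidedness); mirrors the r/p values quoted in the text."""
    return {name: pearsonr(target, values)
            for name, values in candidate_drivers.items()}
```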
This correlation is consistent with the one we observed for the clumps (see Sect. 4.2.4 and Fig. 19). However, if this (anti-)correlation justifies why we do not see patches in Type III galaxies, it doesn't clear up the mystery of their origin. We would need spectroscopy to understand better what is happening in those disks, and even then the mystery would remain of why the disks are so inhomogeneous in dust attenuation, whether it is due to metallicity or geometry differences (and why these would persist as homogeneous patches within a disk, as opposed e.g. to simple radial gradients).

### Clumps in DSFGs

In all the Types of galaxies, we identified the presence of clumps. We observed that galaxies at lower redshift tend to have fewer clumps. This suggests that the clumps either get destroyed within the disk and are not replaced by new clumps, or migrate toward the core and participate in its mass growth, possibly triggering enhanced star formation. They might also be of lower mass/less luminous, hence below our detection threshold. We do not see any evidence of recent major mergers in our galaxies, suggesting that most of the clumps we observe originate from the fragmentation of a gas-rich, unstable SF disk, consistently with Puschnig et al. (2023) and Fensch & Bournaud (2021), who showed that large-scale instabilities in gas-rich galaxies can create such star-forming giant molecular clumps. We also noted that the clumpiest high-redshift galaxies also have the least concentrated cores, with less than 20% of their stellar mass at the center of the galaxy (see Fig. 19) and, on the contrary, the least clumpy galaxies at lower redshift have nearly 40% of their stellar mass in the quiescent bulge. We also showed in Fig. 15 that galaxies at later times have higher core mass fractions. This suggests either that, as the clumps migrate through the disk, they feed the central core, making it grow in mass, or that, as the central gravitational potential well gets deeper, the disk is stabilized, the violent disk instabilities (VDI) are suppressed, and the galaxy can have a smoother spiral-like disk. Our observations are consistent with the simulations from Hopkins et al. (2023), which showed that a well-defined dynamical center is necessary to stabilize the disk and put an end to bursty star-formation. Also, we are in agreement with the new JWST results from Kalita et al. (2022), pointing to an increased galaxy fragmentation with decreasing bulge/core mass fraction.

Figure 20: Number of patches/components of each disk versus the stellar mass fraction in the core/bulge. In the lower left corner, we show the average error bar for individual galaxies. Circular markers are individual galaxies, star markers are the mean value for each Type of galaxy with their associated error bar indicating the error of the mean.

When looking at Fig. 2.2, one can clearly identify clumps in the Type II galaxies. The blue clumps of these quiescent disks (ID13107, ID18278 and ID13776 in Fig. 2.2) are due to a low dust attenuation and not a high sSFR. Indeed, the disk has \(A_{V}=1.0\pm 0.2\) while the central SF core has \(A_{V}=3.5\pm 0.2\). We recall that blue colors in NIRCam color cutouts at these redshifts are typically a signature of low dust attenuation. This could indicate that clumps are not only formed in highly star-forming regions.

### Are we observing compact SMG counterparts?
In most of our IR-luminous galaxies, a central compact, clump-like, highly dust-attenuated SF red core is present. While it is nearly invisible in the optical rest-frame, it becomes bright in the near-IR (see Figs. 3 and 16). As we showed in Sect. 4.2.3, these cores are surrounded either by a SF (Type I) or a quiescent (Type II) disk with much lower dust attenuation. We identify 10 of those (see Table 1 and Sect. 4.2.2) in our sample. When we measured the size of these red compact SF cores, we found that the average \(R_{c,NIR}\) was about 0.76 kpc (Fig. 17). This size is compatible with the sizes measured with _ALMA_ for the compact SMGs: \(0.6\pm 0.2\) kpc in Zavala et al. (2022), \(\sim 0.73\) kpc in Gomez-Guijarro et al. (2022), or \(1-2\) kpc across in Rujopakarn et al. (2019). The NIRCam sizes tend to be slightly larger than the _ALMA_ sizes; this is not due to a spatial resolution issue, but to the heavy dust obscuration of the core. Moreover, compact SMGs at \(z\sim 2-3\) are characterized by SFR \(\geq 100\) M\({}_{\odot}\) yr\({}^{-1}\) (Gomez-Guijarro et al. 2022; Jimenez-Andrade et al. 2021, 2019; Hodge et al. 2019). 7 out of the 10 galaxies where we identified a compact star-forming core have a total SFR compatible with this criterion (see Table 1). 5 of them have SFR \(\gtrsim 100\) M\({}_{\odot}\) yr\({}^{-1}\) in the core alone, 1 has SFR \(\geq 50\) M\({}_{\odot}\) yr\({}^{-1}\) in the core, and the remaining galaxy has a lower SFR in its core. To confirm the possibility of SMG counterparts, we can use the FIR super-deblended catalog in the EGS (Le Bail et al., in preparation). 6 out of the 10 galaxies are detected at 2\(\sigma\) in SCUBA2/850\(\mu\)m, among which 3 are detected at 3\(\sigma\). 3 out of the 4 galaxies undetected at 850\(\mu\)m are in the shallower part of the FIR catalog. Moreover, if we look at the predicted flux at 1.1 mm for these galaxies, the mean predicted flux is 0.80 mJy, and 4 of them are predicted to be brighter than 1 mJy at 1.1 mm. A total of 5 galaxies have either a \(3\sigma\) detection in SCUBA2/850\(\mu\)m or a prediction \(>1\) mJy at 1.1 mm (ID13776 and ID21190 from the Type II class and ID16544, ID29608 and ID30186 from the Type I class). They correspond to the 5 galaxies measured with a SFR \(\gtrsim 100\) M\({}_{\odot}\) yr\({}^{-1}\) in their core. All these elements convinced us that we have at least 5 or 6 galaxies that are good candidates for compact SMG counterparts, equally distributed between Types I and II. Contrary to what is observed with _ALMA_, these compact cores are not isolated: they are all surrounded by a larger disk. The fact that there is a huge dust gradient between the core and the disk, as we showed in Sect. 3.7, might explain why we do not see the latter in sub-mm surveys: the core is bright in the rest-frame near-IR while the disk is bright in the rest-frame optical. The presence of a disk confirms Hodge et al. (2019) and Puglisi et al. (2019), who both stated that the compact SMGs are the obscured part of a larger system. The fact that some galaxies in our sample have highly extinct cores could link them to the so-called _HST_-dark galaxies. We compared our sample with the _HST_-dark and _HST_-faint galaxies in the same field from Perez-Gonzalez et al. (2023). Our galaxies are in general agreement with the SFGs at \(z<4\) in Perez-Gonzalez et al. (2023), especially with the fact that we observe highly dusty patches out to large radii.
Four of the galaxies in our sample are classified as _HST_-faint (ID16544, ID18694, ID23581 and ID26188). All are Type I galaxies, which seems logical because quiescent regions have lower \(A_{V}\), hence are brighter in _HST_. One of them (ID23581) has \(A_{V,min}>3\), hence is expected to be _HST_ faint/dark, while the remaining three galaxies have \(A_{V,min}\sim 1.5\), which is the average \(A_{V,min}\) of the sample. It is more surprising that those galaxies are _HST_ faint/dark. However, these 4 galaxies are actually the galaxies at the highest redshift of the sample (\(2.7<z<2.9\)), with photometric redshifts from our SED fitting procedure consistent with the ones from Kodra et al. (2022) and the ones from the super-deblending (Le Bail et al., in preparation). There is a chance that their _HST_ faintness comes more from their higher redshift than from their high level of dust (at least for 3 of them).

### Relation to Blue Nuggets simulations

In the cosmological simulations from Lapiner et al. (2023), the typical high-redshift and low-mass galaxy is a gas-rich, star-forming, highly perturbed, and possibly rotating system, fed by intense streams from the cosmic web. When the stellar mass is in the ballpark of \(\sim 10^{10}\) M\({}_{\odot}\), the galaxy undergoes a last major wet compaction into a 'Blue Nugget', starting with a compact gaseous star-forming system that rapidly turns into a compact stellar system. The galaxies that we observe are all above this \(\sim 10^{10}\) M\({}_{\odot}\) threshold. However, none of them looks like a blue nugget, except possibly ID13098. We discuss the specific case of ID13098 in Sect. 5.7. The other ones that are in the range of mass where the wet compaction should happen do have a compact dusty star-forming core, but they also have a much larger star-forming disk. Moreover, the more massive galaxies could be undergoing a rejuvenation event after a blue nugget phase, as suggested by Lapiner et al. (2023). However, when comparing the \(t_{50}\) of the disk and core, we find no evidence that the star-forming disks are younger than the cores. The fact that we do not observe any blue nuggets (or only a single one) might be due to their low mass, or low SFR, or to the previous observations not being deep enough to detect the low-luminosity disks. It may be possible that the most massive galaxies undergo a different quenching mechanism than lower-mass galaxies.

### Investigating the lopsidedness

Galaxy lopsidedness has so far not attracted much attention at high redshift, probably because of a lack of spatial resolution and/or incomplete data, since the most obscured parts of the galaxies are not visible with pre-_JWST_ telescopes. However, the spatial resolution of NIRCam shows that it is a common feature of DSFGs around the Cosmic Noon. Indeed, we showed in Sect. 4.1.3 that being lopsided seems to be the typical morphology of Type I and II galaxies (see Figs. 2.1, 2.2 and 13). Bournaud et al. (2005) investigated the origins of lopsidedness in field galaxies and concluded that it is very unlikely the result of internal mechanisms, but rather linked to the history and environment of the galaxies. With the NIRCam images, we have access to the spatially resolved morphology of these galaxies, and can try to better understand the causes of the lopsidedness. Among the lopsided galaxies shown in Figs. 2.1 and 2.2, some have a clear compact central core and a rather homogeneously colored disk (e.g.
ID11887, ID13776), while others are mostly clumpy galaxies with a less compact core (e.g. ID18694, ID18278). For the first category, even if we do not have the kinematics to confirm it, it seems that the galaxies have a stable disk, with no major merger features. This means that the lopsidedness of these galaxies is probably due to accretion and minor mergers. This accretion would be happening via streams of cold gas that asymmetrically feed one side of the galaxy more generously, making it grow larger than the opposite side. Moreover, the fact that these galaxies are clumpy (see Sect. 4.2.4) and that their disk is highly heterogeneous (see Sect. 4.2.3) favors the idea of accretion or minor mergers that could create clumps or patches in the disks with different SFRs or \(A_{V}\). However, the fact that Type I galaxies have a star-forming disk and Type II a quiescent disk means that the properties of gas transport in Type I and Type II galaxies are different. In Type I galaxies, the disk acquires its gas via accretion streams or minor mergers and forms stars, but the gas also goes to the core, which is SF as well. Bournaud et al. (2005) showed via simulations that strong lopsidedness can be the result of gas accretion if it is asymmetric enough, and that the lopsidedness from accretion is relatively long-lived (\(\sim 3\) Gyr), hence easily observable. This has also been confirmed by a recent study based on the TNG50 simulation (Dolfi et al., 2023), which concludes that the lopsidedness in local galaxies originates from accretion over several Gyr, while symmetric galaxies formed earlier and within a shorter timescale. In Type II galaxies, on the other hand, while the gas keeps going to the core and keeps it SF, the disk is quenched. This would seem to suggest that the gas does not stay in the disk, but goes straight to the center. A possible explanation would be that Type II galaxies have larger inflows or very powerful outflows that blow away and/or shock the gas in the disk (confirming this would require spectroscopy). It could also be that in Type II galaxies the accreted gas has a more radial accretion, with little angular momentum, and goes straight into the central regions. Or, for some reason, the gas rapidly loses its angular momentum, abandons the disk and falls into the center. This would, depending on the direction of accretion, feed the lopsidedness. This effect has already been suggested by Kalita et al. (2022), who were able to link the lopsidedness of 3 galaxies at \(z\sim 3\) in a dense environment to cold gas accretion using Lyman-\(\alpha\) emission. The strong lopsidedness of these galaxies would then be a tracer of the point of impact of the accretion streams. For the clumpier galaxies, the disk is star-forming and not homogeneous. Kannan et al. (2015) showed with simulations that gas-rich disks are able to survive major mergers and that the following enhanced star formation does not happen entirely in the core of the galaxy: a substantial fraction takes place in the disk too. This is compatible with our Type I galaxies; the fact that their SF disks are clumpy and heterogeneous in terms of dust and sSFR could be a signature of a recent major merger (Calabro et al., 2019). Moreover, Kannan et al. (2015) mention that the presence of a gas-rich disk contributes to reducing the efficiency of bulge formation, which is compatible with the non-compact core observed in some of these galaxies.
Usually, major merger features are short-lived, but the clumps we observe could be preserved by Toomre instabilities. Indeed, Fensch & Bournaud (2021) showed, via simulations, that a galaxy with a gas fraction greater than 50% will have strong disk instabilities leading to the formation of long-lived giant clumps and strong nuclear inflows affecting the structure of the galaxy and possibly introducing lopsidedness. This has already been observed in a local galaxy used as a proxy for high-redshift galaxies (Puschnig et al., 2023). A major merger could then result in a clumpy galaxy with a perturbed structure, which is what we have in Fig. 2.1 for some Type I galaxies. The color variations between clumps/regions in the galaxies could be tracers of the original galaxy they were a part of before the merging, as they trace the dust attenuation. However, a major merger is not necessarily required: indeed, Rujopakarn et al. (2023) studied a lopsided galaxy at \(z\sim 3\) and concluded that its lopsidedness did not originate from interaction with the environment but from internal, large-scale instabilities that could, in the end, form bars or spiral arms. The lopsidedness of these galaxies could also be the signature of the bulge angular momentum build-up. Indeed, whether via accretion, minor mergers, major mergers, internal instabilities or tidal effects, the lopsidedness will break the disk balance, consequently creating a torque on the bulge of the galaxy and resulting in an angular momentum loss. The significance of the difference in lopsidedness between Type III galaxies and the rest of the sample means that, by some mechanism, the galaxies become much more symmetric after the Cosmic Noon. Indeed, we recall that our Type III galaxies have \(z=1.80\pm 0.09\) while Types I and II have \(z=2.19\pm 0.14\). This could be due to increasing virialization with the passing of time, and also to the stabilising effect of the larger bulge mass fraction (see lower right panel of Fig. 13).

### Where do Type II galaxies come from?

The Type II galaxies (see Sect. 3.7 and Fig. 2.2) have an unusual behavior. They have a compact star-forming core embedded in a quiescent disk, and represent \(\sim 23\%\) of the galaxies of our sample, so they are relatively common. Kalita et al. (2022) studied such galaxies in a crowded environment at \(z\sim 3\) and linked the quiescence of the disk to its strong lopsidedness, which rapidly funnels the gas to the core of the galaxy. In our sample of Type II galaxies, 3 have a strong lopsidedness, 1 is only weakly lopsided and has an off-center core, while 1 is not lopsided at all. This means that even if lopsidedness can be a driver of outside-in quenching, it is not the only one. Based on our observations, we have three possible scenarios that could explain the observed suppression of star formation in the disk. The first scenario is the one developed by Kalita et al. (2022), with the lopsidedness either coming from a major merger strong enough to result in this off-centered core, or from asymmetric accretion of gas via streams and minor mergers, feeding the disk preferentially on one side. The strong lopsidedness resulting from this is enough to explain the quenching of the disk, as it greatly facilitates the transportation of the gas toward the core (Fensch & Bournaud, 2021). The second scenario is a wet compaction event leading to an apparent outside-in quenching.
ID13098 is in the correct range of stellar mass and redshift to be in a 'blue nugget' phase (Lapiner et al., 2023; Dekel et al., 2009; Tacchella et al., 2016), where the galaxy undergoes a wet compaction caused by gas-rich mergers or smoother gas streams, leading to an episode of high central star formation and outside-in quenching. The presence of the low-luminosity quiescent disk might indicate that the compaction is not completely finished yet. If it is a blue nugget, the outside-in quenching may not be final: when the gas has been consumed at the center and the bulge has grown, a star-forming ring can form in the disk via accretion of new gas-rich material from the inter-galactic medium, leading to an inside-out quenching in the post-blue nugget phase. The last scenario is an actual outside-in quenching linked to the strong lopsidedness but not resulting from a major merger. In Fig. 11, we show that the Type I galaxies are the most star-forming and at the highest redshift on average. They also have a stellar mass consistent with the Type II galaxies. This means that there could be an evolutionary path between Type I and Type II galaxies driven by VDI and lopsidedness. The idea is that the star-forming clumps of the Type I galaxies migrate toward the center of mass of the galaxy (Mandelker et al. 2014). By doing so, they fuel strong nuclear gas inflows creating a compact SF core (Fensch & Bournaud 2021). On their way to the center of the galaxy, the clumps accrete the gas of the disk and could leave a completely gas-deprived disk and a compact SF core. When growing, the SF core will prevent the formation of new clumps in the disk by stabilizing it (Hopkins et al. 2023), while the lopsidedness could be conserved due to the large-scale instabilities. In this scenario, Type II galaxies are observed in a process of outside-in quenching. Chandar et al. (2023) demonstrated that the local ULIRG Arp220 is composed of a central starburst and a larger quiescent disk. The starburst has been triggered by a major merger. The galaxy is classified as a shocked post-starburst galaxy, which is a stage prior to post-starburst. In that case, it appears that shocks induced by the merger forced the outer disk of this galaxy to turn quiescent. This is close to the first scenario we described, with the outside-in quenching originating from a major merger. In our case, among the four remaining galaxies, two have clumpy heterogeneous disks (ID13107 and ID18278, see Fig. 2.2); the different properties of the patches, linked either to dust or to sSFR (see Sect. 4.2.3), favor the idea of asymmetric accretion streams and minor mergers as the source of lopsidedness for these two galaxies. ID13776 has a clumpy but more homogeneous, though highly off-centered, disk. The eccentricity of this galaxy can originate either from asymmetric accretion making the disk grow on one side, or from a major merger strong enough to shift the disk. In the same way, it is hard to conclude for the last galaxy (ID21190), which is not lopsided and seems to have a smooth homogeneous disk.

### The role of environment

A way to discriminate between the scenarios of outside-in quenching and the origin of the lopsidedness of galaxies is to look at their local environment. To this aim, we use the environment density measurements from Chartab et al. (2020). They measure the density contrast of galaxies with a magnitude brighter than 26 AB mag in the H-band.
The density contrast is defined as the number density enhancement with respect to the average density in the vicinity of the galaxy (local density/background density). In Fig. 21, we compare the local density contrast of our sample with the general population of galaxies in the EGS field. The star markers in Fig. 21 are the Type II galaxies. They do not sit in any particular kind of environment: they are relatively close to the median of the general population shown by the blue dotted line. This suggests that outside-in quenching can happen both in dense environments via major mergers and in less dense environments via internal effects. The galaxy at the lowest density is ID13098, which we discuss in Sect. 5.7. The fact that this galaxy is relatively isolated favors the scenario of wet compaction as the origin of its outside-in quenching. For the other galaxies, the local density is insufficient to discriminate between scenarios, as they do not sit in strongly over- or under-crowded environments, but it shows that all scenarios remain plausible. The color of the markers traces the lopsidedness of the galaxies. There is no obvious difference between the lopsided galaxies and the general population. We do not see any signature that could link the environment to the lopsidedness. The fact that we see lopsided galaxies not only in dense environments, and that most of them have a regular-looking disk, favors the idea that lopsidedness originates from accretion and/or VDI. However, this is only a tentative explanation; these measurements are not strong enough to say whether environment could be a driver of lopsidedness. The circular marker showing a weakly lopsided galaxy in a high-density environment is ID30186. This galaxy is the brightest galaxy of a group of \(\sim 16\) members at \(z_{spec}=1.85\), is undergoing a major merger and is surrounded by quiescent intra-halo light (Coogan et al. 2023). Discriminating further between the different scenarios would require spatially resolved spectroscopy to study the kinematics of these galaxies, and especially of their disks, to see if the disks are rotating, which would favor accretion and minor mergers, or if they are dominated by velocity dispersion, favoring the scenario of major mergers and VDI.

## 6 Summary

In this paper, we used the new set of near-IR images from _JWST_/NIRCam in the EGS field from the CEERS collaboration to investigate the formation and evolution of DSFGs at Cosmic Noon. To start with, we selected a sample of DSFGs based on their FIR emission and around Cosmic Noon (\(1.5<z<3.0\)). We ended up with 22 galaxies in the CEERS field. We studied each galaxy on a sub-galactic scale by dividing them into different regions based on their NIRCam (F115W, F200W, F444W) colors, taking advantage of the spatial resolution. Using the available photometry from _HST_ and _JWST_, we ran SED fitting, derived physical parameters for each galaxy component, and classified the components as star-forming or quiescent. We classified the galaxies into different Types based on the star-forming activity in their core and disk. Type I galaxies have a star-forming disk with a red star-forming core, Type II are quiescent disks with a SF core, and Type III are star-forming disks with a quenched bulge. The main results of this study are:

Figure 21: Density contrast of galaxies versus their redshift. Density contrast is defined as the number density enhancement with respect to the average density in the vicinity of the galaxy (Chartab et al. 2020).
Grey scatter is the general population of H-band mag AB \(\leq 26\), the black dotted line is \(1\), the blue dashed line is the median density contrast of the general population in redshift bins. Star-shaped markers are galaxies undergoing outside-in quenching (Type II). Circular markers are the Type I and III galaxies. The colors of the markers trace the lopsidedness.

* \(\sim 70\)% of the DSFGs in our sample have a red, deeply dust-attenuated, compact star-forming core that can represent up to 80% of the total SFR of the galaxy but only 20-30% of its stellar mass. Contrary to the simulations that predict blue nuggets, these compact red cores are surrounded by large, less obscured disks. Most of these cores are measured or predicted to be SMGs. However, telescopes like _ALMA_ or _NOEMA_ would only be sensitive to the most obscured part of the galaxy. This study demonstrates the necessity of combining near-IR imaging with sub-mm data to fully grasp the nature of DSFGs.
* 64% of our galaxies are at least weakly lopsided, and 27% strongly lopsided. The lopsidedness could be caused by asymmetric cold gas accretion and minor mergers feeding preferentially one side of the disk, which would, depending on the orientation of the accretion, favor a star-forming or quiescent disk. Lopsidedness could also be triggered by a major merger disrupting the disk, and/or by large-scale instabilities, even if our study favors accretion. The fact that lopsidedness is so common in our sample means that most DSFGs have a complex SFH and do not calmly evolve without any interaction with their environment.
* 23% of the galaxies of our sample have a quiescent disk but a star-forming core. While one of them is compatible with a blue nugget, the others are not. Their observed outside-in quenching could then find its origin in their strong lopsidedness, which favors VDI and rapid transportation of gas towards the center, or in large-scale instabilities and clump migration accreting the gas from the disk to feed it to the core.
* Most of the galaxies have a disk with patches/clumps of different RGB colors that are not radially symmetric. The color variations within the disks are mostly driven by dust attenuation. These variations are another indicator that Main Sequence DSFGs have a complex SFH.
* Interestingly, among the quiescent disks, we find evidence of clump-like structures. These clumps are not (or only very weakly) star-forming; they are mostly populated by old stars but seem to be too massive to be compared to the globular clusters we observe in the local universe.

This work demonstrates the power of the _JWST_ in probing for the first time spatially resolved galaxies in the near-IR at Cosmic Noon, where the only available data were the unresolved images from _Spitzer_/IRAC. This allows reliable studies of quenching and dust attenuation at sub-galactic scales in DSFGs, facilitating the understanding of their morphologies and of formation and evolution mechanisms that appear to be more complex than previously thought.

###### Acknowledgements. CGG acknowledges support from CNES. P.G.P.-G. acknowledges support from Spanish Ministerio de Ciencia e Innovacion MCIN/AEI/10.13039/501100011033 through grant PGC2018-093499-B-100.
2305.11582
What You Hear Is What You See: Audio Quality Metrics From Image Quality Metrics
In this study, we investigate the feasibility of utilizing state-of-the-art image perceptual metrics for evaluating audio signals by representing them as spectrograms. The encouraging outcome of the proposed approach is based on the similarity between the neural mechanisms in the auditory and visual pathways. Furthermore, we customise one of the metrics which has a psychoacoustically plausible architecture to account for the peculiarities of sound signals. We evaluate the effectiveness of our proposed metric and several baseline metrics using a music dataset, with promising results in terms of the correlation between the metrics and the perceived quality of audio as rated by human evaluators.
Tashi Namgyal, Alexander Hepburn, Raul Santos-Rodriguez, Valero Laparra, Jesus Malo
2023-05-19T10:43:57Z
http://arxiv.org/abs/2305.11582v2
# What You Hear is What You See:

###### Abstract

In this study, we investigate the feasibility of utilizing state-of-the-art image perceptual metrics for evaluating audio signals by representing them as spectrograms. The encouraging outcome of the proposed approach is based on the similarity between the neural mechanisms in the auditory and visual pathways. Furthermore, we customise one of the metrics, which has a psychoacoustically plausible architecture, to account for the peculiarities of sound signals. We evaluate the effectiveness of our proposed metric and several baseline metrics using a music dataset, with promising results in terms of the correlation between the metrics and the perceived quality of audio as rated by human evaluators.

Tashi Namgyal, Alexander Hepburn, Raul Santos-Rodriguez Intelligent Systems Lab University of Bristol Bristol, UK [email protected] &Valero Laparra, Jesus Malo Image Processing Lab Universitat de Valencia Valencia, Spain [email protected] & [email protected]

_Copyright: 2022 Tashi Namgyal et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, adaptation, and reproduction in any medium, provided the original author and source are credited._

## 1 Introduction

The study of the perceptual assessment of the quality of audio signals has been explored at very different levels depending on the type of audio. Whilst there exist several tools to understand speech quality [1], the evaluation of music is rarely explored and comes in the form of software hidden behind commercial licences [2]. More generally, practitioners rely either on traditional physical measures of the audio signal, e.g., signal-to-noise ratio (SNR), or on more recent deep learning-based metrics that involve non-interpretable complex models to capture statistics of the data [3]. The picture is quite different in other domains, namely imaging, where many more perceptual models have been developed over the years for these purposes, and well-curated datasets are readily available [4, 5]. It is well known that the auditory and visual processing pathways share similar attributes. For example, _divisive normalisation_ is a well-explored phenomenon that is encountered when studying neurons in the brain [6, 7]. Specifically in images, divisive normalisation has been shown to factorise the probability density function of natural images [8]. In audio, the same phenomenon has been shown to minimise the dependencies between responses to natural sound stimuli of filters at certain frequencies [9]. Also, other behaviours such as signal adaptation can be observed in the processing of both modalities [10]. Many of these ideas form the basis of, and have been embedded into, the design of image quality metrics, but, as they are also observed in auditory statistics or psychophysical tests, we argue they should be included in the design of audio quality metrics. In this paper, we propose to bridge the gap between image quality metrics and their audio counterparts, drawing inspiration
from state-of-the-art metrics. Although raw audio takes a very different form to images, well-studied transformations can be used to align the two modalities. For example, spectrograms represent audio signals using a 2-d matrix, where each column represents a time step and each row is a frequency band. As such, spectrograms encode the signal strength at a certain time and a certain frequency, similar to the wavelet decompositions of images that are often used in image quality metrics [11]. We can then use these representations in order to exploit the image processing literature and image quality metrics (IQMs) to estimate audio quality. Importantly, whilst spectrograms differ from natural images in structure and semantics, the underlying principles are similar, e.g. the importance of brightness or contrast. The paper is organised as follows: firstly, we show that popular IQMs can outperform metrics specifically designed for audio. Secondly, we show that fine-tuning a traditional IQM based on divisive normalisation, which is also seen in auditory processing, can further improve results. We also provide intuition about what this tuned metric captures about the properties of audio.

## 2 Quality Metrics

Objective assessment of modalities is a well-researched yet ongoing problem. Quality metrics aim to replicate the distance between two examples as perceived by a human observer. This usually involves projecting the data to a more perceptually meaningful space and computing a distance, or computing and comparing statistical descriptors of the examples. Below we detail a number of audio and image quality metrics used throughout the paper.

### Image Quality Metrics

Traditional IQMs fall into two categories: _structural similarity_, comparing descriptions of the statistical structure of the images, and _visibility of errors_, which aims to measure how visible the distortions are to humans. Multi-Scale Structural SIMilarity (MS-SSIM) [12] is based on the former and computes and compares three descriptors at various scales: luminance, contrast and structure. Normalised Laplacian Pyramid Distance (NLPD) [13], based on visibility of errors, is inspired by the biological processing that occurs in the visual system. Coincidentally, this processing is also present in the auditory system, and as such we will fine-tune NLPD to audio later (see Sec. 3).

### Audio Quality Metrics

Frechet Audio Distance (FAD) [14] is a reference-free evaluation metric for generated audio based on the Frechet Inception Distance (FID) commonly used in images [15]. FAD uses embeddings from the VGGish model [16] to measure the distance between previously learned clean studio-quality music and a given audio clip. Virtual Speech Quality Objective Listener (ViSQOL) is a reference-based perceptual quality metric based on the Neural Similarity Measure (NSIM) [17] between spectrograms. NSIM is similar to SSIM, using the luminance and structure terms but dropping the contrast term. Additionally, it uses a support vector regression model to map the NSIM scores more closely to Mean Opinion Scores from listening tests. The discriminator output of a Generative Adversarial Network (GAN) has also been shown to correlate with human perceptual quality ratings [3].
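As an illustration of the central idea, treating spectrograms as images and scoring them with an off-the-shelf IQM takes only a few lines; a minimal sketch, assuming the librosa and pytorch-msssim packages and placeholder file paths (the exact spectrogram settings used in the paper are given in Sec. 4.1):

```python
import librosa
import numpy as np
import torch
from pytorch_msssim import ms_ssim

def mel_image(path, sr=16050):
    # Load mono audio and build a log-mel spectrogram "image".
    y, _ = librosa.load(path, sr=sr, mono=True)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                       hop_length=64, n_mels=512)
    S_db = librosa.power_to_db(S, ref=np.max)
    # Rescale to [0, 1] so the IQM's data_range assumption holds.
    S01 = (S_db - S_db.min()) / (S_db.max() - S_db.min() + 1e-8)
    return torch.from_numpy(S01).float()[None, None]  # (N, C, H, W)

ref = mel_image("reference.wav")   # placeholder paths
deg = mel_image("degraded.wav")
score = ms_ssim(ref, deg, data_range=1.0)  # higher = more similar
print(float(score))
```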
## 3 Normalised Laplacian Pyramid Distance

We now introduce NLPD in more detail, as we will use it as an example of how to adapt existing image metrics for audio. The Laplacian Pyramid is a well-known image processing algorithm for image compression and encoding [18]. The image is encoded by performing convolutions with a low-pass filter and then subtracting the result from the original image multiple times at various scales, creating low-variance and low-entropy versions of the image. The Normalised Laplacian Pyramid (NLP) extends this with a local normalisation step on the output of each stage of the pyramid [13]. These two steps are similar to the early stages of the human visual and auditory systems, where divisive normalisation is present [11, 13, 19]. The distance in this transformed domain is referred to as NLPD. NLPD correlates well with human perception, and the architecture reduces mutual information between the image coefficients, in agreement with the efficient coding hypothesis [8]. An overview of the architecture is detailed in Fig. 1. Given two images \(x_{1}\) and \(x_{2}\), after computing each \(y_{i}^{(k)}\) output at every stage \(k\) of the pyramid (with \(i=1,2\)), the final distance is the root mean square error between the outputs for the two images:

\[\text{NLPD}(x_{1},x_{2})=\frac{1}{N}\sum_{k=1}^{N}\frac{1}{\sqrt{N_{s}^{(k)}}} ||y_{1}^{(k)}-y_{2}^{(k)}||_{2}, \tag{1}\]

where \(N\) is the number of stages in the pyramid, and \(N_{s}^{(k)}\) is the number of coefficients at stage \(k\).

## 4 Experiments

### Data

We use the Perceived Music Quality Dataset (PMQD) described in [3]. PMQD consists of 4-second audio clips across 13 genres, with 5 songs per genre and 3 clips per song, totalling 195 reference clips. These reference clips are degraded in four ways: waveshape distortion, low-pass filtering, limiting and additive noise, resulting in 975 clips in total. We divide this into an 80-20 train-test split, in which the test set contains all 3 clips for the last song in each genre. Each clip has an associated human perceptual quality rating on a scale from 1 to 5 ["Bad", "Poor", "Fair", "Good" and "Excellent"]. These ratings were gathered using Amazon Mechanical Turk. Each clip is rated by at least 5 participants and the median value is taken. For the SSIM, NLPD, and Mean Square Error (MSE) metrics, the audio clips are downmixed into mono and converted into Mel spectrograms. The audio was downsampled from 48kHz to 16050Hz, and a window size of 2048, a hop-length of 64, and 512 mel-bands were used, resulting in spectrograms of size 512 x 1024. The SSIM ratings were calculated with the PyTorch MS-SSIM package1. For the ViSQOL and FAD metrics, the audio clips are downmixed into mono and converted from 32-bit to 16-bit WAV files, and ratings were calculated using the ViSQOL package2 and the FAD package3.

Footnote 1: [https://github.com/VainF/pytorch-msssim](https://github.com/VainF/pytorch-msssim)

Footnote 2: [https://github.com/google/visqol](https://github.com/google/visqol)

Footnote 3: [https://github.com/google-research/google-research/tree/master/frechet_audio_distance](https://github.com/google-research/google-research/tree/master/frechet_audio_distance)

### Fitting NLPD to audio

The statistical fitting of the divisive normalisation learns the filters as weights \(p_{j}\) that transform the weighted sum of the pixel values in the neighbourhood surrounding each pixel to equal the centre pixel, \(j\). This is done separately for each layer of the pyramid, \(k\).
\[f_{C}^{(k)}\left(\mathbf{z}_{N_{i}}\right)=\sigma^{(k)}+\sum_{j\in N_{i}}p_{j}^{(k)}\left|z_{j}^{(k)}\right| \tag{2}\]

where \(N_{i}\) defines the neighbourhood to be considered (the size of the filters). The additive constant \(\sigma^{(k)}\) is simply the mean absolute value of \(z\) for each layer \(k\):

\[\sigma^{(k)}=\frac{1}{N_{s}^{(k)}}\sum_{i=1}^{N_{s}^{(k)}}\left|z_{i}^{(k)}\right| \tag{3}\]

where \(N_{s}^{(k)}\) is the number of coefficients at stage \(k\), i.e. the dimension of \(z\). The weights are optimised with Eq. 4. We optimise over the reference spectrograms contained in the training set only, using the ADAM optimiser with a learning rate of 0.01 and a batch size of 1, for 10 epochs.

\[\mathbf{\hat{p}}^{(k)}=\operatorname*{argmin}_{\mathbf{p}}\sum_{i=1}^{N_{s}^{(k)}}\left(\left|z_{i}^{(k)}\right|-f_{C}\left(\mathbf{z}_{N_{i}}^{(k)}\right)\right)^{2} \tag{4}\]

Optimising perceptually consists of maximising the Pearson correlation between the NLPD and the human ratings for each pair of a reference audio clip and a degraded version of the clip. The filters are initialised to the image NLPD values, and \(\sigma^{(k)}\) is initialised with Eq. 3. We use the ADAM optimiser to maximise the Pearson correlation with a learning rate of 0.001 for 100 epochs, where each batch only contains one degradation. We use Pearson correlation as the training objective instead of Spearman's because the sorting operation has undefined gradients, and we assume approximately linear rankings.
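Since Pearson correlation is differentiable, the perceptual fitting can be written as a standard gradient-based loop. A minimal self-contained sketch in PyTorch follows; the metric here is a toy weighted distance standing in for a differentiable NLPD (only the loop structure and the correlation objective mirror the procedure above, and all data are dummy placeholders):

```python
import torch

def pearson(x, y, eps=1e-8):
    # Differentiable Pearson correlation between two 1-d tensors.
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (xc.norm() * yc.norm() + eps)

# Toy stand-in for NLPD: weighted L2 between spectrograms, with one
# trainable weight per mel band playing the role of the DN filters.
weights = torch.ones(512, requires_grad=True)

def metric(ref, deg):
    return ((weights[:, None] * (ref - deg)) ** 2).mean().sqrt()

opt = torch.optim.Adam([weights], lr=1e-3)
refs = torch.rand(8, 512, 1024)               # dummy reference spectrograms
degs = refs + 0.1 * torch.rand(8, 512, 1024)  # dummy degraded versions
ratings = torch.rand(8)                       # dummy median human ratings

for _ in range(100):
    d = torch.stack([metric(r, g) for r, g in zip(refs, degs)])
    # Distances should anti-correlate with quality ratings, so
    # minimising the signed correlation drives it towards -1.
    loss = pearson(d, ratings)
    opt.zero_grad()
    loss.backward()
    opt.step()
```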
## 5 Results and Discussion

### Main findings

Table 1 shows the correlation between different IQMs and human ratings using spectrograms. Surprisingly, IQMs perform well, and better than specific audio quality metrics, for all distortions other than low pass. ViSQOL may have performed better with longer clips similar to the data it was trained on: 8-10 seconds with 0.5 seconds of silence at each end. The results for adapting NLPD to audio using different strategies can be seen in Table 2. To investigate the importance of divisive normalisation, we test various scenarios: with no divisive normalisation, setting the filters in the divisive normalisation to one (equal contribution of all the neighbours), when the model is statistically fit (no perceptual information is used), and when the parameters are fit in order to maximise correlation with the opinion of human participants. These results can be compared with the ones in Table 1, where we used NLPD as optimised for image quality. The divisive normalisation increases correlation when fit to maximise perception as a whole, and this seems important for the low pass and limiter distortions. For waveshape and noise, all forms of divisive normalisation we test decrease the correlation. This indicates a relationship between the distortions tested and the form of divisive normalisation used that could be further explored. The limiter and low pass filter had much weaker p-values for statistical correlation. We think this is partly because the amount of degradation was not audible enough compared to the other distortions. This is indicated by some of the reference-distorted pairs where the distorted audio is actually judged to be of better quality. Fig. 2 shows the learned filters from the divisive normalisation at different stages in NLPD for 3 optimisation strategies. Different from the image filters, fitting NLPD statistically to audio clips causes the filters to focus on the power of the same frequency at time steps (x-axis) immediately before and after the center pixel for the first four stages, a strategy similar to the one followed by the filters fitted for image quality. Then, at stages 4 & 5, the model uses the frequency (y-axis) information. In contrast, the perceptual filters consider both time and frequency simultaneously, with the stages exhibiting more smoothing behaviour in general.

### Further Work

We have identified a need for more publicly available datasets for this task in the audio domain, similar to [4] in the image domain, with a larger variety of sounds and distortion types. The dataset collection for the GAN and ViSQOL datasets was done according to the ITU-T P.800 recommendation. However, this was designed for telephone conversations and, according to ITU-R BS.1534-1, proved insufficient for evaluating audio signals of broadcast quality. Instead, the MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) test is the recommended grading procedure. The non-adaptive psychophysical Two Alternative Forced Choice (2AFC) paradigm could also be suitable. Preliminary tests should be performed to ensure the range of degradation is similar across degradation types. Tests should also scale the degradation amount to avoid degradation types that improve perceptual quality over a reference when applied in small amounts. A training procedure or better task descriptions could also help, as according to [3], participants were asked "How do you rate the audio quality of this music segment?", where "quality" is left largely up to participants to interpret. We also plan to investigate how divisive normalisation may be better tailored to audio, by using different filters for time and frequency, along the lines of MusicCNN [21]. We intend to use these metrics as a loss function in generative modelling, so that such models generate audio samples that sound more realistic and contain fewer perceived distortions. We also want to investigate the degree to which navigating through the latent spaces of models trained with perceptual metrics aligns with human expectations of how the generated audio should change. We believe this should help with the explainability and trust of the output of audio generative models.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{Metric} & \multirow{2}{*}{Waveshape} & Low & \multirow{2}{*}{Limiter} & \multirow{2}{*}{Noise} & All \\ & & Pass & & & data \\ \hline Original & **0.468** & 0.012 & 0.339 & **0.681** & 0.633 \\ No DN & 0.412 & -0.052 & 0.336 & 0.670 & 0.617 \\ \(P(\omega)=1\) & 0.457 & -0.022 & **0.380** & 0.669 & 0.629 \\ Statistical & 0.432 & -0.033 & 0.356 & 0.660 & 0.619 \\ Perceptual & 0.430 & **0.035** & 0.347 & 0.637 & **0.643** \\ \hline \hline \end{tabular} \end{table} Table 2: Spearman correlations for variations of the NLPD. Original corresponds to the filters fit statistically to images, no DN is the NLP without divisive normalisation, \(P(\omega)=1\) corresponds to the divisive normalisation filters being all ones, statistical is the filters optimised to predict the center pixel given its neighbours, and perceptual is the model optimised to maximise correlation with human ratings.
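The table entries are plain rank correlations between per-clip metric outputs and the median human ratings on the test split; a minimal sketch of that evaluation step, assuming the scores have already been computed (array names and values are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-clip results on the test split.
metric_scores = np.array([0.12, 0.35, 0.08, 0.50, 0.27])  # e.g. NLPD distances
human_ratings = np.array([4.0, 2.5, 4.5, 1.5, 3.0])       # median MOS per clip

rho, p = spearmanr(metric_scores, human_ratings)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
# For a distance metric, good agreement shows up as a strongly negative
# rho; similarity metrics such as MS-SSIM instead give a positive rho.
```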
2310.15552
Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks
Recent research suggests that the feed-forward module within Transformers can be viewed as a collection of key-value memories, where the keys learn to capture specific patterns from the input based on the training examples. The values then combine the output from the 'memories' of the keys to generate predictions about the next token. This leads to an incremental process of prediction that gradually converges towards the final token choice near the output layers. This interesting perspective raises questions about how multilingual models might leverage this mechanism. Specifically, for autoregressive models trained on two or more languages, do all neurons (across layers) respond equally to all languages? No! Our hypothesis centers around the notion that during pretraining, certain model parameters learn strong language-specific features, while others learn more language-agnostic (shared across languages) features. To validate this, we conduct experiments utilizing parallel corpora of two languages that the model was initially pretrained on. Our findings reveal that the layers closest to the network's input or output tend to exhibit more language-specific behaviour compared to the layers in the middle.
Sunit Bhattacharya, Ondrej Bojar
2023-10-24T06:45:00Z
http://arxiv.org/abs/2310.15552v1
# Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks

###### Abstract

Recent research suggests that the feed-forward module within Transformers can be viewed as a collection of key-value memories, where the keys learn to capture specific patterns from the input based on the training examples. The values then combine the output from the 'memories' of the keys to generate predictions about the next token. This leads to an incremental process of prediction that gradually converges towards the final token choice near the output layers. This interesting perspective raises questions about how multilingual models might leverage this mechanism. Specifically, for autoregressive models trained on two or more languages, do all neurons (across layers) respond equally to all languages? No! Our hypothesis centers around the notion that during pretraining, certain model parameters learn strong language-specific features, while others learn more language-agnostic (shared across languages) features. To validate this, we conduct experiments utilizing parallel corpora of two languages that the model was initially pretrained on. Our findings reveal that the layers closest to the network's input or output tend to exhibit more language-specific behaviour compared to the layers in the middle.

## 1 Introduction

One of the least studied aspects of the Transformer (Vaswani et al., 2017) models in general, and Large Language Models (LLMs) in particular, is the feed-forward layers (FFNs). Although they contain almost two-thirds of the parameters, it is only recently1 that their role in the working of the models is being seriously studied.

Footnote 1: Although the work by (Wang and Tu, 2020) is relevant in this regard, their analysis was done for all the components of the Transformer and not just the FFNs.

Geva et al. (2021, 2022) have earlier demonstrated that FFNs could be seen as "key-value memories" where each neuron (key)2 in the lower sub-layer of the FFN gets triggered by specific patterns in the input data and the higher sub-layer (values) produces a distribution over the output vocabulary. This leads us to a perspective (Figure 1) where the FFN first captures certain patterns or concepts3 in the input (conceptualization), selects the important aspects (using the activation function, i.e. selection) and then combines them to emit an output which can be interpreted as a prediction of the possible next-word token for that layer, i.e. synthesis. To highlight this view throughout the rest of the paper, we will use the term _'detectors'_ instead of the rather generic 'keys' to refer to the neurons in the earlier layer, and _'combinators'_ instead of 'values' to refer to the later layer. Repeating this across layers leads to a process of incremental prediction of the next token, with the prediction from previous layers being refined in the next layers (Belrose et al., 2023). This perspective however raises an important question. For models trained with a causal-language-modeling objective in multilingual settings, what sort of patterns do the detectors encode across layers? More precisely, are some detectors triggered by input only from specific languages?

Footnote 2: While Geva et al. (2021) use the word _'keys'_, some other authors use the word _neuron_ in this context.
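In code, the detector/combinator reading of the FFN block amounts to nothing more than the two linear maps around the activation; a schematic sketch with illustrative sizes only (not tied to any particular checkpoint):

```python
import torch

d_model, d_ff = 2048, 8192  # illustrative dimensions

detectors = torch.nn.Linear(d_model, d_ff)    # 'keys': pattern detectors
combinators = torch.nn.Linear(d_ff, d_model)  # 'values': combinators

def ffn(x):
    # conceptualization + selection: each detector fires on its pattern,
    # and the GeLU suppresses weak responses.
    coeffs = torch.nn.functional.gelu(detectors(x))
    # synthesis: the combinators mix the selected 'memories' into an
    # update that nudges the hidden state toward the next-token choice.
    return combinators(coeffs)
```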
In this paper, we investigate this phenomenon of language specificity of the detectors in a multilingual model pretrained on 30 languages from 16 language families.

Figure 1: Transformer block and the structure of the FFN

Earlier work has shown that Transformer models encode more shallow features in the earlier layers4 while encoding more semantic features in the later layers5 (Tenney et al., 2019). We hypothesise that the shallow processing would require more language-specific detectors than the semantic aspects of the input. And hence, we posit that during pretraining of the multilingual models, two kinds of neurons would emerge: **language-specific** and **language-agnostic**.

Footnote 4: close to the input

Footnote 5: near the output

Thorough investigations into the role of the FFN layers in the Transformer are an interesting research direction, and to our best knowledge, this is the first work that tries to look at the FFN6 from the perspective of multilinguality. The rest of the paper is structured as follows: a brief discussion of the related works (Section 2) is followed by the description of the model and data (Section 3) and the experiments (Section 4). This is followed by the presentation (Section 5) and simultaneous discussion of the results (Sections 6 and 7).

Footnote 6: in a decoder-only Transformer model

## 2 Related Work

Exploring the role and capabilities of the FFN sub-layer in Transformer models is a still nascent field of research, with only a few papers exploring their working. As mentioned earlier, Geva et al. (2021, 2022) have proposed an interesting perspective on how the FFN layer of the Transformer contributes during language generation. Recent work (Meng et al., 2022; Yao et al., 2022) exploring the capabilities of the FFN has also looked into how the activations of FFNs could be used for understanding how autoregressive models deal with facts. Other works (Li et al., 2022; Zhang et al., 2022) have analysed activation patterns in FFNs to study sparsity in Transformers. In other words, they show that only a few neurons in the FFNs are activated corresponding to inputs to Transformers. On the front of studying multilingual models, Libovicky et al. (2019) demonstrated that representations in encoder-only models can be split into language-specific and language-neutral components. But to our best knowledge, no equivalent study has been done for autoregressive language models. Additionally, Deshpande et al. (2022); Blevins et al. (2022); Lauscher et al. (2020); Choudhury and Deshpande (2021); Kudugunta et al. (2019) have studied the pretraining behaviour and capabilities of various encoder-only multilingual models. More recently, Pfeiffer et al. (2022) demonstrated how separating parameters into language-specific modules during training can help improve the performance across languages. From the perspective of studying multilinguality in the human brain, neuroimaging studies (Crinion et al., 2006; Videsott et al., 2010; Miozzo et al., 2010) have shown that although the neural circuits for different languages are highly overlapping, there are distinct brain areas for language-specific processing and areas that are language-agnostic.

## 3 Model and testing data

We use a pretrained XGLM model (Lin et al., 2021) with 1.7 billion parameters, available on the Hugging Face (Wolf et al., 2019) repository7, for our experiments.
Footnote 7: [https://huggingface.co/facebook/xglm-1.7B](https://huggingface.co/facebook/xglm-1.7B)

We use sentences from the training data of the CzEng 2.0 corpus8 (Kocmi et al., 2020) for our experiments. The model description of the XGLM model states that the model was trained on CommonCrawl data of various languages. CzEng heavily relies on various freely accessible web sources, and a part of the data included in CzEng is also drawn from CommonCrawl, among other sources. Thus, we expect that the sentences used for the experiments are of the same domain/style as the model was originally trained on, and they can even overlap. We do not consider such a possible overlap a serious problem for our analysis, because we are not measuring any processing performance or generalization capability.

Footnote 8: [https://ufal.mff.cuni.cz/czeng](https://ufal.mff.cuni.cz/czeng)

## 4 Experiment

We first extract a sample of sentences from the CzEng corpus, giving us a set of Czech and English parallel sentences. We only select sentences with lengths between 20 and 50. We then feed the model with all 'prefixes' of the sampled sentences from both languages. In other words, for each sentence, we incrementally feed the model one subword at a time and record our observations. For instance, for a Czech sentence like "Tenhle úkol je obtížný" (This task is difficult), the prefixes fed to the model would be "Tenhle", "Tenhle úkol", "Tenhle úkol je" and "Tenhle úkol je obtížný". The parallel sentences ensure that the semantic contents of the sentences for the two languages are similar. We go on to collect the data about the model state corresponding to each prefix.

Figure 2: FFN in close detail

From the collected data9, we extract the "selection coefficients" corresponding to each prefix for all detectors across the layers of the model. Specifically, for detector \(d_{i}\) in layer \(L_{j}\), we define the selection coefficient for a prefix \(p_{k}\) as:

Footnote 9: from all sentences across Czech and English

\[C_{p_{k}}^{(L_{j},d_{i})}=GeLU\{d_{i}(p_{k})\} \tag{1}\]

Thus, for each prefix we obtain layer-wise selection coefficients for the detectors (an example can be visualised in Table 1). We then sort the detectors based on the values of their corresponding selection coefficients. We posit that for a layer, certain detectors are triggered by specific prefix templates or languages. The selection coefficient is the indicator of the extent to which a particular detector is triggered by a prefix. Thus, observing the selection coefficients of the detectors across prefixes of different languages should indicate which (and how many) detectors are relevant bilingually and which (and how many) are relevant only for one of the two examined languages. We do this by analysing the top-k detectors after sorting the detectors by decreasing selection coefficients.

## 5 Observations

As an example, Table 2 shows the top-1 detector (the detector with the maximum selection coefficient) for the prefixes of an English and a Czech sentence. In the following sections, we present the results from our observations of the selection coefficients of detectors across the layers of the model.

### Distribution of active detectors across layers

We collect the indices of the top-10 and top-100 detectors for each prefix. For a prefix \(P_{i}\) of all the considered prefixes \(P_{0},P_{1},...,P_{n}\), we denote the set of the top detectors \(D_{i}\), where \(|D_{i}|=t\) (i.e. the set cardinality of \(D_{i}\) is \(t\)).
This way, we collect the list of the top \(t\) detectors for all prefixes in a layer. For each layer \(L_{k}\), we obtain \(L_{k}=D_{0}\cup D_{1}\cup...\cup D_{n}\), and we plot \(|L_{k}|\) across the layers (e.g. Figure 3). In other words, we are checking how many unique detectors across prefixes belong to the list of the 10 or 100 most active detectors for that layer. The fewer detectors in this set, the more "compact" the representation of these sentences is. The more detectors are in this set, the more "network capacity" is used when processing the given sentences. We make the plots for each of the two languages. Hence, using the example in Table 2: for layer \(1\), we have \(L_{1}^{en}=(2149,3424)\) and \(L_{1}^{cs}=(2149,3942,200)\), and so \(|L_{1}^{en}|=2\) and \(|L_{1}^{cs}|=3\).

\begin{table} \begin{tabular}{||c|c||} \hline Lang1, sent1, prefix\_1 & \(C_{11}C_{12}C_{13}\ldots C_{1m}\) \\ Lang1, sent1, prefix\_2 & \(C_{21}C_{22}C_{23}\ldots C_{2m}\) \\ \(\vdots\) & \(\vdots\) \\ Lang2, sentN, prefix\_xx & \(C_{k1}C_{k2}C_{k3}\ldots C_{km}\) \\ Lang2, sentN, prefix\_xy & \(C_{n1}C_{n2}C_{n3}\ldots C_{nm}\) \\ \hline \end{tabular} \end{table} Table 1: Selection coefficients of \(m\) detectors in layer \(L\) for a total of \(n\) prefixes

Figure 3: Number of top detectors (\(|L_{i}|\)) used across layers when processing Czech (top plot) and English (bottom plot) sentences.

\begin{table} \begin{tabular}{||c|c||} \hline Prefix & Detector \\ \hline \hline Europol & 2149 \\ \hline Europol zpracovává & 2149 \\ \hline Europol zpracovává a & 3942 \\ \hline Europol zpracovává a předává & 200 \\ \hline \hline Europol & 2149 \\ \hline Europol shall & 2149 \\ \hline Europol shall process & 2149 \\ \hline Europol shall process and & 3424 \\ \hline Europol shall process and transfer & 2149 \\ \hline \end{tabular} \end{table} Table 2: Prefixes from an example Czech-English sentence pair, listing the most active detector ID (according to the selection coefficients) from layer 1.

Figure 3 shows that the top-100 list does not seem to show any pattern, unlike the top-10 list. We observe that for each prefix, only certain detectors exhibit high values of the selection coefficient. Selecting the top-100 leads to the inclusion of many detectors that repeatedly appear across many prefixes with tiny values of the selection coefficient. We reason that this leads to the pattern seen with the top-10 list. We also posit that this echoes previous research indicating that FFNs exhibit patterns of sparse activation. The top-10 list shows that the number of detectors for both languages increases between layers 1 to 4 (near the input) and then decreases between layers 19 to 24 (near the output). Since this observation also includes detectors that get triggered by both languages11, we analyse the number of detectors that intersect between the two languages (Czech and English). That is, for each layer \(L_{k}\), we identify the intersecting detectors \(I_{k}=L_{k}^{cs}\cap L_{k}^{en}\). In other words, we examine how the number of keys getting triggered by both English and Czech prefixes (multilingual detectors) varies across the layers.

Footnote 11: for example, detector 2149 in the example shown in Table 2

As Figure 4 shows, the number of intersecting detectors also follows the same pattern as observed in Figure 3. The number starts increasing in the layers near the input and decreases near the output.
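The selection coefficients and the set operations described above can be reproduced with forward hooks on the FFN's first linear map; a minimal sketch, assuming the current Hugging Face implementation of XGLM (which exposes `fc1` inside each decoder layer), shown for a single prefix — looping over all prefixes and both languages, then taking unions, intersections and differences, follows the same pattern:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("facebook/xglm-1.7B")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-1.7B")
model.eval()

coeffs = {}  # layer index -> selection coefficients at the last position

def make_hook(idx):
    def hook(module, inputs, output):
        # Eq. 1: GeLU of the detector (fc1) pre-activations.
        coeffs[idx] = torch.nn.functional.gelu(output)[0, -1].detach()
    return hook

for idx, layer in enumerate(model.model.layers):
    layer.fc1.register_forward_hook(make_hook(idx))

with torch.no_grad():
    model(**tok("Europol shall process and transfer", return_tensors="pt"))

# Top-10 detector IDs per layer for this prefix (the D_i sets); unions
# over prefixes give L_k, intersections across languages give I_k.
top10 = {idx: set(c.topk(10).indices.tolist()) for idx, c in coeffs.items()}
print(top10[0])
```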
It may be argued that the spike in the number of unique detectors (for the individual languages) in the middle layers implies that the number of intersecting detectors should also increase in the middle layers. However, we argue that this might not always be the case; we validate our argument in the following sections. To look at the language-specific responses of the detectors across the layers, we take the set difference of the detectors seen in Figure 3, i.e. the language-specific detectors. So, for some layer \(k\), we analyse \(en_{k}=L_{k}^{en}\setminus L_{k}^{cs}\) and12 \(cs_{k}=L_{k}^{cs}\setminus L_{k}^{en}\). From the results in Figure 5, we see that there is a steady drop in the number of Czech-specific detectors in the middle layers. No such effect is seen for English. Also, across all the results presented here, we note that the observed number of detectors triggered by English prefixes is considerably higher than that triggered by Czech prefixes. Footnote 12: From the example in Table 2, \(en_{k}=\{3424\}\) and \(cs_{k}=\{3942,200\}\) Next, we determine to what extent the actual language can be identified from the detector activity. ### Layers close to the input and output are language specific To confirm the existence of language-specific detectors, we train a linear classifier over all the detectors for each layer. The task of the classifier is to use the selection coefficients to determine whether the given prefix was in English or Czech. The results of the experiment are shown in Figure 6. In the plot, we show the number of detectors across different performance brackets; each series shows the number of detectors classifying with an accuracy of \(\geq k\%\). We see that for performance brackets \(<80\%\), the layers closest to the input show the highest accuracy in predicting the language. Again, for brackets \(>70\%\), we see that the accuracy increases in the last few layers. Thus, we conclude that layers closer to the input and output are more language-specific than the others.

Figure 4: Distribution of multilingual detectors (intersecting detectors).

Figure 5: Distribution of language-specific detectors.
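A minimal sketch of the probes just described follows. The paper does not specify the linear classifier, so logistic regression stands in for it here, and the per-detector loop mirrors the performance brackets of Figure 6; the placeholder data and all names are ours:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(X, y):
    """Held-out accuracy of a linear probe predicting the language label."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2,
                                          random_state=0, stratify=y)
    return LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)

# placeholder data standing in for one layer: (n_prefixes, n_detectors)
rng = np.random.default_rng(0)
C = rng.random((400, 64))
labels = rng.integers(0, 2, size=400)   # 0 = Czech, 1 = English

print("layer-level probe accuracy:", probe_accuracy(C, labels))

# per-detector probes, counted by performance bracket as in Figure 6
accs = np.array([probe_accuracy(C[:, [d]], labels) for d in range(C.shape[1])])
for k in (0.6, 0.7, 0.8, 0.9):
    print(f"detectors with accuracy >= {k:.0%}:", int((accs >= k).sum()))
```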
## 6 Discussion We started with the hypothesis that language-specific detectors would be more common in the layers closer to the input and output. We analysed the detectors across the layers using sentences from a Czech-English parallel corpus. We note that in the underlying XGLM model, English (with 803,527 million training tokens) was much more dominant than Czech (with 8,616 million training tokens) (Lin et al., 2021). We thus consider the model to be a primarily English model that saw some Czech sentences during pretraining. From the results, we observe that the layers closer to the input and output indeed perform more language-specific processing than the others. We also see that a considerably lower number of detectors is triggered by Czech prefixes than by English prefixes, probably reflecting the data imbalance during training. While looking at the behaviour of Czech-specific detectors, we find that their numbers drop near the middle layers (8-15). We know that the model is primarily English-centric. And since it is well known that the higher layers of Transformers are involved in more semantic processing, it is likely that the model uses more language-agnostic detectors and only a few Czech-specific detectors for processing the semantic aspects of the input. Studies with humans have previously shown that semantic processing in humans is often language-agnostic. We thus see a possible way to connect these observations in the future. From a different perspective, the analysis of the selection coefficients also agrees with recent theories and observations about the sparse nature of FFN modules. We hypothesise that sparsity (a smaller number of unique detectors) might be an indicator of shallow processing, and density an indicator of semantic processing. The sparsity argument might also be extended to claim that only a subset of detectors is required for language-specific processing, while greater numbers of detectors are required for more language-agnostic (i.e. semantic) processing. However, such claims warrant extensive experimentation that we wish to conduct as a followup to this work. ## 7 Conclusion In this study, we focused on the analysis of the Feed Forward Layers (FFNs) of a pretrained multilingual Transformer model. We look at the FFNs as a system that first identifies patterns in the input representations (detector), selects the relevant information (selector), and then combines it to make a guess of the next token (combiner). We assess the degree of language specificity of the detectors in this multilingual model with two experiments. We observe that there is a greater number of language-specific detectors near the input and output of the model. Additionally, we observe how data imbalance during training is reflected in the behaviour of the multilingual detectors. We also try to link our observations with recent studies on sparse activations in FFNs. Overall, our findings shed light on the language specificity of FFNs in multilingual models. ## Limitations While our analysis provides valuable insights into the behaviour of "detectors" in a multilingual Transformer model's Feed Forward Layers (FFNs), there is an important limitation to consider. Our analysis is limited to only the XGLM model; this work does not consider the multilingual dynamics of other models. Also, our study is centred on the Czech-English language pair. Different languages exhibit diverse linguistic characteristics and complexities, and the behaviour of detectors could vary significantly across various language pairs. Extrapolating our findings to multilingual behaviour involving other languages requires caution and further investigation. Further, while we categorize detectors as language-specific or multilingual based on their activation patterns, the specific linguistic cues that trigger their activation remain complex and challenging to interpret. Our study focuses on the quantitative aspects of detector behaviour, and a deeper qualitative analysis of the linguistic information captured by these detectors could provide additional insights. ## Ethics Statement As the work is dedicated to evaluating existing models on publicly available datasets, we are not aware of any potential ethical issues or negative impacts. ## Future Work We wish to extend this work and test the generalizability of our hypothesis across more language pairs and other multilingual autoregressive language models. ## 8 Acknowledgements This work has been funded from the 19-26934X (NEUREM3) grant of the Czech Science Foundation and the grant 205-09/260698 (SVV) of Charles University. The work has also been supported by the Ministry of Education, Youth and Sports of the Czech Republic, Project No. LM2023062 (LINDAT/CLARIAH-CZ).

Figure 6: Classification percentages across layers. The colour indicates the reached accuracy level of the prediction.
2307.01856
Long-Lived Particles and the Quiet Sun
The nuclear reaction network within the interior of the Sun is an efficient MeV physics factory, and can produce long-lived particles generic to dark sector models. In this work we consider the sensitivity of satellite instruments, primarily the RHESSI Spectrometer, that observe the Quiet Sun in the MeV regime where backgrounds are low. We find that Quiet Sun observations offer a powerful and complementary probe in regions of parameter space where the long-lived particle decay length is longer than the radius of the Sun, and shorter than the distance between the Sun and Earth. We comment on connections to recent model-building work on heavy neutral leptons coupled to neutrinos and high-quality axions from mirror symmetries.
R. Andrew Gustafson, Ryan Plestid, Ian M. Shoemaker, Albert Zhou
2023-07-04T18:00:07Z
http://arxiv.org/abs/2307.01856v3
# Long-lived particles and the Quiet Sun ###### Abstract The nuclear reaction network within the interior of the Sun is an efficient MeV physics factory, and can produce long-lived particles generic to dark sector models. In this work we consider the sensitivity of satellite instruments, primarily the RHESSI Spectrometer, that observe the Quiet Sun in the MeV regime where backgrounds are low. We find that Quiet Sun observations offer a powerful and complementary probe in regions of parameter space where the long-lived particle decay length is longer than the radius of the Sun, and shorter than the distance between the Sun and Earth. We comment on connections to recent model-building work on heavy neutral leptons coupled to neutrinos and high-quality axions from mirror symmetries. + Footnote †: preprint: CALT-TH/2023-023 ## I Introduction It has long been recognized that the solar interior can serve as an efficient factory for keV-scale physics beyond the Standard Model (BSM), e.g. solar axions and dark photons [1; 2; 3; 4; 5; 6; 7; 8]. In addition to thermal production mechanisms, nuclear reactions within the Sun may also source BSM particles up to masses and energies of roughly 15 MeV [9; 10; 11; 12; 13; 14]. If a flux of long-lived particles (LLPs) in this energy regime emanates from the solar interior, they may transit toward the Earth and their decay products can leave detectable signatures. It is important to emphasize that LLPs are generic consequences of a dark sector with relatively light particles and feeble couplings to the SM [15; 16; 17; 18]. As decay lengths become long, LLPs become increasingly difficult to detect, and strategies to attack this "lifetime frontier" are valuable tools in the search for BSM physics. This idea has been previously investigated, largely considering FERMI-LAT, in the high energy, i.e. \(\gtrsim 100\) MeV, regime for annihilating dark matter [19; 20; 21; 22]. In this work we point out that existing data from the RHESSI satellite spectrometer [23], which observed the Quiet Sun,1 can place interesting limits on dark sectors with LLPs in the range of \(\mathcal{O}(100\,\mathrm{keV})-\mathcal{O}(1\,\mathrm{MeV})\). This is an old idea, first proposed by Stodolsky and Raffelt in 1982 in the context of a 200 keV axion [9]; however, it has remained unexplored despite new data in the intervening decades [24]. We illustrate the potential sensitivity of Quiet Sun data with a number of BSM examples, emphasizing different production mechanisms which may operate in this mass window. A conservative analysis of existing data from RHESSI is capable of offering complementary constraints on production mechanisms involving neutrino upscattering, and can probe previously untouched regions of parameter space for axion-like particles (ALPs) with masses close to \(\sim 1\) MeV. Upcoming missions, such as the COSI satellite [25; 26], may be able to substantially improve on the capabilities of RHESSI by _i)_ taking advantage of a larger instrument surface area, _ii)_ making use of dead time to carefully study backgrounds, and _iii)_ taking advantage of distinctive spectral features. Footnote 1: Time periods without intense surface activity such as solar flares. We focus on LLPs that decay primarily to photons,2 and have decay lengths, \(\ell_{\mathrm{LLP}}\), that satisfy Footnote 2: We could also consider decays to \(e^{+}e^{-}\) pairs but an analysis is complicated by the magnetic fields that surround the Earth.
\[R_{\odot}\ll\ell_{\mathrm{LLP}}\ll d_{\odot}\,, \tag{1}\] where \(R_{\odot}\) is the radius of the Sun and \(d_{\odot}\) is the distance from the Sun to the Earth. This allows an \(O(1)\) fraction of the LLPs to decay en route to the satellite instrument. In this limit, the flux of LLPs will never reach any terrestrial experiment, since they will decay in flight and their daughter photons will be absorbed in the upper atmosphere. In this sense, Quiet Sun observations are complementary to terrestrial searches for LLPs from the Sun, such as those that have been performed by CAST [10] and Borexino [11]. We perform a straightforward (and conservative) rate-only analysis, the details of which can be found at the end of Section II. In the body of the paper we organize our discussion along the lines of specific BSM scenarios. We discuss neutrino upscattering in Section II and solar axion production in Section III. We also spend time focusing on model-independent LLP constraints in Section IV. In Section V we discuss the physics potential for dark sector searches using future missions such as COSI. We close by summarizing our results in Section VI. ## II Neutrino Upscattering - Transition Dipole We begin by considering a production mechanism involving the upscattering of solar neutrinos transiting through the Sun, e.g. \(\nu A\to{\rm LLP}\,A\) with \(A\) a nucleus such as hydrogen or helium (see e.g. [13; 14; 27] for results on neutrino upscattering in the Earth). This mechanism leverages the large solar neutrino flux, which is copious in the few-hundred keV region and extends up to \(\sim 15\) MeV. Solar neutrinos have a small probability of being absorbed in the SM because of the small charged current scattering cross section at \(E_{\nu}\sim{\rm MeV}\) energies. It is, however, possible to have BSM cross sections that exceed the weak interaction at low energies if neutrinos couple via a transition magnetic dipole moment [28; 29]. This can lead to sizable conversion probabilities into an unstable right-handed neutrino, \(N\) (also called a heavy neutral lepton or HNL), for neutrinos transiting from the center to the surface of the Sun. As it is unstable, \(N\) may decay in flight, supplying a broad flux of photons in RHESSI. Similar phenomena may occur in the aftermath of SN 1987A [30; 31], leading to tight limits below the supernova floor derived in [28]. This "dipole portal" can dominate low energy phenomenology since it is a dimension-five operator and competes with the dimension-six four-Fermi contact interaction at low energies. The effective Lagrangian is given by \[\mathcal{L}_{\rm int}\supset\sum_{\alpha}d_{\alpha}F^{\mu\nu}\bar{N}\sigma_{\mu\nu}P_{L}\nu_{\alpha}. \tag{2}\] Here, \(d_{\alpha}\) represents the coupling between \(N\) and each of the 3 SM neutrinos. In this work, we consider the cases where \(N\) couples to a single flavor. This effective interaction has been studied recently in the context of accelerator, solar, atmospheric, and collider neutrinos as well as in the context of early universe cosmology and constraints from SN 1987A [28; 29; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56]. Unlike the monoenergetic LLP cases discussed later in this paper, the spectrum of \(E_{\nu}\) (and hence \(E_{N}\)) spans several orders of magnitude. For that reason, we implement a Monte Carlo integration to sample neutrino energy, production location, and upscattering location.
We also account for flavor transformation during the neutrino propagation (both due to adiabatic conversion and oscillations). We consider the Sun to be solely comprised of \({}^{1}\)H and \({}^{4}\)He with densities given by the Standard Solar Model [57; 58; 59]. All scattering is calculated to be off free nucleons, ignoring the coherent enhancement due to helium. This only leads to an \(\sim 10\%\) change in the bounds, which we will see is a much smaller effect than the uncertainty in the detector opening angle/background. The cross section for scattering on a free proton is given by \({\rm d}\sigma_{\rm dip}={\rm d}\sigma_{1}+{\rm d}\sigma_{2}\) with \[\frac{{\rm d}\sigma_{1}}{{\rm d}E_{r}}=\alpha(2d)^{2}F_{1}^{2}\bigg(\frac{1}{E_{r}}-\frac{1}{E_{\nu}}+\frac{m_{N}^{2}(E_{r}-2E_{\nu}-m_{p})}{4E_{\nu}^{2}E_{r}m_{p}}+\frac{m_{N}^{4}(E_{r}-m_{p})}{8E_{\nu}^{2}E_{r}^{2}m_{p}^{2}}\bigg)\,, \tag{3}\] and \[\frac{{\rm d}\sigma_{2}}{{\rm d}E_{r}}=\alpha d^{2}\mu_{n}^{2}F_{2}^{2}\bigg[\frac{2m_{p}}{E_{\nu}^{2}}\big((2E_{\nu}-E_{r})^{2}-2E_{r}m_{p}\big)+m_{N}^{2}\frac{E_{r}-4E_{\nu}}{E_{\nu}^{2}}+\frac{m_{N}^{4}}{E_{\nu}E_{r}}\bigg]\,. \tag{4}\] Here, \(F_{1}\) and \(F_{2}\) are electromagnetic form factors, \(\mu_{n}\) is the magnetic moment of the nucleon in question, \(E_{r}\) is the recoil energy, and \(m_{p}\) is the proton mass [60; 61].

Figure 1: The flux of solar HNLs at Earth (ignoring decays) as calculated through the dipole-model Monte Carlo simulation, where \(m_{N}=0.75\) MeV and \(d_{\mu}=2\times 10^{-11}\) MeV\({}^{-1}\).

Figure 2: The flux of photons at RHESSI from \(N\) decays calculated using a Monte Carlo integration with a \(90^{\circ}\) opening angle, compared with the RHESSI background in the front segment. Since the flux from decays exceeds the background, we consider \(m_{N}=0.75\) MeV, \(d_{\mu}=2\times 10^{-11}\) MeV\({}^{-1}\) to be excluded.

Since the neutrino energy is much less than the proton mass, the HNL energy \(E_{N}\) is nearly identical to the neutrino energy \(E_{\nu}\). Thus, the flux of HNLs has similar features to the solar neutrino flux (see Fig. 1). The HNL has decay channels \(N\to\nu_{\alpha}\gamma\). We consider the \(\nu\) to be massless, and the decays to be isotropic in the rest frame of the HNL.3 The decay length is calculated as Footnote 3: In complete generality the HNL may have some angular correlation with its polarization, but this depends on the details of the model, e.g. Dirac vs. Majorana neutrinos [62], and we neglect this in what follows. \[\lambda=\frac{4\pi}{d_{\alpha}^{2}m_{N}^{3}}\gamma\beta. \tag{5}\] The Monte Carlo simulation samples locations for \(N\) decays along with the energy and direction of the decay photon. This is used to calculate the resulting photon flux, with respect to energy and angle, observed by RHESSI. We consider opening angles of \(1^{\circ}\) and \(90^{\circ}\), where we reject all photons arriving at larger angles. The background flux observed by RHESSI is calculated by using the reported number of counts and effective area of the front segment (ignoring narrow peaks) [23]. We reject a parameter point if the flux from \(N\) decays exceeds the observed flux at any energy (see Fig. 2). Our resulting exclusion curves from the RHESSI data are shown in Fig. 3 for a muon neutrino dipole coupling. We find that RHESSI data can offer a complementary (and direct) probe of regions of parameter space that are already probed by SN 1987A.
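To make the decay-length window concrete, the following sketch evaluates the lab-frame decay length of Eq. (5) and the fraction of HNLs decaying between the solar surface and the Earth's orbit; the benchmark numbers are those quoted in Figs. 1 and 2, while the unit conversion via \(\hbar c\) and all variable names are our own bookkeeping:

```python
import numpy as np

HBARC_MEV_CM = 1.9732698e-11   # hbar*c in MeV*cm
R_SUN_CM = 6.957e10            # solar radius
D_SUN_CM = 1.496e13            # Sun-Earth distance (1 au)

def decay_length_cm(E_N, m_N, d):
    """Eq. (5): lambda = 4*pi*gamma*beta / (d^2 m_N^3), converted to cm.
    E_N and m_N in MeV, dipole coupling d in MeV^-1."""
    gamma = E_N / m_N
    beta = np.sqrt(1.0 - gamma**-2)
    return 4.0 * np.pi * gamma * beta / (d**2 * m_N**3) * HBARC_MEV_CM

def fraction_decaying_en_route(E_N, m_N, d):
    """Fraction of HNLs decaying between R_sun and the Earth's orbit."""
    lam = decay_length_cm(E_N, m_N, d)
    return np.exp(-R_SUN_CM / lam) - np.exp(-D_SUN_CM / lam)

# benchmark of Figs. 1-2: m_N = 0.75 MeV, d_mu = 2e-11 MeV^-1, E_N ~ 1 MeV
print(fraction_decaying_en_route(E_N=1.0, m_N=0.75, d=2e-11))
```

For this benchmark the decay length comes out to roughly \(10^{12}\) cm, comfortably inside the window of Eq. (1), so an \(O(1)\) fraction of the HNLs decays en route.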
Constraints are strongest in the low mass region (sub-MeV), and this region may also be probed using coherent elastic neutrino-nucleus scattering. We see in Fig. 4 that the exclusions for the three neutrino flavors are all similar. ## III Heavy Solar Axions Another production mechanism is solar axions with energies in excess of \(E_{a}\gtrsim 500\) keV. These energies are too high to allow for thermal production (except in exponentially suppressed tails), and so the background photon fluxes are much smaller than for the typically considered keV solar axion searches. The study of MeV-scale solar axions has a long history, and they have been searched for in terrestrial experiments such as Borexino and CAST [10; 11]. As we discuss below, satellite measurements of the Quiet Sun provide a complementary probe that excels for decay lengths that are short relative to the Earth-Sun distance. It is worth highlighting recent work on model building for axions with an extended matter content [64; 65; 66; 67]. These models are motivated by the axion quality problem and seek to protect the axion against Planck-suppressed corrections. The simplest mechanism to achieve this is to break the canonical relation \(f_{a}m_{a}\approx f_{\pi}m_{\pi}\) and to allow \(m_{a}\) to be "heavy" relative to the predictions of conventional (i.e. DFSZ [68; 69] or KSVZ [70; 71]) axion models. It is interesting to note that these independent model-building considerations often push the mass and couplings of the axion into regions of parameter space that are well suited for solar axion detection; we comment on this in detail below. For instance, following the benchmark scenarios presented in [67], one finds that masses in the \(\sim 10\) MeV regime with axion decay constants \(f_{a}\sim 10^{-5}~{}\text{GeV}^{-1}\) fall squarely within the "natural" window of parameter space, whilst simultaneously predicting a sizeable coupling to nucleons and a decay length that is a few times longer than the radius of the Sun. For slightly lighter axions, solar production and detection is a useful complementary probe. The primary production mechanism for heavy solar axions is the \(p\,d\to{}^{3}\mathrm{He}\,\gamma\) reaction, which takes place in the solar \(pp\) chain. Other mechanisms are energetically allowed, such as \(M1\) transitions in the CNO chain [72] and \(e^{+}e^{-}\) annihilation from \({}^{8}\)B neutrinos in the solar interior; however, we find that their production rates are too small to be interesting. The flux of axions (prior to decay) can be related to the flux of \(pp\) neutrinos, and depends on the isovector coupling of axions to nucleons, \(g_{3aN}\) [73]. The axions must first escape the Sun and then decay before reaching Earth.

Figure 3: Excluded parameter space for a muon neutrino transition dipole moment. Along with our bounds, we show 90% CL exclusions from Borexino \(e-\nu\) scattering [63; 29], terrestrial solar neutrino upscattering [14], Supernova 1987A [28], and big-bang nucleosynthesis and the cosmic microwave background [29]. For the RHESSI excluded parameter space, we include exclusions obtained with a \(1^{\circ}\) opening angle and a \(90^{\circ}\) opening angle. The star represents the parameter point shown in Fig. 1 and Fig. 2.

Figure 4: Excluded parameter space of a transition dipole moment for each of the three active neutrinos. We see that the constraints all take a similar form, varying only by \(\mathcal{O}(1)\) factors.
The escape probability depends both on axion absorption and decay processes. Putting all of this together and setting \(\text{BR}_{a\gamma\gamma}=1\), we arrive at the flux of axions arriving at a detector orbiting the Earth, \[\frac{\Phi_{\gamma}}{\Phi_{\nu}^{(pp)}}=0.54|g_{3aN}|^{2}\bigg[\frac{p_{a}}{p_{\gamma}}\bigg]^{3}\Big[\mathrm{e}^{-R_{\odot}/\ell_{\text{abs}}}-\mathrm{e}^{-d_{\odot}/\ell_{\text{dec}}}\Big]\,, \tag{6}\] where \(\ell_{\text{abs}}^{-1}=\ell_{\text{MFP}}^{-1}+\ell_{\text{dec}}^{-1}\), with \(\ell_{\text{MFP}}\) the averaged mean free path in the Sun and \(\ell_{\text{dec}}\) the axion decay length. The coupling \(g_{3aN}\) is the isovector coupling strength of the axion to nucleons, and \(p_{a}/p_{\gamma}\) is the ratio of three-momenta between an axion and a photon emitted with \(E=5.49~{}\text{MeV}\). The \(pp\) neutrino flux is given by \(\Phi_{\nu}^{(pp)}=6\times 10^{10}~{}\text{cm}^{-2}\text{s}^{-1}\). We account for axion absorption, Primakoff scattering, and axion-electron scattering in our calculation of \(\ell_{\text{MFP}}^{-1}\). Our results are shown in Fig. 5. We note that our exclusions depend on the axion-nucleon coupling, captured by \(g_{3aN}\), and the axion-photon coupling \(g_{a\gamma\gamma}\). If \(g_{a\gamma\gamma}\) vanishes at some scale \(\mu=\mu_{0}\), but \(g_{aee}\neq 0\), then an effective \(g_{a\gamma\gamma}\sim(\alpha/4\pi)g_{aee}/m_{e}\) will be generated via a 1-loop triangle diagram, and in this way one can re-cast our limits4 in terms of those on \(g_{aee}\). We do not include exclusions from SN 1987A, typically plotted in the \(m_{a}-g_{a\gamma\gamma}\) plane, because the values of \(g_{3aN}\) that are required to produce a sufficient axion flux in the Sun lead to axion trapping within a core-collapse supernova [74].5 This is an important distinction between the hadronically coupled axion models we consider here vs. an axion-like particle which couples exclusively to photons (see e.g. [75]). The solar axion constraints we discuss here are therefore complementary to supernova cooling ones. If the axion-nucleon coupling, \(g_{aN}\), is large enough to evade SN 1987A bounds via self-trapping, then it is also large enough to be probed with RHESSI data. Low energy supernovae observations have been used to place constraints on axions which decay in flight and deposit energy to the ejecta [76]. Additionally, axions produced in neutron star mergers have been constrained using X-ray observation [77]. These constraints also disappear in the strong coupling regime, and are complementary to ours. Constraints from NA62 [78], E787 [79], and E949 [80] are subject to \(O(m_{K}^{4}/m_{\rho}^{4})\) hadronic uncertainties in the prediction of \(K\to a\pi\) [67, 81]. Finally, our constraints on \(g_{a\gamma\gamma}\) lie above the ceiling of searches performed with the Borexino collaboration [11] because we are sensitive to decay lengths much shorter than \(d_{\odot}\). This is demonstrative of the way in which constraints from solar axions may complement existing search techniques using accelerator-based experiments, underground detectors, and astrophysical constraints. Footnote 4: This requires accounting for the branching ratio to photons, as well as adjusting the decay length. Footnote 5: This occurs because axion-nucleon scattering leads to mean free paths much shorter than the typical size of a supernova, trapping the axions.
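A direct numerical transcription of Eq. (6) is shown below; the decay and absorption lengths are left as user-supplied inputs, since computing \(\ell_{\text{MFP}}\) requires the in-medium cross sections discussed above, and the function name is ours:

```python
import numpy as np

PHI_PP = 6.0e10        # pp neutrino flux, cm^-2 s^-1
R_SUN_CM = 6.957e10    # solar radius, cm
D_SUN_CM = 1.496e13    # Sun-Earth distance, cm

def photon_flux(g3aN, p_ratio, l_mfp_cm, l_dec_cm):
    """Photon flux at Earth from Eq. (6), taking BR(a -> gamma gamma) = 1.
    p_ratio = p_a / p_gamma at E = 5.49 MeV; lengths in cm."""
    l_abs = 1.0 / (1.0 / l_mfp_cm + 1.0 / l_dec_cm)  # l_abs^-1 = l_MFP^-1 + l_dec^-1
    window = np.exp(-R_SUN_CM / l_abs) - np.exp(-D_SUN_CM / l_dec_cm)
    return 0.54 * abs(g3aN)**2 * p_ratio**3 * window * PHI_PP
```

The bracketed "window" factor makes the qualitative behavior transparent: the flux is exponentially suppressed both when the axion is absorbed or decays inside the Sun and when it survives all the way past the Earth's orbit.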
Constraints from big bang nucleosynthesis (BBN) will generically apply, both because the axions we consider have lifetimes in the vicinity of a few seconds, and because the same reaction, \(p\,d\to{}^{3}\mathrm{He}\,\gamma\), is a key driver of BBN. In the absence of any additional dark sector decay modes, measurements of \(N_{\text{eff}}\) will generically exclude axions with masses below 5 MeV or so. These constraints can be alleviated if the dark sector contains additional degrees of freedom, see e.g. [67]. Searches for gamma rays from the Quiet Sun offer a complementary direct probe of axion (or other light particle) production that is independent of early universe cosmology. We consider a \(90^{\circ}\) opening angle for our signal, meaning all decays between the Sun's surface and Earth's orbit contribute. The monoenergetic nature of the axion means the photon flux is constant in energy (see Section IV for more details on monoenergetic production). We demand that this flux exceed \(1.8\times 10^{-3}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{keV}^{-1}\) for photon energies above 1 MeV, so that this flux is above the observed RHESSI background flux in the front segments.

Figure 5: Contours of \(g_{3aN}\) for which the solar axion flux of photons would overwhelm the RHESSI background measurements for the front segments. Sensitivity is exhausted for \(g_{3aN}\sim 1\times 10^{-5}\); however, further reach can be obtained with better data and/or a more sophisticated analysis.

## IV Model-independent searches Let us now consider model-independent production of LLPs (here called \(\phi\)) which decay via \(\phi\to\gamma\gamma\). In this simplified model, we consider all production to occur at the solar center, and \(\phi\) interacts with SM physics only through its decay. We also assume there is no preferential direction for the decay in the rest frame of \(\phi\), so the flux of photons is a uniform distribution between \(E_{\gamma,\mathrm{min}}\) and \(E_{\gamma,\mathrm{max}}\), where \(E_{\gamma,\mathrm{max/min}}=\frac{1}{2}\big(E_{\phi}\pm\sqrt{E_{\phi}^{2}-m_{\phi}^{2}}\big)\). Inverting this equation, we find \(E_{\phi}\geq E_{\gamma}+m_{\phi}^{2}/(4E_{\gamma})\) (we will call this lowest energy \(E_{\phi,\mathrm{min}}\)). Therefore, if we know the rate of production \(R_{\phi}\) and the decay length \(\lambda\) as a function of \(E_{\phi}\), then we can determine the BSM flux of photons at Earth: \[\frac{\mathrm{d}\Phi_{\gamma}}{\mathrm{d}E_{\gamma}}=\frac{2}{4\pi d_{\odot}^{2}}\int_{E_{\phi,\mathrm{min}}}^{\infty}\mathrm{d}E_{\phi}\,\frac{\mathrm{e}^{-R_{\odot}/\lambda(E_{\phi})}-\mathrm{e}^{-d_{\odot}/\lambda(E_{\phi})}}{\sqrt{E_{\phi}^{2}-m_{\phi}^{2}}}\,\frac{\mathrm{d}R_{\phi}}{\mathrm{d}E_{\phi}}. \tag{7}\] One particularly well motivated morphology is where \(\phi\) has a mono-energetic production spectrum. This would occur if \(\phi\) is produced via a 2-body decay \(\chi\to\phi X\) or via annihilation \(\chi\chi\to\phi X\) for \(v_{\chi}\ll 1\). Performing the integral in Eq. (7) with a delta-function distribution leads to a flux of photons that is constant in energy between \(E_{\gamma,\mathrm{min}}\) and \(E_{\gamma,\mathrm{max}}\).
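For the monoenergetic case the integral in Eq. (7) collapses, and the resulting box spectrum can be written down directly; a minimal sketch (our own variable names, energies in MeV, lengths in cm):

```python
import numpy as np

R_SUN_CM, D_SUN_CM = 6.957e10, 1.496e13

def box_spectrum(E_phi, m_phi, lam_cm, N_phi):
    """Photon flux from Eq. (7) with dR/dE = N_phi * delta(E - E_phi):
    a constant level between E_gamma_min and E_gamma_max."""
    p_phi = np.sqrt(E_phi**2 - m_phi**2)
    decay = np.exp(-R_SUN_CM / lam_cm) - np.exp(-D_SUN_CM / lam_cm)
    level = 2.0 * N_phi * decay / (4.0 * np.pi * D_SUN_CM**2 * p_phi)
    E_min = 0.5 * (E_phi - p_phi)
    E_max = 0.5 * (E_phi + p_phi)
    return E_min, E_max, level  # flux density (per MeV) between E_min, E_max
```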
Remaining more agnostic about the source of \(\phi\) production, we may consider a power-law distribution in energy for \(E_{b}\leq E_{\phi}\leq E_{u}\), \[\left.\frac{\mathrm{d}R_{\phi}}{\mathrm{d}E_{\phi}}\right|_{\mathrm{power}}=R_{c}\times E_{\phi}^{c}\,\Theta(E_{u}-E_{\phi})\,\Theta(E_{\phi}-E_{b}). \tag{8}\] For \(m_{\phi}\ll E_{\gamma},E_{\phi}\) the photon flux is calculable in closed form, \[\left.\frac{\mathrm{d}\Phi_{\gamma}}{\mathrm{d}E_{\gamma}}\right|_{\mathrm{power}}=\frac{2R_{c}}{4\pi d_{\odot}^{2}}\bigg[\bigg(\frac{R_{\odot}\tilde{E}}{\tilde{\lambda}}\bigg)^{c}\bigg(\Gamma\big(-c,\tfrac{R_{\odot}\tilde{E}}{\tilde{\lambda}E_{u}}\big)-\Gamma\big(-c,\tfrac{R_{\odot}\tilde{E}}{\tilde{\lambda}E_{l}}\big)\bigg)-\bigg(\frac{d_{\odot}\tilde{E}}{\tilde{\lambda}}\bigg)^{c}\bigg(\Gamma\big(-c,\tfrac{d_{\odot}\tilde{E}}{\tilde{\lambda}E_{u}}\big)-\Gamma\big(-c,\tfrac{d_{\odot}\tilde{E}}{\tilde{\lambda}E_{l}}\big)\bigg)\bigg]\,, \tag{9}\] with \(\Gamma(a,x)\) the incomplete gamma function, \(E_{l}=\max\{E_{b},E_{\phi,\mathrm{min}}\}\), and \(\tilde{\lambda}\) the decay length at a characteristic energy \(\tilde{E}\). We normalize to the total rate of \(\phi\) produced, \(N_{\phi}\). For the mono-energetic case we have \(N_{\phi}=R_{\phi}\), while for power-law production we have \[R_{c}=\begin{cases}\dfrac{N_{\phi}(c+1)}{E_{u}^{c+1}-E_{b}^{c+1}}&\text{for}\quad c\neq-1\,,\\[2mm] \dfrac{N_{\phi}}{\log(E_{u}/E_{b})}&\text{for}\quad c=-1\,.\end{cases} \tag{10}\] Constraints on the number of \(\phi\) produced per second in the Sun are shown in Fig. 7. Constraints are set as described at the end of Sections II and III.

Figure 6: Comparison of photon fluxes for different \(\phi\) production scenarios. The fluxes are normalized so that the total production rate is \(N_{\phi}=10^{28}\,\mathrm{s}^{-1}\), and the decay length is \(\lambda=10R_{\odot}\) at \(E_{\phi}=1\,\mathrm{MeV}\). We consider \(m_{\phi}\) to be negligibly small.
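The normalization of Eq. (10) follows simply from requiring the power law to integrate to the total production rate; a one-function sketch:

```python
import numpy as np

def power_law_norm(N_phi, c, E_b, E_u):
    """R_c of Eq. (10): fixes dR/dE = R_c * E**c on [E_b, E_u] so that the
    integrated production rate equals N_phi."""
    if c == -1:
        return N_phi / np.log(E_u / E_b)
    return N_phi * (c + 1.0) / (E_u**(c + 1.0) - E_b**(c + 1.0))
```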
## V Future prospects In the above discussion, we have found that repurposing existing RHESSI data is able to provide interesting constraints on light dark sectors with MeV-scale LLPs. Our analysis should be viewed as a proof of principle and certainly underestimates the sensitivity of experiments like RHESSI to new physics models. The major limitations in our analysis are a lack of reliable peak-subtracted spectra and the ability to suppress backgrounds (see [82; 83] for recent work in the keV regime with more sophisticated statistical analyses). For example, much of the background for RHESSI comes not from solar activity but rather from cosmic ray interactions with the Earth's atmosphere, i.e. the radiation comes from the rear rather than the forward field of view. Much of this background can presumably be suppressed (or perhaps eliminated) with a future instrument, especially if a dedicated search is performed. In what follows we sketch potential improvements using a near-term MeV telescope. For concreteness we will anchor our discussion around the COSI satellite.6 Footnote 6: We thank Albert Shih for pointing out the COSI mission to us. RHESSI operated with minimal shielding to minimize weight. This made the instrument effectively an "all sky" observatory with a high level of cosmic ray background activity. In contrast, COSI will operate with active shielding, and its further use of Compton kinematic discrimination offers further background reduction [84]. Moreover, ongoing work to better understand gamma ray emission from the Quiet Sun will further improve on irreducible backgrounds [85; 86]. Other strategies that could be pursued with a future instrument are to go beyond the rate-only analysis presented above. For example, COSI will have 25% sky coverage and excellent angular resolution. One could image the MeV photon flux differential in both energy and angular position. Depending on the lifetime of the LLPs, a "halo" of photons could be searched for outside the solar corona. The shape of the photon distribution will be model dependent, but can be computed using the Monte Carlo simulations outlined above. Similarly, taking advantage of COSI's large field of view, other local planetary systems could be used to search for LLPs. This was suggested recently in the context of Jupiter, where the capture of light dark matter is better motivated [87; 22]. Finally, let us comment on a second channel of interest: \(\mathrm{LLP}\to e^{+}e^{-}\). This may occur for a dark vector which dominantly decays via \(V\to e^{+}e^{-}\), and has recently been considered (in the context of large volume underground detectors) for the same \(p\,d\to{}^{3}\mathrm{He}\,\gamma\) reaction considered here [88]. A search for electrons and/or positrons would require accurate modeling of propagation through magnetic fields in the vicinity of the Earth.

Figure 7: Exclusion of \(\phi\) production for some of the special cases considered. Production rates above the lines are excluded. In all cases, the mass is considered negligible, and there is no production below 10 keV (\(E_{b}=10\,\mathrm{keV}\)).

## VI Conclusions and outlook We have discussed simple particle physics models that predict an MeV flux of photons produced by the Sun. The generic requirement is the existence of some LLP which can efficiently transport energy from the interior (fueled by nuclear reactions) to beyond the Sun's surface. Provided the LLP has a sizeable branching ratio to final states including at least one photon, e.g. \(\gamma\gamma\), \(\nu\gamma\), and/or \(e^{+}e^{-}\gamma\) final states, one can search for energetic gamma rays emanating from the Quiet Sun. We find that constraints from existing RHESSI data, with a very conservative analysis strategy, can probe small pockets of untouched parameter space for both MeV-scale axions and a neutrino dipole portal. In both cases, the RHESSI analysis provides complementary coverage to existing search strategies (including cosmological probes such as BBN). Our major motivation is a simple proof of principle that MeV-scale LLPs with decay lengths larger than the radius of the Sun can be efficiently searched for using solar telescopes. The analysis presented here is conservative and fairly crude; we define exclusions by the condition that the BSM signal prediction exceeds the _total signal_ observed in any energy window by RHESSI. Constraints and/or discovery potential could be substantially improved with a better understanding of instrument backgrounds and more sophisticated analysis techniques. For example, one could make use of angular profiles of incident photons to search for new physics, as an LLP flux will produce a photon flux outside the stellar corona with a predictable angular shape/morphology. We encourage future missions with MeV-scale instrumentation below the cut-off of Fermi-LAT, such as COSI [25; 26], to consider searches for BSM particles, with the Sun being a well-motivated engine for MeV-scale LLPs. ###### Acknowledgements. This work benefited from feedback at the Simons Center for Geometry and Physics, and RP would like to specifically thank Simon Knapen, Rebecca Leane, and Jessie Shelton for useful discussions. We thank Albert Shih for helpful discussions regarding the RHESSI instrument.
We benefited from feedback on this manuscript from Rebecca Leane and Elena Pinetti. This work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and by the Walter Burke Institute for Theoretical Physics. RP acknowledges support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632, the Neutrino Theory Network Program Grant under Award Number DE-AC02-07CH11359, and the US DOE under Award Number DE-SC0020250. ## Appendix A Inefficient production mechanisms In this section we discuss production mechanisms which we have found to be too inefficient to allow for detection prospects with our RHESSI analysis. ### Mass-Mixing portal for HNLs Another BSM model involving HNLs couples \(N\) directly to active neutrinos through added elements in the PMNS matrix [13; 11; 10; 11; 89]. Active neutrinos contain a small admixture of the HNLs along with the three known mass states, \[\nu_{\alpha}=U_{\alpha N}N+\sum_{i=1}^{3}U_{\alpha i}\nu_{i}\,, \tag{A1}\] where \(U_{\alpha N}\) represents the coupling of HNLs to active neutrinos. Since the Sun only has nuclear reactions that produce electron neutrinos, our constraint is on \(U_{eN}\). The \(N\) flux from upscattering is subdominant by orders of magnitude to that from direct production. Therefore, the flux is given by rescaling the neutrino flux, \[\Phi_{N}=|U_{eN}|^{2}\Phi_{\nu}\sqrt{1-m_{N}^{2}/E_{N}^{2}}. \tag{A2}\] For the masses considered here, there are only three decay channels: _i)_ \(N\to 3\nu\), _ii)_ \(N\to\nu\gamma\), and _iii)_ \(N\to\nu e^{+}e^{-}\). As with other production mechanisms, we only consider signals from photons. The geometry of this decay (into a massless neutrino and photon) is identical to the case of the dipole portal. The decay rate for each of the processes follows the general form \[\Gamma_{N\to SM}\propto G_{F}^{2}|U_{eN}|^{2}m_{N}^{5}\,, \tag{A3}\] which has the steep power-law dependence on mass typical of weak decays. We find that, since the decay lengths are always long enough to fall outside the range given in Eq. (1), the sensitivity from RHESSI is subdominant to searches at Borexino (which benefits from a large detector volume) and to direct laboratory searches (see Fig. 8). ### Captured dark matter in the Sun If heavy dark matter, \(\chi\), has interactions beyond gravity, it may scatter within large celestial bodies and become gravitationally captured. The Sun, being by far the most massive object in the solar system, is a strong candidate for searches for signals from captured \(\chi\) [21; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27]. For the case of symmetric dark matter with a long-lived particle mediator, there is the interaction \(\chi\bar{\chi}\to\) LLPs. The energies of these final observable particles are \(\mathcal{O}(m_{\chi})\). However, as discussed in [114], for thermal relic annihilation cross sections, short range interactions with the SM, and \(m_{\chi}\) below a few GeV, most of the \(\chi\) evaporates from the Sun before annihilating. Even Jupiter, which has a cooler core than the Sun, would have evaporation as the dominant effect for \(m_{\chi}\leq 0.7\) GeV [128], far above the energy sensitivity of RHESSI. We note that in the presence of long-range \(\chi-\)SM interactions, evaporation may be suppressed [87; 22]. However, this is a model-dependent scenario, and is not considered in this work.
We also considered the case of asymmetric dark matter with self-interactions via a scalar \(\phi\) with a Yukawa-like interaction \(\mathcal{L}\supset\bar{\chi}\chi\phi\). As there is no annihilation, in the absence of evaporation, the \(\chi\) population grows indefinitely. Virialized dark matter passing through the Sun can scatter on the trapped overdensity and produce LLPs via the bremsstrahlung-like reaction \(\chi\chi\to\chi\chi\phi\). In order to produce MeV gamma rays, we require heavy dark matter, \(m_{\chi}\gtrsim 1\) TeV, such that there is sufficient available kinetic energy \(m_{\chi}v_{\chi}^{2}\gtrsim 1\) MeV.7 In order to produce a sufficiently large flux of LLPs, we require a sizeable \(\chi\chi\to\chi\chi\) cross section. This can only be achieved with a light mediator for TeV-scale (or heavier) dark matter. The cross section relies on small momentum transfers. Non-relativistic kinematics, however, demand a parametrically larger momentum transfer in the bremsstrahlung-like reaction than for elastic scattering. For example, demanding \(E_{\phi}\sim\mathcal{O}(\text{MeV})\) bremsstrahlung requires a momentum transfer on the order of \(\Delta p^{2}\sim m_{\chi}E_{\phi}\sim(1\text{ GeV})^{2}\). Due to this kinematic suppression, we find that RHESSI is incapable of setting competitive limits even with the most generous/optimistic model building choices to maximize the bremsstrahlung-like cross section. Footnote 7: Dark matter nucleon scattering cannot induce MeV bremsstrahlung (i.e. via \(\chi N\to\chi N\phi\)), because the available kinetic energy is set by \(m_{\chi}v_{\chi}^{2}\sim 1\) keV \(\times(v_{\chi}/10^{-3})^{2}\). This is most easily seen in the rest frame of the heavy dark matter.
2304.09782
Quantum Superposition States: Spin Glasses and Entanglement
The spin glass (SG) is a fascinating system that has garnered significant attention due to its intriguing properties and implications for various research fields. One of the key characteristics of spin glasses is that they contain random disorder, which leads to many possible states of the system occurring with very close probabilities. We explore the concept of spin-glass superposition states (SSs), which are equiprobable SSs of possible electronic configurations. Using the Edwards-Anderson (EA) type SG order parameter $q_{EA}$ and the magnetization, we demonstrate that these SSs can be classified based on their contribution to distinguishing magnetic order (disorder), such as the SG, (anti)ferromagnetic (FM), and paramagnetic (PM) phases. We also generalize these superposition states based on the system size and investigate the entanglement of these phase-based SSs using the negativity measure. We show that the SG order parameter can be utilized to determine the entanglement of magnetically ordered (disordered) phases, or vice versa, with negativity signifying magnetic order. Our findings provide further insight into the nature of quantum SSs and their relevance to SGs and quantum magnets. They have implications for various fields, including condensed matter physics, where SGs are a prototypical example of disordered systems. They are also relevant for other fields, such as neural networks, optimization problems, and information storage, where complex systems with random disorder are of great interest. Overall, our study provides a deeper understanding of the behavior of SGs and the nature of quantum SSs, with potential applications in various fields.
Aslı Tuncer, Serhat C. Kadıoğlu
2023-04-19T16:08:58Z
http://arxiv.org/abs/2304.09782v1
# Quantum Superposition States: Spin Glasses and Entanglement ###### Abstract The spin glass (SG) is a fascinating system that has garnered significant attention due to its intriguing properties and implications for various research fields. In condensed matter physics, SGs are a prototypical example of disordered systems and have been studied extensively to understand the behavior of complex systems with random disorder. One of the key characteristics of SGs is that they contain random disorder, which leads to many possible states occurring with similar probabilities. We explore the concept of spin-glass superposition states (SSs), which are equiprobable SSs of possible electronic configurations. Using the Edwards-Anderson (EA) type SG order parameter and the magnetization, we demonstrate that these SSs can be classified based on their contribution to distinguishing magnetic order (disorder), such as the SG, (anti)ferromagnetic, and paramagnetic phases. We also generalize these SSs based on the system size and investigate the entanglement of these phase-based SSs using the negativity measure. We show that the SG order parameter can be utilized to determine the entanglement of magnetically ordered (disordered) phases, or vice versa, with negativity signifying magnetic order. Our findings provide further insight into the nature of quantum SSs and their relevance to SGs and quantum magnets. They have implications for a range of fields, including condensed matter physics, where SGs are a prototypical example of disordered systems. They are also relevant for other fields, such as neural networks, optimization problems, and information storage, where complex systems with random disorder are of great interest. Overall, our study provides a deeper understanding of the behavior of spin glasses and the nature of quantum SSs, with potential applications in a variety of fields. ## I Introduction Spin glasses are a fascinating phenomenon in condensed matter physics due to their unique microscopic properties. These systems contain random disorder, resulting in many possible states occurring with similar probabilities [1; 2; 3], with the system unable to settle into a particular spin state that satisfies the energy minimum for every interaction [4; 5]. Such a situation is called frustration [6]. Even if a system is unfrustrated classically, it may exhibit frustration in the quantum case [7; 8; 9; 10; 11] due to non-commutativity and entanglement [9; 12]. Interestingly, even with just a few entangled elements, novel phenomena can occur in the quantum domain [12; 13; 14; 9]. Quantum fluctuations and entanglement can also play essential roles in the behavior of spin glasses, since spin-glass order occurs at low temperatures [13], where thermal fluctuations do not dominate the features of the spin-glass order. Quantum interference may also lead to unexpected effects, such as the suppression of tunneling [15] or the formation of localized states [16; 17]. Spin glasses, on the other hand, should have frustrated spin(s). When considered from the quantum perspective, the complex behavior of spin glasses is understandable as arising from quantum interference and entanglement of formation. The frustration results in a complex and disordered arrangement of the magnetic spins, which can be described by a distribution of spin configurations rather than a well-defined pattern. Frustration leads to many local minima in the free-energy landscape, making it difficult to single out any one configuration, since they are equiprobable. In contrast to this approach, we investigate the existence of spin glasses in distinct quantum superposition states of possible electronic configurations, without needing any ensemble of spin configurations. As a result, spin glasses can be thought of as becoming frozen in any one of these configurations. We directly measured the local magnetizations of the spins and the Edwards-Anderson SG order parameter to describe the corresponding magnetic phases, for a model which includes all-to-all interactions and randomly distributed antiferromagnetic (AFM) impurities. The paper is organized as follows: in Section II we introduce our model and describe the procedure for identifying the superposition states that contribute to the SG phase. We also extend our results to the well-known magnetic orders (disorder) and classify these superposition states (SSs) with respect to their phase contributions, such as the paramagnetic (PM), ferromagnetic (FM), and antiferromagnetic magnetic phases, in Section III. Once we identify the phase-based superposition states, we discuss the role of entanglement and the relationship between the SG order parameter and entanglement in Section IV. Finally, we conclude the paper in Section V with an outlook on our results and their impact on future theoretical investigations and experimental implementations of current quantum technologies. ## II Model We consider \(N\) Ising spins interacting through an infinite-ranged exchange interaction with the Hamiltonian,
Since frustration leads to many local minima in their free energy landscapes, making it difficult to choose any configuration due to their equiprobable nature. In contrast to this approach, we investigate the existence of spin glasses in distinct quantum superposition states of possible electronic configurations without needing any ensemble of spin configurations. As a result, it can be thought that spin glasses become frozen in any of these configurations. We directly measured the local magnetizations of the spins and the Edwards-Anderson SG order parameter to describe the corresponding magnetic phases, which include all-to-all interactions and randomly distributed antiferromagnetic (AFM) impurities. The paper is organized in Section II we introduce our model and describe the procedure for identifying the superposition states that contributing to the SG phase. We also expand our results to well-known magnetic orders (disorder) and classify these superposition states (SSs) concerning their phase contributions, such as paramagnetic (PM), ferromagnetic (FM), and antiferromagnetic magnetic phases in Section III. Once we identify the phase-based superposition states, we discuss the role of entanglement and the relationship between the SG-order parameter and entanglement in Section IV. Finally, we conclude the paper in Section V with an outlook of our results and the impact on future theoretical investigations and experimental implementations of current quantum technologies. Model We consider \(N\) Ising spins interacting through infinite-ranged exchange interaction with the Hamiltonian, \[H=-\Sigma_{(i,j)}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}. \tag{1}\] Here \(\sigma_{i}^{z}\) is Pauli z-matrices with \(i,j=1,\ldots,N\) and the interaction couplings are quenched variables governed by a Gaussian distribution with a variance \(J^{2}/N\) and zero mean \(<J_{ij}>=0\), \(P(J_{ij})\propto\exp\frac{1}{2}(\frac{NJ_{ij}^{2}}{J^{2}}))\). The randomly distributed antiferromagnetic interaction is the source of frustration in case the system's Hamiltonian can not be minimized, at least for one spin or bound. Although there is no direct analogy between the geometric frustration in classical systems and its quantum counterpart, it has been defined in quantum systems related to entanglement and coherence effects [18]. In this study, we will concentrate mainly on \(N\)-atom system with all-to-all interactions. Since each spin has two states, these Ising spin systems exhibit an exponentially large phase space of \(2^{N}\) configurations. We started from the simplest model with interacting three-qubit system may have frustration, see Figure 1. In the classical case, to see the phase transition, the system should go to the thermodynamic limit, so it would be impossible to see the phase transition on three-spin model. In addition, the thermal or quantum fluctuations drive the system to transition, but we do not consider the thermal fluctuations in this work. To inject the quantum effects into the classical version of the Ising model so-called Edward-Anderson spin-glass Hamiltonian (1), one of the well-known ways is to add a transverse field. The quantum fluctuations arises from a competition between the spin-spin interactions and such an applied external field. In contrast to this approach, the present study assumes that quantum fluctuations and frustration are introduced into the system via direct injection of quantum superposition states. 
Through measurement of the corresponding order parameters, such as the local magnetization of the \(i^{th}\) spin for a given realization \(\alpha\), \(m_{i}^{\alpha}=\langle\sigma_{i}\rangle_{\alpha}\), we were able to determine the specific superposition states that contribute to the various magnetic phases. The EA spin-glass order parameter, which corresponds to overlaps of the local magnetization [19] and is given in (2), was also utilized in our analysis. \[q_{EA}^{\alpha}=\frac{1}{N}\sum_{i=1}^{N}(m_{i}^{\alpha})^{2}. \tag{2}\] We adapt these definitions to our SS concept by taking the averages over the superposition states; the average magnetization is \[m=\frac{1}{N}\sum_{i}^{N}\langle\psi_{suppos.}|\sigma_{i}^{z}|\psi_{suppos.}\rangle, \tag{3}\] and the spin-glass order parameter, by definition, includes the overlap between the discrete states composing the SS. In this way, we have found contributions to the SG order from equally weighted superposition states, in both the energy and computational bases, with non-zero \(q_{EA}\) and \(m=0\). Besides obtaining the SSs that contribute to the SG phase, we obtained the PM and (anti)ferromagnetic SSs, with both \(q_{EA}\) and \(m\) vanishing and with both non-zero, respectively. All these different ordered and disordered SSs are given in Section III, and their explicit classification and association with entanglement are explained in Section IV.
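A small numerical sketch of Eqs. (2) and (3) for equally weighted two-state superpositions of three qubits follows; the example state is the frustrated configuration written out in Eq. (4) below, and the labels and variable names are ours:

```python
import numpy as np
from itertools import product

N = 3
SZ = np.array([1.0, -1.0])               # sigma^z eigenvalues for |0>, |1>
basis = list(product([0, 1], repeat=N))  # computational basis labels

def equal_superposition(i, j):
    """Equally weighted superposition of two computational basis states."""
    psi = np.zeros(2**N)
    psi[i] = psi[j] = 1.0 / np.sqrt(2.0)
    return psi

def order_parameters(psi):
    """Average magnetization of Eq. (3) and EA overlap of Eq. (2)."""
    probs = psi**2  # sigma^z is diagonal in the computational basis
    m_i = np.array([sum(p * SZ[cfg[k]] for p, cfg in zip(probs, basis))
                    for k in range(N)])
    return m_i.mean(), np.mean(m_i**2)

# (|010> + |011>)/sqrt(2), the state of Eq. (4): m = 0 with q_EA nonzero
psi = equal_superposition(0b010, 0b011)
print(order_parameters(psi))   # -> (0.0, 2/3), an SG-type signature
```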
The state of a system consisting of \(N\) spins can be considered as a product state, and the initial state is set to be a superposition of all possible configurations, \(|\psi\rangle=\otimes_{i}^{N}|\rightarrow\rangle\) with \(|\rightarrow\rangle=\frac{1}{\sqrt{2}}(|\uparrow\rangle+|\downarrow\rangle)\), where \(|\uparrow\rangle\) and \(|\downarrow\rangle\) are the eigenbasis of the Pauli-\(z\) operator. Each spin has two possible orientations, up or down, in this representation. However, the superposition states that we create by taking sums of these product states will no longer be product states, owing to the quantum correlations they contain. While the random disordered interactions between the spins force a fixed orientation to minimize the energy, some of the spins may remain in the \(|\rightarrow\rangle\) state, even in the absence of an external field, due to frustration. In Figure 1, the antiferromagnetic (\(J<0\)) interactions may cause geometric frustration in the first triangle, as shown by the two-side-aligned arrows denoting the frustrated spin.

Figure 1: _Top panel_: Graphical representation of the SG superposition state for \(N=3\) qubits. The interactions (lines) between the qubits (blue spheres) are illustrated for FM (\(J>0\)) and AFM (\(J<0\)) couplings as yellow and light blue edges, respectively. _Bottom left_: The possible SS is found by matching the dashed lines. _Bottom right_: One of the \(|\psi_{SG}\rangle\) states contributing to the SG order. The numbers below the levels correspond to the labels of the excited atoms of the state, and dots represent the placement of the ground-level atoms. The circled numbers next to the levels show the corresponding computational basis states. In the smaller box, the corresponding SG-contributing superposition states of the three-spin system, with vanishing magnetization and non-zero SG order parameter, are shown in green.

The qubit state can be in any superposition of the form \(|\psi\rangle=a_{1}|0\rangle+a_{2}|1\rangle\) as long as \(|a_{1}|^{2}+|a_{2}|^{2}=1\). However, this is not the case for classical spins. We illustrate such a frustrated configuration state of a three-body system, separated into two states that do not have frustration in the \(\sigma^{z}\) basis: \[\begin{split}|\psi\rangle&=|0\rangle\otimes|1\rangle\otimes\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right),\\ &=\frac{1}{\sqrt{2}}\left(|0\rangle\otimes|1\rangle\otimes|0\rangle+|0\rangle\otimes|1\rangle\otimes|1\rangle\right).\end{split} \tag{4}\] The corresponding phases can be obtained from the relevant order parameters. Therefore, although many different superposition states could be considered via the energy eigenstates or computational basis vectors, we will only consider equally weighted two-state superpositions in the computational basis. We will continue with the standard basis of energy-state products from now on. This standard basis is \(|e\rangle,|g\rangle\) for one atom; \(|ee\rangle,|eg\rangle,|ge\rangle,|gg\rangle\) for two; and \(|eee\rangle,|eeg\rangle,|ege\rangle,|gee\rangle,|egg\rangle,|geg\rangle,|gge\rangle,|ggg\rangle\) for three atoms. Figure 1 shows the three-atom standard basis at the bottom left. The corresponding positions of the levels in the natural basis are given in the circles next to the levels, and the numbers with the green dots below the levels denote the placement of the excited atoms and the ground-state atoms, respectively. All superposition states contributing to the spin-glass order should satisfy two simple rules: 1. The SG-contributing SSs must have at least one co-excited spin, 2. The total number of excited-spin labels should not exceed the total number of spins in the system. The first rule enforces the overlap between the states, and the second ensures that the SS has at least one frustrated spin in an equally weighted qubit state, a so-called _cat state_, denoted by **C**. Once we address the excited spins, we can write down the spin-glass SSs quickly for each system size. However, in ferromagnetic (antiferromagnetic) SSs, the number of labels can be greater than the number of spins in the system. Understandably, the ferromagnetic order does not have to include any frustrated spin, and all the spins can point in the same (up or down) direction. This can be summarized as: a pair of one excited state and a set of cat states {**C**,**C**,...} contributes to the FM/AFM orders. Fixing a spin in a definite state can be thought of as making a local measurement on it, and this causes a loss of quantumness. We examined this loss in terms of the negativity, a measure of quantum entanglement defined as [20] \[\mathcal{N}=\frac{\|\rho^{T_{A}}\|_{1}-1}{2}. \tag{5}\] Here, \(\rho^{T_{A}}\) denotes the partial transpose, with respect to a chosen subsystem, of the quantum state \(\rho=|\psi_{suppos}\rangle\langle\psi_{suppos}|\), and \(\|\cdot\|_{1}\) is the trace norm. Details of the negativity calculations and results are given in Section IV.
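Negativity as in Eq. (5) can be evaluated by partially transposing the density matrix and summing the absolute eigenvalues; a minimal sketch, with the bipartition choice and function name being ours:

```python
import numpy as np

def negativity(rho, dims, sys_a=0):
    """Eq. (5) via the partial transpose over subsystem sys_a.
    rho: density matrix; dims: tuple of subsystem dimensions."""
    n = len(dims)
    rho_pt = rho.reshape(dims + dims)
    rho_pt = np.swapaxes(rho_pt, sys_a, n + sys_a)  # partial transpose
    rho_pt = rho_pt.reshape(rho.shape)
    eigs = np.linalg.eigvalsh(rho_pt)  # trace norm = sum of |eigenvalues|
    return (np.abs(eigs).sum() - 1.0) / 2.0

# example: a two-qubit cat state (|00> + |11>)/sqrt(2) gives N = 1/2
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2.0)
print(negativity(np.outer(psi, psi), (2, 2)))
```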
The red horizontal line indicates \(q_{EA}\neq 0\) with \(m=0\), corresponding to the spin-glass regime; (b) not only the equally weighted SSs but also expanded superposition states at \(N=3\). We obtained expanded superposition states by giving different weights to the superposed ones. In both figures, the paramagnetic phase is observed at the point \(m=0\) and \(q_{EA}=0\). These findings provide valuable insights into the nature of quantum SSs and their relevance to SG and quantum magnets. The triangular parameter domain can be seen in all cases; moreover, the maximum value of \(q_{EA}\) remains the same. At each system size, it is possible to see all the magnetic phases of interest. In both figures, having various values of \(m\) against the same \(q_{EA}\) value indicates spontaneous symmetry breaking [21]. While this symmetry breaking signifies the FM (AFM) order, the PM phase corresponds to the point \(m=0\) and \(q_{EA}=0\). The red-dashed line starts in the PM regime, and along the line we can obtain the SSs corresponding to the SG regime. For simplicity, we will continue with a particular case of this SS space, the equally weighted superposition states. However, even if the system size enlarges and extra SSs arise, the triangular structure of the diagram remains the same. In other words, the symmetry breaking can be observed for \(N\rightarrow\infty\) in terms of the different \(q_{EA}\) values. Let us return to the equally weighted SSs. Figure 3 shows how binary superpositions in a system of size \(N=3\) correspond to different quantum magnets, including the SG, FM, AFM, and PM phases. As shown in Figure 4(a), we have derived the matrix representation of the binary states that contribute exclusively to the SG phase in systems comprising \(N=3,4,5\), and \(6\) atoms. In order to maintain clarity, we have elected to represent exclusively the SG-SSs, omitting the explicit depiction of the superposition states that pertain to other magnetic phases. However, it is pertinent to note that the off-diagonal elements do indeed play a role in the PM phase, as they are composed of non-overlapping product spin states. Furthermore, the upper and lower triangular regions of the non-diagonal elements in the matrix are associated with the FM or AFM phase, depending on the specific binary superpositions involved. Consequently, this superposition matrix can be regarded as an evenly distributed superposition state space that pertains to the configuration space. The matrix is divided into distinct quantum magnetic phases, thereby serving as a phase diagram. Additionally, we have noted that the phase partition pattern remains consistent and scalable even as the system size is expanded. The differentiation of the \(q_{EA}\) order parameter at each even value of the system size, \(N\), can be associated with replica symmetry breaking in spin-glass systems [13]. For example, while there is a unique \(q_{EA}\) value for \(N=3\), sizes \(N=4\) and \(N=5\) have two different values of \(q_{EA}\). Figure 4 illustrates a recursive pattern, wherein taking a partial trace over the most recently added qubit reduces the system state space to the previous one. Co-excited atoms in SSs can also be understood as an overlap between two states. This overlap scales with the system size, similarly to the differentiation of \(q_{EA}\). The number of possible overlapped atoms increases with each even value of \(N\).
However, unlike the SG-SSs, the PM-phase SSs have neither co-excited spins nor any overlap. All PM-phase SSs are maximally entangled, similar to Greenberger-Horne-Zeilinger (GHZ) states [22]. We illustrate the entanglement of the spin-glass SSs in the first line of Figure 4(b) and the entanglement of the SSs corresponding to the other magnetic phases in its second line. ## IV The new entanglement witness of the magnetic structures The presence or absence of overlap between states in their superposed states corresponds to the ordered/disordered states of the system. Moreover, the overlapping superposition states can exhibit different magnetic orders, such as the spin-glass, ferromagnetic, and antiferromagnetic orders. Firstly, after defining the order/disorder distinction based on the presence or absence of overlap, we observe that paramagnetic (disordered) systems, corresponding to zero overlap, also possess maximum entanglement. From this perspective, we calculated the entanglement of the superposition states using the logarithmic negativity to define the relationship between the amount of overlap and the entanglement. Considering that the Edward-Anderson spin glass order parameter measures the degree of overlap between two different system configurations (superposed states), we investigated the direct relationship between the negativity \(\mathcal{N}\) and the \(q_{EA}\) order parameter.

Figure 3: (Color online.) Schematic representation of SSs contributing to the (a) SG, (b) FM, (c) AFM and (d) PM phases. Arrows depict the equally weighted summation of the states. This figure provides a clear and comprehensive visualization of the various superposition states and their relation to spin glasses, quantum magnets, and quantum superposition states.

The numerical results illustrate the relationship between these two quantities, as shown in Figure 5. We find that the order parameter \(q_{EA}\) decreases linearly with \(\mathcal{N}\), \[q_{EA}=q_{max}-\frac{1}{4}\mathcal{N}. \tag{6}\] Here \(q_{max}=0.25\) is the maximum value of the EA spin-glass order parameter, attained by separable product states. According to the figure, the state starts from the fully entangled state (PM state) with \(\mathcal{N}=1\) and \(q_{EA}=0\). Then, its maximally-entangled portion becomes smaller and smaller until it reaches the separable state with \(\mathcal{N}=0\) and \(q_{EA}=q_{max}=0.25\). Each distinct \(q_{EA}\) value corresponds to a different entangled cluster size. We separated regions corresponding to \(m\)-partite entanglement with \(m=N,N-1,\dots,1\) by dashed lines. However, as the system size approaches infinity (\(N\rightarrow\infty\)), the classification mentioned above is no longer discernible, and the different \(q_{EA}\) values on the fitting line become indistinguishable. This is due to the loss of quantumness in the system, as it becomes classical and the separations between distinct \(q_{EA}\) values vanish. In this analysis, we present both numerical results and a discussion of superposition states, including the multi-partite entangled component, concerning the recursive growing pattern of the SG-order parameter \(q_{EA}\) and the negativity \(\mathcal{N}\), illustrated in Figure 4. Specifically, we consider a three-particle system where each particle can exist in one of the three configurations from the set \(E_{N=3}(SG)=\{\mathbf{C},\mathbf{e},\mathbf{g}\}\), where \(E_{N=3}(SG)\) denotes the ensemble including the probable spin configurations.
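To make the negativity side of this comparison concrete, here is a minimal sketch of Eq. (5) via the partial transpose (a generic implementation, not the authors' code; the single-qubit bipartition is our choice). Note that with Eq. (5) a Bell pair gives \(\mathcal{N}=0.5\), while the logarithmic variant \(\log_{2}\|\rho^{\mathrm{T}}\|_{1}\) mentioned above gives 1.

```python
import numpy as np

def partial_transpose(rho, dims, sys):
    """Transpose subsystem `sys` (0-indexed) of a density matrix over subsystems of local dims `dims`."""
    n = len(dims)
    d = int(np.prod(dims))
    t = rho.reshape(dims + dims)       # axes 0..n-1 index rows, n..2n-1 index columns
    t = np.swapaxes(t, sys, n + sys)
    return t.reshape(d, d)

def negativity(psi, dims, sys=0):
    """N = (||rho^T||_1 - 1)/2 of Eq. (5); the trace norm is the sum of singular values."""
    rho = np.outer(psi, psi.conj())
    s = np.linalg.svd(partial_transpose(rho, dims, sys), compute_uv=False)
    return (s.sum() - 1) / 2

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # the {C,C} pair: maximally entangled
prod = np.kron([1.0, 0.0], [0.0, 1.0])               # separable |eg>
print(negativity(bell, [2, 2]), negativity(prod, [2, 2]))   # 0.5 and 0.0
```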
We identify six possible configuration states for the SG contribution with vanishing magnetization and non-zero \(q_{EA}\). Suppose one more qubit is added to the system. The system state has two possible configuration ensembles: \[E_{N=4}(SG)=\begin{cases}\{\mathbf{C},\mathbf{C},\mathbf{e},\mathbf{g}\},&q_{EA}=0.125,\\ \{\mathbf{e},\mathbf{g},\mathbf{e},\mathbf{g}\},&q_{EA}=0.25,\end{cases} \tag{7}\] which correspond to SG-contributing superpositions. In this case, we observe two distinct values of \(q_{EA}\) corresponding to the two different configuration sets. The permutation group of the first set yields 24 different SSs with \(q_{EA}=0.125\), while the second set yields six SSs with \(q_{EA}=0.25\), in accordance with Figure 4. As we increase the system size to \(N=5\) spins, the number of possible sets remains the same as for \(N=4\), and we observe two additional sets, namely \(\{\mathbf{C},\mathbf{C},\mathbf{C},\mathbf{e},\mathbf{g}\}\) and \(\{\mathbf{C},\mathbf{e},\mathbf{g},\mathbf{e},\mathbf{g}\}\). Notably, the \(q_{EA}\) parameter differs between distinct sets and within a set, depending on the number of entangled particles in a superposition state. For instance, the set \(\{\mathbf{C},\mathbf{C},\mathbf{C},\mathbf{e},\mathbf{g}\}\) can be decomposed as \[\{\mathbf{C},\mathbf{C},\mathbf{C},\mathbf{e},\mathbf{g}\}=\begin{cases}\{|GHZ\rangle_{3},\mathbf{e},\mathbf{g}\}\\ \{|GHZ\rangle_{2},\mathbf{C},\mathbf{e},\mathbf{g}\},\end{cases} \tag{8}\] where the subsets \(|GHZ\rangle_{3}:\{\mathbf{C},\mathbf{C},\mathbf{C}\}\) and \(|GHZ\rangle_{2}:\{\mathbf{C},\mathbf{C}\}\) give the maximally entangled (GHZ) states, with the subscript denoting the number of entangled particles. In general, the \(n\)-particle GHZ state can be written as \[|GHZ\rangle_{n}=\alpha(|g\rangle^{\otimes n}+|e\rangle^{\otimes n}), \tag{9}\] where \(\alpha\) is a normalization constant (\(1/\sqrt{2}\) for the standard GHZ state) [22; 23]. The source of these maximally entangled states is the number of cat states \(\mathbf{C}\) in the permutation sets. If the possible configuration sets lack a _cat state_ \(\mathbf{C}\), the resulting state will be separable. The superposition state may also be partially entangled, in which case the number of cat states is less than \(N-1\). Defining the entangled portion of the state, particularly for larger systems, presents a challenge.

Figure 4: (Color online.) (a) The matrix representation of all equally-weighted superposition states contributing to spin-glass order is given for \(N=3,4,5\), and \(6\) spins from left to right, with non-zero Edward-Anderson order parameter and zero magnetization. As the system size increases, the number of distinct values of \(q_{EA}\) increases, indicating a signal of replica symmetry breaking. The various colors denote different values of \(q_{EA}\) (or \(\mathcal{N}\)), and this pattern repeats itself in subsequent generations from left to right. Diagonal elements correspond to single product states and take the fixed value \(q_{EA}=0.25\); however, their contribution to the spin-glass or FM/AFM order may change with the system size. Another fixed part of the matrix representation is the set of off-diagonal elements with \(q_{EA}=0\), which contribute to the PM order for all system sizes. (b) The same pattern as for \(q_{EA}\) can be obtained for the average negativity \(\mathcal{N}\), with different numerical values. It should be noted that there is a reciprocal relationship between the average negativity \(\mathcal{N}\) and the \(q_{EA}\) order parameter.
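A small sketch of this bookkeeping (our own reading of the ensemble notation, under the same conventions as the earlier snippets): a configuration set without cat states is a simple Kronecker product, while a \(\{\mathbf{C},\ldots,\mathbf{C}\}\) block is jointly superposed into the GHZ state of Eq. (9), not an independent product of cats.

```python
import numpy as np

e, g = np.array([1.0, 0.0]), np.array([0.0, 1.0])
cat = (e + g) / np.sqrt(2.0)

def product_state(factors):
    """Kronecker product of single-qubit states, e.g. [cat, e, g] -> C (x) |e> (x) |g>."""
    psi = np.array([1.0])
    for f in factors:
        psi = np.kron(psi, f)
    return psi

def ghz(n):
    """|GHZ>_n = (|g...g> + |e...e>)/sqrt(2), Eq. (9) with alpha = 1/sqrt(2)."""
    psi = np.zeros(2 ** n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

# One member of E_{N=3}(SG) = {C, e, g}:
psi3 = product_state([cat, e, g])
# Our reading of the first branch of Eq. (8), {|GHZ>_3, e, g}, for N = 5:
psi5 = np.kron(ghz(3), product_state([e, g]))
# Note that |GHZ>_2 differs from the independent product C (x) C:
print(np.allclose(ghz(2), product_state([cat, cat])))   # False
```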
While our focus centers on SG superposition states, analogous conditions arise in the FM and AFM cases. We developed a metric to quantify the entanglement partition of a state in terms of the spin-glass order parameter, \(q_{EA}\), which allowed us to achieve our objective. Figure 5 illustrates the entanglement partitions for various system sizes, along with the corresponding inverse linear relationship between the negativity and \(q_{EA}\). As \(q_{EA}\) decreases from its maximum value, corresponding to separable states, the number of entangled particles increases until the system reaches a state of maximum entanglement. Based on the presence of entangled particle ensembles, Figure 5 provides a classification of the magnetic phases. ## V Conclusions This research demonstrates that spin glasses can exist in equiprobable superposition states of potential electronic configurations in a quantum framework. We propose using cat states to define frustrated spins and link the frustration to quantum interference. By employing the Edward-Anderson spin-glass order parameter and the magnetization, we classify the superposition states based on their contribution to distinguishing magnetic order (or disorder) in various phases, such as the SG, (anti)FM, and PM phases. Our results provide valuable insights into the nature of spin glasses in quantum systems and have implications for developing quantum technologies such as quantum cryptography [24], quantum simulation [25; 26; 27], quantum computation [28; 29; 30; 31], quantum sensing [32] and metrology [33]. We establish a direct correlation between the Edward-Anderson spin glass order parameter, \(q_{EA}\), and a measure of entanglement, the negativity \(\mathcal{N}\). We demonstrate that the spin glass order parameter can also function as an indicator of entanglement, while conversely, the negativity can serve as an order parameter to distinguish between phases of order and disorder, specifically the ferromagnetic (FM) and paramagnetic (PM) phases. This result reflects the fact that entanglement is the ability of qubits to correlate their states with other qubits. Quantum phase transitions (QPTs) are an established finding in condensed matter physics, characterized by significant changes in the ground state properties of a quantum system induced by small variations in an external parameter. For example, QPTs can be induced in spin systems by variations in the magnetic field [34; 35], while in cold atom simulators of Hubbard-like models, changes in the intensity of a laser beam can trigger QPTs [14]. While our study does not address QPTs directly, several methods for driving quantum states to a target state have been studied extensively in the literature. These include quantum entanglement, state transfer [36; 37; 38; 39; 40; 41], and quantum adiabatic evolution [42; 43].

Figure 5: (Color online.) The left panel shows the variation of the negativity with the Edward-Anderson spin-glass order parameter \(q_{EA}\) for particle systems with \(N=4,5,6,7\), and \(8\). The distinct regions in the plot correspond to different values of \(q_{EA}\) that indicate the extent of particle entanglement. Initially, all systems are fully entangled with \(\mathcal{N}=1\), and as \(q_{EA}\) increases, the number of entangled particles decreases, ultimately leading to a separable state with \(\mathcal{N}=0\). In the right panel, the phase-contributing superposition states are classified based on the number of entangled particles in the \(N\)-particle system.
Once we obtain the quantum states corresponding to the different quantum phases, phase transitions can be studied in this context. This study reveals that the structural similarities between the entanglement and the spin glass order parameter persist across systems of varying sizes, as indicated by the consistent patterns shown in Figure 4. Furthermore, our study highlights the potential use of superposition states in defining magnetic order (disorder) [33] in condensed matter physics, which has broader implications for quantum information processing and quantum computing [28; 29; 30; 31]. These findings offer new insights into the nature of quantum superposition states and their relevance to spin glasses and quantum magnets. The categorization of states according to their magnetic properties, utilizing physical order parameters and entanglement, is a critical prerequisite for the effective manipulation of the entangled states that are necessary for quantum information processing and transfer via qubits [44]. We suggest that these superposition states are candidates for new phase-based bits for use in quantum computing [23; 45]. We are currently investigating their possible use in other physical systems. ## Acknowledgements The authors would like to acknowledge the financial support from the Scientific and Technological Research Council of Turkiye (TUBITAK), grant No. 120F100. We would also like to express our gratitude to O. E. Mustecaplioglu and M. Paternostro for their insightful discussions.
2303.02670
Topological Mixed Valence Model for Twisted Bilayer Graphene
Song and Bernevig (SB) have recently proposed a topological heavy fermion description of the physics of magic angle twisted bilayer graphene (MATBG), involving the hybridization of flat band electrons with a relativistic conduction sea. We explore the consequences of this model, seeking a synthesis of understanding drawn from heavy fermion physics and MATBG experiments. We identify a key discrepancy between measured and calculated onsite Coulomb interactions, implicating renormalization effects that are not contained in the current model. With these considerations in mind, we consider an SB model with a single, renormalized onsite interaction between the f-electrons, containing a phenomenological heavy fermion binding potential on the moir\'e AA-sites. This feature allows the simplified model to capture the periodic reset of the chemical potential with filling and the observed stability of local moment behavior. We argue that a two-stage Kondo effect will develop in MATBG as a consequence of the relativistic conduction band: Kondo I occurs at high temperatures, establishing a coherent hybridization at the $\Gamma$ points and a non-Fermi liquid of incoherent fermions at the moir\'e K-points; at much lower temperatures Kondo II leads to a Fermi liquid in the flat band. Utilizing an auxiliary-rotor approach, we formulate a mean-field treatment of MATBG that captures this physics, describing the evolution of the normal state across a full range of filling factors. By contrasting the relative time-scales of phonons and valence fluctuations in bulk heavy fermion materials with those of MATBG, we propose a valley-polaron origin for the Coulomb renormalization and the heavy fermion binding potential identified from experiment. We also discuss the possibility that the two-fluid, non-Fermi liquid physics of the relativistic Kondo lattice is responsible for the strange metal physics observed in MATBG.
Liam L. H. Lau, Piers Coleman
2023-03-05T13:34:45Z
http://arxiv.org/abs/2303.02670v2
# Topological Mixed Valence Model for Twisted Bilayer Graphene ###### Abstract Song and Bernevig (SB) have recently proposed a faithful reformulation of the physics of magic angle twisted bilayer graphene (MATBG) as a topological heavy fermion problem, involving the hybridization of flat band f-electrons with a topological band of conduction electrons. Here we explore the consequences of this analogy, using it to reformulate the SB model as a mixed valence model for twisted bilayer graphene. We show how the interaction with the conduction sea behaves as a U(8) Kondo lattice at high energies and a U(4) Kondo lattice at low energies. One of the robust consequences of the model is the prediction that the underlying hybridization scale of the mixed valent model and the width of the upper and lower Hubbard bands will scale linearly with doping. We also find that the bare hybridization \(\gamma_{0}\) predicted by the SB model is too large to account for the observed local moment behavior at large filling factor, leading us to suggest that the bare hybridization is renormalized by the soft polaronic response of the underlying graphene. ## I Introduction The discovery of magic angle twisted bilayer graphene (MATBG), developing flat bands at "magic angles"[1; 2; 3; 4; 5; 6], has opened a new avenue for the exploration of quantum materials. At integral filling, novel spin and valley polarized [7; 8; 9; 10; 11] Mott insulators develop, which on doping transform into strange metals [12; 13; 14; 15; 16; 17; 18; 19; 20] and novel superconductors [4; 21], all of which have been a subject of intense theoretical study [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. It is as if, by tuning the voltage, one can explore a family of compounds along an entire row of the periodic table. Various experiments suggest that electrons localized in the moire hexagons of MATBG resemble quantum dots [42; 43], forming localized moments with valley and spin degeneracy near integer filling. This evidence includes the lifting of spin/valley degeneracy observed in Landau fans [44], a field-tunable excess electronic entropy at integer filling \(\nu\) [45] and the appearance of upper and lower Hubbard band-like features in scanning tunneling microscopy measurements [46]. While the Bistritzer-MacDonald [2] model for magic angle graphene provides an accurate description of the plane-wave single-particle physics, the presence of local moments, governed by short-range Coulomb interactions, underlines the importance of developing a real-space description of the physics, whilst taking the topology of the system into account [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57]. A recent theory by Song and Bernevig (SB)[58] describes TBG as a topological _heavy fermion_ problem [59]: they find, rather remarkably, that the moire potential of TBG focuses the low energy electron waves into Wannier states that are tightly localized at the center of each moire hexagon; the hybridization of these flat-band ("f") Wannier-states with a topological conduction ("c") band captures the essential mirror, time reversal and particle-hole symmetries of the Bistritzer-MacDonald model. The SB model establishes the correct band symmetries at the \(\Gamma_{M}\) and \(M_{M}\) points of the Brillouin zone, correctly giving rise to a pair of Dirac cones of the same chirality at the \(K_{M}\) points of each valley.
The symmetry anomaly of the \(C_{2z}T\) and particle-hole \(P\) symmetries, responsible for the Dirac cones, is reproduced by a quadratic band touching at \(\Gamma_{M}\) in the conduction band: when hybridization is turned on, this anomaly is injected into the f-electron band. In this paper, we build on early work establishing a Kondo lattice model for MATBG [60; 61; 62; 63]. We explore the consequences of the SB approach, examining the effects of valence fluctuations of well-localized f-states within a topological conduction band. The localized heavy fermions carry spin (\(\sigma=\pm 1\)), valley (\(\eta=\pm 1\)) and orbital (\(\alpha=\pm 1\)) quantum numbers, forming an eight-fold degenerate multiplet that becomes mobile through the effects of valence fluctuations into a topological conduction band. We consider the relevant energy scales from a moire impurity model, bringing out the differences between gate-tuned MATBG and traditional heavy fermion materials [64; 65; 66; 67; 68; 69; 70; 71]. One of the key differences with heavy fermion materials is that the f-states in MATBG do not involve an ionic potential well that progressively deepens with atomic number or filling: the only way to dope the f-states away from neutrality is to raise the chemical potential of all electrons by changing a back-gate potential. This causes the conduction density of states accessible to the f-states to change rapidly as a function of filling, drastically altering the Kondo temperature. We estimate the doping dependence of the Kondo temperature, using a Doniach criterion [72; 73] to show that in the vicinity of integer filling, valley-spin magnetism will become stable, with quantum phase transitions into a heavy Fermi liquid which flank the valley-spin magnetic phases.

Figure 1: Hexagonal lattice of exponentially localized Wannier \(f\)-states (red) on each moire A-A site submerged in a sea of topological relativistic \(c\)-electrons (blue).

## II Song Bernevig model The one-particle Hamiltonian of the SB model \[H_{0}=H_{c}+H_{fc}+H_{cf}, \tag{1}\] hybridizes exponentially localized Wannier \(f\)-electron states centered on the moire A-A sites with topological conduction electrons defined by the Hamiltonian \[H_{c}=\sum_{\begin{subarray}{c}|\mathbf{k}|<\Lambda_{c}\\ aa^{\prime},\eta\sigma\end{subarray}}c^{\dagger}_{\mathbf{k}a\eta\sigma}\mathcal{H}^{(\eta)}_{aa^{\prime}}\left(\mathbf{k}\right)c_{\mathbf{k}a^{\prime}\eta\sigma}. \tag{2}\] Here \(c^{\dagger}_{\mathbf{k}a\eta\sigma}\) creates a conduction electron with orbital, valley and spin quantum numbers \(a\in(1,4)\), \(\eta=\pm 1\) and \(\sigma=\pm 1\) respectively. The orbital matrix \[\mathcal{H}^{(\eta)}\left(\mathbf{k}\right)=\begin{pmatrix}0_{2\times 2}&v_{\star}\left(\eta k_{x}\alpha_{0}+ik_{y}\alpha_{z}\right)\\ v_{\star}\left(\eta k_{x}\alpha_{0}-ik_{y}\alpha_{z}\right)&M\alpha_{x}\end{pmatrix} \tag{3}\] describes the mixing between the four orbitals in each valley \(\eta\). The off-diagonal terms give rise to an asymptotically linear dispersion with velocity \(v_{\star}\), where the Pauli matrices \(\alpha_{\mu}\equiv(\alpha_{0},\vec{\alpha})\) (\(\mu=0,\ldots,3\)) act on the two-dimensional blocks. The first two entries of the matrix (\(a=1,2\)) refer to electrons with \(\Gamma_{3}\) symmetry at the \(\Gamma_{M}\) point, \(\mathbf{k}=0\), while the lower block-diagonal \(a=(3,4)\) describes two orbitals of \(\Gamma_{1}\) and \(\Gamma_{2}\) symmetry, split by a mass \(M\).
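As a quick numerical check of Eq. (3), the following minimal sketch (our own construction; parameter values are taken from the SB list quoted below) confirms the quadratic band touching at \(\Gamma_{M}\), with eigenvalues \(\{0,0,\pm M\}\) at \(\mathbf{k}=0\).

```python
import numpy as np

v_star, M = -4.3e3, 3.7   # meV*Angstrom and meV, from the SB parameter list below

a0 = np.eye(2)
ax = np.array([[0.0, 1.0], [1.0, 0.0]])
az = np.diag([1.0, -1.0])

def H_conduction(kx, ky, eta=+1):
    """4x4 conduction block of Eq. (3) for valley eta; momenta in 1/Angstrom."""
    off = v_star * (eta * kx * a0 + 1j * ky * az)   # upper-right 2x2 block
    H = np.zeros((4, 4), complex)
    H[:2, 2:] = off
    H[2:, :2] = off.conj().T
    H[2:, 2:] = M * ax
    return H

print(np.linalg.eigvalsh(H_conduction(0.0, 0.0)))    # {-M, 0, 0, +M}
print(np.linalg.eigvalsh(H_conduction(0.02, 0.0)))   # relativistic |E| ~ v*|k| once |E| >> M
```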
\(H_{c}\) gives rise to four bands with a four-fold spin-valley degeneracy at each \(\mathbf{k}\). The low energy dispersion is quadratic at \(\Gamma_{M}\) and becomes relativistic, \(|E|\sim v_{\star}k\), at energies \(|E|\gtrsim M\), with a bandwidth \(D\sim v_{\star}K_{\theta}\). The single-particle model for TBG in each valley has a symmetry anomaly in the \(C_{2z}T\) and particle-hole \(P\) symmetries, corresponding to two Dirac cones at the Fermi-level with the _same_ chirality. Since the local orbitals are topologically trivial, the unhybridized conduction electron band-structure carries the symmetry anomaly. The hybridization between the conduction sea and \(f\) electrons at each moire A-A site \(\mathbf{R}\) is described by \[H_{fc}=\gamma_{0}\sum_{\mathbf{R}\alpha\eta\sigma}\left(f^{\dagger}_{\mathbf{R}\alpha\eta\sigma}c_{\mathbf{R}\alpha\eta\sigma}+\text{h.c.}\right) \tag{4}\] and \(H_{cf}=H^{\dagger}_{fc}\). Here \(f^{\dagger}_{\mathbf{R}\alpha\eta\sigma}\) creates an f-electron with orbital character \(\alpha=1,2\), valley and spin quantum numbers \(\eta\) and \(\sigma\). The total degeneracy of the bare f-states is thus \(2N_{f}=8\). \(c^{\dagger}_{\mathbf{R}\alpha\eta\sigma}\) creates an un-normalizable, non-exponentially localized Wannier state centered at \(\mathbf{R}\) for the c-electron with the same quantum numbers, and is related to the normalizable continuum \(c^{\dagger}_{\mathbf{k}a\eta\sigma}\) states as \[c_{\mathbf{R}\alpha\eta\sigma}=\frac{1}{\sqrt{N_{s}}}\sum_{\begin{subarray}{c}\mathbf{k},\mathbf{G},a\\ |\mathbf{k}+\mathbf{G}|<\Lambda_{c}\end{subarray}}e^{i\mathbf{k}\cdot\mathbf{R}}[\phi^{(\eta)}(\mathbf{k}+\mathbf{G},\gamma_{0})]_{\alpha a}c_{\mathbf{k}+\mathbf{G}\,a\eta\sigma}, \tag{5}\] where the sum over all momenta has been divided up into a sum over reciprocal lattice vectors \(\mathbf{G}\) of the moire lattice and a sum over momentum \(\mathbf{k}\) restricted to the first moire Brillouin zone. The matrix form factor is \[\phi^{(\eta)}\left(\mathbf{k},\gamma_{0}\right)=e^{-\mathbf{k}^{2}\xi^{2}/2}\left(\alpha_{0}+a_{\star}\left(\eta k_{x}\alpha_{x}+k_{y}\alpha_{y}\right),\ \ 0_{2\times 2}\right), \tag{6}\] where \(\gamma_{0}\) and \(a_{\star}\) set the magnitude and length scale of the hybridization and \(\xi\) is a damping factor proportional to the real space spread of the localized f-Wannier states. Remarkably, the focusing effect of interference off the moire potential produces Wannier states of size \(\xi\sim a_{M}/5\), about a fifth of the moire unit cell size \(a_{M}\). The natural bandwidth of the free theory is given by \(D\sim v_{\star}K_{\theta}\), but after hybridization, \(M\) becomes the bandwidth of the moire flat bands, while \(\gamma_{0}\) is the gap to the higher energy bands. The approximate scales for the parameters in the SB model are [58] \(D=v_{\star}K_{\theta}\approx 133\,\)meV, \(\gamma_{0}=-25\,\)meV, \(M=3.7\,\)meV, \(v_{\star}=-4.3\,\mathrm{eV}\,\AA\), \(K_{\theta}=0.031\,\AA^{-1}\), \(a_{\star}=65\,\AA\), and \(\xi=0.225a_{M}=27\,\AA\) for the size of the Wannier states, which gives \(\tilde{\xi}=\xi K_{\theta}=0.84\). The non-interacting SB model reproduces the band-structure of the Bistritzer-MacDonald model, giving rise to a central band of width \(2M\), split off by an energy \(\gamma_{0}\) from the upper and lower bands.
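A one-line consequence of these numbers, used in the flat-band width estimate below, is the damped hybridization at the \(K_{M}\) point (a small sketch of our own arithmetic; signs are dropped):

```python
import numpy as np

gamma0, xi, K_theta = 25.0, 27.0, 0.031       # meV, Angstrom, 1/Angstrom
xi_t = xi * K_theta                            # dimensionless spread, ~0.84
gamma_K = gamma0 * np.exp(-xi_t ** 2 / 2.0)    # hybridization at K_M, ~0.7*gamma0 ~ 17.6 meV
print(xi_t, gamma_K)
```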
The central band can contain up to 8 electrons, and in the non-interacting model, an applied chemical potential causes the band-structure to move rigidly, so that by changing the chemical potential \(\mu\) over a range from \(-M\) to \(M\), the electron count per moire unit cell can be tuned from 0 to 8. ## III Qualitative considerations In this section, we consider the effect of interactions on the SB model, examining the criteria for local moment formation. We then discuss the effects of adiabatically turning on the interactions, assuming that the ground-state remains a Fermi liquid. Finally, we consider the Doniach criterion for the stability of the heavy Fermi liquid. This involves a comparison of two interaction scales: the Kondo temperature [64], where local moments are perfectly screened by the conduction sea, and the magnetic RKKY temperature [74; 75; 76], describing the strength of the magnetic interactions between the local moments, mediated by the conduction c-electrons. The competition between the Kondo scale and the magnetic RKKY scale is expected to lead to a sequence of spin-valley magnetic and heavy Fermi liquid phases separated by quantum phase transitions. We depict the relevant energy scales for the formation of local moments and correlated phases as a function of filling in the Doniach phase diagram shown in Fig. (5). Amongst the various electron interactions considered by SB, the largest is the on-site Coulomb repulsion \(U_{0}\) between the \(f\)-electrons. A back-of-the-envelope calculation using the Wannier radius \(\xi=27\AA\) gives \(U_{0}\sim e^{2}/(4\pi\epsilon_{0}\xi)\sim 53\,\)meV, a number quite close to the value \(U_{0}=60\,\)meV estimated by SB. This large onsite interaction stabilizes integer occupations \(Q\) of the f-states. While the various interactions between the \(f\)- and \(c\)-electrons are comparable with \(U\), we argue that the low density of conduction electrons in the model allows us to neglect these terms in a simplified discussion. At neutrality, twisted bilayer graphene contains 4 f-electrons in each moire unit cell. An excess (or deficit) of \[\nu_{0}=Q-4 \tag{7}\] f-electrons is accomplished by applying a gate voltage. The effective Hamiltonian of the system is an Anderson lattice model, \[H=H_{0}-\mu\tilde{N}+\frac{U}{2}\sum_{\mathbf{R}}(n_{f\,\mathbf{R}}-4)^{2}, \tag{8}\] where \(\mu\) is the chemical potential provided to all electrons by the gate, \(\tilde{N}\) is the total electron count relative to neutrality, \(n_{f\,\mathbf{R}}\) is the (instantaneous) number of f-electrons in a moire unit cell at site \(\mathbf{R}\), while \(U\) is the effective interaction between the f-electrons. In practice \(U\) may be smaller than the bare value \(U_{0}\sim 60\,\)meV obtained from the Song-Bernevig model, due to renormalization effects. There are several energy scales to consider for the formation of local moments. The first we discuss is the ionization energy required to add an electron to, or remove one from, a single moire AA-site "atom" with \(n_{f\,\mathbf{R}}=Q\) electrons. We show that the ionization energies depend on the on-site repulsion \(U\) and the filling factor \(\nu\) of the moire AA-site "atom". In the Anderson model, the hybridization and Coulomb interaction compete with one another. Local moments only form when the Coulomb scale exceeds the non-interacting resonance width of the f-electron bound state immersed in a sea of c-electrons.
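To see the resulting charge staircase explicitly, here is a minimal sketch of the single-site energetics of Eq. (8) (our own illustration; the value \(U=35\,\)meV is the renormalized one adopted later in the text).

```python
import numpy as np

U = 35.0  # meV

def ground_Q(mu):
    """Minimize E(Q) = U/2 (Q - 4)^2 - mu*Q over Q = 0..8 for a single site of Eq. (8)."""
    Q = np.arange(9)
    return int(Q[np.argmin(U / 2 * (Q - 4) ** 2 - mu * Q)])

# The occupation steps by one each time mu crosses U*(nu0 + 1/2), producing the sawtooth:
for mu in np.linspace(-4 * U, 4 * U, 9):
    print(f"mu = {mu:7.1f} meV  ->  Q = {ground_Q(mu)}")
```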
### Coulomb Blockade Physics We begin by considering the unhybridized atomic limit of the Anderson model, given simply by \[H_{A}(\mathbf{R})=\frac{U}{2}(n_{f\,\mathbf{R}}-4)^{2}-\mu n_{f\,\mathbf{R}}, \tag{9}\] where \(n_{f\,\mathbf{R}}-4\) is the deviation from neutrality. In a conventional heavy electron system, the neutrality point is determined by the atomic number of the rare earth ions, but in MATBG, the neutrality point is the same for all fillings of the lattice, and the filling of the f-state is determined by the chemical potential, which acts equally on both conduction and f-electrons. The stability of the quantum dot with charge \(Q\) requires that the ionization energies \[\Delta E_{\pm}^{Q}=E_{Q\pm 1}-E_{Q}=\frac{U}{2}\pm(U\nu_{0}-\mu) \tag{10}\] are both positive. The energies \(\pm\Delta E_{\pm}^{Q}\) describe the offset locations of the upper and lower Hubbard peaks in the f-spectral function (Fig. 4), and the requirement that both are positive restricts the chemical potential to lie in the range \[\left|\mu-U\nu_{0}\right|<\frac{U}{2}. \tag{11}\] Thus to achieve a filling \(\nu_{0}\), the chemical potential provided by the back-gate must be close to \(U\nu_{0}\). For each unit increase in filling factor, the conduction band sinks down by an amount \(U\). At a finite temperature \(T\), the local moment will remain stable against ionization provided \[k_{B}T\leq U/2-\left|U\nu_{0}-\mu\right|. \tag{12}\] This defines a saw-toothed phase boundary for the region of local moment behavior, as shown in Fig. (2). A finite hybridization causes the f-valence to fluctuate through the virtual emission or absorption of electrons, \(f^{Q}\rightleftharpoons f^{Q-1}+e^{-}\) and \(e^{-}+f^{Q}\rightleftharpoons f^{Q+1}\). At energy scales below \(U/2-U|\nu-\nu_{0}|\), the physics of the low-energy region is then described by a voltage-tuned "Kondo lattice"[77; 78]. ### Adiabatic Considerations: the Heavy Fermi Liquid We now consider the effect of adiabatically turning on the Coulomb interaction. The non-interacting SB model describes a narrow band of f-electrons, with a linear Dirac dispersion of fixed chirality centered at the \(K_{M}\) points, with a Dirac velocity \(v_{D}\) (derived in Appendix A). For instance, in the limit where \(a_{\star}=0\), \[v_{D}\approx\left[3m(\gamma_{K}/D)^{2}\left(2+\tilde{\xi}^{2}\right)\right]v_{\star}, \tag{13}\] where \(m=M/D\), \(D=v_{\star}K_{\theta}\) and \(\gamma_{K}=\gamma_{0}e^{-\tilde{\xi}^{2}/2}\) is the strength of the hybridization at the \(K_{M}\) point. The approximate bandwidth of this flat band is given by \(W=v_{D}K_{\theta}\). On doping away from neutrality to a filling \(\nu>0\), the Dirac points sink into the Fermi sea, producing two approximately circular Fermi surfaces of predominantly f-character, with four-fold valley-spin symmetry, centered at the \(K_{M}\) points, each of area \(A_{FS}\sim\pi k_{F}^{2}\), which satisfies Luttinger's sum rule, which we can write as \[8\frac{A_{FS}}{A_{M}}=\nu, \tag{14}\] where \(A_{M}\) is the area of the moire Brillouin zone (see Fig. 3b) and \(\nu\) the filling of the flat band.

Figure 2: Sawtooth phase diagram for the "atomic limit" of MATBG, as a function of effective filling factor \(\nu_{E}=\mu/U\). White regions denote a stable local moment with \(Q\) f-electrons, bounded by the ionization energies \(\Delta E_{\pm}\) for adding or removing one electron.
The non-interacting f-electrons thus form a Dirac sea of relativistic chiral fermions with a bandwidth of approximately \(v_{D}k_{F}\), occupying a fraction \(\nu/8\) of the Brillouin zone. The SB model also predicts that at the \(\Gamma_{M}\) point, the energy eigenvalues are \(\epsilon_{\Gamma}=\{\pm M-\mu,\pm\gamma_{0}-\mu\}\), where those with energy \(\pm M\) are entirely of conduction character, whereas those with energy \(\pm\gamma_{0}-\mu\) are an equal admixture of f and topological conduction electrons. As a whole however, the flat band is predominantly of f-character, dominating the bulk properties. Let us consider what happens when interactions are adiabatically introduced at constant filling factor \(\nu\) to produce a Landau Fermi liquid. Now the f-states will renormalize with a quasiparticle weight \(Z_{f}\) characterizing the \(K_{M}\) points of the Brillouin zone. So long as the ground-state remains a Fermi liquid, the Fermi surface area remains an adiabatic invariant, which will cause the f-states to remain pinned close to the Fermi energy, with energies \(\epsilon_{\mathbf{k}}=\lambda\pm v^{*}_{D}|\mathbf{k}-\mathbf{K}_{M}|\), where \(v^{*}_{D}=Z_{f}\,v_{D}\) is the renormalized Fermi velocity while \(\lambda\sim W^{*}=Z_{f}\,W\) is of order the renormalized band-width. By contrast, the Coulomb blockade physics guarantees that the chemical potential must take the value \(\mu\sim U\nu\), so that the energy eigenvalues around the \(\Gamma_{M}\) point will take the form \[\epsilon_{\Gamma}=\left(\pm M-U\nu,-\frac{1}{2}U\nu\pm\sqrt{\left(\frac{U\nu}{2}\right)^{2}+Z_{f}\left(\gamma_{0}\right)^{2}}\right). \tag{15}\] This produces a large distortion in the renormalized band-structure around the \(\Gamma\) point. If we connect up the Dirac dispersion to the two points at \(\pm M\), we see that the shape of the narrow band distorts on doping into a dove-shaped configuration for positive \(\nu\), and a moustache-shaped one for negative \(\nu\). This strong renormalization effect is also expected to produce a second light Fermi surface at high doping, nestled around the \(\Gamma_{M}\) point (see Fig. 3c). ### Anderson Criterion We now extract the key scales of the SB Anderson lattice by considering the corresponding impurity Anderson model, formed from a single moire f-state embedded in a relativistic electron gas. This essential simplification cannot describe the detailed effects of coherence that develop in the lattice, but it can provide us with a simple understanding of the key energy scales in the lattice.

Figure 3: Schematic contrasting the band-structure of MATBG in the SB model at fixed filling for a) zero hybridization, b) finite hybridization, zero interaction, showing heavy Fermi surfaces at the \(K_{M}\) points and c) finite interaction, showing the downward movement of the conduction band by \(U\nu\), and the appearance of a light electron pocket at the \(\Gamma_{M}\) point.

Figure 4: a) Energy level diagram showing the position of the Fermi energy relative to the f-level excitation energies for the case \(Q=3\), \(\nu_{0}=-1\). b) Spectral function for the f-state in an impurity model, showing upper and lower Hubbard resonances and the central Kondo resonance.

The relativistic character of the conduction sea gives rise to a density of states per moire cell per valley per spin that is linear in energy at high energies. The density of states per spin per valley per orbital in the channel that hybridizes with the
f-states is \[\rho_{c}(E)=\frac{A}{D^{2}}\times\begin{cases}|E|,&|E|>M,\\ \frac{1}{2}(|E|+M),&|E|<M,\end{cases} \tag{16}\] where \(A=2\pi/(3\sqrt{3})\approx 1.2\). In the presence of a chemical potential, this density of states shifts downwards in energy by an amount \(\mu\), and now \(\rho_{c}(E,\mu)=\rho_{c}(E+\mu)\). If we ignore the effects of interaction, the hybridization width (half width at half maximum) of an isolated non-interacting Anderson impurity is given by \[\Delta_{0}[\mu]=\pi\rho_{c}(\mu)\gamma_{0}^{2}, \tag{17}\] where \(\gamma_{0}\) is the strength of the hybridization. It is instructive to compare \(\Delta_{0}(0)\) at neutrality with the band-width \(W=v_{D}K_{\theta}\) obtained in the SB band structure. From (13), we obtain \[W=v_{D}K_{\theta}=3(1+(\xi K_{\theta})^{2})\left(\frac{\gamma_{K}}{D}\right)^{2}M\approx 2.5\left(\frac{\gamma_{0}}{D}\right)^{2}M,\] while from (16), we obtain \[\Delta_{0}(\mu=0)=\frac{\pi A}{2}\left(\frac{\gamma_{0}}{D}\right)^{2}M\approx 1.9\left(\frac{\gamma_{0}}{D}\right)^{2}M, \tag{18}\] so the two quantities are of comparable magnitude at neutrality. It is interesting to note that the hybridization entering into the flat band-width is the magnitude of the hybridization \(\gamma_{K}=\gamma_{0}e^{-\xi^{2}K_{\theta}^{2}/2}\sim 0.7\gamma_{0}\) at the \(K_{M}\) point, where the f-excitations are concentrated. This comparison provides an important cross-check on our use of an equivalent impurity model to determine the key energy scales of the SB Anderson lattice model. Now for a filling factor \(\nu\), we expect a shift in the chemical potential \(|\mu|\approx U|\nu|\), and since \(U\gg M\), on doping away from neutrality, for \(U|\nu|>M\), we expect the hybridization width and the flat band-width \(W\) to both _increase linearly_ with filling \(\nu\). This robust observation, depending only on the linear density of states of the topological conduction electrons and the doping-dependent shift of the chemical potential, is an important consequence of interactions in the SB model. When the interactions are turned on, the f-spectral function splits into an upper and a lower Hubbard peak at locations \(E_{+}^{Q}\) and \(-E_{-}^{Q}\), with a Kondo resonance in the center, as shown in Fig. 4. The upper and lower resonances have half-widths of order \((2N_{f}-Q)\Delta_{0}\) and \(Q\Delta_{0}\), where \(2N_{f}=8\) for TBG. Well-defined local moment behavior then requires that the separation \(U\) between the upper and lower Hubbard bands is larger than their combined half-widths, i.e. \[\Gamma_{0}=2N_{f}\,\Delta_{0}[\mu]\Big{|}_{|\mu|=U|\nu|}=8\pi A\left(\frac{\gamma_{0}}{D}\right)^{2}U|\nu|\ll U. \tag{19}\] We shall see that this is also the condition that the exponential factor in the Kondo temperature is smaller than one, see (28). Curiously, the strength of interaction \(U\) factors out of this expression, implying that local moment formation is only expected for fillings \[|\nu|<\frac{1}{8\pi A}\left(\frac{D}{\gamma_{0}}\right)^{2}, \tag{20}\] an estimate that depends purely on the ratio of hybridization to bandwidth. With the hybridization \(\gamma_{0}\approx 25\,\)meV from Song-Bernevig, \(D/\gamma_{0}\approx 5.2\), this would imply that local moments only form for \(|\nu_{\text{eff}}|=|\mu/U|\lesssim 0.9\). If we use the slightly smaller value of the hybridization at the \(K_{M}\) point, \(\gamma_{K}\), we obtain \(|\nu|<1.25\).
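A minimal numerical sketch of Eqs. (16), (17) and (19) (our own evaluation; SB parameter values assumed, and the renormalized \(\gamma_{0}^{*}\approx 6.3\,\)meV anticipates the discussion that follows):

```python
import numpy as np

A, D, M, U = 2 * np.pi / (3 * np.sqrt(3)), 133.0, 3.7, 35.0   # meV units

def rho_c(E):
    """Conduction DOS per spin/valley/orbital, Eq. (16)."""
    E = np.abs(E)
    return (A / D ** 2) * np.where(E > M, E, 0.5 * (E + M))

def Delta0(mu, gamma0):
    """Non-interacting resonance width, Eq. (17)."""
    return np.pi * rho_c(mu) * gamma0 ** 2

# Local-moment criterion, Eq. (19): Gamma_0 = 8*Delta_0 at |mu| = U|nu| must stay well below U.
for gamma0 in (25.0, 6.3):
    for nu in (1, 2, 3):
        print(gamma0, nu, 8 * Delta0(U * nu, gamma0) / U)
```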
Yet, there is considerable evidence of local moment behavior from entropy measurements and STM measurements even at a filling factor of \(\nu=+3\) [45; 46], suggesting the need for a substantially smaller hybridization. If we take, for example, \(\Gamma_{0}=U/4\), then this requires \(\gamma\sim D/\sqrt{360}\approx 6.3\,\)meV. This discrepancy between the SB value \(\gamma_{0}\sim 25\,\)meV and the value \(\gamma\) required for local moment physics is concerning, and suggests that there may be an additional source of screening that lies outside the SB model. A likely candidate for this effect is the highly polarizable phonon system of bilayer graphene. In TBG the characteristic Debye frequency is considerably larger than the flat-band width: in this region of parameter space, phonons respond adiabatically to valence fluctuations, giving rise to polaronic renormalization effects. When an electron from the conduction sea tunnels into the tightly bound f-state, the greater electron density at the moire AA-site pulls the surrounding carbon nuclei in, reducing the hybridization to \[\gamma_{0}\rightarrow\gamma_{0}\exp\left(-n_{ph}/2\right), \tag{21}\] where \(n_{ph}\) is the number of phonon modes that are condensed by the breathing motion. To reduce the hybridization by a factor of four would require the condensation of \(n_{ph}\sim 3\) phonons. Such effects would likely lead to a strongly frequency- and temperature-dependent renormalization of \(\gamma_{0}\), which lies beyond the scope of the current work. In the current work, we shall assume a renormalized value \(\gamma_{0}^{*}\) such that \(\Gamma_{0}=U/4\) at \(|\nu|=3\). The characteristic length scale of the hybridization \(a_{\star}\) is approximately the distance between the moire AB site and the AA site where the exponentially localized Wannier f-states reside. We expect \(a_{\star}\) to remain approximately constant, even with the condensation of phonon modes. ### Coqblin Schrieffer Transformation and the Kondo Temperature The resulting low energy effective Hamiltonian \[H_{K}=\sum_{v_{\star}|\mathbf{k}|<D}c_{\mathbf{k}}^{\dagger}\mathcal{H}\left(\mathbf{k}\right)c_{\mathbf{k}}+J_{\text{eff}}\sum_{\mathbf{R}BB^{\prime}}\tilde{c}_{\mathbf{R}B}^{\dagger}\tilde{c}_{\mathbf{R}B^{\prime}}S_{BB^{\prime}}\left(\mathbf{R}\right) \tag{22}\]
An emergent crossover temperature from Kondo physics is the Kondo temperature \(T_{K}\) at which the local moments at each moire AA-site is screened by the conduction c-electrons, forming a Kondo singlet at each moire AA-site. The Kondo temperature \(T_{K}\) at integer fillings \(\nu_{0}\) can be estimated as \[T_{K}[\nu_{0}] \sim\Lambda\exp\left(-\frac{1}{(8J_{\mathrm{eff}}\rho_{c})}\right) \tag{26}\] \[\approx\Lambda\exp\left(-\frac{U\pi(1-4(\mu/U-\nu_{0})^{2})}{32 \Delta_{0}[\mu]}\right)\Bigg{|}_{\mu=U\,\nu_{0}} \tag{27}\] where \(\Lambda\) is an appropriate cutoff. A more careful calculation reveals that \(\Lambda=M\) at neutrality, and \(U/2\) at \(\nu_{0}\neq 0\), giving \[T_{K}[\nu_{0}]=\begin{cases}M\exp\left(-\frac{1}{16A}\frac{U}{M}\left(\frac{D} {\gamma_{0}^{*}}\right)^{2}\right),&(\nu_{0}=0),\\ \frac{U}{2}\exp\left(-\frac{1}{32A\nu_{0}}\left(\frac{D}{\gamma_{0}^{*}}\right) ^{2}\right),&(\nu_{0}\neq 0).\end{cases} \tag{28}\] Note that the size of \(U\) disappears from the exponent for non-zero \(\nu_{0}\), and that the condition that \(T_{K}\ll U/2\) at finite \(\nu_{0}\) is equivalent to condition (20) considered in section III.3. In Fig. (5), we interpolate the Kondo temperature, shown in green, between the non-interacting resonance width \(\Delta_{0}[Uv]\) and \(T_{K}[\nu_{0}]\). ### Magnetic RKKY Temperature The magnetic moments at each moire AA-site induces a cloud of Friedel oscillations in the spin-valley density with a magnetization profile that couples to neighboring local moments via a long-range RKKY interaction [74; 75; 76]. The RKKY interaction drives long-range spin-valley magnetic ordering and we can estimate the RKKY magnetic energy scale Figure 5: Proposed doping-temperature phase diagram for twisted bilayer graphene based on an impurity model for the f-states in MATBG. Here \(\nu_{E}=\mu/U\) is the effective doping. The hybridization width \(\Gamma_{0}[\nu]=2N_{f}\,\Delta_{0}[\mu]\) (purple dotted line) is an approximately linear function of doping. Ionization energies \(\Delta E_{\pm}^{Q}\) are denoted by blue lines. Light blue regions are mixed valent. At temperatures smaller than the two ionization energies \(E_{\pm}^{Q}\), (white regions), stable local moments develop. In the presence of hybridization, the criteria for Kondo physics is that resonance width \(\Gamma_{0}=2N_{f}\,\Delta_{0}\), dotted purple line, is smaller than the ionization energies, blue lines, i.e \(2N_{f}\,\Delta_{0}\ll\min(E_{+}^{Q},E_{-}^{Q})\). We have chosen \(\Gamma_{0}(|\nu|=3)=U/4\), and \(U=35\)meV to allow for local moment physics at \(\nu_{E}=3\). The Kondo temperature \(T_{K}\), below which the f-states will develop coherence, is shown as an orange curve and the magnetic RKKY scale \(T_{RKKY}\) as a dashed red line. Red regions denote valley-spin magnetic phases where \(T_{RKKY}>T_{K}\). In the dark blue regions, where \(T_{K}>T_{RKKY}\), a heavy Fermi liquid ground state is stabilized in which where each moire \(AA\)-site coherently scatters the conduction electrons. The competition between \(T_{K}\) and \(T_{RKKY}\) leads to a series of quantum phase transitions [72; 73] straddling each integer filling factor. A weakly coupled Song-Bernevig phase develops at low temperatures in the blue regions. \(T_{RKKY}\) to be \[T_{RKKY}\left[\mu\right]\sim 8J^{2}\rho_{c}\left(\mu\right)=\frac{8AD^{2}}{U} \left(\frac{2\gamma_{0}^{*}}{D}\right)^{4}\frac{\mu}{U}, \tag{29}\] where \(J=4(\gamma_{0}^{*})^{2}/U\) is the characteristic Kondo scale. 
We depict the RKKY temperature as the red dashed line in Fig. (5). ### Doniach Criterion In the regimes near integer filling, where \(T_{RKKY}>T_{K}\), a valley-spin magnetic phase develops, depicted as the highlighted red regions in Fig. (5). Away from integer filling, but in the regions where local moments can form, the Kondo temperature can be larger than the magnetic energy scale \(T_{RKKY}\), \(T_{K}>T_{RKKY}\). In these regions of the phase diagram, a heavy Fermi liquid ground state is stabilized, shown as highlighted dark blue in Fig. (5), and every \(AA\)-site coherently scatters conduction electrons. Consequently, the competition of the Kondo scale and the magnetic RKKY scale leads to a series of quantum phase transitions straddling each integer filling factor. The physics in the regions where local moments are not stabilized is governed by a weakly coupled Song-Bernevig model, leading to a Fermi liquid. ## IV Mixed Valence Model We now outline a theory that enables us to develop a mean-field picture of the valence fluctuations and Kondo effect in MATBG. At an energy scale greater than \(U/2\), the valence of the moire ion begins to fluctuate, \(Q\rightleftharpoons Q\pm 1\), with excitation energies \(\Delta E_{\pm}=U/2\pm(U\nu_{0}-\mu)\), and the moire f-state can no longer be simply described by local moments. We can treat the fluctuations as vector bosons [79; 80] \[w\equiv\begin{pmatrix}t\\ b\end{pmatrix} \tag{30}\] which describe the addition and removal of charge in the Wannier localized f-state. The physical states are: \[|f^{Q}\rangle =\prod_{j=1,Q}f_{\sigma_{j}}^{\dagger}|0\rangle,\] \[|f^{Q-1}\rangle =b^{\dagger}\prod_{j=1,Q-1}f_{\sigma_{j}}^{\dagger}|0\rangle,\] \[|f^{Q+1}\rangle =t^{\dagger}\prod_{j=1,Q+1}f_{\sigma_{j}}^{\dagger}|0\rangle, \tag{31}\] subject to the constraint \(n_{f}+n_{b}-n_{t}=Q\). ### Single Impurity Case To develop the theory for the lattice, it is instructive to first consider the simplified case of a mixed valent impurity model with a \(2N_{f}\)-fold degeneracy. The action for a single impurity mixed valence model is \[S=\int_{0}^{\beta}d\tau\left\{L_{c}+L_{f}+L_{b}+\gamma_{0}\sum_{\mathbf{k},B}\left[c_{\mathbf{k}B}^{\dagger}f_{B}(b^{\dagger}+t)+\mathrm{H.c}\right]\right\}, \tag{32}\] where \[L_{c}=\sum_{\mathbf{k},B}c_{\mathbf{k}B}^{\dagger}(\partial_{\tau}+\epsilon_{\mathbf{k}}-\mu)c_{\mathbf{k}B}, \tag{33}\] and \[L_{f}=\sum_{B}f_{B}^{\dagger}(\partial_{\tau}+\lambda)f_{B}-(\lambda+\mu)Q, \tag{34}\] describes the unhybridized fermions, and \[L_{b}=b^{\dagger}(\partial_{\tau}+\Delta E_{-}+\lambda)b+t^{\dagger}(\partial_{\tau}+\Delta E_{+}-\lambda)t, \tag{35}\] describes the excitations into the upper and lower Hubbard bands, where \(\Delta E_{\pm}=U/2\pm(U\nu_{0}-\mu)\). The integration over the \(\lambda\) degree of freedom imposes the constraint \(n_{f}+n_{b}-n_{t}=Q\). Introducing the symmetric and antisymmetric boson fields, \(s=(b+t^{\dagger})/\sqrt{2}\), \(\delta=(b-t^{\dagger})/\sqrt{2}\), the action becomes \[S=\int_{0}^{\beta}d\tau\left\{L_{c}+L_{f}+L_{s}+\sqrt{2}\gamma_{0}^{*}\sum_{\mathbf{k},B}\left[c_{\mathbf{k}B}^{\dagger}f_{B}s^{\dagger}+\mathrm{H.c}\right]\right\}, \tag{36}\] where now \[L_{s}=\frac{U}{2}\left[s^{\dagger}s+\delta^{\dagger}\delta\right]+s^{\dagger}(\lambda+\mu-U\nu_{0}+\partial_{\tau})\delta+\delta^{\dagger}(\lambda+\mu-U\nu_{0}+\partial_{\tau})s.
\tag{37}\] Notice that the antisymmetric \(\delta\) field decouples from the fermions and can be integrated out, to obtain \[S=\int_{0}^{\beta}d\tau\left\{L_{c}+L_{f}+\sum_{\mathbf{k},B}\left[c_{\mathbf{k}B}^{\dagger}f_{B}s^{\dagger}+\mathrm{H.c}\right]+\frac{1}{J}s^{\dagger}\left[1-\left(\frac{\lambda+\mu-U\nu_{0}+\partial_{\tau}}{U/2}\right)^{2}\right]s\right\}, \tag{38}\] where we have rescaled \(\sqrt{2}\gamma_{0}^{*}s\to s\) and introduced the Kondo coupling constant \(J=4(\gamma_{0}^{*})^{2}/U\). The resulting action is remarkably similar to that of the Kondo lattice, but the frequency, \(\lambda\) and \(\mu\) inside the bosonic Lagrangian allow us to keep track of the dynamical valence fluctuations. Notice that we can absorb the \(-U\nu_{0}\) into a shift of the chemical potential, shifting \(\mu\rightarrow\mu+U\nu_{0}\), but now this has the effect of shifting the conduction bands down in energy, so that our final action takes the form \[S=\int_{0}^{\beta}d\tau\left\{L_{c}^{*}+L_{f}+\sum_{\mathbf{k},B}\left[c_{\mathbf{k}B}^{\dagger}f_{B}s^{\dagger}+\mathrm{H.c}\right]+\frac{1}{J}s^{\dagger}\left[1-\left(\frac{\lambda+\mu+\partial_{\tau}}{U/2}\right)^{2}\right]s\right\}, \tag{39}\] where now \[L_{c}^{*}=\sum_{\mathbf{k},B}c_{\mathbf{k}B}^{\dagger}(\partial_{\tau}+\epsilon_{\mathbf{k}}-\mu-U\nu_{0})c_{\mathbf{k}B} \tag{40}\] describes the shifted conduction bands. We can construct mean-field solutions by treating \(\lambda\) and \(s\) as constants, so that the mean-field action becomes \[S=\int_{0}^{\beta}d\tau\left\{L_{c}^{*}+L_{f}+\sum_{\mathbf{k},B}\left[c_{\mathbf{k}B}^{\dagger}f_{B}s+\mathrm{H.c}\right]+\frac{|s|^{2}}{J}\left[1-\left(\frac{\lambda+\mu}{U/2}\right)^{2}\right]\right\}. \tag{41}\] We can identify \[-\frac{\partial S}{\partial\mu}=N_{e}=N_{c}+Q+\frac{|s|^{2}}{(\gamma_{0}^{*})^{2}}\left(\frac{\lambda+\mu}{U/2}\right) \tag{42}\] as the total charge, while \[\frac{\partial S}{\partial\lambda}=0=n_{f}-Q-\frac{|s|^{2}}{(\gamma_{0}^{*})^{2}}\left(\frac{\lambda+\mu}{U/2}\right) \tag{43}\] is the constraint. The quantity \[\frac{|s|^{2}}{(\gamma_{0}^{*})^{2}}\left(\frac{\lambda+\mu}{U/2}\right)=|t|^{2}-|b|^{2} \tag{44}\] appearing on the right-hand side of these two expressions is identified as the correction to the f-charge density derived from valence fluctuations into the upper and lower Hubbard bands. We can combine (42) and (43) to obtain \(N_{e}=N_{c}+n_{f}\). ## V Mixed Valent Moire Lattice The procedure for developing a mean-field theory for the TBG lattice follows the same lines as the development above.
The lattice action is now \[S=\int_{0}^{\beta}d\tau\left\{L_{c}+L_{f}+L_{b}+\gamma_{0}^{*}\sum_{\mathbf{R},B}\left[c_{\mathbf{R}B}^{\dagger}f_{\mathbf{R}B}(b_{\mathbf{R}}^{\dagger}+t_{\mathbf{R}})+\mathrm{H.c}\right]\right\}, \tag{45}\] where \[L_{c}=\sum_{\mathbf{k},aa^{\prime},\eta\sigma}c_{\mathbf{k}a\eta\sigma}^{\dagger}(\partial_{\tau}+\mathcal{H}_{aa^{\prime}}^{\eta}(\mathbf{k})-\mu)c_{\mathbf{k}a^{\prime}\eta\sigma}+\mu\langle N_{c}\rangle_{\nu=0}, \tag{46}\] where \(\langle N_{c}\rangle_{\nu=0}\) is the number of conduction electrons at half filling, and \[L_{f}=\sum_{\mathbf{R}}\left[f_{\mathbf{R}B}^{\dagger}(\partial_{\tau}+\lambda_{\mathbf{R}})f_{\mathbf{R}B}-\lambda_{\mathbf{R}}Q-\mu(Q-4)\right], \tag{47}\] describes the unhybridized fermions, and \[L_{b}=\sum_{\mathbf{R}}\left[b_{\mathbf{R}}^{\dagger}(\partial_{\tau}+\Delta E_{-}+\lambda_{\mathbf{R}})b_{\mathbf{R}}+t_{\mathbf{R}}^{\dagger}(\partial_{\tau}+\Delta E_{+}-\lambda_{\mathbf{R}})t_{\mathbf{R}}\right], \tag{48}\] defines the valence fluctuations. Carrying out the same sequence of manipulations used in the single impurity case, we obtain \[S=\int_{0}^{\beta}d\tau\left\{L_{c}^{*}+L_{f}+\sum_{\mathbf{R},B}\left[c_{\mathbf{R}B}^{\dagger}f_{\mathbf{R}B}s_{\mathbf{R}}^{\dagger}+\text{H.c}\right]+\frac{1}{J}\sum_{\mathbf{R}}s_{\mathbf{R}}^{\dagger}\left[1-\left(\frac{\lambda_{\mathbf{R}}+\mu+\partial_{\tau}}{U/2}\right)^{2}\right]s_{\mathbf{R}}\right\}, \tag{49}\] where now \[L_{c}^{*}=\sum_{\mathbf{k},aa^{\prime},\eta\sigma}c_{\mathbf{k}a\eta\sigma}^{\dagger}(\partial_{\tau}+\mathcal{H}_{aa^{\prime}}^{\eta}(\mathbf{k})-\mu-U\nu_{0})c_{\mathbf{k}a^{\prime}\eta\sigma}+(\mu+U\nu_{0})\langle N_{c}\rangle_{\nu=0}, \tag{50}\] describes the shifted conduction bands. ## VI Mean-field approach Setting \(\lambda_{\mathbf{R}}=\lambda\) and \(s_{\mathbf{R}}=\gamma\) in the above action, the mean-field Hamiltonian takes the form \[H_{\text{MF}}=\sum_{\mathbf{k}\eta\sigma}\Psi_{\mathbf{k}\eta\sigma}^{\dagger}\mathcal{H}^{\text{MF}}\left(\mathbf{k}\right)\Psi_{\mathbf{k}\eta\sigma}+N_{s}\left(\frac{\bar{\gamma}\gamma}{J}\left[1-\left(\frac{\lambda+\mu}{U/2}\right)^{2}\right]-\mu\nu_{0}+(\mu+U\nu_{0})\langle N_{c}\rangle_{\nu=0}-\lambda Q\right), \tag{51}\] where \[\mathcal{H}_{MF}(\mathbf{k})=\left(\begin{array}{cccc}\lambda\sigma_{0}&\mathcal{H}_{cf}^{(\eta)}\left[\bar{\gamma}\right](\mathbf{k})&\ldots&\mathcal{H}_{cf}^{(\eta)}\left[\bar{\gamma}\right](\mathbf{k}+\mathbf{G}_{n})\\ \mathcal{H}_{fc}^{(\eta)}\left[\gamma\right](\mathbf{k})&\mathcal{H}_{c}^{(\eta)}(\mathbf{k})-(\mu+U\nu_{0})\underline{1}&0&0\\ \vdots&0&\ddots&0\\ \mathcal{H}_{fc}^{(\eta)}\left[\gamma\right](\mathbf{k}+\mathbf{G}_{n})&0&\ldots&\mathcal{H}_{c}^{(\eta)}(\mathbf{k}+\mathbf{G}_{n})-(\mu+U\nu_{0})\underline{1}\end{array}\right) \tag{52}\] describes the dispersion with renormalized hybridization strength \(\gamma\), and \(\Psi_{\mathbf{k}\eta\sigma}=(f_{\mathbf{k}1\eta\sigma},f_{\mathbf{k}2\eta\sigma},c_{\mathbf{k}1\eta\sigma},\ldots,c_{\mathbf{k}4\eta\sigma},c_{\mathbf{k}+\mathbf{G}_{1}1\eta\sigma},\ldots,c_{\mathbf{k}+\mathbf{G}_{n}4\eta\sigma})^{T}\) is a spinor combining the two f-electron operators with the four conduction electron operators for each reciprocal lattice vector, at each valley \(\eta=\pm 1\) and spin \(\sigma=\pm 1\).
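A minimal sketch of assembling \(\mathcal{H}_{MF}(\mathbf{k})\) of Eq. (52) for a truncated set of reciprocal lattice vectors (our own construction from Eqs. (3), (6) and (52); the parameter values and the one-vector \(\mathbf{G}\) set are placeholders, not a converged calculation):

```python
import numpy as np

v_star, M = -4.3e3, 3.7          # meV*Angstrom, meV
xi, a_star = 27.0, 65.0          # Angstrom

a0 = np.eye(2)
ax = np.array([[0.0, 1.0], [1.0, 0.0]])
ay = np.array([[0.0, -1.0j], [1.0j, 0.0]])
az = np.diag([1.0, -1.0])

def H_c(k, eta):
    """4x4 conduction block, Eq. (3)."""
    off = v_star * (eta * k[0] * a0 + 1j * k[1] * az)
    H = np.zeros((4, 4), complex)
    H[:2, 2:] = off
    H[2:, :2] = off.conj().T
    H[2:, 2:] = M * ax
    return H

def phi(k, eta):
    """2x4 form factor, Eq. (6)."""
    damp = np.exp(-np.dot(k, k) * xi ** 2 / 2)
    return damp * np.hstack([a0 + a_star * (eta * k[0] * ax + k[1] * ay),
                             np.zeros((2, 2))])

def H_MF(k, Gs, gamma, lam, mu_eff, eta=+1):
    """Block matrix of Eq. (52): f-block lambda, shifted c-blocks, hybridization gamma*phi."""
    n = len(Gs)
    H = np.zeros((2 + 4 * n, 2 + 4 * n), complex)
    H[:2, :2] = lam * a0
    for j, G in enumerate(Gs):
        sl = slice(2 + 4 * j, 6 + 4 * j)
        H[sl, sl] = H_c(k + G, eta) - mu_eff * np.eye(4)
        H[:2, sl] = gamma * phi(k + G, eta)
        H[sl, :2] = H[:2, sl].conj().T
    return H

Gs = [np.zeros(2)]   # crude truncation: G = 0 shell only
bands = np.linalg.eigvalsh(H_MF(np.array([0.01, 0.0]), Gs, 12.6, 0.0, 0.0))
print(bands)
```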
Notice that while \(\mathcal{H}_{\rm MF}(\mathbf{k})\) commutes with the spin and valley quantum numbers, at general momentum it breaks the two-fold \(\Gamma_{3}\) degeneracy down to an \(N_{f}=4\) fold valley-spin degeneracy. The mean-field free energy, obtained by integrating out the fermions for a static configuration of the fields \((\gamma,\lambda)\), is then \[F=-N_{f}\,T\sum_{\mathbf{k},i\omega_{n}}\text{Tr}\ln(-i\omega_{n}+\mathcal{H}_{\rm MF}(\mathbf{k}))+N_{s}\left(\frac{\bar{\gamma}\gamma}{J}\left[1-\left(\frac{\lambda+\mu}{U/2}\right)^{2}\right]-\mu\nu_{0}+(\mu+U\nu_{0})\langle N_{c}\rangle_{\nu=0}-\lambda Q\right), \tag{53}\] where \(\nu_{0}=Q-4\). The saddle-point requirement that \(F\) be stationary with respect to variations in \(\bar{\gamma}\) and \(\lambda\) imposes the mean-field conditions \[\gamma=-\frac{J}{N_{s}}\left[1-\left(\frac{\lambda+\mu}{U/2}\right)^{2}\right]^{-1}\sum_{\mathbf{R},B}\langle c_{\mathbf{R}B}^{\dagger}f_{\mathbf{R}B}\rangle, \tag{54}\] and \[\frac{1}{N_{s}}\sum_{\mathbf{R},B}\langle f_{\mathbf{R}B}^{\dagger}f_{\mathbf{R}B}\rangle=Q+\frac{\bar{\gamma}\gamma}{(\gamma_{0}^{*})^{2}}\left(\frac{\lambda+\mu}{U/2}\right). \tag{55}\] Variation of the action with respect to the chemical potential \(\mu\) fixes the total number of electrons \(N_{e}\) in the system, \[-\frac{\partial F}{\partial\mu}=N_{e}-\langle N_{e}\rangle_{\nu=0}=(N_{c}-\langle N_{c}\rangle_{\nu=0})+N_{s}(Q-4)+N_{s}\frac{\bar{\gamma}\gamma}{(\gamma_{0}^{*})^{2}}\left(\frac{\lambda+\mu}{U/2}\right), \tag{56}\] where \(\langle N_{e}\rangle_{\nu=0}\) is the total number of electrons in the system at half filling. The quantity \[\frac{\bar{\gamma}\gamma}{(\gamma_{0}^{*})^{2}}\left(\frac{\lambda+\mu}{U/2}\right)=|t|^{2}-|b|^{2} \tag{57}\] appearing on the right-hand side of Eqs. 55 and 56 is the correction to the f-charge density due to valence fluctuations into the upper and lower Hubbard bands. Combining (55) and (56), the physical filling factor is \[\nu=\frac{(N_{e}-\langle N_{e}\rangle_{\nu=0})}{N_{s}}=\left(\frac{N_{c}-\langle N_{c}\rangle_{\nu=0}}{N_{s}}+\nu_{0}+|t|^{2}-|b|^{2}\right). \tag{58}\] In our mean-field theory for the mixed valent model of MATBG, we can explore solutions in which the vector bosons describing fluctuations into both the upper and lower valence states condense. In this early preprint, the full self-consistent solution of equations (54), (55), and (56) has not been carried out; it will shortly be included in an updated version of this article. The key scales of the mean-field mixed valent moire lattice model for TBG are set by the impurity physics before coherence is reached. Hence, the doping-temperature phase diagram of the mixed valent moire lattice model should greatly resemble the Doniach phase diagram in Fig. 5, based on a single impurity model for the f-states in MATBG. The mean-field hybridization width for the mixed valent moire lattice is \[\Delta[\mu]=\pi\rho_{c}(\mu)\bar{\gamma}\gamma. \tag{59}\] We anticipate the approximate mean-field bandwidth of the flat band to be \(W_{\rm MF}=v_{D}^{\rm MF}K_{\theta}\sim\Delta\), where \(v_{D}^{\rm MF}\) is the Dirac velocity at the \(K_{M}\) points of the mean-field theory. The Kondo temperature can then be estimated as \[T_{K}\sim 2N_{f}\,\Delta. \tag{60}\]

## VII Discussion

In this paper, we have conducted an initial examination of the physical consequences of including interactions in the SB heavy fermion description of MATBG.
By taking an impurity limit of the SB model we have been able to identify the key scales in the problem and to establish the qualitative nature of the phase diagram. One of the robust consequences of the topological heavy fermion description is the presence of a conduction sea with a linear density of states at high energies. This effect means that the states at high doping will typically be less strongly interacting, and more prone to the development of conventional Fermi liquid behavior. There is, in our opinion, much that can be done to experimentally test the foundation of the SB description.

Figure 6: Many-body mean-field band-structure for a half-filled flat band. Parameters used: chemical potential \(\mu=0\,meV\), \(\lambda=0\,meV\), \(U=35\,meV\), with \(\Gamma_{0}=U/4\), and \(\gamma=12.6\,meV\). In this early preprint, full self-consistency has not been implemented and will shortly be included in an updated version of the article.

Figure 7: Many-body mean-field band-structure for \(\nu_{E}=\mu/U>0\). Parameters used: chemical potential \(\mu=35\,meV\), \(\lambda=45\,meV\), \(U=35\,meV\), with \(\Gamma_{0}=U/4\), and \(\gamma=32.9\,meV\). In this early preprint, full self-consistency has not been implemented and will shortly be included in an updated version of the article.

Figure 8: Many-body mean-field band-structure for \(\nu_{E}=\mu/U<0\). Parameters used: chemical potential \(\mu=-25\,meV\), \(\lambda=-31\,meV\), \(U=35\,meV\), with \(\Gamma_{0}=U/4\), and \(\gamma=29.0\,meV\). In this early preprint, full self-consistency has not been implemented and will shortly be included in an updated version of the article.

In conventional heavy fermion systems, the presence of local moment behavior is immediately evident from the Curie-Weiss behavior of the magnetic susceptibility \[\chi(T)\propto\frac{1}{T+\theta}. \tag{61}\] To what extent can such Curie-Weiss behavior be detected from a Maxwell analysis of existing field-dependent compressibility measurements? It would be interesting to use existing field-dependent compressibility measurements to back out the spin/valley susceptibility and directly measure the size of the moment. It would also be interesting to examine whether the upper and lower Hubbard bands observed in STM measurements have a width that grows approximately linearly with the doping, which would provide direct evidence of the underlying topological conduction band. Our qualitative analysis has found that the hybridization strength \(\gamma_{0}\sim 25\)meV obtained in the SB model is likely too large to account for the observation of local moment behavior at filling factors of \(|\nu|=3\), for which a considerably smaller value \(\gamma_{0}^{*}\sim 6\)meV is required. Can this be accounted for by polaronic effects? This is clearly a fruitful area for future exploration. There is much that can be done to improve our approximate theoretical treatment of mixed valent MATBG. We have argued the importance of finding a treatment of this model that can handle the effects of valence fluctuations, and have proposed a vector auxiliary boson approach that appears to capture the essence of the valence fluctuations. There are many other theoretical methods that could be applied to MATBG, such as slave rotor approaches and dynamical mean-field theory. We end with a brief reflection on the nature of superconductivity in MATBG. Recent quasiparticle interference experiments have uncovered evidence of a possible d-wave nodal gap structure in the superconducting state.
In a conventional superconductor this would be a sign of spin-singlet pairing. In MATBG, the presence of an \(N_{f}=4\) fold valley-spin degeneracy and of strong local Hund's interactions raises the interesting possibility of spin or valley triplet paired states. An extension of the current model that includes the Hund's interactions within the multi-electron Wannier states is thus highly desirable. _Acknowledgements_ This work was supported by the Office of Basic Energy Sciences, Material Sciences and Engineering Division, U.S. Department of Energy (DOE) under contract DE-FG02-99ER45790 (LLHL and PC).
2305.06798
Vibrations and tunneling of strained nanoribbons at finite temperature
Crystalline sheets (e.g., graphene and transition metal dichalcogenides) liberated from a substrate are a paradigm for materials at criticality because flexural phonons can fluctuate into the third dimension. Although studies of static critical behaviors (e.g., the scale-dependent elastic constants) are plentiful, investigations of dynamics remain limited. Here, we use molecular dynamics to study the time dependence of the midpoint (the height center-of-mass) of doubly clamped nanoribbons, as prototypical graphene resonators, under a wide range of temperature and strain conditions. By treating the ribbon midpoint as a Brownian particle confined to a nonlinear potential (which assumes a double-well shape beyond the buckling transition), we formulate an effective theory describing the ribbon's tunneling rate across the two wells and its oscillations inside a given well. We find that, for nanoribbons compressed above the Euler buckling point and thermalized above a temperature at which the non-linear effects due to thermal fluctuations become significant, the exponential term (the ratio between energy barrier and temperature) depends only on the geometry, but not the temperature, unlike the usual Arrhenius behavior. Moreover, we find that the natural oscillation time for small strain shows a non-trivial scaling $\tau_{\rm o}\sim L_0^{\,z}T^{-\eta/4}$, with $L_0$ being the ribbon length, $z=2-\eta/2$ being the dynamic critical exponent, $\eta=0.8$ being the scaling exponent describing scale-dependent elastic constants, and $T$ being the temperature. These unusual scale- and temperature-dependent dynamics thus exhibit dynamic criticality and could be exploited in the development of graphene-based nanoactuators.
Paul Z. Hanakata, Sourav S. Bhabesh, David Yllanes, David R. Nelson, Mark J. Bowick
2023-05-11T13:40:06Z
http://arxiv.org/abs/2305.06798v1
# Vibrations and tunneling of strained nanoribbons at finite temperature

###### Abstract

Crystalline sheets (e.g., graphene and transition metal dichalcogenides) liberated from a substrate are a paradigm for materials at criticality because flexural phonons can fluctuate into the third dimension. Although studies of static critical behaviors (e.g., the scale-dependent elastic constants) are plentiful, investigations of dynamics remain limited. Here, we use molecular dynamics to study the time dependence of the midpoint (the height center-of-mass) of doubly clamped nanoribbons, as prototypical graphene resonators, under a wide range of temperature and strain conditions. By treating the ribbon midpoint as a Brownian particle confined to a nonlinear potential (which assumes a double-well shape beyond the buckling transition), we formulate an effective theory describing the ribbon's tunneling rate across the two wells and its oscillations inside a given well. We find that, for nanoribbons compressed above the Euler buckling point and thermalized above a temperature at which the non-linear effects due to thermal fluctuations become significant, the exponential term (the ratio between energy barrier and temperature) depends only on the geometry, but not the temperature, unlike the usual Arrhenius behavior. Moreover, we find that the natural oscillation time for small strain shows a non-trivial scaling \(\tau_{\rm o}\sim L_{0}^{z}T^{-\eta/4}\), with \(L_{0}\) being the ribbon length, \(z=2-\eta/2\) being the dynamic critical exponent, \(\eta=0.8\) being the scaling exponent describing scale-dependent elastic constants, and \(T\) being the temperature. These unusual scale- and temperature-dependent dynamics thus exhibit dynamic criticality and could be exploited in the development of graphene-based nanoactuators.

+ Footnote †: Work completed prior to joining AWS.

## I Introduction

In the last decade there has been growing interest in utilizing mechanical instabilities in thin materials to design smart materials with desired functionalities, from grasping [1; 2] and shape morphing [3; 4] to locomotion [5; 6]. Using membranes, such as thin sheets, as a building block (say an oscillator) for soft-robotic applications is appealing because thin sheets are flexible and can be controlled with minimal and simple actuation. The buckling instability, which sets in for sufficiently large Foppl-von Karman number \(\mathrm{vK}=\frac{YA}{\kappa}\), where \(Y\) is the 2D Young's modulus, \(A\) is a characteristic ribbon area, and \(\kappa\) the bending rigidity, is an important mechanism for such actuation. This simple principle has been successfully applied to a wide range of materials and system sizes, ranging from meter-sized satellites to nanoactuators [1; 2; 5; 6; 7; 8; 9; 10]. Very recently, there has been success in applying instability mechanisms to control actuator movements in low-noise environments, for example in a centimeter-sized buckling-sheet oscillator [5; 6]. It remains to be seen, however, if similar principles apply in a more noisy environment with, for example, strong thermal fluctuations. The mechanical response and energy dissipation of micro- and nanoscale oscillators have long been studied [11; 12]. Graphene and other 2D-materials-based nanoresonators, commonly in a doubly clamped geometry, have been studied extensively.
They exhibit remarkable properties compared to their bulk counterparts, including tunability over a wide frequency range, kilo- to terahertz, and a very high quality factor [13; 14; 15; 16; 17; 18; 19; 20; 21]. Exciting though these features are, precise control of the thermal dynamics of these atomically thin materials remains a challenge and is crucial for building, say, soft robots [5; 6]. Nevertheless, nature has shown us that micro- to nanosized biological "robots," such as kinesins and other molecular motors, do exist at biologically relevant temperatures. One of the main challenges in building 2D-materials-based robots or actuators is that height corrugations due to thermal fluctuations [22; 23; 24], impurities [25; 26; 27], or quenched disorder [28] alter the mechanics significantly at large distances, similar to how a wrinkled paper sheet can bear its own weight while a pristine sheet sags. Indeed the bending rigidity of a micron-sized graphene ribbon has been observed experimentally to exhibit a striking \(\sim\) 4000-fold increase at room temperature relative to its zero-temperature value, demonstrating the non-trivial mechanics of nanomaterials [29]. Because the mechanical properties are scale-dependent, which may complicate dynamics, scaling up a micron-sized robot based on graphene nanoribbons or nanotubes requires new design principles. Moreover, while fundamental studies of electronic, optical, and mechanical properties of graphene and other 2D materials are numerous [30; 31; 32; 19; 33], there has been much less focus on their dynamical behavior [34; 35], in particular on the dynamical critical exponents that relate time scales to length scales. As mechanical properties play an important role in determining the dynamics, such as underdamped or overdamped oscillations, we develop here a framework, motivated by extensive molecular dynamics simulations, to analyze the dynamics of nanoribbons over a wide range of temperatures and strains. We focus specifically on doubly clamped ribbons as one of the most common geometries for nanoelectromechanical systems (NEMS). In contrast to recent work [36], in which thermal effects are neglected while designing clamped resonators, we propose a simple geometric tunability that exploits thermal fluctuations as a means of studying anharmonic effects and dynamics. We will demonstrate that the dynamics of nanoribbons exhibits two distinct behaviors, below and above the temperature at which the thermal renormalization of elastic constants sets in. In Sec. II we introduce a simple computational model of nanoribbons mimicking 2D materials such as graphene. We first show how the height fluctuations change with strain in Sec. III, demonstrating the scale-dependent mechanics with simulations. In Sec. IV, we propose an effective free energy of the strained nanoribbon and present molecular dynamics results for the motion of nanoribbons under various strain conditions. We then develop a phenomenological model, treating the midpoint of a ribbon as a Brownian particle with damping confined in a nonlinear potential with both single and double wells, to understand the dynamics of nanoribbons under compression (Sec. V) and stretching (Sec. VII). In each respective section we present molecular dynamics simulations checking our theoretical predictions.
We find that the escape time of the midpoint of a compressed ribbon, which characterizes the inverse of the ribbon flipping rate, is at sufficiently high temperatures approximately temperature-independent and solely governed by the geometry, unlike the usual Arrhenius behavior. At sufficiently high temperatures, where renormalization becomes important, the characteristic escape-time prefactor scales with system size as \(\tau_{p}\sim L_{0}^{4-\eta}\) in the high-damping regime and is independent of system size in the low-damping regime, with \(\eta\approx 0.8\) the exponent controlling the scale-dependent bending rigidity and \(L_{0}\) the ribbon length. For a slightly stretched or relaxed ribbon we find that the natural oscillation time (the oscillation time inside a minimum) scales as \(\tau_{\rm o}\sim L_{0}^{(2-\eta/2)}T^{-\eta/4}\), which has no analog in standard mechanical resonators. In the language of dynamic critical phenomena [37], we have a dynamic critical exponent \(z=2-\eta/2\) for relaxed ribbons, and \(z=1\) for a ribbon under tension, consistent with Van Hove, with no singularities in the transport coefficients. We conclude by discussing future prospects, including further investigation of the connection between the dynamical critical exponent \(z\) and the static exponent \(\eta\) using finite-size scaling, as well as incorporating an attractive substrate in the numerical simulations to capture energy losses present in certain experiments.

## II The model

Similar to a number of previous studies [38; 39; 40; 24; 41], we simulate ribbons discretized on an equilateral triangular lattice. The ribbon is composed of \(N_{x}\times N_{y}=100\times 25\) nodes with rest (zero-temperature) length \(L_{0}\sim 100a\) and width \(W_{0}\sim 20a\). To model a doubly clamped ribbon, the nodes in the two rows at each end are held fixed. We use a standard coarse-grained model [38] to compute the total energy of the ribbon. Neighboring nodes are connected by harmonic springs with rest length \(a\), and the bending energy is computed from the dihedral interaction between the normals of adjacent triangles. The total energy is given by \[E=\frac{k}{2}\sum_{\langle i,j\rangle}||\mathbf{r}_{i}-\mathbf{r}_{j}|-a|^{2}+\hat{\kappa}\sum_{\langle\alpha,\beta\rangle}(1-\mathbf{n}_{\alpha}\cdot\mathbf{n}_{\beta}) \tag{1}\] where \(k\) is the harmonic spring constant and \(\hat{\kappa}\) is the microscopic bending rigidity. The first sum is over neighboring nodes and the second sum is over neighboring triangles. The continuum limit yields \(\kappa=\sqrt{3}\hat{\kappa}/2\) for the bare (zero-temperature) continuum bending rigidity and \(Y=2k/\sqrt{3}\) for the bare continuum 2D Young's modulus [38]. Following [40; 24], we set \(k=1440\hat{\kappa}/a^{2}\) so that the Foppl-von Karman number \(\mathrm{vK}=YW_{0}L_{0}/\kappa\sim 10^{6}\) is experimentally realistic. This coarse-grained model has been widely used to model atomically thin materials such as graphene and MoS\({}_{2}\) and successfully captures mechanical and thermal responses [42; 43; 44; 45; 46; 47; 48; 49] consistent with those found in simulations with more sophisticated atomistic potentials [45; 46; 22; 47; 48; 49].
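For concreteness, the discretized energy of Eq. 1 can be evaluated with a short NumPy routine. This is a minimal sketch assuming precomputed bond and adjacent-triangle index arrays (the array names are ours), not the HOOMD implementation used for the production runs:

```python
import numpy as np

def elastic_energy(r, bonds, tri_pairs, k=1440.0, kappa_hat=1.0, a=1.0):
    """Discretized ribbon energy of Eq. (1): harmonic bond stretching plus
    dihedral bending between unit normals of adjacent triangles.
    r: (N, 3) node positions; bonds: (Nb, 2) node-index pairs;
    tri_pairs: (Np, 2, 3) node indices of each adjacent triangle pair,
    with consistent orientation so that n_a . n_b = 1 for a flat sheet."""
    d = np.linalg.norm(r[bonds[:, 0]] - r[bonds[:, 1]], axis=1)
    e_stretch = 0.5 * k * np.sum((d - a) ** 2)

    def unit_normals(tri):
        n = np.cross(r[tri[:, 1]] - r[tri[:, 0]], r[tri[:, 2]] - r[tri[:, 0]])
        return n / np.linalg.norm(n, axis=1, keepdims=True)

    na = unit_normals(tri_pairs[:, 0])
    nb = unit_normals(tri_pairs[:, 1])
    e_bend = kappa_hat * np.sum(1.0 - np.einsum('ij,ij->i', na, nb))
    return e_stretch + e_bend
```

The default `k = 1440` reproduces the spring constant quoted above in MD units (\(\hat{\kappa}=a=1\)).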
The molecular dynamics (MD) simulations are performed with the HOOMD-blue package [50] within the \(NVT\) ensemble (fixed number of particles \(N\), volume \(V\) and temperature \(T\)) with an integration time step of \(dt=0.001\tau_{\mathrm{MD}}\), where \(\tau_{\mathrm{MD}}=\sqrt{\mathcal{M}\mathcal{D}^{2}/\mathcal{E}}\) is the MD unit of time and \(\mathcal{M},\mathcal{D},\mathcal{E}\) are the fundamental units of mass, distance, and energy. For graphene parameters, \(\tau_{\mathrm{MD}}\sim 1\) ps. Temperature is controlled every \(\tau_{T}=0.2\tau_{\mathrm{MD}}\) via the Nose-Hoover thermostat [51]. For systems clamped at compressive strains below critical buckling, we run a total of \(10^{7}\) steps and discard 50% of the data for thermal equilibration. Above the critical buckling the relaxation time increases significantly, and therefore we run a total of \(10^{8}\) steps and discard the first 20% of the data for thermal equilibration. Snapshots are taken every 10,000 steps or equivalently \(10\tau_{\mathrm{MD}}\). HOOMD scripts and analysis codes used in this study are available at [https://github.com/phanakata/statistical-mechanics-of-thin-materials/](https://github.com/phanakata/statistical-mechanics-of-thin-materials/). All simulation data will be reported in natural MD units \(\mathcal{D}=\mathcal{M}=1\), \(k_{\mathrm{B}}T\) in units of \(\hat{\kappa}\) and time in \(\tau_{\mathrm{MD}}\). Temperature is reported as the ratio of the ribbon width \(W_{0}\) to the thermal length [23]\(\ell_{\mathrm{th}}=\sqrt{\frac{64\pi^{3}\kappa_{0}^{2}}{3k_{\mathrm{B}}T\,Y_{0}}}\), as explained below. To study ribbon dynamics over a wide temperature range we vary the ratio of temperature to microscopic bending rigidity \(k_{\rm B}T/\hat{\kappa}\) over a wide range, from \(10^{-1}\) to \(10^{-5}\), while keeping \(k=1440\hat{\kappa}/a^{2}\) and the preferred bond length constant at \(a=1\). Strains \(\epsilon\) are applied by clamping the two ends of the ribbon at different lengths \(L_{\epsilon}\). Thermal fluctuations lead to a reduced projected length of the unstrained ribbon, \(L_{\rm relax}\), relative to the unstrained length at zero temperature \(L_{0}\). The relaxed length \(L_{\rm relax}\) is determined by the vanishing of the average longitudinal stress \(\langle\sigma_{xx}\rangle\)[41]. The compressive strain \(\epsilon=(1-L_{\epsilon}/L_{\rm relax})\) is measured relative to the unstrained ribbon with clamping at \(L_{\rm relax}\). ## III Height profile of deformed ribbons Before discussing the dynamics of thermalized nanoribbons, we first probe the effects of strains on static properties. To lowest order in the height field \(h(x,y)\), in-plane displacement \({\bf u}(x,y)\) and their gradients, the elastic energy of a membrane under a spatially uniform uniaxial compression along the \(x\) direction, \(\sigma_{xx}\), can be written in the continuum limit as [22; 23] \[G[{\bf u},h]= \frac{1}{2}\int dx\,dy\left[\kappa\left(\nabla^{2}h\right)^{2}+2 \mu u_{ij}^{2}+\lambda u_{kk}^{2}\right]\] \[-\int dx\,dy\,\sigma_{xx}(\partial_{x}u_{x}), \tag{2}\] where \(u_{ij}\approx(\partial_{i}u_{j}+\partial_{j}u_{i})/2+\partial_{i}h\partial_{j}h\) is the nonlinear strain tensor, \(\kappa\) is the bare continuum bending rigidity and \(\mu\) and \(\lambda\) are the Lame coefficients. 
By tracing out the in-plane degrees of freedom, the effective free energy can be written in terms of the out-of-plane flexural phonon deformation field \(h(x,y)\) [23] \[G_{\rm eff}[h]=\int dx\,dy\left[\frac{\kappa}{2}\left(\nabla^{2}h\right)^{2}+\frac{Y}{8}\left(P_{ij}^{T}(\partial_{i}h)(\partial_{j}h)\right)^{2}\right]-\int dx\,dy\,\sigma_{xx}(\partial_{x}h)^{2}, \tag{3}\] where \(Y=4\mu(\mu+\lambda)/(2\mu+\lambda)\) is the bare 2D Young's modulus and \(P_{ij}^{T}=\delta_{ij}-\partial_{i}\partial_{j}/\nabla^{2}\) is the transverse projection operator. Within the harmonic approximation, the spectrum of the height-height correlation function of a tensionless sheet is \(\langle|h(q)|^{2}\rangle=k_{\rm B}T/(A_{0}\kappa q^{4})\), where \(A_{0}=L_{0}\times W_{0}\) is the undeformed sheet area, and \(h(\mathbf{q})\equiv\frac{1}{A_{0}}\int dx\,dy\,e^{-i(q_{x}x+q_{y}y)}h(x,y)\) is the Fourier transform of \(h(x,y)\). At low temperatures (\(k_{\rm B}T/\kappa\ll 1\)) a perturbative calculation shows that the bending rigidity is renormalized by thermal fluctuations in the form \(\kappa(\mathbf{q})=\kappa_{0}+\frac{k_{\rm B}T}{\kappa}I(\mathbf{q})\), where \(\mathbf{q}\) is the wavevector and \(I(\mathbf{q})\) is a momentum integral that scales as \(q^{-2}\) for \(q\to 0\) [52]. The relative perturbative correction is of order one above a fundamental length scale \(\ell_{\rm th}\sim\sqrt{\kappa^{2}/(Yk_{\rm B}T)}\) [52; 53]. At and above \(\ell_{\rm th}\) thermal fluctuations lead to scale-dependent mechanical moduli and non-trivial departures from the expected zero-temperature mechanical behavior. Within a renormalization group treatment, the spectrum of the height-height correlation function of a ribbon under uniaxial compression is given by [23] \[\langle|h(q)|^{2}\rangle=\frac{k_{\rm B}T}{A_{0}(\kappa_{\rm R}(q)q^{4}-\sigma_{xx}q_{x}^{2})}, \tag{4}\] where \(\sigma_{xx}\simeq Y_{\rm R}\epsilon\) is the positive compressive stress. The scale-dependent renormalized bending rigidity, \(\kappa_{\rm R}(q)\), and 2D Young's modulus, \(Y_{\rm R}(q)\), are given by [52; 23] \[\kappa_{\rm R}(q)\sim\begin{cases}\kappa&\text{if }q\gg q_{\rm th}\\ \kappa\left(q/q_{\rm th}\right)^{-\eta}&\text{if }q\ll q_{\rm th}\end{cases} \tag{5}\] \[Y_{\rm R}(q)\sim\begin{cases}Y&\text{if }q\gg q_{\rm th}\\ Y\left(q/q_{\rm th}\right)^{\eta_{u}}&\text{if }q\ll q_{\rm th}\end{cases} \tag{6}\] where \(\eta\) and \(\eta_{u}\) are scaling exponents and \(q_{\rm th}=\sqrt{\frac{3k_{\rm B}T\,Y_{0}}{16\pi\kappa_{0}^{2}}}\) is the wavevector below which renormalization becomes important [53]. Theoretical estimates [53; 54; 55] of the scaling exponents give \(\eta\approx 0.8-0.85\) and \(\eta_{u}\approx 0.2-0.4\), and have been confirmed by height-height correlation measurements in Monte Carlo [22; 39; 56; 45] and in molecular dynamics simulations [24; 26; 57], as well as more recently by stress-strain curve measurements [41; 43]. Eq. 4 indicates that height fluctuations are suppressed when stretching (\(\sigma_{xx}<0\)) is applied. For sufficiently large stretching, \(|\epsilon|\gg\kappa_{\rm R}q^{2}/Y_{\rm R}\), the \(q^{-2}\) behavior in \(\langle|h(q)|^{2}\rangle\) should dominate. Equivalently, for small wavevectors, \(q\ll\sqrt{|\epsilon|Y_{\rm R}(q)/\kappa_{\rm R}(q)}\), \(\langle|h(q)|^{2}\rangle\) should switch from a \(q^{-(4-\eta)}\) or \(q^{-4}\) dependence to a \(q^{-2}\) fall-off.
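For orientation, the crossover forms of Eqs. 4-6 can be evaluated numerically. The following minimal NumPy sketch uses illustrative MD-unit parameters derived from the model of Sec. II (with \(k_{\rm B}T=0.05\hat{\kappa}\), which gives \(W_{0}/\ell_{\rm th}\approx 8\), close to the value 8.5 quoted for Fig. 1); these are not fitted values:

```python
import numpy as np

kB_T = 0.05                          # in units of kappa_hat
kappa0 = np.sqrt(3.0) / 2.0          # bare kappa = sqrt(3) * kappa_hat / 2
Y0 = 2.0 * 1440.0 / np.sqrt(3.0)     # bare 2D Young's modulus from k = 1440
A0, W0 = 100.0 * 20.0, 20.0          # ribbon area and width (units of a)
eta, eta_u = 0.8, 0.4                # scaling exponents, 2*eta + eta_u = 2

q_th = np.sqrt(3.0 * kB_T * Y0 / (16.0 * np.pi * kappa0**2))
ell_th = 2.0 * np.pi / q_th
print(W0 / ell_th)                   # ~8, close to the quoted W0/ell_th = 8.5

def kappa_R(q):   # Eq. (5): bare above q_th, stiffened below
    return np.where(q > q_th, kappa0, kappa0 * (q / q_th) ** (-eta))

def Y_R(q):       # Eq. (6): bare above q_th, softened below
    return np.where(q > q_th, Y0, Y0 * (q / q_th) ** eta_u)

def h2(q, qx, strain=0.0):
    # Eq. (4); sigma_xx ~ Y_R * strain is positive for compression
    return kB_T / (A0 * (kappa_R(q) * q**4 - Y_R(q) * strain * qx**2))

q = np.logspace(-1.5, 0.5, 100)
spectrum = h2(q, qx=q)               # unstrained spectrum along q_x
```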
Fig. 1 shows the spectrum of the height-height correlation \(\langle|h(q)|^{2}\rangle\) obtained from MD simulations as a function of wavevector \(q\) for five different strains, both compressional \(\epsilon>0\) and extensional \(\epsilon<0\): \(\epsilon=[-0.3\%,-0.2\%,0\%,+0.2\%,+0.6\%]\). Here we show a system at a sufficiently high temperature, \(k_{\rm B}T/\hat{\kappa}=0.05\) (\(W_{0}/\ell_{\rm th}=8.5\)), where thermal fluctuations are significant. The thermalized critical Euler buckling strain for this particular system is \(\epsilon_{c}=0.05\%\). In the unstrained case, we see that \(\langle|h(q)|^{2}\rangle\sim q^{-(4-\eta)}\), with \(\eta\approx 0.8\), as expected [23; 43], for a wide range of \(q\). For stretched ribbons, in contrast, \(\langle|h(q)|^{2}\rangle\) scales more like \(q^{-2}\). This is better seen in the plot of \(q^{2}\langle|h(q)|^{2}\rangle\) in the inset of Fig. 1. While stretching (\(\epsilon<0\)) suppresses height fluctuations, sufficiently large compression drives buckling and consequently \(\langle|h(q)|^{2}\rangle\) of a compressed ribbon is elevated relative to the unstrained case. These strain-induced modifications of static properties have also been observed in the normal-normal correlation function of graphene under isotropic deformation [22].

## IV Mean-field approximation to ribbon midpoint energetics

We turn now to a simplified model of the dynamics of the ribbon center of mass, which is related to the fundamental mode of a doubly clamped ribbon. We simplify by coarse-graining over the short-scale fluctuations along the \(x\) and \(y\) directions. Specifically, we assume that the height profile is constant along the \(y\) direction in Fig. 2(b). For a ribbon of width \(W_{0}\), this approach effectively treats the ribbon as a one-dimensional object but with modified \(W_{0}\)-dependent elastic constants. By integrating out the in-plane phonons, the effective Gibbs free energy becomes [41] \[\begin{split} G_{\rm eff}[h]&=\frac{\kappa W_{0}}{2}\int_{-L_{\epsilon}/2}^{L_{\epsilon}/2}dx\left(\frac{d^{2}h}{dx^{2}}\right)^{2}\\ &\quad+\frac{YW_{0}}{2L_{\epsilon}}\left[\int_{-L_{\epsilon}/2}^{L_{\epsilon}/2}dx\,\frac{1}{2}\left(\frac{dh}{dx}\right)^{2}\right]^{2}\\ &\quad-\frac{F}{2}\int_{-L_{\epsilon}/2}^{L_{\epsilon}/2}dx\left(\frac{dh}{dx}\right)^{2}+G^{\rm pre}[\overline{\Delta L}],\end{split} \tag{7}\] where \(L_{\epsilon}\) is the projected ribbon length corresponding to the strain \(\epsilon\) and \(G^{\rm pre}\) is the total prestress elastic energy stored during compression before buckling. \(G^{\rm pre}\) is independent of the ribbon height profile and can be dropped. Within the mean-field approximation, the ribbon height is assumed to be smooth over scales larger than the thermal length \(\ell_{\rm th}\) and the doubly clamped boundary condition is implemented. These two conditions can be approximated by the profile \(h(x)=\frac{h_{\rm M}}{2}\left[1+\cos\left(\frac{2\pi x}{L_{\epsilon}}\right)\right]\). Upon using this height as an ansatz we obtain the effective Gibbs free energy from Eq. 7 [41] \[G_{\rm eff}[h_{\rm M}]=\frac{\pi^{2}YW_{0}}{4L_{\epsilon}}(\epsilon_{c}-\epsilon)h_{\rm M}^{2}+\frac{\pi^{4}YW_{0}}{32L_{\epsilon}^{3}}h_{\rm M}^{4}, \tag{8}\] where \(\epsilon_{c}=\frac{4\pi^{2}\kappa}{YL_{\epsilon_{c}}^{2}}\) is the critical strain for Euler buckling and \(L_{\epsilon_{c}}\) the associated projected length. Although this energy resembles the Landau theory of a critical point, note that \(\epsilon_{c}\) (the analog of a critical temperature) depends on the system size. For \(\epsilon>\epsilon_{c}\), there are two stable minima at \(h_{\rm M}=\pm\frac{2L_{\epsilon_{c}}}{\pi}\sqrt{\epsilon-\frac{4\pi^{2}\kappa}{YL_{\epsilon}^{2}}}\) and one unstable point at \(h_{\rm M}=0\), whereas for \(\epsilon\leq\epsilon_{c}\) there is one stable minimum at \(h_{\rm M}=0\) (see Fig. 2(a)).

Figure 2: (a) Schematics of the mean-field Gibbs free energy \(G_{\rm eff}\) as a function of the height center-of-mass, \(h_{\rm CM}\), for a ribbon under stretched, unstrained, and buckled conditions. In each well the center of mass oscillates with a period of \(\tau_{\circ}=2\pi\sqrt{\frac{M}{k^{\rm eff}}}\), where \(M\) is the ribbon mass, \(k^{\rm eff}=\left.\frac{d^{2}G_{\rm eff}}{dh_{\rm CM}^{2}}\right|_{h_{\rm CM}=h_{\rm CM}^{*}}\) is the effective spring constant, and \(h_{\rm CM}^{*}\) is the \(h_{\rm CM}\) where \(G_{\rm eff}\) is at a minimum. (b) Representative configurations of a ribbon corresponding to three different compressive strains: \(\epsilon=-0.4\%\) (stretched), \(\epsilon=0\%\) (unstrained), and \(\epsilon=+0.6\%\) (buckled). Recall that the critical strain for compressive buckling under these conditions is quite small, \(\epsilon_{c}=0.05\%\). The color represents the \(z\)-position of a node scaled to the range \(-2a\) to \(+2a\). Positions are visualized using the OVITO software [58].

To relate this result to simulations, we use the center-of-mass midpoint \(h_{\rm CM}=\frac{1}{N}\sum_{i}z_{i}\) as a measure of the aggregate collective motion of all nodes. This simplification effectively treats the ribbon as a Brownian particle confined to a nonlinear potential. Henceforth we will write Eq. 8 and other derived quantities in terms of \(h_{\rm CM}\) using \(h_{\rm CM}^{2}\equiv\left(\frac{1}{L_{\epsilon}}\int_{-L_{\epsilon}/2}^{L_{\epsilon}/2}h\,{\rm d}x\right)^{2}=\frac{1}{4}h_{\rm M}^{2}\) [41]. Eq. 8 reveals that when \(\epsilon\ll\epsilon_{c}\) the non-linearity can be neglected, and for small height deflections \(h_{\rm CM}\) is expected to oscillate in a harmonic potential with a period \(\tau_{\rm o}=2\pi/\omega_{\rm o}\), where \(\omega_{\rm o}=\sqrt{k^{\rm eff}/M}\) is related to the total ribbon mass \(M\), with \(k^{\rm eff}=\left.\frac{d^{2}G_{\rm eff}}{dh_{\rm CM}^{2}}\right|_{h_{\rm CM}=h_{\rm CM}^{*}}\) evaluated at the minimum \(h_{\rm CM}^{*}\) shown on the right side of Fig. 2(a). From MD simulations, we indeed find that the \(h_{\rm CM}\) of a ribbon stretched at \(\epsilon=[-0.2\%,-0.3\%]\) oscillates sinusoidally about zero, as shown in Fig. 3(a). For the unstrained case, shown in Fig. 3(b), the oscillation appears to have a larger amplitude with a longer and irregular period compared to that of the stretched case. For large compressions, \(\epsilon=[+0.2\%,+0.6\%]\), well above the critical buckling threshold \(\epsilon_{c}=0.05\%\) of this particular system, the ribbon buckles out of the plane with an amplitude much larger than in the unstrained and stretched cases (see Fig. 2(b)).
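As a check on this bookkeeping, Eq. 8 rewritten in terms of \(h_{\rm CM}\), together with the locations of its minima and the barrier height, can be evaluated directly. The sketch below uses the bare (zero-temperature) elastic constants of Sec. II for illustration; at high temperature the renormalized \(\kappa_{\rm R}\) and \(Y_{\rm R}\) discussed in Sec. V should be substituted:

```python
import numpy as np

# Bare (T = 0) constants in MD units (illustrative, from Sec. II).
Y, W0, L, kappa = 2.0 * 1440.0 / np.sqrt(3.0), 20.0, 100.0, np.sqrt(3.0) / 2.0
eps_c = 4.0 * np.pi**2 * kappa / (Y * L**2)      # Euler buckling strain

def G_eff(h_cm, eps):
    """Eq. (8) rewritten via h_M = 2 h_CM: a double well for eps > eps_c."""
    return (np.pi**2 * Y * W0 / L) * (eps_c - eps) * h_cm**2 \
         + (np.pi**4 * Y * W0 / (2.0 * L**3)) * h_cm**4

eps = 10.0 * eps_c                               # delta = 9
h_star = (L / np.pi) * np.sqrt(eps - eps_c)      # location of the +/- minima
barrier = -G_eff(h_star, eps)                    # = Y*W0*L*(eps - eps_c)**2 / 2
print(h_star, barrier)
```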
It can be seen from Fig. 3(c) that \(h_{\rm CM}(t)\) of a buckled ribbon behaves like a two-level system: it stays buckled either above or below the plane of zero height, with an amplitude much larger than in the stretched/unstrained case, for a long period of time before it flips to the opposite side (moves to the other minimum of the double-well potential). Similar thermally-assisted barrier crossings are also observed in single-clamped ribbons [59]. This characteristic time, which we will call the escape time (or residence time) \(\tau_{e}\), increases with increasing compression (see Fig. 3(c)). Notice also that when the ribbon stays within a local minimum, it oscillates with a much shorter time scale \(\tau_{\rm o}\) than the escape time \(\tau_{e}\), and with a smaller fluctuation amplitude (\(\sim 0.5a\)) relative to the buckling amplitude (\(\sim 2a\)). To summarize, the ribbon oscillates around a single minimum under stretched and unstrained conditions. Beyond the buckling point, however, the ribbon switches between two minima with an escape time \(\tau_{e}\) much larger than the oscillation period inside the wells. By building on these observations and on our mean-field Gibbs free energy Eq. 8, we will now develop a framework that treats the ribbon midpoint as a Brownian particle confined in a double-well potential in which the strength of the quadratic term is controlled by the external strain (schematically shown in Fig. 2). In the next two sections, we develop a phenomenological theory of the dynamics in the limits of large compression and large stretching energy to explain these observations.

Figure 3: Midpoint trajectory \(h_{\rm CM}(t)\) of a ribbon under (a) stretched \(\epsilon=[-0.3\%,-0.2\%]\), (b) unstrained \(\epsilon=0\%\), and (c) buckled \(\epsilon=[+0.2\%,+0.6\%]\) conditions at a fixed \(W_{0}/\ell_{\rm th}=8.5\) (\(k_{\rm B}T=0.05\hat{\kappa}\)). The time \(t\) is in units of the MD time unit \(\tau_{\rm MD}\). For clarity the time domain is chosen differently in (c). For stretched ribbons (a) \(h_{\rm CM}\) oscillates rapidly about the zero plane with small amplitude. For unstrained ribbons (b) the oscillation period increases and is irregular. Well beyond the thermalized Euler buckling point ribbons stay buckled either above or below the zero plane for many oscillations before switching to the other local minimum (up to down state and vice versa). In (c) we see a dramatic increase in residence time with increasing compressive strain. In a local minimum, \(h_{\rm CM}\) fluctuates with a shorter period and with a smaller amplitude relative to the tunneling (fluctuation over a barrier) dynamics, indicating that \(h_{\rm CM}\) oscillates inside the local minimum for many periods before escaping over the potential barrier.

## V Compressed ribbon dynamics

In this section we focus on the dynamics of ribbons under compression above the Euler buckling point. We model the transition from the buckled up state to the down state as a rare event: a transition process over an energy barrier \(E_{\rm barrier}\). We begin by discussing the thermally activated transition process of a system in a double-well potential. We then compare the molecular dynamics results with the theoretical predictions [60; 61].
### Escape time estimated from transition state theory

The problem of escaping a barrier in a noisy environment, such as a thermal bath, has been studied extensively since the late 1800s, when the well-known Arrhenius form for the escape rate was first formulated based on experimental data [62], \[\mathcal{R}=\nu_{0}e^{-E_{a}/k_{\rm B}T}, \tag{9}\] where \(\nu_{0}\) is a prefactor related to an escape frequency and \(E_{a}\) denotes the activation energy. Soon after, several theories, summarized in Ref. [61], were developed, including Kramers' seminal work [63] incorporating the coupling of the particle to the heat bath (a frictional force), which is missing in the Arrhenius formula. Kramers used a microscopic model, say a particle in a nonlinear double-well potential governed by Langevin equations, to formulate the transition rate. The transition rate in the intermediate-to-high damping regime is given by [60; 63] \[\mathcal{R}=\left[\left(\frac{\gamma^{2}}{4M^{2}}+\omega_{b}^{2}\right)^{1/2}-\frac{\gamma}{2M}\right]\frac{\omega_{\rm o}}{2\pi\omega_{b}}\exp\left[-\frac{E_{\rm barrier}}{k_{\rm B}T}\right], \tag{10}\] where \(E_{\rm barrier}\) is the energy barrier, \(\gamma\) is the damping coefficient, \(\omega_{\rm o}\equiv(U^{\prime\prime}(x_{\rm min})/M)^{1/2}\) is the angular frequency in the metastable minimum, \(\omega_{b}\equiv(|U^{\prime\prime}(x_{b})|/M)^{1/2}\) is the angular frequency at the transition point (the unstable local maximum), \(M\) is the particle mass, and \(U^{\prime\prime}(x)\) is the second derivative of the conservative potential \(U(x)\). Given that the collective motion of the buckled ribbon, characterized by \(h_{\rm CM}(t)\) in an effective free energy with both harmonic and quartic terms, resembles escape over a barrier, we will first calculate the energy barrier and then discuss the behavior in different temperature regimes. Since we work with relatively small strains, we assume a compressive stress \(\sigma_{xx}\simeq Y\epsilon\). We define a reduced additional compressive strain relative to critical buckling as \(\delta\equiv\frac{\epsilon-\epsilon_{c}}{\epsilon_{c}}\). In our previous work we found that the Gibbs free energy can be used to predict thermalized Euler buckling provided that we use the thermally _renormalized_ elastic constants \(Y_{\rm R}=Y(W_{0}/\ell_{\rm th})^{-\eta_{u}}\) and \(\kappa_{\rm R}=\kappa(W_{0}/\ell_{\rm th})^{\eta}\) whenever \(W_{0}/\ell_{\rm th}\gg 1\) [41]. Following the same approach, we use renormalized elastic constants to calculate \(E_{\rm barrier}\), the temperature-dependent critical buckling strain \(\epsilon_{c}=4\pi^{2}\kappa_{\rm R}/(Y_{\rm R}L_{\epsilon_{c}}^{2})\), and the maximum height \(h_{\rm M}=\frac{2L_{\epsilon_{c}}}{\pi}\sqrt{\delta\times\epsilon_{c}}\). By inserting these renormalized values into Eq. 10, we obtain the escape time \(\tau_{e}\equiv\mathcal{R}^{-1}\), \[\tau_{e}=\tau_{p}\exp\left[\frac{8\pi^{4}W_{0}\kappa_{\rm R}^{2}\delta^{2}}{Y_{\rm R}L_{\epsilon_{c}}^{3}k_{\rm B}T}\right]. \tag{11}\] Here we introduce a prefactor time scale \(\tau_{p}=\left\{\left[\left(\frac{\gamma^{2}}{4M^{2}}+\omega_{b}^{2}\right)^{1/2}-\frac{\gamma}{2M}\right]\frac{\omega_{\rm o}}{2\pi\omega_{b}}\right\}^{-1}\), which is the inverse of the prefactor in Eq. 10.
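The rate of Eq. 10 and the escape time of Eq. 11 are straightforward to evaluate numerically. The sketch below uses assumed illustrative numbers (not fitted values); the exponent is taken from the high-temperature form derived later in Eq. 14, and the relation \(\omega_{b}=\omega_{\rm o}/\sqrt{2}\) follows from Eq. 12:

```python
import numpy as np

def kramers_escape_time(E_barrier, kB_T, M, gamma, omega_o, omega_b):
    """Inverse of the intermediate-to-high-damping Kramers rate, Eq. (10)."""
    rate = (np.sqrt((gamma / (2.0 * M))**2 + omega_b**2) - gamma / (2.0 * M)) \
           * omega_o / (2.0 * np.pi * omega_b) * np.exp(-E_barrier / kB_T)
    return 1.0 / rate

# Illustrative numbers in MD units (assumed, not fitted).
M, gamma, kB_T = 2000.0, 50.0, 0.05
omega_o = 0.02
omega_b = omega_o / np.sqrt(2.0)      # tau_b = sqrt(2) * tau_o, from Eq. (12)
delta, W0_over_L = 20.0, 0.2          # well beyond buckling: barrier >> kB_T
E_barrier = kB_T * (3.0 * np.pi / 8.0) * delta**2 * W0_over_L**3  # Eq. (14)
print(kramers_escape_time(E_barrier, kB_T, M, gamma, omega_o, omega_b))
```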
Note that to ensure infrequent transitions, \(E_{\rm barrier}/k_{\rm B}T\gg 1\) must be satisfied, so that we obtain the separation-of-time-scales condition \(\tau_{e}\propto\exp[E_{\rm barrier}/k_{\rm B}T]\gg\tau_{\rm o},\tau_{b}\), with \(\tau_{\rm o}\equiv 2\pi/\omega_{\rm o}\) and \(\tau_{b}\equiv 2\pi/\omega_{b}\) being the characteristic times at the bottom of the well and at the saddle point, respectively. On using the energy functional with the renormalized elastic parameters (Eqs. 5, 6 and 8) we can directly calculate the renormalized \(\tau_{\rm o}\) and \(\tau_{b}\), in terms of the areal mass density \(\rho\), the ribbon length and the renormalized bending rigidity \(\kappa_{\rm R}\), \[\tau_{\rm o}^{\rm R}=L_{0}^{2}\sqrt{\frac{\rho}{4\pi^{2}\kappa_{\rm R}\delta}},\quad\tau_{b}^{\rm R}=L_{0}^{2}\sqrt{\frac{\rho}{2\pi^{2}\kappa_{\rm R}\delta}}. \tag{12}\] Note that both these times diverge as \(\delta\to 0\). Here we use \(L_{\epsilon}\approx L_{0}\), as we are working with systems with large Foppl-von Karman number vK, and \(M=\rho W_{0}L_{0}\). Our numerical simulations confirm that \(L_{\epsilon_{\rm c}}\) is approximately \(L_{0}\) and weakly dependent on \(T\) as long as \(L_{0}\) is smaller than the persistence length \(\ell_{\rm p}=\frac{2\kappa W_{0}}{k_{\rm B}T}\) (see Appendix B). Note that when \(\ell_{\rm p}\ll L_{0}\), the ribbon will behave like a 1D polymer [23]. In the low-temperature regime, where \(\kappa_{\rm R}\simeq\kappa\), we recover the \(L_{0}^{2}\) dependence of the oscillation period \(\tau_{\rm o}\), a well-known result for doubly clamped beams [64; 65]. Upon inserting \(\kappa_{\rm R}=\kappa(W_{0}/\ell_{\rm th})^{\eta}\) into Eq. 12 to describe the important intermediate temperature regime, we find \(\tau_{\rm o}^{\rm R},\tau_{b}^{\rm R}\propto L_{0}^{z}\) with \(z=2-\eta/2\). To the best of our knowledge, these deviations of the exponent away from the classical result have not been systematically investigated, either in experiments (\(\tau_{\rm o}\propto L_{0}^{2}\) scaling was reported in Ref. [13] and \(\tau_{\rm o}\propto L_{0}\) in Ref. [15], conclusions which bracket our result \(z=2-\eta/2\simeq 1.6\), presumably due to relatively large error bars) or in numerical simulations. We shall investigate the exponent \(z\) and the power-law scaling with \(T\) numerically in Sec. VII.

### Escape time in different temperature regimes

We first focus on the exponential term, which dominates the behavior for large \(\delta\). For convenience in our analysis, we write the term involving \(\kappa^{2}/Y\) in terms of \(\ell_{\rm th}^{2}\). In the classical low-temperature regime we use the bare elastic constants to obtain \[\tau_{e}=\tau_{p}\exp\left[\frac{3\pi\delta^{2}}{8}\left(\frac{W_{0}}{L_{\epsilon_{c}}}\right)^{3}\left(\frac{\ell_{\rm th}}{W_{0}}\right)^{2}\right]. \tag{13}\] In this regime the ratio between the energy barrier and the thermal energy depends on the cube of the aspect ratio \(W_{0}/L_{\epsilon_{c}}\) and the square of \(\ell_{\rm th}/W_{0}\), yielding the usual Arrhenius-like behavior \(\tau_{e}\propto\exp[E_{\rm barrier}/k_{\rm B}T]\). In the high-temperature regime, however, we use the renormalized elastic constants \(\kappa_{\rm R}=\kappa(W_{0}/\ell_{\rm th})^{\eta}\) and \(Y_{\rm R}=Y(W_{0}/\ell_{\rm th})^{-\eta_{u}}\), as well as the scaling relation \(2\eta+\eta_{u}=2\) [54], to obtain the escape time \[\tau_{e}=\tau_{p}\exp\left[\frac{3\pi\delta^{2}}{8}\left(\frac{W_{0}}{L_{\epsilon_{c}}}\right)^{3}\right]. \tag{14}\]
Remarkably, and unlike the usual Arrhenius behavior, the exponential term in this case _does not_ depend on temperature, but instead depends solely on the geometry, specifically as the cube of the aspect ratio. Now according to Eq. 10, the prefactor time \(\tau_{p}\) depends on the temperature and strain: \(\tau_{p}\approx\frac{2\pi\gamma}{M\omega_{\rm o}\omega_{b}}\) for \(\gamma/M\gg\omega_{b}\) and \(\tau_{p}\approx\frac{M}{\gamma}\frac{k_{\rm B}T}{E_{\rm barrier}}\) for \(\gamma/M\ll\omega_{b}\). (Note that one cannot simply take the limit \(\gamma/M\ll\omega_{b}\) in Eq. 10 to get the _very low_ damping regime result. Kramers used a different formulation for this very low damping case [60; 61; 63].) Turning now to the scaling with system size, temperature, and relative compression, we find that in the high-damping regime (\(\frac{\gamma}{M}\gg\omega_{b}\)) the prefactor scales as \[\tau_{p}\propto\begin{cases}L_{0}^{4}\delta^{-1}&\text{ if }W_{0}/\ell_{\rm th}\ll 1\\ L_{0}^{4-\eta}\delta^{-1}T^{-\eta/2}\sim L_{0}^{3.2}\delta^{-1}T^{-0.4}&\text{ if }W_{0}/\ell_{\rm th}\gg 1.\end{cases} \tag{15}\] Here we use \(\eta\approx 0.8\) and assume some fixed aspect ratio \(W_{0}/L_{0}\), with \(W_{0}\simeq L_{0}\), and a fixed ribbon density. With the same assumptions, we can obtain the prefactor time scale \(\tau_{p}\approx\frac{M}{\gamma}\frac{k_{\rm B}T}{E_{\rm barrier}}\) at very low damping: \[\tau_{p}\propto\begin{cases}L_{0}^{2}\delta^{-2}T&\text{ if }W_{0}/\ell_{\rm th}\ll 1\\ \delta^{-2}&\text{ if }W_{0}/\ell_{\rm th}\gg 1.\end{cases} \tag{16}\] Thus, apart from the case of low damping and weak (subdominant) renormalization, \(\tau_{p}\) shows either weak or no temperature-dependent behavior. Note that we expect Kramers' result to be valid when the energy barrier is larger than the thermal energy \(k_{\rm B}T\). By taking the log of the escape time \(\tau_{e}\) (either Eq. 13 or Eq. 14), we see that the \(\delta^{2}\) term (from the energy barrier) dominates the \(\log\delta\) term (from \(\tau_{p}\)) for large \(\delta\). In the next section, we assume that \(\tau_{p}\) is independent of \(\delta\) when fitting the extracted escape time \(\tau_{e}\) to either the high-temperature result Eq. 14 or the low-temperature result Eq. 13. Since the exponential term dominates for large \(\delta\) and \(\tau_{p}\) is weakly dependent on \(T\) for the set of parameters used in our simulations, our analysis suggests the rather intriguing result that the escape time is controlled only by the geometry when \(W_{0}/\ell_{\rm th}\gg 1\). In the low-temperature regime, however, we recover the usual Arrhenius behavior with a geometry-dependent energy barrier.

## VI Molecular dynamics results of compressed ribbons

We now turn to molecular dynamics data to test our thermally renormalized stochastic model of a double-well potential beyond the buckling transition. We will use relaxation times extracted from the autocorrelation function, \(\tau_{\rm AC}\), to approximate the escape time \(\tau_{e}\). Specifically, we calculate the discrete autocorrelation function of the average ribbon height \(h_{\rm CM}\) to quantify the ribbon dynamics \[A_{h_{\rm CM}}(t_{j})=\frac{1}{(n-j)\sigma^{2}}\sum_{i=1}^{n-j}[h_{\rm CM}(t_{i}+t_{j})-\mu][h_{\rm CM}(t_{i})-\mu], \tag{17}\] where \(n\) is the number of observations in a single simulation run, the time offset is \(t_{j}=j\times\Delta t\), and the sum runs over the set of times \(t_{i}=i\times\Delta t\) with \(i=1,\ldots,n-j\). Here \(\mu\) and \(\sigma^{2}\) are the mean and variance of \(h_{\rm CM}\), respectively. In our simulations we choose \(\Delta t=10\tau_{\rm MD}\).
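As an illustration, Eq. 17 and the exponential fit anticipated below (Eq. 18) can be implemented in a few lines of NumPy; the helper names and the fitting window are ours, not taken from the production analysis scripts:

```python
import numpy as np

def autocorr(h_cm, dt):
    """Normalized discrete autocorrelation of h_CM(t), Eq. (17)."""
    x = h_cm - h_cm.mean()
    n = len(x)
    A = np.array([x[: n - j] @ x[j:] / (n - j) for j in range(n // 2)])
    return A / x.var(), dt * np.arange(n // 2)

def decay_time(A, t, floor=0.05):
    """Exponential-decay time from a linear fit of log A(t) (cf. Eq. 18),
    restricted to the well-resolved, monotonic part of the decay, A > floor."""
    m = A > floor
    slope, _ = np.polyfit(t[m], np.log(A[m]), 1)
    return -1.0 / slope
```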
Given that successive jumps between the up and down states occur at random intervals (see Fig. 3(c)), we expect \(A_{h_{\rm CM}}(t)\) to decay exponentially in time for sufficiently long times \(t\), \[A_{h_{\rm CM}}(t)\propto\exp[-t/\tau_{\rm AC}], \tag{18}\] where \(\tau_{\rm AC}\) is the autocorrelation time. We average \(A_{h_{\rm CM}}(t)\) over 10 independent runs and fit the data to an exponential function to extract \(\tau_{\rm AC}\). While in practice \(\tau_{\rm AC}\) may capture more than escape-over-a-barrier dynamics (the longest relaxation time), we expect \(\tau_{\rm AC}\) to be dominated by the escape time \(\tau_{e}\), provided we run our simulations long enough to capture at least several rare flipping events; otherwise \(\tau_{\rm AC}\) will be on the order of the short-scale relaxation time inside the well. In App. C we use a different phenomenological theory to extract \(\tau_{e}\) by filtering the up and down states; we still conclude that \(\tau_{e}\) robustly increases with compression and temperature following Eqs. 13 and 14, at low and high temperature, respectively. Fig. 4(a) shows the rapid increase in \(\tau_{e}\) with increasing compression at a set of temperatures with \(W_{0}/\ell_{\rm th}=[24,17,12,8.5,0.3,0.2]\). At high temperatures (\(W_{0}/\ell_{\rm th}>5\)) and sufficiently large \(\delta^{2}\) we see that the slopes, the coefficients in the exponent, are close to \(\frac{3\pi}{8}\left(\frac{W_{0}}{L_{0}}\right)^{3}\). Remarkably, this high-temperature result indeed indicates the _temperature-independent_ ratio of the activation energy to the thermal energy discussed in the previous section. The slopes in the low-temperature regime (\(W_{0}/\ell_{\rm th}\lesssim 0.5\)), in contrast, increase systematically as the temperature drops. These two very different behaviors are consistent with our earlier analyses based on a thermally renormalized double-well potential for the ribbon height. To further test our theoretical predictions, we fit the high-temperature data (\(W_{0}/\ell_{\rm th}>5\)) to Eq. 14 and the low-temperature data (\(W_{0}/\ell_{\rm th}<0.5\)) to Eq. 13 using _only_ \(\tau_{p}\) as an adjustable fitting parameter. As discussed in the previous section, here we assume that \(\tau_{p}\) is independent of \(\delta\), as the exponential of \(\delta^{2}\) dominates for large \(\delta\). By rescaling \(\tau_{e}\) with \(\tau_{p}\) and \(\delta^{2}\) with the appropriate temperature and geometrical factors, we are able to collapse all data onto a single curve, as shown in Fig. 4(b). The inset of Fig. 4(b) shows a log-log plot of the fitted prefactor time \(\tau_{p}\) as a function of \(W_{0}/\ell_{\rm th}\). We see that, apart from the lowest temperatures, \(\tau_{p}\) depends weakly on \(W_{0}/\ell_{\rm th}\). Fitting only the four high-temperature data points (\(W_{0}/\ell_{\rm th}>5\)) we find \(\tau_{p}={\rm constant}\times T^{0.03}\). This suggests that our data are better described by the low-damping case (see Eqs. 15 and 16). Kalmykov et al. provided an exact solution for the correlation time of a Brownian particle in a double-well potential in terms of special functions [60; 67].
The approximate prefactor of Refs. [60; 67], however, still scales as \(1/E_{\rm barrier}\propto\delta^{-2}\), the same as Kramers' very-low-damping result. Our data in the small-\(\delta\) regime, however, do not show such behavior, and we do not expect our phenomenological model based on Kramers' result to work for \(E_{\rm barrier}\ll k_{\rm B}T\). Although a more refined theoretical treatment of the prefactor of the escape time for the center-of-mass motion of a doubly clamped ribbon is beyond the scope of our current work, we hope to investigate the geometrical and temperature dependencies of this prefactor in the future.

## VII Stretched and Unstrained ribbon

We now turn to the dynamics of a ribbon in the regime away from the threshold compressive force needed to produce the Euler buckling transition, which includes both the stretched and the unstrained cases. Again we approximate the center-of-mass dynamics of the ribbon as a Brownian particle confined in a potential. We first discuss some key results, such as the analytical solution of the positional autocorrelation function within the Brownian particle approximation. The complete derivation of the solution for a Brownian particle in a harmonic potential can be found in Refs. [60; 68], and for completeness we provide key results in App. A. In the limit of small deflection amplitude, and especially in the large-stretching limit (so that the curvature of the parabolic minimum as a function of \(h_{\rm CM}\) is large), we can neglect the fourth-order term in the potential, retaining only the harmonic term. With this simplification, the equations of motion for the ribbon midpoint become linear, \[\frac{dh_{\rm CM}}{dt}=v\] \[\frac{dv}{dt}=-\frac{\gamma}{M}v-\omega_{0}^{2}h_{\rm CM}+\frac{1}{M}\xi(t), \tag{19}\] where \(\omega_{0}=(k^{\rm eff}/M)^{1/2}\) is the natural frequency, \(k^{\rm eff}=\frac{2\pi^{2}Y_{\rm R}W_{0}|\epsilon-\epsilon_{c}|}{L_{\epsilon}}\) is the effective spring constant, \(M\) is the mass, and \(\gamma\) is the damping coefficient. The random force \(\xi(t)\) is a Gaussian process with zero mean and a correlation function proportional to a \(\delta\)-function. One can solve the Langevin equations in frequency space and obtain the autocorrelation function of \(h_{\rm CM}\) by inverse Fourier transforming the spectral density of the midpoint dynamics, \(S_{h_{\rm CM}}\propto|h_{\rm CM}(\omega)|^{2}\). The time autocorrelation function of \(h_{\rm CM}\) for the model of Eq. 19 is given by \[A_{h_{\rm CM}}(t)=\langle h_{\rm CM}(t^{\prime})h_{\rm CM}(t^{\prime}+t)\rangle=\frac{k_{\rm B}T}{M\omega_{\rm o}^{2}}e^{-\frac{\gamma}{2M}t}\left[\cos\omega_{\rm D}t+\frac{\gamma}{2M\omega_{\rm D}}\sin\omega_{\rm D}t\right], \tag{20}\] where the damping-renormalized natural frequency is defined as \(\omega_{\rm D}=\sqrt{\omega_{\rm o}^{2}-\gamma^{2}/(4M^{2})}\). Thus \(A_{h_{\rm CM}}(t)\) is an oscillating function with an exponentially decaying envelope. The damping time \(\tau_{\rm damp}\equiv\frac{2M}{\gamma}\) that appears in the exponential prefactor provides a phenomenological description of dissipation in the system.

### Molecular dynamics results

We now present measurements of the time autocorrelation function \(A_{h_{\rm CM}}(t)\) of \(h_{\rm CM}(t)\) from the simulations of a doubly clamped ribbon.
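Concretely, a fit of Eq. 20 used to extract \(\omega_{\rm D}\) and \(\tau_{\rm damp}\) from such measurements can be sketched as follows; this is a minimal SciPy-based sketch with function names and initial guesses of our own choosing, not the paper's analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def A_model(t, A0, omega_D, tau_damp):
    """Damped-oscillator autocorrelation, Eq. (20), with tau_damp = 2M/gamma."""
    g = 1.0 / tau_damp
    return A0 * np.exp(-g * t) * (np.cos(omega_D * t)
                                  + (g / omega_D) * np.sin(omega_D * t))

def fit_omega_o(t, A, p0):
    """Fit Eq. (20), then undo the damping shift:
    omega_o**2 = omega_D**2 + (gamma / 2M)**2 = omega_D**2 + 1 / tau_damp**2."""
    (A0, omega_D, tau_damp), _ = curve_fit(A_model, t, A, p0=p0)
    return np.sqrt(omega_D**2 + 1.0 / tau_damp**2), tau_damp
```

The returned \(\omega_{\rm o}\) and \(\tau_{\rm damp}\) are exactly the quantities used below to construct the quality factor \(Q=\tau_{\rm damp}\omega_{\rm o}/2\).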
Fig. 5 shows the average \(A_{h_{\rm CM}}\), after thermodynamic equilibrium is reached, as a function of time \(t\) for stretched, unstrained, and compressed ribbons at one representative temperature, \(\frac{W_{0}}{\ell_{\rm th}}=8.5\), such that thermal fluctuations renormalize the bending rigidity; similar results are found for other parameter choices. For the stretched case \(A_{h_{\rm CM}}\) oscillates rapidly but decays rather slowly, whereas for the unstrained case \(A_{h_{\rm CM}}\) oscillates at a lower frequency but decays much faster. When the ribbon is compressed well above the critical buckling threshold, on the other hand, \(A_{h_{\rm CM}}\) displays a purely exponential decay. These findings are consistent with the dynamics of \(h_{\rm CM}(t)\) itself shown earlier in Fig. 3.

Figure 5: Autocorrelation of the midpoint \(A_{h_{\rm CM}}\) as a function of time \(t\) for (a) a stretched, \(\epsilon=-2\%\), (b) an unstrained, \(\epsilon=0\%\), and (c) a compressed ribbon, \(\epsilon=+2\%\). The last strain is above the buckling threshold, \(\epsilon_{c}=0.05\%\) for our parameter choices. Here the system is thermalized at \(W_{0}/\ell_{\rm th}=8.5\), so thermal renormalization of the elastic parameters is important. The circles represent MD data and the black line represents the fit. For the stretched and unstrained cases \(A_{h_{\rm CM}}\) shows oscillatory plus exponentially decaying behavior, following Eq. 20. For the buckled case, in contrast, \(A_{h_{\rm CM}}\) shows only exponential decay.

We next calculate the ribbon resonant frequency \(\omega_{\rm o}\) from the parameters \(\omega_{\rm D}\) and \(\tau_{\rm damp}\) extracted by fitting the data to Eq. 20. Fig. 6(a) shows \(\omega_{\rm o}/\overline{\omega_{\rm o}}\) as a function of the relative compression \(\delta=\frac{\epsilon-\epsilon_{c}}{\epsilon_{c}}\), where \(\overline{\omega_{\rm o}}\equiv\frac{1}{L_{0}^{2}}\sqrt{\frac{8\pi^{4}\kappa}{\rho}}\) is the bare natural frequency of the unstrained ribbon without thermal fluctuations (\(T=0\), \(\delta=-1\)). The increase of \(\omega_{\rm o}\) with increasing tension is consistent with experiments on graphene resonators [13; 15]. Notice that, as expected, \(\omega_{\rm o}/\overline{\omega_{\rm o}}\simeq 1\) for the two lowest temperatures (\(W_{0}/\ell_{\rm th}<0.5\)). In contrast, at high temperatures (\(W_{0}/\ell_{\rm th}>5\)) \(\omega_{\rm o}\) increases relative to its zero-temperature value, indicating a stiffening of the bending rigidity. Based on the earlier analysis we can use the predicted renormalized natural frequency at zero strain, \(\omega_{\rm o}^{\rm R}\simeq\frac{1}{L_{0}^{2}}\sqrt{\frac{8\pi^{4}\kappa_{\rm R}}{\rho}}\), to rescale the data. Rescaling \(\omega_{\rm o}\) with its renormalized value \(\omega_{\rm o}^{\rm R}\) yields a better data collapse for not too large \(|\delta|\), as shown in Fig. 6(b). This strategy provides an oscillation-based route to measuring the stiffening of the bending rigidity, complementary to critical buckling measurements [41; 43]. Because \(\tau_{\rm damp}\) also increases with increasing tension, the quality factor of the oscillating ribbon \(Q=\tau_{\rm damp}\omega_{\rm o}/2\) increases with increasing tension. Our simulation data suggest that energy dissipation is reduced for a stretched ribbon, consistent with experimental results for doubly clamped graphene [13; 15]. We also note that we employ the NVT ensemble with a Nose-Hoover thermostat [51; 69], which does not have a fixed damping as in Langevin dynamics simulations [70]. In the NVT ensemble a dynamical term, physically interpreted as a friction, changes during the approach to thermal equilibration. Once thermal equilibrium at a target temperature is reached, the dynamical friction goes to a finite value and its rate of change vanishes. Given our MD simulation setup, the Brownian particle embodied in our mean-field description of a thermalized ribbon is effectively coupled to a thermal bath (thermostat). Consistent with theoretical and numerical investigations of a beam coupled to a Nose-Hoover thermostat by Louhghalam et al. [71] (see App. D), we anticipate an effective damping to occur due to the coupling between the doubly clamped ribbon and the thermal bath. This prediction of energy loss, associated with the damping term, is consistent with our MD simulation results.

## VIII Conclusions

Molecular dynamics simulations of the dynamics of an ultra-thin doubly clamped nanoribbon oscillator reveal rich dynamical behavior. Unlike cantilever geometries, in which stresses relax automatically to produce relatively simple scale-dependent elastic behaviors [29], isometric constraints [72] embodied in double clamping can lead to an effective tension, a buckling transition, and other intriguing phenomena. Thermal fluctuations render the long-wavelength bending rigidity and 2D Young's modulus temperature- and scale-dependent, with important implications for the motion of the center-of-mass height and the two-state nature of the ribbon. The escape time of a ribbon clamped beyond the onset of thermalized Euler buckling grows with increasing compression, as the system must sample two degenerate minima separated by an increasing barrier height. At high temperatures, where thermal fluctuations are significant, the energy barrier for bistable buckled ribbons increases linearly with temperature, thus leading to an approximately temperature-independent Boltzmann factor governing the transition rate. This compensation in the barrier-crossing process leads to a transition time in this two-level system that depends only on geometry, in sharp distinction to the low-temperature regime where the escape time increases with the usual Arrhenius-like behavior, \(\tau_{e}\propto e^{E_{\rm barrier}/k_{\rm B}T}\). For a stretched ribbon we find that the natural angular frequency \(\omega_{\rm o}\) and the quality factor \(Q\) increase with increasing tension, consistent with experiments [13; 14; 15]. Our theoretical work indicates that in the high-temperature regime the oscillation period scales with ribbon size \(L_{0}\) and temperature \(T\) as \(\tau_{\rm o}^{\rm R}\propto\frac{1}{\omega_{\rm o}^{\rm R}}\sim L_{0}^{(2-\eta/2)}T^{-\eta/4}\). This scaling with ribbon size \(L_{0}\) suggests that thermalized nanoribbons close to the buckling transition, so that the ribbons are relaxed, behave as a system with a dynamical critical exponent of \(z=2-\eta/2=1.6\), assuming that the static critical exponent \(\eta\) is \(0.8\). Several experiments on doubly clamped graphene ribbons have shown either \(L_{0}^{2}\) [13] or \(L_{0}\) [15] scaling of the inverse of the natural frequency. These experimental results, of limited precision, bracket the exponent \(z=1.6\) found here. This scaling behavior could be tested by computational and experimental work that systematically varies the system size whilst ensuring vanishing tension.
In this context, we mention a very recent theoretical investigation of the dynamics of _free-standing_ graphene [35], which is also motivated by experimental investigations [34]. Granato et al. argue that the time behavior of the mean-square displacement of height fluctuations, \(\langle\Delta h(t)^{2}\rangle\), at long and intermediate times, should not depend on the microscopic length. This argument, together with the scaling of elastic membranes, leads to \(\langle\Delta h(t)^{2}\rangle\sim t^{\frac{\zeta}{1+\zeta}}\), with \(\zeta=(1-\eta/2)\) being the roughening critical exponent, a static equilibrium quantity. Further dimensional analysis by Granato et al. suggests that the subdiffusive time scale of the mean-square displacement has the form \(\tau\sim L_{0}^{2(1+\zeta)}\sim L_{0}^{4-\eta}\). Our work concerns the dynamical exponents of different physical quantities: (i) the characteristic oscillation time of the midpoint inside a minimum, \(\tau_{\text{o}}\sim L_{0}^{2-\eta/2}\), and (ii) the characteristic prefactor time scale of the escape time, with \(\tau_{p}\sim L_{0}^{4-\eta}\) in the high-damping regime and \(\tau_{p}\) independent of system size in the low-damping regime. Our simulation results confirm that \(\omega_{\mathrm{o}}\) increases with increasing temperature due to the stiffening of the bending rigidity, which is consistent with our theoretical model. Several experiments have shown that the natural angular frequency \(\omega_{\mathrm{o}}\) and the quality factor \(Q\) of graphene resonators indeed increase with decreasing temperature [14; 15]. In contrast, other experiments on graphene resonators showed that the natural frequency increases with increasing temperature [73; 74]. Both sets of experiments conclude that frozen strains due to cooling/heating cycles could play an important role in altering the resonant frequency. The strain, however, is not directly controlled in those experiments. Our simulations, in contrast, allow us to examine the temperature dependence of \(\omega_{\mathrm{o}}\) while keeping the relative compression constant across different temperatures. We show that, for a fixed reduced stretching strain, \(\omega_{\mathrm{o}}\) increases with temperature according to \(\omega_{\mathrm{o}}\sim T^{\eta/4}\), due to the stiffening of the bending rigidity above the temperature at which thermal renormalization effects set in. Another challenge requiring further study is the temperature dependence of \(Q\) and \(\omega_{\mathrm{o}}\) when the energy loss due to boundary effects, such as imperfect clamping and the different thermal expansions of the different materials present in physical experiments [75; 76], is taken into account. Future investigations might include simulating a ribbon adhered to a substrate via an attractive microscopic potential, as opposed to the perfect clamping condition imposed in our current work. In summary, we have investigated the dynamics of the midpoint of doubly clamped nanoribbons over a wide range of temperatures. This work suggests that dynamical measurements may be used as an alternative way to study the unusual thermal renormalization of the underlying elastic constants. We hope that our work will encourage theoretical and experimental investigations of the non-trivial dynamical exponents of atomically thin ribbons of, e.g., graphene and MoS\({}_{2}\).
From a practical standpoint, our findings are important for predicting the response of nanoactuators and non-linear mechanical nanoresonators operating over a wide range of temperature and strain conditions.

###### Acknowledgements.

P.Z.H. and D.R.N. acknowledge support through NSF Grant No. DMR-1608501 and via the Harvard Materials Science Research and Engineering Center, through NSF Grant No. DMR-2011754. We also thank the KITP program, "The Physics of Elastic Films: From Biological Membranes to Extreme Mechanics," supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. D.Y. acknowledges support from Ministerio de Economia y Competitividad (MINECO) and Agencia Estatal de Investigacion (Spain) through grant no. PGC2018-094684-B-C21, partially funded by the European Regional Development Fund (FEDER, European Union). HOOMD simulation input scripts and other codes are available at [https://github.com/phanakata/statistical-mechanics-of-thin-materials/](https://github.com/phanakata/statistical-mechanics-of-thin-materials/). We thank Roberto Valenzuela, Suraj Shankar, Daniel Lopez, Richard Huang and Abigail Plummer for helpful discussions. P.Z.H also thanks Harold Park and Jin-Wu Jiang for helpful discussions on energy dissipation in nanomechanical systems.

In these Appendices we provide detailed derivations, supplementary molecular dynamics data, and a complementary phenomenological theory describing the dynamics, which are not included in the main text.

## Appendix A Brownian particle in 1D harmonic potential

We consider a Brownian particle of mass \(m\) allowed to move in the \(x\) direction and confined in a harmonic potential \(V(x)=kx^{2}/2\). This model is used to approximate the center-of-mass height \(h_{\text{CM}}\) of an unbuckled ribbon well below the threshold for the Euler buckling transition, although the quartic term will lead to some corrections to the results in this Appendix (see our coarse-grained Gibbs free energy). The equations of motion are given by \[\frac{dx}{dt} =v \tag{10}\] \[\frac{dv}{dt} =-\frac{\gamma}{m}v-\omega_{0}^{2}x+\frac{1}{m}\xi(t), \tag{11}\] where \(\omega_{0}^{2}=k/m\) defines the oscillator frequency associated with the harmonic potential at \(x=h_{\text{CM}}=0\) for the ribbon. The random force \(\xi(t)\) is a Gaussian process with zero mean and a correlation function proportional to a \(\delta\)-function, \[\langle\xi(t)\rangle=0,\quad\langle\xi(t)\xi(t^{\prime})\rangle=2\gamma k_{B}T\delta(t-t^{\prime}). \tag{12}\] Upon Fourier transforming the Langevin equations (Eqs. 10 and 11) to the frequency domain, \[-i\omega x(\omega) =v(\omega) \tag{13}\] \[-i\omega v(\omega) =-\frac{\gamma}{m}v(\omega)-\omega_{0}^{2}x(\omega)+\frac{1}{m}\xi(\omega), \tag{14}\] and upon solving the equations above, we obtain \[x(\omega)=\frac{1}{m}\frac{\xi(\omega)}{\omega_{0}^{2}-\omega^{2}-i\frac{\gamma}{m}\omega}. \tag{15}\] It is useful to study the amplitude of \(x(t)\) in frequency space to understand the dynamics. A closely related and commonly measured quantity in signal processing and studies of Brownian motion is the spectral density \(S_{x}(\omega)\propto|x(\omega)|^{2}\): \[S_{x}(\omega) =\frac{1}{m^{2}}\frac{\langle|\xi(\omega)|^{2}\rangle}{|\omega_{0}^{2}-\omega^{2}-\frac{\gamma}{m}i\omega|^{2}} \tag{16}\] \[=\frac{2\gamma k_{B}T}{m^{2}\left[(\omega_{0}^{2}-\omega^{2})^{2}+\frac{\gamma^{2}}{m^{2}}\omega^{2}\right]}.\] From the equipartition theorem we expect \(m\omega_{0}^{2}\langle x_{0}^{2}\rangle/2=k_{B}T/2\).
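As a quick numerical sanity check of Eqs. 10-12, the minimal Euler-Maruyama sketch below integrates the Langevin equations and compares the measured \(\langle x^{2}\rangle\) with the equipartition value \(k_{B}T/(m\omega_{0}^{2})\); the parameter values are illustrative (units with \(m=k_{B}=1\)), not those of our MD simulations:

```python
import numpy as np

# Euler-Maruyama integration of Eqs. (10)-(11); illustrative parameters
# in units with m = k_B = 1, not the MD values used in the main text.
m, gamma, omega0, kBT = 1.0, 0.5, 1.0, 0.1
dt, nsteps = 1e-3, 500_000
rng = np.random.default_rng(1)

x, v, x2_sum = 0.0, 0.0, 0.0
for _ in range(nsteps):
    noise = rng.normal() * np.sqrt(2.0 * gamma * kBT * dt) / m
    v += (-(gamma / m) * v - omega0**2 * x) * dt + noise
    x += v * dt
    x2_sum += x * x

print("<x^2> measured:", x2_sum / nsteps)
print("<x^2> theory  :", kBT / (m * omega0**2))   # equipartition value
```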
We can normalize \(S_{x}(\omega)\) by inserting \(\langle x_{0}^{2}\rangle=k_{B}T/(m\omega_{0}^{2})\). We plot \(\frac{S_{x}}{k_{B}T/(m\omega_{0}^{2})}\) (Eq. 12) as a function of \(\omega\) for a fixed \(\omega_{0}=1\) and different values of \(\gamma/m,k_{B}T\) in Fig. 7(a). Similarly, by inverting the Fourier transform, we can calculate the position autocorrelation function \[C_{x}(t) =\frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega e^{-i\omega t}S_{x}(\omega) \tag{17}\] \[=\frac{\gamma k_{B}T}{\pi m^{2}}\int_{-\infty}^{\infty}d\omega e^{-i\omega t}\frac{1}{\left[(\omega_{0}^{2}-\omega^{2})^{2}+\frac{\gamma^{2}}{m^{2}}\omega^{2}\right]}\] (18) \[=\frac{k_{B}T}{m\omega_{0}^{2}}e^{-\frac{\gamma}{2m}t}\left[\cos\omega_{1}t+\frac{\gamma}{2m\omega_{1}}\sin\omega_{1}t\right], \tag{19}\] where \(\omega_{1}=\sqrt{\omega_{0}^{2}-\gamma^{2}/(4m^{2})}\) is the damped natural frequency. \(C_{x}(t)\) is an oscillating function with exponential decay. The solutions for the \(x\)-autocorrelation function \(C_{x}(t)\) for different values of \(\gamma/m\) are plotted in Fig. 7(b).

## Appendix B Temperature behavior of critical buckling length

For a large vK number, the critical buckling strain \(\epsilon_{c}\propto\kappa/YL_{0}^{2}\) is generally very small. Hence, the projected critical buckling length should be close to the undeformed zero-temperature (rest) length \(L_{0}\). From MD simulations we indeed find that \(L_{\epsilon_{c}}\) depends only weakly on \(T\) as long as the ribbon length is smaller than the persistence length \(\ell_{\text{p}}=\frac{2\kappa W_{0}}{k_{B}T}\), as shown in Fig. 8.

## Appendix C Three-state model and residence time estimation

In the main text we use Kramers' result to describe the escape time. Here, we develop a three-state model as a complementary theory to describe the ribbon dynamics above the critical buckling. Suppose that we only have three possible states (Up, Down, and Flat) with energies \(E[\pm|h_{\rm CM}|]=-E_{\rm barrier}\) and \(E[0]=0\). The probability of being in a given state is proportional to the Boltzmann factor, and the probability of being in the up state is given by \[P(+h_{\rm CM})=\frac{\exp[E_{\rm barrier}/k_{\rm B}T]}{1+2\exp[E_{\rm barrier}/k_{\rm B}T]}. \tag{10}\] In simulations we can relate this probability to residence times, since \(\sum P(E)=1\) and \(\sum\tau(E)/T_{\rm tot}=1\) in the limit of long total simulation time \(T_{\rm tot}\). We can then estimate the ratio between the total time in the up- and down-states and the time in the flat state to be \[R_{\tau}=\frac{\sum\tau_{\rm up}+\tau_{\rm down}}{\sum\tau_{\rm flat}}\propto 2\exp[E_{\rm barrier}/k_{\rm B}T], \tag{11}\] where \(E_{\rm barrier}\) is given in the main text. In the two different temperature regimes separated by the thermal length \(\ell_{\rm th}\), the time ratio \(R_{\tau}\) is given by \[R_{\tau}\propto\begin{cases}&\exp\left[\frac{3\pi\delta^{2}}{8}\frac{W_{0}\ell_{\rm th}^{3}}{L_{\epsilon_{\rm c}}^{3}}\right]\text{if }W_{0}\ll\ell_{\rm th},\\ &\exp\left[\frac{3\pi\delta^{2}}{8}\frac{W_{0}^{3}}{L_{\epsilon_{\rm c}}^{3}}\right]\text{if }W_{0}\gg\ell_{\rm th}.\end{cases} \tag{12}\] We first test this relation for systems with \(W_{0}>\ell_{\rm th}\) (the semi-flexible regime). We expect \(\log(\frac{\tau_{\rm up}+\tau_{\rm down}}{\tau_{\rm flat}})=\text{slope}\times\delta^{2}+c\), where the slope is obtained from theory, \(\left(\frac{W_{0}}{L_{\epsilon_{\rm c}}}\right)^{3}\frac{3\pi}{8}\sim 0.01\).

Figure 10: The time ratio \(R_{\tau}\) as a function of \(\frac{3\pi\delta^{2}}{8}(W_{0}/\ell_{\rm th})^{3}\). The slope is close to one, consistent with the theoretical prediction.
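As an illustration of how \(R_{\tau}\) is measured in practice, the sketch below counts residence times from a midpoint trajectory using the threshold \(h_{c}=h_{\rm max}/3\) described just below, and checks the counting against the Boltzmann estimate \(2\exp[E_{\rm barrier}/k_{\rm B}T]\) of Eq. (11) on a synthetic three-state sequence; the trajectory here is fabricated for the test, whereas in our analysis \(h_{\rm CM}(t)\) comes from the MD runs:

```python
import numpy as np

def residence_time_ratio(h_cm, h_max):
    """R_tau of Eq. (11): total time in the up/down wells divided by the
    time in the flat state, using the threshold h_c = h_max / 3."""
    h_c = h_max / 3.0
    in_well = np.abs(h_cm) > h_c          # up or down state
    t_well = np.count_nonzero(in_well)
    t_flat = h_cm.size - t_well
    return t_well / t_flat

# Synthetic three-state sequence sampled from the Boltzmann weights of
# Eq. (10), used only to check the counting against 2*exp(E_barrier/kBT)
rng = np.random.default_rng(2)
e_over_kt = 1.5
p_up = np.exp(e_over_kt) / (1.0 + 2.0 * np.exp(e_over_kt))
h_trace = rng.choice([-1.0, 0.0, 1.0], size=200_000,
                     p=[p_up, 1.0 - 2.0 * p_up, p_up])

print("counted R_tau  :", residence_time_ratio(h_trace, h_max=1.0))
print("Boltzmann value:", 2.0 * np.exp(e_over_kt))
```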
Figure 9: Midpoint \(h_{\rm CM}\) as a function of time in units of \(10\tau_{\rm MD}\) at a strain (a) well above the buckling transition, with \(\delta=6.8\), and (b) above but closer to the buckling transition, with \(\delta=2.6\). Well above the buckling transition, the ribbon spends most of its time in either the up or the down state. In contrast, close to the buckling point the ribbon transitions from the up to the down state more frequently, and so spends its time in the up, down, and flat states more equally. The system shown here is thermalized at \(W_{0}/\ell_{\rm th}\sim 8.5\).

To extract \(\tau\) we use a height threshold \(h_{c}=h_{\rm max}/3\) and define an up- or down-state whenever \(|h_{\rm CM}|>h_{c}\). Fig. 9 shows the midpoint \(h_{\rm CM}\) as a function of time for a ribbon well above the buckling transition and close to the buckling transition. Well above the buckling transition, the ribbon spends most of its time in either the up or down state. Close to the buckling transition, in contrast, the ribbon switches from the up to the down state more frequently, and so the ribbon spends its time in the up, down, and flat states more equally. Fig. 10 shows the time ratio \(R_{\tau}=\frac{\sum\tau_{\rm up}+\tau_{\rm down}}{\sum\tau_{\rm flat}}\) as a function of \(\frac{3\pi\delta^{2}}{8}(W_{0}/\ell_{\rm th})^{3}\). Close to the buckling transition \(\frac{\tau_{\rm up}+\tau_{\rm down}}{\tau_{\rm flat}}\sim 2\); since \(E_{\rm barrier}=0\) at the transition, all three states are equally probable. From simulations we find that the slope is close to the analytical prediction. Note that we could also model the buckling problem as two states only (Up, Down). One can compute the cumulative probability distribution of the residence times and calculate the integrated survival time as a measure of the escape time. As shown in Ref. [77], the integrated survival time \(\tau_{\rm surv}\) is proportional to the autocorrelation time (\(\tau_{\rm AC}\sim 0.5\tau_{\rm surv}\)), as the autocorrelation time is related to the slowest mode of interest. This three-state model is used as a complementary theory showing how the activation energy becomes renormalized for \(W_{0}/\ell_{\rm th}\gg 1\), with the advantage that no prefactor is needed.

## Appendix D Nose-Hoover Beam Theory

In the main text we developed a mean-field model that treats the many connected nodes of a ribbon as a one-dimensional problem. This problem is equivalent to beam theory, however, with renormalized elastic constants. Our molecular dynamics simulations were carried out in the canonical (NVT) ensemble, where the number of particles \(N\), the volume \(V\) and the temperature \(T\) are fixed. Within this ensemble we used the Nose-Hoover thermostat [51; 69] implemented in HOOMD-blue [50]. Thus, we need to add a thermal bath to our mean-field model in order to explain the observed quantities, such as height oscillations. In this appendix we provide derivations of the equation of motion for a beam coupled to a thermal bath, first derived in Ref. [71]. Note that here we follow the notation of Ref. [71]. In a microcanonical (NVE) ensemble the number of particles \(N\), the volume \(V\) and the energy \(E\) are conserved.
The Lagrangian, the difference between the kinetic and the potential energy, of a beam in the absence of an external force is given by \[\mathcal{L}_{\rm beam}=\int\left[\frac{1}{2}\rho A\dot{h}^{2}-\frac{1}{2}EI(h^{\prime\prime})^{2}\right]dx, \tag{101}\] where \(\rho\) is the density, \(h(x)\) is the height deflection, \(A\) is the beam cross section, \(EI\) is the bending stiffness, \(h^{\prime}=\partial h/\partial x\) and \(\dot{h}=\partial h/\partial t\). Note that the quartic term is not included, unlike in our mean-field model for a clamped ribbon. Using the Euler-Lagrange equation resulting from Eq. 101, we obtain the equation for the undamped motion of a beam in the NVE ensemble, \[-\frac{\partial}{\partial t}\left(\frac{\partial\mathcal{L}_{\rm beam}}{\partial\dot{h}}\right)+\frac{\partial^{2}}{\partial x^{2}}\left(\frac{\partial\mathcal{L}_{\rm beam}}{\partial h^{\prime\prime}}\right)=0 \tag{102}\] \[\Rightarrow\rho A\ddot{h}+\frac{\partial^{2}}{\partial x^{2}}EIh^{\prime\prime}=0. \tag{103}\] In a canonical ensemble the system, which in this case is the beam, is in contact with a thermal bath at a reference temperature \(T_{\rm ref}\). The extended Lagrangian is \(\mathcal{L}=\mathcal{L}_{\rm beam}+\mathcal{L}_{\rm bath}\). In the Nose-Hoover thermostat, a fictitious mass \(Q>0\) of dimension \(ML^{2}\) and its velocity \(\zeta\) of dimension \(\text{time}^{-1}\) are introduced [51; 69]. The bath potential energy is \(RT_{\rm ref}\ln(s)\), with \(s\) being the generalized coordinate and \(R\) the product of the Boltzmann constant and the number of degrees of freedom. The generalized coordinate \(s\) and the velocity \(\zeta\) are related by \[\zeta=\frac{ds}{d\tau},\quad s=\frac{d\tau}{dt}, \tag{104}\] where \(\zeta\) determines the heat exchange between the beam and the bath and \(s\) is the stretch in time between the time of the beam, \(t\), and the time of the bath, \(\tau\). The bath Lagrangian is \[\mathcal{L}_{\rm bath}=\frac{Q}{2}\zeta^{2}-RT_{\rm ref}\ln(s). \tag{105}\] Before moving further we first relate the time derivatives: \[\frac{\partial s}{\partial t}=\zeta s \tag{106}\] \[\dot{h}=s\frac{\partial h}{\partial\tau}\] (107) \[\ddot{h}=s\frac{\partial}{\partial\tau}\left(s\frac{\partial h}{\partial\tau}\right)=s^{2}\frac{\partial^{2}h}{\partial\tau^{2}}+s\zeta\frac{\partial h}{\partial\tau} \tag{108}\] By a change of variables we can write the extended Lagrangian in terms of the bath time \(\tau\): \[\mathcal{L}=\int_{0}^{L}\left[\frac{\rho As^{2}}{2}\left(\frac{\partial h}{\partial\tau}\right)^{2}-\frac{EI}{2}\left(\frac{\partial^{2}h}{\partial x^{2}}\right)^{2}\right]dx+\frac{Q\zeta^{2}}{2}-RT_{\rm ref}\ln(s). \tag{109}\] As before, we obtain the equation of motion by using the Euler-Lagrange equation, \[-\frac{\partial}{\partial\tau}\left(\frac{\partial\mathcal{L}}{\partial(\partial h/\partial\tau)}\right)+\frac{\partial^{2}}{\partial x^{2}}\left(\frac{\partial\mathcal{L}}{\partial h^{\prime\prime}}\right)=0 \tag{110}\] \[\Rightarrow\rho A\left(s^{2}\frac{\partial^{2}h}{\partial\tau^{2}}+2\zeta s\frac{\partial h}{\partial\tau}\right)+\frac{\partial^{2}}{\partial x^{2}}EIh^{\prime\prime}=0. \tag{111}\] We can use Eqs. 106, 107 and 108 to rewrite the equation of motion in the real time \(t\): \[\rho A\ddot{h}+\rho A\zeta\dot{h}+\frac{\partial^{2}}{\partial x^{2}}EIh^{\prime\prime}=0. \tag{47}\] Notice that this is similar to the undamped case, Eq. 30, but now we have a new damping term \(\propto\zeta\dot{h}\), which is similar to the friction term in Langevin dynamics.
Using the Euler-Lagrange equation we obtain the equation for the evolution of \(\zeta\): \[\dot{\zeta}=\frac{d}{dt}\left(\frac{\partial\ln(s)}{\partial t}\right)=\frac{RT_{\rm ref}}{Q}\left(\frac{T(t)}{T_{\rm ref}}-1\right). \tag{48}\] As \(T/T_{\rm ref}\to 1\), the friction term \(\zeta\) tends to a constant, indicating equilibrium. In summary, we have shown that coupling a beam to a thermal bath results in an effective damping. The consequences of the mean-field theory with coupling to the bath are consistent with our simulations, given that we use the Nose-Hoover thermostat for the NVT molecular dynamics simulations. This damping (energy loss) is observed in our simulation data, and is characterized by a decaying oscillation of the positional correlation function in the stretched-ribbon case and by a purely decaying behavior of the positional correlation function in the buckled case.
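As a minimal illustration of Eqs. (47)-(48), the sketch below integrates a single beam mode coupled to the thermostat variable \(\zeta\) (units with \(m=k_{B}=1\) and a single degree of freedom, so \(R=1\) and the instantaneous temperature is taken as \(T(t)=v^{2}\); all parameter values are arbitrary). A single Nose-Hoover oscillator is known not to be strictly ergodic, so this is only a qualitative illustration of the effective damping term \(\zeta\dot{h}\), not a substitute for the full MD:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One beam mode of frequency omega0 coupled to the Nose-Hoover variable
# zeta; Q_nh plays the role of the fictitious thermostat mass Q.
omega0, T_ref, Q_nh = 1.0, 0.1, 5.0

def rhs(t, y):
    h, v, zeta = y
    return [v,
            -omega0**2 * h - zeta * v,               # Eq. (47), one mode
            (T_ref / Q_nh) * (v**2 / T_ref - 1.0)]   # Eq. (48) with R = 1

sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.0, 0.0], max_step=0.05)
late = sol.t > 250.0
print("late-time <v^2>:", np.mean(sol.y[1, late]**2), "(target:", T_ref, ")")
print("late-time <zeta>:", np.mean(sol.y[2, late]))
```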
2302.02559
Cosmography via Gaussian Process with Gamma Ray Bursts
In this paper, we first calibrate the Amati relation (the $E_{\rm p}-E_{\rm iso}$ correlation) of gamma ray bursts (GRBs) at low redshifts ($z<0.8$) via Gaussian process, by using the type Ia supernovae samples from Pantheon+, under the philosophy that objects at the same redshift should have the same luminosity distance in any cosmology. As a result, this calibration yields the distance moduli of GRBs at high redshifts ($z>0.8$). As an application of these derived GRB distance moduli to cosmology, via Gaussian process again, a series of cosmography parameters, which describe the kinematics of our Universe, up to the fifth order and out to redshift $z\sim 5$, i.e. the Hubble parameter $H(z)$, the deceleration parameter $q(z)$, the jerk parameter $j(z)$, the snap parameter $s(z)$ and the lerk parameter $l(z)$, are reconstructed from the cosmic observations. The reconstructed cosmography parameters show a transition singularity at $z\sim 6$, which admits two possible explanations: one is that the GRB data points at high redshift $z>5$ are reliable, which would mean that new physics beyond the $\Lambda$CDM model appears; the other is that the quality and quantity of the GRB data points at high redshift $z>5$ are not good enough to give any viable prediction of the kinematics of our Universe. To pin down this problem, more high-redshift ($z>5$) cosmic observations are still needed.
Yuhao Mu, Baorong Chang, Lixin Xu
2023-02-06T04:34:50Z
http://arxiv.org/abs/2302.02559v2
# Cosmography via Gaussian Process with Gamma Ray Bursts

###### Abstract

In this paper, we first calibrate the Amati relation (the \(E_{\rm p}-E_{\rm iso}\) correlation) of gamma ray bursts (GRBs) at low redshifts (\(z<0.8\)) via Gaussian process, by using the type Ia supernovae samples from Pantheon+, under the philosophy that objects at the same redshift should have the same luminosity distance in any cosmology. As a result, this calibration yields the distance moduli of GRBs at high redshifts (\(z>0.8\)). As an application of these derived GRB distance moduli to cosmology, via Gaussian process again, a series of cosmography parameters, which describe the kinematics of our Universe, up to the fifth order, i.e. the Hubble parameter \(H(z)\), the deceleration parameter \(q(z)\), the jerk parameter \(j(z)\), the snap parameter \(s(z)\) and the lerk parameter \(l(z)\), are reconstructed from the cosmic observations. The result shows that the current quality of the GRB data points is not good enough to give viable predictions of the kinematics of our Universe at high redshifts.

## 1 Introduction

Investigating the kinematics of our Universe in a model-independent way has been interesting since the discovery of an expanding Universe by E. Hubble in 1929 [1]; this finding is now dubbed the Hubble-Lemaitre law, in memory of Lemaitre [2]. The current expansion rate of our Universe is described by the present Hubble constant \(H_{0}\). However, over the last 100 years the value of \(H_{0}\) has been measured in different ways [3], and there is still an about \(5\sigma\) discrepancy between the \(H_{0}\) values from the direct and model-independent local measurement, \(H_{0}=73.04\pm 1.04\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) [4], based on the recent release of the largest type Ia supernovae (SNe Ia) sample, called Pantheon+ [5; 6], and from the Cosmic Microwave Background (CMB) measured by the Planck satellite (PLC18), \(H_{0}=67.4\pm 0.5\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) [7], in the \(\Lambda\)CDM cosmology. In order to describe the kinematics of our Universe, a series of parameters is introduced via the Taylor expansion of the scale factor \(a(t)\) in terms of the cosmic time \(t\): \(q\), \(j\), \(s\), \(l\) and so on, named the deceleration, jerk, snap and lerk parameters, respectively; for the detailed forms please see Eqs. (2.2, 2.3, 2.4, 2.5, 2.6) (see also Eqs. (2.13, 2.14, 2.15, 2.16, 2.17) in terms of the comoving distance and its derivatives) in Section 2. In the last few years, this kinematic approach has been studied extensively, although under different names, for example cosmography [9; 10; 11; 12; 13; 14; 15], cosmokinetics [16; 17], or Friedmannless cosmology [18; 19]. For recent progress, please see Refs. [20; 21; 22; 23; 24; 25; 26], for instance, but not as a complete list. In order to investigate the kinematics of our Universe, the distances between galaxies at large scales and their variation with respect to time \(t\) (or redshift \(z\)) are indispensable, just like the findings of the observed galaxies moving away from the Earth at speeds proportional to their distance and the dimmer apparent magnitudes of SNe Ia at high redshifts revealed by [27; 28]. Once one has distance indicators along the history of our Universe in hand, one can obtain the kinematics of our Universe. Therefore, the redshift range of distance indicators should be as large as possible. So far, for SNe Ia as standard candles, the observed maximum redshift is \(z=2.26137\) [5; 6].
And as a useful complement, the observed maximum redshift for gamma ray bursts (GRBs) can reach \(z=9.4\) [29]. Although a consensus on GRBs as standard candles is still lacking, several empirical GRB luminosity relations have been proposed and used in studying cosmology; see [30; 31; 32; 33; 34] for reviews. To avoid the circularity problem [30] in using GRB data to constrain cosmological models, one proposes the simultaneous fitting method [35; 36; 37; 38; 39] and the cosmological-model-independent method [40], under the assumption that objects at the same redshift should have the same luminosity distance in any cosmology. In GRB cosmology, the Amati relation [41], which relates the spectral peak energy and the isotropic equivalent radiated energy (the \(E_{\rm p}-E_{\rm iso}\) correlation) of GRBs, is extensively used [42; 43; 44; 45; 35]. Recently, Liang _et al._ [45] used the 220 GRB samples (A220) compiled by Khadka _et al._ [44] to reconstruct the luminosity distance from the Pantheon SNe Ia sample [46] via Gaussian process, from which the GRB Hubble diagram at high redshifts was obtained. Recently, the largest SNe Ia sample was released, dubbed Pantheon+, which consists of 1701 light curves of 1550 spectroscopically confirmed SNe Ia coming from 18 different sky surveys, ranging in redshift from \(z=0.00122\) to \(2.26137\) [5; 6]. In this paper, we plan to update the distance moduli of GRBs with the recently released Pantheon+ SNe Ia samples, mainly following the method proposed in Ref. [45] but with a different redshift range, \(z\in[0,0.8]\). We obtain 182 GRB distance moduli ranging in redshift \(0.8<z\leq 8.2\). As an application in studying cosmology, combining the Pantheon+ SNe Ia samples and the observed Hubble parameters at different redshifts with these derived GRB distance moduli, we reconstruct the kinematics of our Universe in terms of the cosmography parameters \(q\), \(j\), \(s\), \(l\) up to the fifth order. This paper is organized as follows. In the next Section 2, we present the main cosmography parameters. In Section 3, the GRBs Amati relation is calibrated and distance moduli at high redshifts are derived. The cosmography parameters reconstructed via Gaussian process are given in Section 4. Section 5 is the conclusion.

## 2 Cosmography Parameters

The geometry of our Universe is given by the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric \[ds^{2}=-c^{2}dt^{2}+a^{2}(t)\left[\frac{dr^{2}}{1-kr^{2}}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\right], \tag{1}\] where \(c\) is the speed of light, \(a(t)\) is the scale factor, which is normalized to \(a_{0}=1\) at present, \(t\) is the cosmic time, \(r\) is the comoving coordinate, and \(\theta\) and \(\phi\) are the polar and azimuthal angles in spherical coordinates; the parameter \(k=1,0,-1\) denotes the three-dimensional spatial curvature for closed, flat and open geometries, respectively. In this paper, we only consider the spatially flat \(k=0\) cosmology.
The cosmography parameters, which describe the kinematical state of our Universe and are named the Hubble, deceleration, jerk, snap and lerk parameters, are defined as follows, respectively: \[H \equiv \frac{da(t)}{dt}\frac{1}{a(t)}\equiv\frac{\dot{a}(t)}{a(t)}, \tag{2}\] \[q \equiv -\frac{1}{H^{2}}\frac{d^{2}a(t)}{dt^{2}}\frac{1}{a(t)}\equiv-\frac{1}{H^{2}}\frac{\ddot{a}(t)}{a(t)},\] (3) \[j \equiv \frac{1}{H^{3}}\frac{d^{3}a(t)}{dt^{3}}\frac{1}{a(t)}\equiv\frac{1}{H^{3}}\frac{a^{(3)}(t)}{a(t)},\] (4) \[s \equiv \frac{1}{H^{4}}\frac{d^{4}a(t)}{dt^{4}}\frac{1}{a(t)}\equiv\frac{1}{H^{4}}\frac{a^{(4)}(t)}{a(t)},\] (5) \[l \equiv \frac{1}{H^{5}}\frac{d^{5}a(t)}{dt^{5}}\frac{1}{a(t)}\equiv\frac{1}{H^{5}}\frac{a^{(5)}(t)}{a(t)}. \tag{6}\] In terms of the redshift \(z=1/a(t)-1\), via the relation \[\frac{dt}{dz}=-\frac{1}{(1+z)H(z)}, \tag{7}\] the cosmography parameters can be rewritten as \[q(z) \equiv -1+(1+z)\frac{H^{\prime}}{H}, \tag{8}\] \[j(z) \equiv 1-2(1+z)\frac{H^{\prime}}{H}+(1+z)^{2}\frac{H^{\prime 2}}{H^{2}}+(1+z)^{2}\frac{H^{\prime\prime}}{H},\] (9) \[s(z) \equiv 1-3(1+z)\frac{H^{\prime}}{H}+3(1+z)^{2}\frac{H^{\prime 2}}{H^{2}}-(1+z)^{3}\frac{H^{\prime 3}}{H^{3}}\] (10) \[- 4(1+z)^{3}\frac{H^{\prime}H^{\prime\prime}}{H^{2}}+(1+z)^{2}\frac{H^{\prime\prime}}{H}-(1+z)^{3}\frac{H^{(3)}}{H},\] \[l(z) \equiv 1-4(1+z)\frac{H^{\prime}}{H}+6(1+z)^{2}\frac{H^{\prime 2}}{H^{2}}-4(1+z)^{3}\frac{H^{\prime 3}}{H^{3}}\] (11) \[+ (1+z)^{4}\frac{H^{\prime 4}}{H^{4}}-(1+z)^{3}\frac{H^{\prime}H^{\prime\prime}}{H^{2}}+7(1+z)^{4}\frac{H^{\prime}H^{\prime\prime\prime}}{H^{2}}\] \[+ 11(1+z)^{4}\frac{H^{\prime 2}H^{\prime\prime}}{H^{3}}+2(1+z)^{2}\frac{H^{\prime\prime}}{H}+4(1+z)^{4}\frac{H^{\prime\prime 2}}{H^{2}}\] \[+ (1+z)^{3}\frac{H^{(3)}}{H}+(1+z)^{4}\frac{H^{(4)}}{H},\] where the prime \({}^{\prime}\) denotes the derivative with respect to the redshift \(z\), and \(f^{(i)}\) denotes the \(i\)-th order derivative of the function \(f(z)\) with respect to the redshift \(z\). In order to reconstruct the cosmography parameters from cosmic observations, the comoving distance along the line of sight is needed: \[D_{C}(z)=c\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}. \tag{12}\]
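As a quick numerical illustration of Eq. (12), the sketch below evaluates \(D_{C}(z)\) by quadrature for a spatially flat \(\Lambda\)CDM background; the values \(H_{0}=73.04\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) and \(\Omega_{m0}=0.334\) quoted elsewhere in the text are used here purely as illustrative inputs, and the relation \(D_{L}=(1+z)D_{C}\) used below is given in Section 3:

```python
import numpy as np
from scipy.integrate import quad

c_km_s = 299792.458      # speed of light in km/s
H0, Om = 73.04, 0.334    # SH0ES-like illustrative inputs

def hubble(z):
    # Flat LCDM expansion rate, km/s/Mpc
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def d_comoving(z):
    # Line-of-sight comoving distance of Eq. (12), in Mpc
    value, _ = quad(lambda zp: c_km_s / hubble(zp), 0.0, z)
    return value

for z in (0.5, 1.0, 2.0):
    dc = d_comoving(z)
    print(f"z = {z}: D_C = {dc:.0f} Mpc, D_L = {(1.0 + z) * dc:.0f} Mpc")
```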
In terms of \(D_{C}(z)\), the cosmography parameters can be rewritten as \[H(z) \equiv \frac{c}{D_{C}^{\prime}}, \tag{13}\] \[q(z) \equiv -1-(1+z)\frac{D_{C}^{\prime\prime}}{D_{C}^{\prime}},\] (14) \[j(z) \equiv \frac{(1+z)^{2}}{D_{C}^{\prime}}\left[\frac{3D_{C}^{\prime\prime 2}}{D_{C}^{\prime}}+\frac{2D_{C}^{\prime\prime}}{(1+z)}-D_{C}^{\prime\prime\prime}\right],\] (15) \[s(z) \equiv 1+\frac{(1+z)^{3}D_{C}^{(4)}}{D_{C}^{\prime}}-\frac{(1+z)^{2}D_{C}^{(3)}}{D_{C}^{\prime}}\] (16) \[+ \frac{3(1+z)D_{C}^{\prime\prime}}{D_{C}^{\prime}}-\frac{10(1+z)^{3}D_{C}^{(3)}D_{C}^{\prime\prime}}{D_{C}^{\prime 2}}\] \[+ \frac{15(1+z)^{3}D_{C}^{\prime\prime 3}}{D_{C}^{\prime 3}}+\frac{5(1+z)^{2}D_{C}^{\prime\prime 2}}{D_{C}^{\prime 2}},\] \[l(z) \equiv 1+\frac{(1+z)^{4}D_{C}^{(5)}}{D_{C}^{\prime}}+\frac{(1+z)^{3}D_{C}^{(4)}}{D_{C}^{\prime}}+\frac{2(1+z)^{2}D_{C}^{(3)}}{D_{C}^{\prime}}\] (17) \[- \frac{4(1+z)D_{C}^{\prime\prime}}{D_{C}^{\prime}}+\frac{7(1+z)^{4}D_{C}^{(4)}D_{C}^{\prime\prime}}{D_{C}^{\prime 2}}+\frac{4(1+z)^{4}D_{C}^{(3)2}}{D_{C}^{\prime 2}}\] \[- \frac{(1+z)^{3}D_{C}^{(3)}D_{C}^{\prime\prime}}{D_{C}^{\prime 2}}+\frac{(1+z)^{4}D_{C}^{\prime\prime 4}}{D_{C}^{\prime 4}}-\frac{4(1+z)^{3}D_{C}^{\prime\prime 3}}{D_{C}^{\prime 3}}\] \[+ \frac{11(1+z)^{4}D_{C}^{(3)}D_{C}^{\prime\prime 2}}{D_{C}^{\prime 3}}+\frac{6(1+z)^{2}D_{C}^{\prime\prime 2}}{D_{C}^{\prime 2}}.\] It is clear that once the comoving distance and its derivatives are reconstructed, the cosmography parameters and their error bars can be obtained. Here, we would like to warn the reader that the Hubble parameter \(H(z)\) obviously depends on the present Hubble parameter value \(H_{0}\), but the other cosmography parameters \(q(z)\), \(j(z)\), \(s(z)\) and \(l(z)\) are dimensionless and \(H_{0}\)-free. The singularities of the cosmography parameters happen when \(D_{C}^{\prime}(z)\) crosses zero at some redshift.

## 3 Calibration to GRBs Amati Relation

In this Section, we mainly use the distance moduli from the Pantheon+ SNe Ia samples to calibrate the GRBs Amati relation via Gaussian process and then derive the distance moduli of GRBs at high redshifts. Therefore we first give a brief introduction to the Gaussian process. Without assuming a specific parameterized form, the Gaussian process can reconstruct a function \(f(x)\) from data points \(f(x_{i})\pm\sigma_{i}\) via a point-to-point Gaussian distribution [47]. The Gaussian process has been used extensively in cosmology in the last few years [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57], where cosmography parameters and the equation of state of dark energy are reconstructed by using cosmic observational data points. The Gaussian process was also used to calibrate the GRBs Amati relation in Ref. [45]. In the Gaussian process method, the expected value \(\mu\) and the variance \(\sigma^{2}\) of the function \(f(x)\) are given by \[\mu(x) = \sum_{i,j=1}^{N}k(x,x_{i})(M^{-1})_{ij}f(x_{j}), \tag{10}\] \[\sigma^{2}(x) = k(x,x)-\sum_{i,j=1}^{N}k(x,x_{i})(M^{-1})_{ij}k(x_{j},x), \tag{11}\] where \(N\) is the number of data points.
And \(M_{ij}=k(x_{i},x_{j})+C_{ij}\) is the covariance matrix, where \(C_{ij}\) is the covariance matrix of the data points, and \(k(x,\tilde{x})\) is the covariance function or kernel between the points \(x\) and \(\tilde{x}\), which is usually taken as the squared exponential covariance function in the form \[k(x,\tilde{x})=\sigma_{f}^{2}\exp\left[-\frac{(x-\tilde{x})^{2}}{2\ell^{2}}\right], \tag{12}\] where the 'hyper-parameter' \(\sigma_{f}\) characterizes the 'bumpiness' of the function, i.e. denotes the typical change in the \(y\)-direction. The length scale \(\ell\) characterizes the distance traveled in the \(x\)-direction to get a significant change in the function. These two 'hyper-parameters' \(\sigma_{f}\) and \(\ell\) are determined in the Gaussian process by maximizing the logarithmic marginalized likelihood function \[\ln\mathcal{L}=-\frac{1}{2}\sum_{i,j=1}^{N}f(x_{i})\left(M^{-1}\right)_{ij}f(x_{j})-\frac{1}{2}\ln|M|-\frac{1}{2}N\ln 2\pi, \tag{13}\] where \(|M|\) is the determinant of \(M_{ij}\). In this work, the double squared exponential covariance function \[k(x,\tilde{x})=\sigma_{f_{1}}^{2}\exp\left[-\frac{(x-\tilde{x})^{2}}{2\ell_{1}^{2}}\right]+\sigma_{f_{2}}^{2}\exp\left[-\frac{(x-\tilde{x})^{2}}{2\ell_{2}^{2}}\right], \tag{14}\] will also be used, to reconstruct the cosmography parameters while accounting for the GRB data points at high redshifts and the covariant correlation between them. Fortunately, the above-mentioned aspects are already realized in the **GaPP** code 1 [47]. But, in order to reconstruct \(l(z)\), we have modified the **GaPP** code to calculate the fifth-order derivative \(D_{C}^{(5)}\).

Footnote 1: [https://github.com/carlosandrepaes/GaPP](https://github.com/carlosandrepaes/GaPP).

For a standard candle such as SNe Ia, the luminosity distance \(D_{L}(z)\) is related to the distance modulus \(\mu=m-M=5\log_{10}D_{L}(\text{Mpc})+25\), where \(M\) is the absolute magnitude of the SNe Ia. And the luminosity distance \(D_{L}(z)\), for a spatially flat Universe, is defined as \[D_{L}(z)=c(1+z)\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}=(1+z)D_{C}(z). \tag{15}\] Thus \(D_{L}=(1+z)D_{C}\) can be expressed in terms of \(\mu\) as \[D_{L}=(1+z)D_{C}=10^{\frac{\mu-25}{5}}\text{Mpc}, \tag{16}\] where \(\mu\) is the distance modulus of a SN Ia, and the absolute magnitude has been determined by the SH0ES Cepheid host distances for the Pantheon+ samples [5; 6]. This corresponds to setting \(H_{0}=73.04\pm 1.04\) km s\({}^{-1}\) Mpc\({}^{-1}\). These SNe Ia moduli will be used to calibrate the GRBs Amati relation under the philosophy that objects at the same redshift should have the same luminosity distance in any cosmology. The distance moduli reconstructed from the Pantheon+ SNe Ia samples via Gaussian process with the squared exponential covariance function are shown in Figure 1 as pink curves and regions, where the oscillations of the reconstructed function, with large error regions, are mainly due to sparse data points. It is obvious that these oscillations and large uncertainties are not suitable for calibrating the Amati relation. Under this observation (see also the inset in Figure 1), we prefer to calibrate the GRBs Amati relation in the redshift range \(z<0.8\) instead of \(z<1.4\), as was done in Ref. [45]. The Amati relation [41] is given by \[y=a+bx, \tag{10}\] where \(y=\log_{10}\frac{E_{\rm iso}}{\rm{1erg}}\), \(x=\log_{10}\frac{E_{\rm p}}{\rm{300keV}}\), and \(a\) and \(b\) are free coefficients to be calibrated by the cosmic observations.
Here \(E_{\rm iso}\) and \(E_{\rm p}\) are the isotropic equivalent radiated energy and the spectral peak energy, respectively, where \(E_{\rm iso}\) and \(E_{\rm p}\) are related by \[E_{\rm iso}=4\pi D_{L}^{2}(z)S_{\rm{bolo}}(1+z)^{-1},\quad E_{\rm p}=E_{\rm p}^{\rm obs}(1+z), \tag{11}\] where the observables \(E_{\rm p}^{\rm obs}\) and \(S_{\rm{bolo}}\) are the GRB spectral peak energy and bolometric fluence.

Figure 1: The distance moduli of the Pantheon+ SNe Ia samples, the reconstructed distance moduli from the Pantheon+ SNe Ia samples and the derived distance moduli of GRBs, where the vertical dashed line denotes the maximum redshift of the Pantheon+ SNe Ia samples.

The free coefficients \(a\) and \(b\) are determined by maximizing the likelihood function \[\mathcal{L}(\sigma,a,b)\propto\prod_{i=1}^{N}\frac{1}{\sigma}\times\exp\left[-\frac{[y_{i}-y(x_{i},z_{i};a,b)]^{2}}{2\sigma^{2}}\right], \tag{23}\] where \(N=37\) is the number of low-redshift (\(z<0.8\)) GRBs in A220. Here \(y_{i}\) is obtained from the luminosity distance \(D_{L}(z_{i})\) reconstructed from the SNe Ia data points via Gaussian process and the observed \(S_{\rm{bolo}}(z_{i})\) data point via Eq. (21). The \(\sigma^{2}\) is given as [45] \[\sigma^{2}=\sigma_{\rm{int}}^{2}+\sigma_{y,i}^{2}+b^{2}\sigma_{x,i}^{2}, \tag{24}\] where \(\sigma_{\rm{int}}\) is the intrinsic scatter of GRBs, \(\sigma_{y}=\frac{1}{\ln 10}\frac{\sigma_{E_{\rm{iso}}}}{E_{\rm{iso}}},\quad\sigma_{x}=\frac{1}{\ln 10}\frac{\sigma_{E_{\rm{p}}}}{E_{\rm{p}}}\), \(\sigma_{E_{\rm{p}}}\) is the error of the spectral peak energy, and \(\sigma_{E_{\rm{iso}}}=4\pi D_{L}^{2}\sigma_{S_{\rm{bolo}}}(1+z)^{-1}\) is the error of the isotropic equivalent radiated energy, where \(\sigma_{S_{\rm{bolo}}}\) is the error of the bolometric fluence. It is clear that GRBs cannot be calibrated if the absolute magnitude \(M\) of SNe Ia is not known, even if one takes \(\mu+M\), i.e. the apparent magnitude \(m\), as the observable. This implies a degeneracy between the Amati relation parameter \(a\) and the absolute magnitude \(M\), so that only the Amati relation parameter \(b\) can be constrained. Implementing the Markov Chain Monte Carlo numerical fitting method using GRB data points in the range \(z<0.8\), one obtains the Amati relation with coefficients \(a=52.34\pm 0.10\), \(b=1.18\pm 0.20\) and \(\sigma_{\rm{int}}=0.54^{+0.08}_{-0.06}\) 2. The corresponding contour is plotted in Figure 2. Once the Amati relation is calibrated at low redshifts, the distance moduli at high redshift are easily obtained from Eq. (21), and the corresponding uncertainty of the GRB distance modulus is given by [45]

Footnote 2: Actually, by using GRBs at \(z<1.4\) and repeating the process, one has \(a=52.30\pm 0.07\), \(b=1.06\pm 0.12\) and \(\sigma_{\rm{int}}=0.51^{+0.05}_{-0.04}\)

\[\sigma_{\mu}^{2}=\left(\frac{5}{2}\sigma_{\log\frac{E_{\rm{iso}}}{\rm{lerg}}}\right)^{2}+\left(\frac{5}{2\ln 10}\frac{\sigma_{S_{\rm{bolo}}}}{S_{\rm{bolo}}}\right)^{2}, \tag{25}\] where \[\sigma_{\log\frac{E_{\rm{iso}}}{\rm{lerg}}}^{2} = \sigma_{\rm{int}}^{2}+\left(\frac{b}{\ln 10}\frac{\sigma_{E_{\rm{p}}}}{E_{\rm{p}}}\right)^{2} \tag{26}\] \[+ \sum_{ij}\left[\frac{\partial y(x;\theta_{c})}{\partial\theta_{i}}\right]C_{ij}\left[\frac{\partial y(x;\theta_{c})}{\partial\theta_{j}}\right],\] where \(\theta_{c}\)=\(\{\sigma_{\rm{int}},\,a,\,b\}\), and \(C_{ij}\) is the covariance matrix of these fitting coefficients.
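The maximization of the likelihood in Eq. (23), with the variance of Eq. (24), can be sketched as follows; the data arrays below are fabricated stand-ins for the 37 low-redshift GRBs (in the actual analysis an MCMC sampler is used, which also yields the posterior contours of Figure 2, but the likelihood structure is the same):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(theta, x, y, sx, sy):
    """Negative logarithm of the likelihood in Eq. (23), with the
    variance of Eq. (24): sigma^2 = sigma_int^2 + sigma_y^2 + b^2 sigma_x^2."""
    a, b, sig_int = theta
    var = sig_int**2 + sy**2 + (b * sx)**2
    resid = y - (a + b * x)
    return np.sum(0.5 * np.log(2.0 * np.pi * var) + resid**2 / (2.0 * var))

# Fabricated (x_i, y_i); in the text, y_i is built from the
# GP-reconstructed D_L(z_i) and the observed bolometric fluence.
rng = np.random.default_rng(3)
x = rng.normal(0.0, 0.4, 37)
y = 52.3 + 1.2 * x + rng.normal(0.0, 0.5, 37)
sx, sy = np.full(37, 0.05), np.full(37, 0.10)

res = minimize(neg_log_like, x0=(52.0, 1.0, 0.5), args=(x, y, sx, sy))
print("a, b, sigma_int =", res.x)
```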
For convenience, the 182 derived distance moduli for GRBs at redshift \(z>0.8\) are summarized in Appendix A. Now these derived GRB distance moduli can be used to constrain cosmological models and the properties of dark energy. In particular, these distance moduli are compatible with the Pantheon+ SNe Ia samples and can be used simultaneously, under the philosophy that objects at the same redshift should have the same luminosity distance in any cosmology. Meanwhile, we should mention that the GRB051109A sample is removed, as was done in Ref. [45], because of the different values reported in Refs. [35] and [58].

## 4 Reconstructed Cosmography Parameters via the Gaussian Process

As a direct application to cosmology, we move on to study the kinematics of our Universe based on the observed data points. To this end, we resort to using the Gaussian process again, but with the double squared exponential covariance function given by Eq. (10). This consideration is based on the fact that the GRBs, in contrast to the Pantheon+ samples, range over a larger redshift interval and have larger distance modulus uncertainties. The extra hyper-parameters \(\sigma_{f}\) and \(\ell\) can handle this diversity. In fact, we have tested and confirmed that the squared exponential covariance function really gives weird oscillations, but the double squared exponential covariance function does not. In order to reconstruct \(D_{C}\) and its derivatives by using the Gaussian process code **GaPP** [47], the covariance matrix for the new observable \(D_{C}=D_{L}/(1+z)\), which can be derived by the error propagation equation, is given as \[C^{\rm tot}_{ij}=\left[\frac{D^{i}_{L}}{(1+z_{i})^{2}}\right]^{2}\sigma_{z_{i}}^{2}\delta_{ij}+\frac{\ln 10D^{i}_{L}}{5(1+z_{i})}\tilde{C}^{\rm tot}_{ij}\frac{\ln 10D^{j}_{L}}{5(1+z_{j})}, \tag{10}\] where \(z_{i}\) and \(D^{i}_{L}\) are the redshift and the observed luminosity distance of the \(i\)-th SN Ia, respectively, and \(\sigma_{z_{i}}\) is the \(1\sigma\) error for \(z_{i}\). And \(\delta_{ij}\) is the standard Kronecker symbol. \(\tilde{C}^{\rm tot}_{ij}\) in the last term is the total distance covariance matrix for the Pantheon+ SN Ia samples 3 [5; 6], and there is no Einstein summation convention. This variance \(C^{\rm tot}_{ij}\) will be added to the covariance matrix \[\mathbf{y}\sim\mathcal{N}\left(\mathbf{\mu},K(\mathbf{X},\mathbf{X})+C^{\rm tot}\right), \tag{11}\] where \([K(\mathbf{X},\mathbf{X})]_{ij}=k(x_{i},x_{j})\) is the covariance matrix for a set of input points \(\mathbf{X}=\{x_{i}\}\).

Footnote 3: The data points are available online [https://github.com/PantheonPlusSHOES/DataRelease](https://github.com/PantheonPlusSHOES/DataRelease).

Similarly, in order to reconstruct \(D^{\prime}_{C}\) from the cosmic chronometers (CC), the following covariance matrix is needed: \[C^{\rm H}_{ij}=\left[\frac{c}{H_{i}^{2}}\right]^{2}\sigma_{H_{i}}^{2}\delta_{ij}. \tag{12}\] Here the squared exponential covariance function Eq. (10) is taken as the covariance function, which is also infinitely differentiable and useful for reconstructing the derivatives of a function. The recent release of the Pantheon+ samples contains SNe Ia ranging in redshift from \(z=0.00122\) to \(2.26137\), consisting of 1701 light curves of 1550 spectroscopically confirmed SNe Ia coming from 18 different sky surveys.

Figure 2: Contour plots for the Amati relation coefficients and the intrinsic scatter of GRBs, where the redshifts of GRBs in the range \(z<0.8\) are used.
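For orientation, a numpy sketch of the posterior mean and variance used by GaPP-style reconstructions (the expressions of Section 3), with the data covariance added to the kernel matrix as above, is given below; this is our own minimal stand-in, not the GaPP implementation, and the toy inputs are fabricated:

```python
import numpy as np

def gp_predict(x_star, X, f, C_data, sigma_f, ell):
    """Posterior mean and variance of the GP expressions in Section 3,
    with the data covariance C_data added to the kernel matrix."""
    def kern(a, b):
        return sigma_f**2 * np.exp(-(a[:, None] - b[None, :])**2
                                   / (2.0 * ell**2))
    M = kern(X, X) + C_data          # M_ij = k(x_i, x_j) + C_ij
    M_inv = np.linalg.inv(M)
    ks = kern(np.atleast_1d(x_star), X)
    mean = ks @ (M_inv @ f)
    var = sigma_f**2 - np.einsum('ij,jk,ik->i', ks, M_inv, ks)
    return mean, var

# Toy usage: X, f, C_data stand in for the SN Ia redshifts, the derived
# D_C values and the Pantheon+ covariance matrix described above.
X = np.linspace(0.01, 2.0, 30)
f = X / (1.0 + 0.3 * X)
C_data = np.diag(np.full(30, 1e-4))
mu, var = gp_predict(np.array([0.5, 1.0]), X, f, C_data,
                     sigma_f=1.0, ell=0.7)
print(mu, np.sqrt(var))
```

In practice the hyper-parameters would not be fixed by hand as here, but determined by maximizing the marginalized likelihood of Section 3, which GaPP automates.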
As pointed out in our previous study [15], due to the degeneracy between \(H_{0}\) and the absolute magnitude \(M\), the SNe Ia cannot give any prediction of the \(H_{0}\) value without calibration. Therefore, in this work, we use \(H_{0}\) from SH0ES to reconstruct \(H(z)\). When using the measurement of \(H_{0}\) from SH0ES, to keep the analysis consistent and free of redundancy, some Pantheon+ SN Ia data points (marked as **USED_IN_SH0ES_HF**=**1**) are removed, since they were already used in the Hubble flow dataset [4]. For the observational Hubble data, the so-called cosmic chronometers (CC), which are determined by computing the age difference \(\Delta t\) between passively-evolving galaxies at close redshifts, the sample compiled by [59] is used; see also the data table available online 4, where the redshift ranges in \(z\in[0.070,2.360]\).

Footnote 4: [https://github.com/carlosandrepaes/GaPP](https://github.com/carlosandrepaes/GaPP).

Implementing the Gaussian process as described in Section 3, the comoving distance and its derivatives up to the fifth order with respect to the redshift are reconstructed, as shown in Figure 3, where the \(1\sigma\) errors are also plotted as shaded regions. It is seen that the error becomes larger as the order of the derivative with respect to the redshift \(z\) increases. On the contrary, the addition of the CC data points gives an extra constraint on the first-order derivative of \(D_{C}(z)\); thus a relatively narrow error region for the reconstructed functions can be obtained. Meanwhile, a large error appears at high redshift due to the sparse data points there. With the joint CC and Pantheon+ SN Ia samples, the reconstructed Hubble parameter \(H(z)\) is shown in Figure 4, including the \(1-3\sigma\) error curves, where the Hubble parameter \(H(z)\) predicted by a spatially flat \(\Lambda\)CDM cosmology, i.e. \(H^{2}(z)=H_{0}^{2}[\Omega_{m0}(1+z)^{3}+\Omega_{\Lambda 0}]\) with \(\Omega_{m0}=0.334\) (\(\Omega_{\Lambda 0}=1-\Omega_{m0}\)) from SH0ES [4], is also plotted for comparison. The apparent bumps of the error curves for \(H(z)\) in the redshift range \(z\sim 1.0-2.0\) are mainly due to the sparseness and large error bars of the data sets. The vertical lines in Figure 4 occur at the redshifts where \(D^{\prime}_{C}(z)\) crosses zero, i.e. where the comoving distance \(D_{C}(z)\) changes from increasing to decreasing (or vice versa) with respect to the redshift \(z\). The same situation appears in the reconstructed cosmography parameters \(q(z)\), \(j(z)\), \(s(z)\) and \(l(z)\), shown in Figure 5, where the corresponding cosmography parameters predicted by the spatially flat \(\Lambda\)CDM cosmology are also plotted for comparison. The corresponding error is obtained from the error propagation equation: say, for a function of the form \(f=g^{m}/h^{n}\), the errors, after omitting the cross correlation between \(g\) and \(h\), can be calculated as \[\sigma_{f}^{2}=\left[\frac{ng^{m}}{h^{n+1}}\right]^{2}\sigma_{h}^{2}+\left[\frac{mg^{m-1}}{h^{n}}\right]^{2}\sigma_{g}^{2}. \tag{4.4}\] Thus the corresponding calculation of \(\sigma_{q}\) _etc._ is quite easy, but the mathematical expressions are long and ugly, so they are not shown in this paper. In the upper left \(q(z)\) panel of Figure 5, the horizontal \(q(z)=0\) line shows the transition redshift (at \(z_{t}=0.383\pm 0.164\)) from a decelerated expansion to an accelerated expansion at the crossing point with the reconstructed \(q(z)\) red solid line.
This transition redshift is lower than that predicted by the spatially flat \(\Lambda\)CDM model. The evolution of the reconstructed cosmography parameters \(q(z)\), \(j(z)\), \(s(z)\) and \(l(z)\) with respect to the redshift \(z\) becomes weird at high redshifts \(z>2.5\). This strange behavior simply boils down to the definition of the cosmography parameters in terms of \(D^{\prime}_{C}(z)\), which appears in the denominator of the corresponding expressions. The singularities happen when \(D^{\prime}_{C}(z)\) crosses zero. Of course, this singularity means a turnover of the comoving distance. But this transition seems unphysical in a regular cosmology. Therefore, in the current situation, the quality of the GRB data points is not good enough to give any viable prediction of the kinematics of our Universe at high redshifts.

Figure 3: The reconstructed quantities \(D_{C}(z)\), \(D^{\prime}_{C}(z)\), \(D^{\prime\prime}_{C}(z)\), \(D^{\prime\prime\prime}_{C}(z)\), \(D^{(4)}_{C}(z)\) and \(D^{(5)}_{C}(z)\) (with \(1\sigma\) error regions) from the joint CC, Pantheon+ SN Ia samples and high-redshift GRBs, from the upper left panel to the lower right panel, respectively.

## 5 Conclusion

In this paper, the GRBs Amati relation is calibrated via Gaussian process by using the Pantheon+ SN Ia samples. After doing that, we obtain 182 GRB distance moduli in the redshift range \(z>0.8\). These derived GRB distance moduli can be used to constrain cosmology and dark energy properties. As a direct application to cosmology of these derived GRB distance moduli, the cosmography parameters up to the fifth order are reconstructed by combining the cosmic observational data points from the Pantheon+ SN Ia samples, CC and GRBs at high redshifts. It is seen that some singularities, denoted as vertical lines, appear in Figure 4 and Figure 5. These singularities happen when \(D^{\prime}_{C}(z)\) crosses zero. This means a turnover of the comoving distance, but such a transition seems unphysical in a regular cosmology. Thus, based on our studies, the current GRB data points still do not give any viable prediction of the kinematics of our Universe at high redshifts. We expect that high-quality GRB data points will become available in the future.

Figure 4: The reconstructed Hubble parameter \(H(z)\) (with \(1-3\sigma\) error regions) from the joint CC and Pantheon+ SN Ia samples, where the Hubble parameter predicted by a spatially flat \(\Lambda\)CDM model is also plotted for comparison.

This work is supported in part by the National Natural Science Foundation of China under Grants No. 12075042 and No. 11675032.

Figure 5: The reconstructed cosmography parameters \(q(z)\), \(j(z)\), \(s(z)\) and \(l(z)\) (with \(1-3\sigma\) error curves) from the joint CC, Pantheon+ SN Ia samples and high-redshift GRBs, from the upper left panel to the lower right panel, respectively, where the corresponding cosmography parameters predicted by a spatially flat \(\Lambda\)CDM model are also plotted for comparison. In the upper left \(q(z)\) panel, the horizontal \(q(z)=0\) line shows the transition redshift (at \(z_{t}=0.383\pm 0.164\)) from a decelerated expansion to an accelerated expansion at the crossing point with the reconstructed \(q(z)\) red solid line.

## Appendix A The GRB data sets
Table 1: List of the derived distance moduli of 182 GRBs in the A220 sample at redshift \(0.8<z\leq 8.2\), with columns GRB, \(z\), and \(\mu_{\rm GRB}\pm\sigma_{\mu,\rm GRB}\). The GRB051109A sample is removed, as was done in Ref. [45], because of the different values reported in Refs. [35] and [58].
2302.05236
The full Lorentz-violating vacuum polarization tensor: low and high energy limits
We compute the full vacuum polarization tensor in the fermion sector of Lorentz-violating QED. Even if we assume momentum routing invariance of the Feynman diagrams, it is not possible to fix all surface terms and find an unambiguity-free vacuum polarization tensor. The high and low energy limits of this tensor are presented. In the high energy limit, only $c_{\mu\nu}$ coefficients contribute. In the low energy limit, we find that the Lorentz-violating induced terms depend only on the $b_{\mu}$, $c_{\mu\nu}$ and $g_{\mu\nu\lambda}$ coefficients, and they are suppressed by powers of $\frac{p^{2}}{m^{2}}$. This limit allows one to obtain implications for condensed matter systems, explicitly, for the Hall effect in Weyl semimetals.
J. C. C. Felipe, A. Yu. Petrov, A. P. Baêta Scarpelli, A. R. Vieira
2023-02-10T13:27:45Z
http://arxiv.org/abs/2302.05236v2
# The full Lorentz-violating vacuum polarization tensor: low and high energy limits

###### Abstract

We compute the full vacuum polarization tensor in the fermion sector of Lorentz-violating QED. Even if we assume momentum routing invariance of the Feynman diagrams, it is not possible to fix all surface terms and find an unambiguity-free vacuum polarization tensor. The high and low energy limits of this tensor are presented. In the high energy limit, only \(c_{\mu\nu}\) coefficients contribute. In the low energy limit, we find that the Lorentz-violating induced terms depend only on the \(b_{\mu}\), \(c_{\mu\nu}\) and \(g_{\mu\nu\lambda}\) coefficients, and they are suppressed by powers of \(\frac{p^{2}}{m^{2}}\). This limit allows one to obtain implications for condensed matter systems, explicitly, for the Hall effect in Weyl semimetals.

pacs: 11.10.Gh, 11.30.Cp, 11.30.-j

## I Introduction

Lorentz and CPT symmetries are known to be among the main criteria used to formulate field theory models. However, since we currently believe that a complete and unified theory must include both the Standard Model and General Relativity, which only merge at the Planck scale, those symmetries would be exact only at energies of that order of magnitude. As a consequence, it would be possible to detect tiny low-energy Lorentz-violating (LV) effects coming from a spontaneous breaking of Lorentz symmetry that occurred at the Planck scale \(m_{P}\sim 10^{19}GeV\) [1]. What we call the Standard Model Extension (SME) [2] is the usual Standard Model extended by adding all possible Lorentz and CPT violating terms which emerge due to that spontaneous symmetry breaking. Any signal of a tiny but non-zero coefficient of a Lorentz-violating term would support the idea of a unified theory at the Planck scale. Moreover, even if Lorentz and CPT symmetries are in fact exact at low energies, the question is with what precision one can say that they are indeed valid. Therefore, the SME is also a framework for testing those symmetries. The tree-level SME has consequences for low-energy physical models like quantum mechanical systems, and it can be used as a framework to test Lorentz and CPT symmetries in that limit. Indeed, most of the searches for Lorentz and CPT violation are based on non-relativistic Hamiltonians, which allow one to see how SME coefficients affect usual quantum mechanics. Some examples include spectroscopy [8] and condensed matter systems [9]. There is a recent investigation concerning Weyl semimetals and terms induced by quantum corrections [10] (for other studies of Lorentz symmetry breaking within the condensed matter context see also, e.g., [11]). Beyond tree level, the SME is one-loop renormalizable, both in the electroweak [3; 4] and in the strong sector [5; 6]. However, investigations of finite quantum corrections coming from loop diagrams are usually controversial. There was a long-standing debate concerning the issue of radiatively induced CS-like terms [12], suggesting that such computations are in general regularization dependent. The good old Dimensional Regularization [13; 14] can be used for computing the divergent part of the diagrams, as it was used in the proof of one-loop renormalizability. Unfortunately, it is not suitable in some cases, due to the presence of Lorentz and CPT violating terms which contain objects well defined only in specific dimensions, like Levi-Civita symbols and \(\gamma_{5}\) matrices. In this case, computing the finite part of the amplitudes can be a delicate problem.
The \(\gamma_{5}\) issue can be avoided in certain situations [15], and there are also some recipes to treat it inside traces involving Dirac matrices [16; 17]. Nevertheless, the question of which coefficients remain at the quantum level, although non-trivial, is of interest in its own right. In this work, we compute the full Lorentz-violating vacuum polarization tensor. We perform the computation of loop corrections within a four-dimensional implicit regularization framework [18], which does not assume any explicit regulator. The regularization-dependent objects are mapped into surface terms. They manifest themselves as differences between integrals with the same degree of divergence, so their value can be any number, including infinity. They are also the objects which can cause the breaking of symmetries of the model in a spurious way if explicitly computed. Therefore, we leave these terms intact until the end of the calculation and then require the fulfillment of a Ward-Takahashi or a Slavnov-Taylor identity. In this way, we guarantee gauge symmetry beyond tree level and at the same time find conditions on the surface terms. As a consequence, not only the induced CS-like term is arbitrary, but also other radiatively induced terms. Another feature that can set values for surface terms is the momentum routing invariance (MRI) of the loop diagrams. For gauge field theories, there is a one-to-one diagrammatic relation between gauge invariance and MRI which is independent of regularization. Some regularization schemes that are born momentum routing invariant, like dimensional regularization, automatically fulfill gauge invariance. This condition could be considered as another attempt to find equations that could fix the arbitrary surface terms. However, requiring MRI leads to the same relationships between the surface terms as those obtained by requiring gauge invariance, and therefore at least one surface term remains, making the result arbitrary. All this reasoning on MRI does not cause any problem with the momentum-routing diagrammatic computation of the chiral anomaly. Choosing the internal routing in order to fulfill the desired gauge Ward identity does not necessarily mean that momentum routing invariance is broken. Requiring MRI fixes the relations between the surface terms, automatically fulfills the gauge Ward-Takahashi identities and also reproduces the breaking of the axial current [19], the Adler-Bardeen-Bell-Jackiw anomaly. Although arbitrary and regularization dependent, the full Lorentz-violating vacuum polarization tensor has only pieces with the \(b_{\mu}\), \(c_{\mu\nu}\) and \(g_{\mu\nu\lambda}\) coefficients from the matter sector that affect the photon sector in the renormalization process. The structure of the paper is as follows: in Section II, we list the relevant one-loop Feynman diagrams. In Section III, we present the regularization framework. In Section IV, we calculate and present the quantum corrections. In Section V, we discuss applications of our results to condensed matter. We present a summary in Section VI and a list of the relevant integrals in the Appendix.
## II The framework and the one-loop diagrams We consider the fermion sector of the standard minimal LV QED Lagrangian [20]: \[\mathcal{L}=\frac{1}{2}i\bar{\psi}\Gamma^{\mu}\overleftrightarrow{D}_{\mu}\psi-\bar{\psi}M\psi-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\] \[-\frac{1}{4}(k_{F})_{\kappa\lambda\mu\nu}F^{\mu\nu}F^{\kappa\lambda}+\frac{1}{2}(k_{AF})^{\kappa}\epsilon_{\kappa\lambda\mu\nu}A^{\lambda}F^{\mu\nu}, \tag{1}\] where \(D_{\mu}\equiv\partial_{\mu}+iqA_{\mu}\) is the usual covariant derivative which couples the gauge field to matter, \[\Gamma^{\nu}=\gamma^{\nu}+\Gamma^{\nu}_{1},\] \[\Gamma^{\nu}_{1}=c^{\mu\nu}\gamma_{\mu}+d^{\mu\nu}\gamma_{5}\gamma_{\mu}+e^{\nu}+if^{\nu}\gamma_{5}+\frac{1}{2}g^{\lambda\mu\nu}\sigma_{\lambda\mu} \tag{2}\] and \[M=m+M_{1},\] \[M_{1}=m_{5}\gamma_{5}+a^{\mu}\gamma_{\mu}+b_{\mu}\gamma_{5}\gamma^{\mu}+\frac{1}{2}H_{\mu\nu}\sigma^{\mu\nu}. \tag{3}\] The coefficients \(a_{\mu}\), \(b_{\mu}\), \(c_{\mu\nu}\), \(d_{\mu\nu}\), \(e_{\mu}\), \(f_{\mu}\), \(g_{\lambda\mu\nu}\), \(H_{\mu\nu}\), \((k_{F})_{\kappa\lambda\mu\nu}\) and \((k_{AF})_{\kappa}\) break Lorentz symmetry, and only the coefficients \(a_{\mu}\), \(b_{\mu}\), \(e_{\mu}\), \(f_{\mu}\), \(g_{\lambda\mu\nu}\) and \((k_{AF})_{\kappa}\) break CPT symmetry, since their number of indices is odd. The Feynman rules corresponding to the Lagrangian in eq. (1) are listed in Fig. 1. We see in this Lagrangian that we have a general vertex with \(\Gamma_{\mu}\) instead of just \(\gamma_{\mu}\) and a general fermion propagator \(\frac{i}{p_{\nu}\Gamma^{\nu}-M}\). However, handling this whole propagator is a difficult task. Therefore, the dot and the cross insertions in Fig. 1 denote the leading order in Lorentz and CPT violation in the fermion propagator. The one-loop diagrams are depicted in Fig. 2, and their amplitudes are: \[\Pi^{\mu\nu}_{(a)}=-q^{2}\int\frac{d^{4}k}{(2\pi)^{4}}Tr\left[\Gamma^{\nu}_{1}\frac{1}{\not{k}-m}\gamma^{\mu}\frac{1}{\not{k}-\not{p}-m}\right], \tag{4}\] \[\Pi^{\mu\nu}_{(b)}=-q^{2}\int\frac{d^{4}k}{(2\pi)^{4}}Tr\left[\gamma^{\nu}\frac{1}{\not{k}-m}\Gamma^{\mu}_{1}\frac{1}{\not{k}-\not{p}-m}\right], \tag{5}\] \[\Pi^{\mu\nu}_{(c)}=q^{2}\int\frac{d^{4}k}{(2\pi)^{4}}Tr\left[\gamma^{\nu}\frac{1}{\not{k}-m}\Gamma^{\lambda}_{1}k_{\lambda}\frac{1}{\not{k}-m}\gamma^{\mu}\frac{1}{\not{k}-\not{p}-m}\right], \tag{6}\] \[\Pi^{\mu\nu}_{(d)}=q^{2}\int\frac{d^{4}k}{(2\pi)^{4}}Tr\left[\gamma^{\nu}\frac{1}{\not{k}-m}\gamma^{\mu}\frac{1}{\not{k}-\not{p}-m}\Gamma^{\lambda}_{1}(k_{\lambda}-p_{\lambda})\frac{1}{\not{k}-\not{p}-m}\right], \tag{7}\] \[\Pi^{\mu\nu}_{(e)}=-q^{2}\int\frac{d^{4}k}{(2\pi)^{4}}Tr\left[\gamma^{\nu}\frac{1}{\not{k}-m}M_{1}\frac{1}{\not{k}-m}\gamma^{\mu}\frac{1}{\not{k}-\not{p}-m}\right], \tag{8}\] \[\Pi^{\mu\nu}_{(f)}=-q^{2}\int\frac{d^{4}k}{(2\pi)^{4}}Tr\left[\gamma^{\nu}\frac{1}{\not{k}-m}\gamma^{\mu}\frac{1}{\not{k}-\not{p}-m}M_{1}\frac{1}{\not{k}-\not{p}-m}\right]. \tag{9}\] The choice of the regularization scheme to be applied is a subtle task, because some LV terms involve objects defined only in the four-dimensional space-time, namely the Levi-Civita symbol and \(\gamma_{5}\) matrices. Thus, an inadequate choice of regulator may generate spurious terms in these amplitudes and affect the conclusions below. 
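The Dirac traces entering eqs. (4)-(9) are evaluated with standard identities. As an illustrative cross-check (a sketch of ours, not the code actually used in this work), sympy's gamma-matrix module reproduces the basic trace identities; the \(\gamma_{5}\)-dependent insertions are exactly the delicate pieces flagged above and are not covered by this snippet.

```python
# Sanity check (ours) of the basic trace identities used to expand eqs. (4)-(9).
from sympy.physics.hep.gamma_matrices import GammaMatrix as G, LorentzIndex, gamma_trace
from sympy.tensor.tensor import tensor_indices

mu, nu, alpha, beta = tensor_indices("mu nu alpha beta", LorentzIndex)

# Tr[gamma^mu gamma^nu] = 4 g^{mu nu}
print(gamma_trace(G(mu) * G(nu)))

# Tr[gamma^mu gamma^nu gamma^alpha gamma^beta]
#   = 4 (g^{mu nu} g^{alpha beta} - g^{mu alpha} g^{nu beta} + g^{mu beta} g^{nu alpha})
print(gamma_trace(G(mu) * G(nu) * G(alpha) * G(beta)))
```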
On the other hand, assuming that an implicit regulator exists allows us to manipulate the integrand, stay in 4 dimensions, and not worry about spurious symmetry-breaking terms in the process of renormalization. Figure 1: Feynman rules of the Lorentz-violating QED. Figure 2: One-loop 2-point functions of the Lorentz-violating QED. ## III Basic divergent integrals and surface terms Here we briefly describe the framework [18] (for a recent review see [21]) which allows one to handle the divergent four-dimensional integrals that appear in the amplitudes of the previous section, and we establish some notation. In this scheme, we assume that the integrals are regularized by an implicit regulator \(\Lambda\), just to justify algebraic operations within the integrands. We then use, for instance, the following identity \[\int_{k}\frac{1}{(k+p)^{2}-m^{2}}=\int_{k}\frac{1}{k^{2}-m^{2}}-\int_{k}\frac{(p^{2}+2p\cdot k)}{(k^{2}-m^{2})[(k+p)^{2}-m^{2}]}, \tag{10}\] where \(\int_{k}\equiv\int^{\Lambda}\frac{d^{4}k}{(2\pi)^{4}}\), in order to separate the basic divergent integrals from the finite part. The former are defined as follows: \[I^{\mu_{1}\cdots\mu_{2n}}_{log}(m^{2})\equiv\int_{k}\frac{k^{\mu_{1}}\cdots k^{\mu_{2n}}}{(k^{2}-m^{2})^{2+n}} \tag{11}\] and \[I^{\mu_{1}\cdots\mu_{2n}}_{quad}(m^{2})\equiv\int_{k}\frac{k^{\mu_{1}}\cdots k^{\mu_{2n}}}{(k^{2}-m^{2})^{1+n}}. \tag{12}\] The basic divergences with Lorentz indices can be judiciously combined as differences between integrals with the same superficial degree of divergence, according to the equations below, which define the surface terms 1: Footnote 1: The Lorentz indices between curly brackets stand for permutations, _i.e._, \(A^{\{\alpha_{1}\cdots\alpha_{n}}B^{\beta_{1}\cdots\beta_{n}\}}=A^{\alpha_{1}\cdots\alpha_{n}}B^{\beta_{1}\cdots\beta_{n}}\) + sum over permutations between the two sets of indices \(\alpha_{1}\cdots\alpha_{n}\) and \(\beta_{1}\cdots\beta_{n}\). \[\Upsilon^{\mu\nu}_{2w}=g^{\mu\nu}I_{2w}(m^{2})-2(2-w)I^{\mu\nu}_{2w}(m^{2})=\upsilon_{2w}g^{\mu\nu}, \tag{13}\] \[\Xi^{\mu\nu\alpha\beta}_{2w}=g^{\{\mu\nu}g^{\alpha\beta\}}I_{2w}(m^{2})-4(3-w)(2-w)I^{\mu\nu\alpha\beta}_{2w}(m^{2})=\xi_{2w}g^{\{\mu\nu}g^{\alpha\beta\}}, \tag{14}\] \[\Sigma^{\mu\nu\alpha\beta\gamma\delta}_{2w}=g^{\{\mu\nu}g^{\alpha\beta}g^{\gamma\delta\}}I_{2w}(m^{2})-8(4-w)(3-w)(2-w)I^{\mu\nu\alpha\beta\gamma\delta}_{2w}(m^{2})=\sigma_{2w}g^{\{\mu\nu}g^{\alpha\beta}g^{\gamma\delta\}}. \tag{15}\] In the expressions above, \(2w\) is the degree of divergence of the integrals and, for the sake of brevity, we substitute the subscripts \(log\) and \(quad\) by \(0\) and \(2\), respectively. Surface terms can be conveniently written as integrals of total derivatives, namely \[\upsilon_{2w}g^{\mu\nu}=\int_{k}\frac{\partial}{\partial k_{\nu}}\frac{k^{\mu}}{(k^{2}-m^{2})^{2-w}}, \tag{16}\] \[(\xi_{2w}-\upsilon_{2w})g^{\{\mu\nu}g^{\alpha\beta\}}=\int_{k}\frac{\partial}{\partial k_{\nu}}\frac{2(2-w)k^{\mu}k^{\alpha}k^{\beta}}{(k^{2}-m^{2})^{3-w}} \tag{17}\] and \[(\sigma_{2w}-\xi_{2w})g^{\{\mu\nu}g^{\alpha\beta}g^{\gamma\delta\}}=\int_{k}\frac{\partial}{\partial k_{\nu}}\frac{4(3-w)(2-w)k^{\mu}k^{\alpha}k^{\beta}k^{\gamma}k^{\delta}}{(k^{2}-m^{2})^{4-w}}. \tag{18}\] The surface terms defined in eqs. (13)-(15) are in principle arbitrary and regularization dependent. From the mathematical point of view, a surface term can be any number, since it is a difference between two infinities. They can be shown to vanish in usual dimensional regularization, and they can be finite or infinite if computed with a sharp cutoff. 
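To make this regularization dependence concrete, the numerical sketch below (ours, not part of the original calculation) evaluates the \(w=0\) surface term of eq. (13) after a Wick rotation with a sharp Euclidean cutoff \(\Lambda\). Up to the overall factor of \(i\) introduced by the rotation, the difference tends to the nonzero value \(1/(32\pi^{2})\), whereas the same difference vanishes in dimensional regularization.

```python
# Numerical sketch (ours): after a Wick rotation, the w = 0 surface term of
# eq. (13) reduces to the radial difference
#   v0 ~ (1/8 pi^2) * Int_0^Lambda k^3 [ 1/(k^2+m^2)^2 - k^2/(k^2+m^2)^3 ] dk,
# up to an overall factor of i.  With a sharp cutoff it is finite and nonzero.
import numpy as np
from scipy.integrate import quad

m = 1.0
integrand = lambda k: k**3 * (1/(k**2 + m**2)**2 - k**2/(k**2 + m**2)**3)

for Lam in (10.0, 1e2, 1e4):
    val, _ = quad(integrand, 0.0, Lam)
    print(f"Lambda = {Lam:8.0f}:  v0 = {val/(8*np.pi**2):.6f}")
print(f"expected 1/(32 pi^2) = {1/(32*np.pi**2):.6f}")
```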
In general, we leave these terms unevaluated until the end of the calculation, to be fixed on symmetry grounds or by phenomenology, when that is the case [7]. To illustrate the method, it is instructive to consider first the usual vacuum polarization tensor [22]: \[\Pi^{\mu\nu}(p)=\tfrac{4}{3}(p^{2}\eta^{\mu\nu}-p^{\mu}p^{\nu})I_{log}(m^{2})-4\upsilon_{2}\eta^{\mu\nu}+\tfrac{4}{3}(p^{2}\eta^{\mu\nu}-p^{\mu}p^{\nu})\upsilon_{0}-\] \[-\tfrac{4}{3}(p^{2}\eta^{\mu\nu}+2p^{\mu}p^{\nu})(\xi_{0}-2\upsilon_{0})-\tfrac{8i}{(4\pi)^{2}}(p^{2}\eta^{\mu\nu}-p^{\mu}p^{\nu})\int_{0}^{1}x(1-x)\ln\tfrac{m^{2}-p^{2}x(1-x)}{m^{2}}, \tag{19}\] where \(\upsilon_{0}\) and \(\xi_{0}\) are logarithmic surface terms. Note that if we use a Ward identity, the surface term \(\upsilon_{2}\) is fixed to zero and we get a relation between \(\xi_{0}\) and \(\upsilon_{0}\). At the same time, it is not possible to fix the \(\upsilon_{0}\) term, because it multiplies a structure that is already gauge invariant (transverse). This is the same surface term that appears in the induced CS-like term; in that case it cannot be fixed either, because the corresponding structure is proportional to a Levi-Civita symbol. The use of eq. (10) is not the only possibility, since the assumed implicit regulator allows any other operation on the integrands. It is, however, the identity we use the most in order to separate the divergent from the finite part, because the second term on the right-hand side of this equation is less divergent than the first. Disadvantages of doing this include the high number of powers of \(k\) that can appear in the numerator of the integrals, which makes them difficult to compute (one way out of this is proposed in [22]); the surface terms, which cannot all be fixed by symmetries if only eq. (10) is used; and, last but not least, the fact that there is no way out if the model does not have a propagator in the usual form, being essentially nonlinear, like the sine-Gordon model. ## IV Evaluation of the diagrams After taking the traces in eqs. (4)-(9), regularizing the integrals and summing all the diagrams, we find the full one-loop vacuum polarization tensor. We list all integrals in the Appendix. \[\Pi_{LV}^{\mu\nu}=\frac{8}{3}q^{2}\left\{\left(c^{\mu\alpha}p^{\nu}+c^{\nu\alpha}p^{\mu}\right)p_{\alpha}-c^{\alpha\beta}p_{\alpha}p_{\beta}\eta^{\mu\nu}-p^{2}c^{\mu\nu}\right\}\left\{I_{log}(m^{2})-\frac{i}{16\pi^{2}}\left[\frac{(p^{2}+2m^{2})}{p^{2}}Z_{0}+\frac{1}{3}\right]+\frac{\upsilon_{0}}{2}\right\}+\] \[+\frac{i}{16\pi^{2}}q^{2}c^{\alpha\beta}p_{\alpha}p_{\beta}\frac{1}{p^{2}}\left(p^{\mu}p^{\nu}-p^{2}\eta^{\mu\nu}\right)\left\{p^{2}\iota_{0}+\frac{(p^{2}+4m^{2})}{p^{2}}Z_{0}+\frac{2}{3}\right\}+\] \[-\frac{mq^{2}}{2\pi^{2}}p_{\lambda}\left\{p^{2}g^{\mu\nu\lambda}+p_{\beta}\left(g^{\nu\beta\lambda}p^{\mu}-g^{\mu\beta\lambda}p^{\nu}\right)\right\}\iota_{1}-4imq^{2}p_{\alpha}\left(g^{\nu\mu\alpha}-g^{\alpha\mu\nu}+g^{\alpha\nu\mu}\right)\upsilon_{0}+\] \[+q^{2}\left(-\frac{m^{2}}{\pi^{2}}\iota_{0}+4i\upsilon_{0}\right)b_{\alpha}p_{\beta}\epsilon^{\alpha\beta\mu\nu}, \tag{20}\] in which \(Z_{n}=\int_{0}^{1}dx\ x^{n}\ln\left[\frac{m^{2}-p^{2}x(1-x)}{m^{2}}\right]\) and \(\iota_{n}=\int_{0}^{1}dx\frac{x^{n}(1-x)}{m^{2}-p^{2}x(1-x)}\). Besides, we note that \(c^{\mu\nu}\) is symmetric and \(g^{\mu\nu\alpha}\) is antisymmetric in the first two indices, as expected, since only these parts of the tensors can contribute to observables. 
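The Ward-identity argument below eq. (19) can also be checked symbolically. A minimal sketch of ours: contracting eq. (19) with \(p_{\mu}\) kills the transverse structures and leaves \(p_{\mu}\Pi^{\mu\nu}=-4\left[\upsilon_{2}+p^{2}(\xi_{0}-2\upsilon_{0})\right]p^{\nu}\), and demanding that this vanish for every \(p^{2}\) forces \(\upsilon_{2}=0\) and \(\xi_{0}=2\upsilon_{0}\).

```python
# Symbolic check (ours) of the Ward-identity constraints quoted below eq. (19).
import sympy as sp

p2, v0, v2, xi0 = sp.symbols("p2 v0 v2 xi0")

# p_mu (p^2 eta^{mu nu} - p^mu p^nu) = 0 ;  p_mu eta^{mu nu} = p^nu ;
# p_mu (p^2 eta^{mu nu} + 2 p^mu p^nu) = 3 p^2 p^nu.
# Coefficient of p^nu in p_mu Pi^{mu nu}:
ward = -4*v2 - sp.Rational(4, 3)*(3*p2)*(xi0 - 2*v0)

sol = sp.solve([ward.coeff(p2, 0), ward.coeff(p2, 1)], [v2, xi0], dict=True)
print(sol)   # [{v2: 0, xi0: 2*v0}]
```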
In order to obtain the result in eq. (20), the following relations involving the integrals in the Feynman parameters were used: \[Z_{k}=\frac{1}{k+1}\left\{kZ_{k-1}-(k-1)\frac{m^{2}}{p^{2}}Z_{k-2}-\frac{k-1}{k(k+1)}\right\}, \tag{21}\] \[\iota_{k+1}=\frac{1}{2}\left\{\iota_{k}-\frac{1}{p^{2}}\left[kZ_{k-1}-(k+1)Z_{k}\right]\right\}. \tag{22}\] In eq. (20), we have constrained the surface terms such that the result is transverse. This procedure fixed \(\xi_{0}=2\upsilon_{0}\), \(\sigma_{0}=3\upsilon_{0}\) and \(\upsilon_{2}=\xi_{2}=0\). With these relations between the surface terms, the contributions in the vectors \(a^{\mu}\) and \(e^{\mu}\) turned out to be null. We see that gauge invariance of the action is not sufficient to determine all the surface terms, and this has as a consequence the ambiguity of the induced Carroll-Field-Jackiw (CFJ) term [23]. It is important to notice that the contributions of the parameter \(\upsilon_{0}\) to the terms with the tensors \(c^{\mu\nu}\) and \(g^{\mu\nu\alpha}\) are irrelevant, since they can be absorbed in the renormalization or by some normalization condition. One could try to determine \(\upsilon_{0}\) by enforcing momentum routing invariance of the diagrams. One possible procedure would be to calculate the amplitude with an arbitrary routing, parameterized by a constant \(\alpha\), and then require the result not to depend on such a parameter. It can be shown, for a QED amplitude \(T^{\mu_{1}\mu_{2}\cdots\mu_{n}}\) with \(n\) external photon legs, that its transversality can only be respected if a relative shift is allowed between the remaining \((n-1)\)-point functions which result from the contraction of the external momentum \(p_{\mu_{i}}\) with \(T^{\mu_{1}\mu_{2}\cdots\mu_{n}}\). This relative shift is only allowed if some relations between the surface terms are established. Here, we show this for \(n=2\). We attribute a general loop momentum to the diagrams of Fig. 3, respecting energy-momentum conservation at the vertices. When the external momentum \(p_{\mu}\) is contracted with the diagrams, each one of the graphs, after the contraction, is decomposed into a difference of two identical tadpole diagrams with different loop momenta. Considering the six two-point amplitudes, after the contraction with \(p_{\mu}\), only four tadpole diagrams survive, which are shown in Fig. 4, in which \(l\) is a general routing proportional to the external momentum, _i.e._, \(l=\alpha p\). The tadpoles are functions of \(l\) (or \(l^{\prime}\)), \(\tau^{\nu}(l)\). The result of the calculation represented by Fig. 4 is given by: \[\tau^{\nu}_{(a)}(l)-\tau^{\nu}_{(a)}(l^{\prime})+\tau^{\nu}_{(b)}(l)-\tau^{\nu}_{(b)}(l^{\prime})=8qc_{\mu\alpha}\left\{(\alpha^{3}-\alpha^{\prime 3})\left[p^{\mu}p^{\alpha}p^{\nu}+p^{2}p^{\alpha}\eta^{\mu\nu}\right](-v_{0}+2\xi_{0}-\sigma_{0})+\right.\] \[\left.+p^{\alpha}\eta^{\mu\nu}(2v_{2}-\xi_{2})\right\}+4q(\alpha^{2}-\alpha^{\prime 2})(p^{2}me^{\nu}+2mp^{\nu}e\cdot p)(2v_{0}-\xi_{0}). \tag{23}\] Since \(\alpha\neq\alpha^{\prime}\) by definition, the only possible solution for preserving gauge invariance (the transversality of the photon polarization tensor) is the same one that assures momentum routing invariance. In fact, the explicit calculation which results in eq. (23) enforces that \(\xi_{0}=2v_{0}\) and \(\sigma_{0}=3v_{0}\). If we take into account the contributions from traditional QED, we also obtain that \(v_{2}=\xi_{2}=0\). 
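The Feynman-parameter recursions (21)-(22) above are straightforward to validate numerically; a spot-check of ours at an arbitrary kinematic point:

```python
# Numerical spot-check (ours) of the recursions (21) and (22) at m^2 = 1, p^2 = 0.3.
import numpy as np
from scipy.integrate import quad

m2, p2 = 1.0, 0.3
D = lambda x: m2 - p2*x*(1 - x)                      # Delta^2(x)
Z = lambda k: quad(lambda x: x**k*np.log(D(x)/m2), 0, 1)[0]
iota = lambda n: quad(lambda x: x**n*(1 - x)/D(x), 0, 1)[0]

k = 2
lhs21 = Z(k)
rhs21 = (k*Z(k-1) - (k-1)*(m2/p2)*Z(k-2) - (k-1)/(k*(k+1)))/(k+1)
print(lhs21, rhs21)      # eq. (21): the two numbers agree to quadrature accuracy

lhs22 = iota(k+1)
rhs22 = 0.5*(iota(k) - (k*Z(k-1) - (k+1)*Z(k))/p2)
print(lhs22, rhs22)      # eq. (22): likewise
```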
It is interesting to remark that, although we have attributed an arbitrary loop momentum in the vacuum polarization tensor, this result implies momentum routing invariance of the tadpole diagram, which results in transversality of the photon two-point function. In general, the transversality of an amplitude with \(n\) external legs results in routing independence of an amplitude with \(n-1\) external legs. This condition is weaker than independence of the loop momentum of the original amplitude. Also, MRI looks like a symmetry of the Feynman diagrams, as in Fig. 4, but it is not a symmetry in terms of an action, _i.e._, of transformations of the fields that leave the action invariant. The piece \(I_{log}(m^{2})\) is a logarithmically divergent integral, namely \(\int\frac{d^{4}k}{(2\pi)^{4}}\frac{1}{(k^{2}-m^{2})^{2}}\), introduced in section III. We do not have to evaluate it explicitly; it gives rise to \(1/\epsilon+\cdots\) in dimensional regularization, for example. We now take the low energy limit (\(m^{2}\gg p^{2}\)) in each term and integral of eq. (20). For instance, \(Z_{1}(m^{2}\gg p^{2})\approx-\frac{p^{2}}{12m^{2}}\) in this limit. Thus, we find the low energy limit of the vacuum polarization tensor (\(m^{2}\gg p^{2}\)): Figure 3: Gauge and momentum routing invariance relation for a two-leg diagram. Figure 4: Momentum routing invariance of a tadpole. \[\Pi^{\mu\nu}_{LV}(p)=\frac{8}{3}q^{2}\left\{c^{\mu p}p^{\nu}+c^{\nu p}p^{\mu}-c^{pp}\eta^{\mu\nu}-p^{2}c^{\mu\nu}\right\}\left(I_{log}(m^{2})+\frac{i}{16\pi^{2}}\frac{p^{2}}{6m^{2}}+\frac{\upsilon_{0}}{2}\right)+\] \[+\frac{i}{16\pi^{2}}\frac{q^{2}}{3m^{2}}c^{pp}\left(p^{\mu}p^{\nu}-p^{2}\eta^{\mu\nu}\right)-\frac{mq^{2}}{12\pi^{2}}\frac{p^{2}}{m^{2}}\left\{g^{\mu\nu p}+\frac{1}{p^{2}}(p^{\mu}g^{\nu pp}-p^{\nu}g^{\mu pp})\right\}+\] \[-4imq^{2}p_{\alpha}\left(g^{\nu\mu\alpha}-g^{\alpha\mu\nu}+g^{\alpha\nu\mu}\right)\upsilon_{0}-\left(\frac{q^{2}}{2\pi^{2}}\right)\epsilon^{bp\mu\nu}\left(1-8i\pi^{2}\upsilon_{0}\right)\] \[\approx\frac{8}{3}q^{2}\left\{c^{\mu p}p^{\nu}+c^{\nu p}p^{\mu}-c^{pp}\eta^{\mu\nu}-p^{2}c^{\mu\nu}\right\}I_{log}(m^{2})+\frac{i}{16\pi^{2}}\frac{q^{2}}{3m^{2}}c^{pp}\left(p^{\mu}p^{\nu}-p^{2}\eta^{\mu\nu}\right)+\] \[-\left(\frac{q^{2}}{2\pi^{2}}\right)\epsilon^{bp\mu\nu}\left(1-8i\pi^{2}\upsilon_{0}\right), \tag{24}\] where an index \(p\) denotes contraction with the external momentum, _e.g._, \(c^{\mu p}\equiv c^{\mu\nu}p_{\nu}\), \(c^{pp}\equiv c^{\mu\nu}p_{\mu}p_{\nu}\), \(g^{\mu\nu p}\equiv g^{\mu\nu\lambda}p_{\lambda}\) and \(\epsilon^{bp\mu\nu}\equiv\epsilon^{\alpha\beta\mu\nu}b_{\alpha}p_{\beta}\). This result shows that only \(b_{\mu}\), \(c_{\mu\nu}\) and \(g_{\mu\nu\lambda}\) affect usual QED at low energies. The one-loop LV contribution to spectroscopy or condensed matter physics would be due only to these terms. In particular, the radiatively induced CS-like term \(\epsilon^{\alpha\beta\mu\nu}b_{\alpha}p_{\beta}\) is the one which contributes the most. On the other hand, in the high-energy limit (\(m^{2}\ll p^{2}\)) we find that the vacuum polarization tensor is affected only by the \(c\) coefficients: \[\Pi^{\mu\nu}_{LV}(p)=\frac{8}{3}q^{2}\left\{c^{\mu p}p^{\nu}+c^{\nu p}p^{\mu}-c^{pp}\eta^{\mu\nu}-p^{2}c^{\mu\nu}\right\}\left\{I_{log}(m^{2})-\frac{i}{16\pi^{2}}\left[\ln\left(-\frac{p^{2}}{m^{2}}\right)-\frac{5}{3}\right]+\frac{\upsilon_{0}}{2}\right\}+\] \[-\frac{i}{16\pi^{2}}\frac{4}{3}q^{2}c^{pp}\left(\frac{p^{\mu}p^{\nu}}{p^{2}}-\eta^{\mu\nu}\right), \tag{25}\] where pieces proportional to \(p_{\mu}\) can be disregarded, since they couple to currents and contributions proportional to \(\partial_{\mu}J^{\mu}\) vanish due to the gauge symmetry. 
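The two limits quoted above are easy to confirm numerically: \(Z_{1}\to-p^{2}/(12m^{2})\) at low energy, and \(Z_{0}\to\ln(-p^{2}/m^{2})-2\) at large spacelike \(p^{2}\), so the combination \(Z_{0}+1/3\) reproduces the \(\ln(-p^{2}/m^{2})-5/3\) appearing in eq. (25). A short sketch of ours:

```python
# Numerical check (ours) of the low- and high-energy asymptotics of Z_k.
import numpy as np
from scipy.integrate import quad

Z = lambda k, p2, m2: quad(lambda x: x**k*np.log((m2 - p2*x*(1-x))/m2), 0, 1)[0]

m2, p2 = 1.0, 1e-3                        # low-energy regime (m^2 >> p^2)
print(Z(1, p2, m2), -p2/(12*m2))          # both ~ -8.33e-05

m2, Q2 = 1.0, 1e6                         # high-energy (spacelike, -p^2 = Q^2 >> m^2)
print(Z(0, -Q2, m2), np.log(Q2/m2) - 2)   # both ~ 11.816
```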
## V The parameter \(b_{\mu}\) and applications to condensed matter The bridge between high-energy physics and condensed matter has shown promise in recent years, and studies based on the renormalization group have been the great trump card of this relationship [24]. With the emergence of graphene, low-dimensional systems could be described via the massless Dirac equation in \((2+1)\) dimensions. The application of field theory to low-dimensional systems has become a reality as far as theoretical studies are concerned, such as the model that describes how electrons propagate over a sheet of graphene from the point of view of the renormalization group [25]. Another interesting application is to consider curved-space effects in graphene [26], showing how useful the application of field theory techniques to low-dimensional electronic systems is. The proposal that particles presenting a relativistic dispersion relation can be considered as quasi-particles in condensed matter models has been known for some time [27]. In some models, the dispersion relation can be linearized by means of an expansion around the Fermi energy, which ends up with relativistic energy-momentum relations. In this sense, Dirac fermions gained some prominence with the discovery of graphene, which, in \((2+1)\) dimensions, is nothing more than a sheet composed of carbon atoms in a hexagonal lattice [28]. Thus, electrons moving over this sheet of carbon interact with the potential of this lattice, giving rise to conical structures close to the Fermi energy. Near this region (called a node), the dispersion relations of the electrons on the graphene sheet turn out to be linear, and the Hamiltonian that describes the system is given by the massless Dirac equation, where the propagation velocity is the Fermi velocity (\(v_{f}\)) and the spin of the particle gives rise to a pseudo-spin, which is related to the sublattice of the system. After the observation of Dirac fermions, it was realized [29] that some materials whose band structure presents nodes near the Fermi energy host Weyl fermions, such materials being called Weyl semimetals [30]. Initially, the Weyl semimetal proposals were based on studies of pyrochlore iridates, topological insulators and heterostructures [31]. Such descriptions paved the way for the distance between condensed matter and high-energy physics to become smaller with regard to the description of certain phenomena, more specifically the emergence of field theory according to the anti-de Sitter/conformal field theory (AdS-CFT) correspondence or the anti-de Sitter/condensed matter theory correspondence (AdS-CMT) [32]. In the case of Weyl semimetals, this connection occurs through the chiral anomaly, which can be translated into condensed matter models. The chiral anomaly basically tells us that the conservation laws of the vector current (\(\partial_{\mu}j^{\mu}=0\)) and of the chiral current (\(\partial_{\mu}j^{\mu}_{5}=0\)) cannot be satisfied at the same time. Therefore, if we enforce vector current conservation, the chiral current cannot be conserved, which leads to the well-known chiral symmetry breaking. From the point of view of Weyl semimetals, it can be written in terms of the electromagnetic fields as well as the number of fermions with right and left chirality [33]. 
Thus, from this perspective, one might wonder whether the inclusion of terms causing the violation of Lorentz symmetry would also lead to interesting results, considering this relationship with condensed matter. In this sense arises the study of QED applied to a class of materials that can be considered Weyl semimetals [34; 35; 36; 37; 38], which, with a proper choice of physical parameters, can be described by the massive Dirac equation in \((3+1)\) dimensions (at low energies, the excitations are considered quasi-particles), modeled by the following action (corresponding to only the \(b_{\mu}\) term in equation (1)) \[S=\int d^{4}x\bar{\psi}(i\not{\partial}-m-\not{b}\gamma_{5}-e\not{A})\psi, \tag{26}\] with \(b_{\mu}\) being a constant four-vector. Such a model has been studied with great enthusiasm in the literature, since it can generate a CS-like term [12]. The induced term can even be finite but undetermined (in the sense that, to fix it, some experimental verification would be necessary) or even null, results that reflect the dependence on the regularization scheme used in the amplitudes [7]. However, starting from equation (26), it is possible to radiatively induce the CS-like term and determine it in an unambiguous way, since condensed matter models can be verified experimentally. The action corresponding to eq. (26) for the condensed matter model in momentum space is given by the expression \[S=\int\frac{d^{4}k}{(2\pi)^{4}}\bar{\psi}(\gamma_{\mu}M^{\mu}_{\nu}k^{\nu}-m-\not{b}\gamma_{5})\psi, \tag{27}\] with \(M^{\mu}_{\nu}=(v_{f},v_{f},\tilde{v}_{f})\) being a diagonal matrix which is necessary due to the anisotropy introduced by the Fermi velocity (in some materials, we consider \(v_{f}=c/300\)). The amplitudes and propagators that come from eq. (27) are similar to those of usual QED. The complete propagator is given by the expression \[G(k,b)=\frac{i}{(\not{k}-m-\not{b}\gamma_{5})}, \tag{28}\] with the polarization tensor \(\Pi^{\mu\nu}\) given by the following expression, already adapted to the fact that the propagation velocity of the charge carriers in the material is the Fermi velocity \(v_{f}\), \[\Pi^{\mu\nu}(b,p)=\frac{e^{2}}{v_{f}\tilde{v}_{f}}\int\frac{d^{4}k}{(2\pi)^{4}}Tr\,\gamma^{\mu}G(k,b)\gamma^{\nu}G(k+p^{\prime},b). \tag{29}\] Equation (29) provides all the necessary information regarding the relationship between the Lorentz breaking and Weyl semimetals; it is now sufficient to perform a direct calculation to obtain information about the consequences of the parameter \(b\). Thus, taking the trace and making the necessary calculations, we found the result for the amplitude given by equation (20), in which only the \(b_{\mu}\) parameter contributes to the result for the conductivity. From the point of view of implicit regularization, which was discussed in detail in Section III, the parameter \(\upsilon_{0}\), even when gauge (and/or momentum routing) invariance is required, remains undetermined. In this sense, the process of fixing the parameter should be by phenomenology, like building an experimental apparatus that can measure the four-current \(j_{\mu}\) associated with the fermionic current adapted to a condensed matter model (an example of a measurement to fix such a parameter is an analysis of the Hall effect from the perspective of Weyl semimetals, where it is possible to experimentally obtain the value of the so-called Hall conductivity [9]). 
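For orientation, the structure of the current obtained below in eq. (30) is exactly what follows from varying an induced Carroll-Field-Jackiw action. The short derivation below is our own reconstruction, meant only as a consistency sketch (overall signs depend on metric conventions):

\[S_{\rm CS}=\frac{c}{2}\int d^{4}x\,\epsilon^{\mu\nu\alpha\beta}b_{\mu}A_{\nu}\partial_{\alpha}A_{\beta},\qquad c=\frac{q^{2}}{2\pi^{2}v_{f}\tilde{v}_{f}}\left(1-8\pi^{2}\upsilon_{0}\right),\]

so that

\[j^{\nu}=\frac{\delta S_{\rm CS}}{\delta A_{\nu}}=\frac{c}{2}\,\epsilon^{\mu\nu\alpha\beta}b_{\mu}\partial_{\alpha}A_{\beta}-\frac{c}{2}\,\epsilon^{\mu\beta\alpha\nu}b_{\mu}\partial_{\alpha}A_{\beta}=c\,\epsilon^{\mu\nu\alpha\beta}b_{\mu}\partial_{\alpha}A_{\beta},\]

where the second term comes from integrating by parts and the last step uses the antisymmetry of the Levi-Civita symbol. This is the current of eqs. (30)-(31), whose spatial part encodes the Hall response of eq. (32).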
On the other hand, from a theory point of view, some results agree that the parameter \(\upsilon_{0}\) can be determined unambiguously for massless theories [39; 40]. After contracting the \(A_{\mu}\) field with the finite piece that remains in eq. (24), we find the following current \[j^{\nu}=\frac{q^{2}}{2\pi^{2}v_{f}\tilde{v}_{f}}(1-8\pi^{2}\upsilon_{0})b_{\mu}\epsilon^{\mu\nu\alpha\beta}\partial_{\alpha}A_{\beta} \tag{30}\] considering the spatial part, and \[j^{\nu}=\frac{q^{2}}{2\pi^{2}v_{f}\tilde{v}_{f}}(1-8\pi^{2}\upsilon_{0})b_{0}\epsilon^{0\nu\alpha\beta}\partial_{\alpha}A_{\beta} \tag{31}\] when we consider the temporal part. The spatial part, presented in (30), gives rise to the anomalous Hall effect, whose conductivity is proportional to the separation between the Weyl nodes: \[\sigma^{xy}=\frac{1}{2\pi^{2}v_{f}\tilde{v}_{f}}\epsilon^{xyl}(1-8\pi^{2}v_{0})|\vec{b}|\hat{b}_{l}. \tag{32}\] The second equation, given by (31), describes the so-called chiral magnetic effect, which sometimes implies the equilibrium of currents in the presence of the magnetic field [41]. However, this is a naive statement, because the chiral anomaly in condensed matter models holds only near the Weyl nodes. Therefore, care must be taken when predicting chiral anomaly effects directly for a Weyl semimetal (for more details on Weyl semimetals, see [42]). Nevertheless, we see that Weyl semimetals are interesting for studies in high-energy physics, more specifically considering that they can be analysed from the point of view of Lorentz symmetry breaking, where the conductivity can be generated by a \(\not{b}\gamma_{5}\)-type term in the LV action. A comment is in order here. The \(\Pi^{\mu\nu}\) polarization tensor modifies Maxwell's equations. The even part is related to the electrical permittivity and magnetic permeability of the medium where the electrons propagate. The odd part, on the other hand, can add new terms to Maxwell's equations, which can modify the response of the material medium to the propagation of electrons in a Weyl semimetal. Considering \(j=\rho=0\) (absence of sources), the equation of a wave propagating in a Weyl semimetal is modified, leading to the associated effect of vacuum birefringence. This effect is exclusively associated with the radiatively induced CS-like term, and its observation is one example of how to fix the parameter \(v_{0}\) by phenomenology. Some other interesting effects in Weyl semimetals can be observed in [43] (repulsive Casimir effect) and [44] (axionic electrodynamics) 2. Footnote 2: A discussion of the theory of Weyl semimetals and its relation to the induced Chern-Simons term and to other models in high-energy physics can be found in [38] ## VI Summary We calculated the polarization tensor of the Abelian gauge field in a minimal LV extension of QED involving all terms listed in [20]. Within our study, the main attention was paid, first, to the divergent contributions, while most previous studies dealt with finite ones, the best known of which is the CFJ term, and, second, to the infrared-leading parts of the finite contributions. The importance of these terms is justified by the fact that they play a special role within condensed matter studies, where LV effects attract essential attention. In this study, we followed this line, calculated the anomalous Hall conductivity, and discussed other possible applications of our results to condensed matter, especially to Weyl semimetals. 
A natural continuation of this study, besides the study of other applications of Lorentz symmetry breaking within the condensed matter context, could consist of treating the low-energy impacts of higher-derivative LV terms. We are planning to do this study in a forthcoming paper. ###### Acknowledgements. The work of A. Yu. P. has been partially supported by the CNPq project No. 301562/2019-9. ## Appendix All integrals needed after taking the traces in eqs. (4)-(9) can be obtained from the integrals below: \[\int_{k}\frac{1}{[(k-p)^{2}-m^{2}]}=I_{quad}(m^{2})-p^{2}\upsilon_{0}; \tag{33}\] \[\int_{k}\frac{k^{\alpha}}{[(k-p)^{2}-m^{2}]}=p^{\alpha}(I_{quad}(m^{2})-\upsilon_{2})-p^{2}p^{\alpha}(\xi_{0}-\upsilon_{0});\] (34) \[\int_{k}\frac{k^{\alpha}}{[(k-p)^{2}-m^{2}]^{2}}=p^{\alpha}(I_{log}(m^{2})-\upsilon_{0});\] (35) \[\int_{k}\frac{k^{2}}{[(k-p)^{2}-m^{2}]^{2}}=I_{quad}(m^{2})+(m^{2}+p^{2})I_{log}(m^{2})-3p^{2}\upsilon_{0};\] (36) \[\int_{k}\frac{k^{\alpha}k^{\beta}}{[(k-p)^{2}-m^{2}]^{2}}=\frac{1}{2}g^{\alpha\beta}(I_{quad}(m^{2})-\upsilon_{2})+p^{\alpha}p^{\beta}(I_{log}(m^{2})-\xi_{0})-\frac{1}{2}p^{2}g^{\alpha\beta}(\xi_{0}-\upsilon_{0});\] (37) \[\int_{k}\frac{k^{2}k^{\alpha}}{[(k-p)^{2}-m^{2}]^{2}}=2p^{\alpha}(I_{quad}(m^{2})-\upsilon_{2})+p^{\alpha}(m^{2}+p^{2})I_{log}(m^{2})+p^{\alpha}(3p^{2}-m^{2})\upsilon_{0}-4p^{2}p^{\alpha}\xi_{0};\] (38) \[\int_{k}\frac{k^{\alpha}k^{\beta}k^{\gamma}}{[(k-p)^{2}-m^{2}]^{2}}=\frac{1}{2}p^{\{\alpha}g^{\gamma\beta\}}\left[I_{quad}(m^{2})-\xi_{2}\right]-\frac{1}{2}p^{2}p^{\{\alpha}g^{\gamma\beta\}}\left[I_{log}(m^{2})-\xi_{0}\right]+\] \[+\frac{1}{2}(p^{2}p^{\{\alpha}g^{\gamma\beta\}}+2p^{\alpha}p^{\beta}p^{\gamma})\left[I_{log}(m^{2})-\sigma_{0}\right]; \tag{39}\] \[I=\int_{k}\frac{1}{(k^{2}-m^{2})^{2}[(k+p)^{2}-m^{2}]}=-b\ \int_{0}^{1}dx\frac{(1-x)}{\Delta^{2}}; \tag{40}\] \[I_{1}^{\beta}=\int_{k}\frac{k^{\beta}}{(k^{2}-m^{2})^{2}[(k+p)^{2}-m^{2}]}=b\ p^{\beta}\ \int_{0}^{1}dx\frac{x(1-x)}{\Delta^{2}};\] (41) \[J_{1}^{\beta}=\int_{k}\frac{k^{\beta}}{(k^{2}-m^{2})[(k-p)^{2}-m^{2}]^{2}}=I_{1}^{\beta}+p^{\beta}I;\] (42) \[I_{2}=\int_{k}\frac{k^{2}}{(k^{2}-m^{2})^{2}[(k+p)^{2}-m^{2}]}=I_{log}(m^{2})-b\ Z_{0}(p^{2},m^{2})-b\ m^{2}\ \int_{0}^{1}dx\frac{(1-x)}{\Delta^{2}}\] (43) \[J_{2}=\int_{k}\frac{k^{2}}{(k^{2}-m^{2})[(k-p)^{2}-m^{2}]^{2}}=I_{2}+p^{2}I+2p_{\beta}I_{1}^{\beta};\] (44) \[I_{2}^{\beta\nu}=\int_{k}\frac{k^{\beta}k^{\nu}}{(k^{2}-m^{2})^{2}[(k+p)^{2}-m^{2}]}=\frac{1}{4}g^{\beta\nu}(I_{log}(m^{2})-\upsilon_{0})-\frac{1}{2}b\ g^{\beta\nu}[Z_{0}(p^{2},m^{2})-Z_{1}(p^{2},m^{2})]-\] \[-b\ p^{\beta}p^{\nu}\ \int_{0}^{1}dx\frac{x^{2}(1-x)}{\Delta^{2}};\] (45) \[J_{2}^{\beta\nu}=\int_{k}\frac{k^{\beta}k^{\nu}}{(k^{2}-m^{2})[(k-p)^{2}-m^{2}]^{2}}=I_{2}^{\beta\nu}+p^{\beta}p^{\nu}I+I_{1}^{\beta}p^{\nu}+I_{1}^{\nu}p^{\beta};\] (46) \[I_{3}^{\nu}=\int_{k}\frac{k^{2}k^{\nu}}{(k^{2}-m^{2})^{2}[(k+p)^{2}-m^{2}]}=-\frac{1}{2}p^{\nu}(I_{log}(m^{2})-\upsilon_{0})+b\ p^{\nu}Z_{1}(p^{2},m^{2})+b\ m^{2}\ p^{\nu}\int_{0}^{1}dx\frac{x(1-x)}{\Delta^{2}};\] (47) \[J_{3}^{\nu}=\int_{k}\frac{k^{2}k^{\nu}}{(k^{2}-m^{2})[(k-p)^{2}-m^{2}]^{2}}=I_{3}^{\nu}+p^{\nu}I_{2}+p^{2}I_{1}^{\nu}+p^{2}p^{\nu}I+2p_{\gamma}I_{2}^{\gamma\nu}+2p_{\gamma}p^{\nu}I_{1}^{\gamma}-p^{\nu}\upsilon_{0};\] (48) 
\[I_{5}^{\beta\nu\alpha}=\int_{k}\frac{k^{\beta}k^{\nu}k^{\alpha}}{(k^{2}-m^{2})^{2}[(k+p)^{2}-m^{2}]}=\frac{-1}{12}bp^{\{\alpha}g^{\beta\nu\}}(I_{log}(m^{2})-\xi_{0})+b\ p^{\alpha}p^{\beta}p^{\nu}\int_{0}^{1}dx\frac{x^{3}(1-x)}{\Delta^{2}}+\] \[+\frac{1}{2}b\ p^{\{\alpha}g^{\beta\nu\}}[Z_{1}(p^{2},m^{2})-Z_{2}(p^{2},m^{2})];\] (49) \[J_{5}^{\beta\nu\alpha}=\int_{k}\frac{k^{\beta}k^{\nu}k^{\alpha}}{(k^{2}-m^{2})[(k-p)^{2}-m^{2}]^{2}}=I_{5}^{\beta\nu\alpha}+p^{\beta}p^{\nu}p^{\alpha}I+p^{\{\nu}p^{\beta}I_{1}^{\alpha\}}+p^{\{\beta}I_{2}^{\nu\alpha\}}+\frac{1}{4}p^{\{\alpha}g^{\beta\nu\}}(\upsilon_{0}-\xi_{0}) \tag{50}\] \[I_{4}^{\beta\nu}=\int_{k}\frac{k^{2}k^{\beta}k^{\nu}}{(k^{2}-m^{2})^{2}[(k+p)^{2}-m^{2}]}=\frac{1}{2}g^{\beta\nu}(I_{quad}(m^{2})-\upsilon_{2})+\frac{1}{4}(m^{2}-p^{2})g^{\beta\nu}(I_{log}(m^{2})-\upsilon_{0})+\] \[+\frac{1}{6}(p^{2}g^{\nu\beta}+2p^{\beta}p^{\nu})(I_{log}(m^{2})-\xi_{0})-b\ (-g^{\beta\nu}p^{2}+p^{\nu}p^{\beta})Z_{2}(p^{2},m^{2})+\frac{1}{2}\ b(m^{2}-3p^{2})\ g^{\beta\nu}Z_{1}(p^{2},m^{2})+\] \[+\frac{1}{2}b(p^{2}-m^{2})g^{\beta\nu}Z_{0}(p^{2},m^{2})-b\ p^{\beta}p^{\nu}m^{2}\ \int_{0}^{1}dx\frac{x^{2}(1-x)}{\Delta^{2}}; \tag{51}\] \[J_{4}^{\beta\nu}=\int_{k}\frac{k^{2}k^{\beta}k^{\nu}}{(k^{2}-m^{2})[(k-p)^{2}-m^{2}]^{2}}=\frac{1}{2}g^{\nu\beta}(I_{quad}(m^{2})-\upsilon_{2})+(m^{2}-p^{2})J_{2}^{\beta\nu}+2p_{\lambda}J_{5}^{\beta\nu\lambda}-2p_{\lambda}I_{5}^{\beta\nu\lambda}-p^{2}I_{2}^{\beta\nu}, \tag{52}\] where \(b\equiv\frac{i}{(4\pi)^{2}}\), and \(Z_{k}(p^{2},m^{2})\) and \(\Delta^{2}\) are defined as \[Z_{k}(p^{2},m^{2})=\int_{0}^{1}dzz^{k}\ln\frac{m^{2}-p^{2}z(1-z)}{m^{2}}, \tag{53}\] \[\Delta^{2}=m^{2}-p^{2}x(1-x). \tag{54}\] The basic divergent integrals \(I_{log}(m^{2})\) and \(I_{quad}(m^{2})\) and the surface terms \(\upsilon_{0}\), \(\upsilon_{2}\) and \(\xi_{0}\) are defined in section III.
2303.12463
Anomalous transport in angstrom-sized membranes with exceptional water flow rates and dye/salt rejections
Fluidic channels with physical dimensions approaching molecular sizes are crucial for novel desalination, chemical separation, and sensing technologies. However, fabrication of precisely controlled fluidic channels in the angstrom size is extremely challenging. This, along with our limited understanding of nanofluidic transport, hinders practical applications. Here, we fabricated high-quality salt-intercalated vermiculite membranes with channel sizes 3-5 Angstrom, highly dependent on intercalant. Unlike pristine samples, the salt-intercalated membranes are highly stable in water. We tested several such membranes, of which 0.6 micron thick membranes showed dye rejection efficiencies greater than 98 percent with exceptionally high water permeance of 5400 L m-2 h-1 bar-1 at a differential pressure of 0.9 bar. Interestingly, the same membrane also rejected NaCl ions, with efficiencies of 95 percent. Our highly confined channels exhibit sub-linear ionic conductance related to hydration sizes, steric exclusion, K+ mobility enhancement, and conductance saturation at concentrations less than or equal to 10 mM. This makes highly confined channels interesting for both fundamental science and applications.
Rathi Aparna, Singh Khushwant, Saini Lalita, Kaushik Suvigya, Dhal Biswabhusan, Parmar Shivam, Kalon Gopinadhan
2023-03-22T11:22:12Z
http://arxiv.org/abs/2303.12463v1
Anomalous transport in angstrom-sized membranes with exceptional water flow rates and dye/salt rejections ###### Abstract Fluidic channels with physical dimensions approaching molecular sizes are crucial for novel desalination, chemical separation, and sensing technologies. However, fabrication of precisely controlled fluidic channels in the angstrom size range is extremely challenging. This, along with our limited understanding of nanofluidic transport, hinders practical applications. Here, we fabricated high-quality salt-intercalated vermiculite membranes with channel sizes \(\sim\)3-5 Å, highly dependent on the intercalant. Unlike pristine samples, the salt-intercalated membranes are highly stable in water. We tested several such membranes, of which 0.6 \(\upmu\)m thick membranes showed dye rejection efficiencies \(>\)98% with an exceptionally high water permeance of 5400 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\) at a differential pressure of 0.9 bar. Interestingly, the same membrane also rejected NaCl ions, with efficiencies of \(\sim\)95%. Our highly confined channels exhibit sub-linear ionic conductance related to hydration sizes, steric exclusion, K\({}^{+}\) mobility enhancement, and conductance saturation at concentrations \(\leq\) 10 mM. This makes highly confined channels interesting for both fundamental science and applications. ## 1 Introduction Angstrom-scale channels play a vital role in many essential functions of life; for example, Na\({}^{+}\) channels of 3 to 5 Å diameter exhibit a high Na\({}^{+}\)/K\({}^{+}\) selectivity of 10 to 30 [1]. _Aquaporin_ protein channels of 3 Å diameter transport water molecules selectively while rejecting most of the ions, including protons. The high selectivity of _Aquaporin_ channels is related to their hydrophobicity and angstrom size [2]. Several groups have attempted to mimic biological channels and have had good success in the fabrication of channels/pores with sizes in the nanometer range or smaller. The highly confined channels exhibit fast water flow[3], hydration-[4] and size-based selectivity[5], anomalous dielectric constant[6], ion mobility enhancement[7], giant osmotic energy[8, 9], etc. All these observations are indicative of the rich science that exists at the smallest scale. On the practical side, angstrom-sized channels with sub-micron membrane thickness promise improved filtration performance, low cost and low operating pressures. Current state-of-the-art membranes such as Toray TM 610, NF 270, and Desal 5 L reject dyes with efficiencies of 98 to 99% and transport water molecules with a flux of 212 L m\({}^{-2}\) h\({}^{-1}\) at a pressure of 15 bar [10]. These membranes also reject NaCl with efficiencies of < 50% at differential pressures > 6 bar. Achieving high water flux along with good ion and dye rejection at low operating pressures still remains a challenge. This is partially due to our inability to fully resolve the competing transport mechanisms at the angstrom scale. Inspired by the performance of biological channels, several materials were investigated. Among these, two-dimensional (2D) materials are distinctly different and have atomically small thickness [11]. Notable examples are graphene slits [3, 5], graphene oxide laminates [12, 13], carbon nanotubes [7, 14], BN nanotubes [9], nanopores [15], and clay membranes [16, 17]. Most of these 2D material-based membranes swell in water, resulting in poor sieving characteristics. 
Several studies tried to address this issue by focusing either on mechanical confinement[18] or on strengthening the interaction between the nanosheets[12, 19, 20]. Although several of these membranes showed good dye rejections, the water permeance always remained < 1000 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\) for sub-micron thick membranes. To overcome this, we chose crystals of vermiculite, a clay material, for several reasons. Vermiculite crystals are abundant and layered, and therefore highly suitable for making membranes consisting of two-dimensional (2D) laminates. The interlayer space in bulk vermiculite crystals has unintentionally intercalated hydrated cations occupying a space of 3-5 Å. This space is similar to or larger than the size of water molecules. Manipulating these spaces with controlled intercalation of salt ions could lead to new-generation membranes. Structurally, vermiculite has two tetrahedral and one octahedral sheet made up of O\({}^{2-}\) ions, in which Si\({}^{4+}\) ions occupy the tetrahedral sites and Al\({}^{3+}\) ions the octahedral sites. However, the substitution of Al\({}^{3+}\) in place of Si\({}^{4+}\) at about one-quarter of the tetrahedral sites and the substitution of Mg\({}^{2+}\) and Fe\({}^{2+}\) in place of Al\({}^{3+}\) in the aluminium hydroxide octahedral sheets leaves a net negative layer charge. This is balanced by the intake of various cations like Ca\({}^{2+}\) and Mg\({}^{2+}\) in hydrated form, occupying the interlayer spaces of vermiculite[21]. In our study, these unintentionally intercalated cations were first replaced with lithium ions, resulting in expanded vermiculite crystals because of the free-swelling nature of Li\({}^{+}\)[22]. The expanded vermiculite was exfoliated to construct membranes of 2D laminates. We, however, observed that the Li-vermiculite (Li-V) membrane is not stable in water. To make it water-stable, we treated pristine free-standing Li-V membranes with aqueous solutions containing one of the following ions: Al\({}^{3+}\), Ca\({}^{2+}\), Na\({}^{+}\), or K\({}^{+}\). The salt intercalation indeed helped the membranes to be stable and also tuned the interlayer spaces of vermiculite. These membranes were observed to be highly efficient for salt and dye rejection and also exhibit exceptionally high water flux. ## 2 Experimental section ### Membrane fabrication We fabricated lithium-intercalated vermiculite membranes via a two-step ion exchange process. In the first step of the exchange process, 100 mg of vermiculite was heated in 200 ml of saturated sodium chloride solution at 100\({}^{\circ}\)C for 24 hours in a reflux assembly, a procedure very similar to other reports [23, 24]. The resultant sodium-exchanged crystals were repeatedly washed with DI water to remove excess NaCl. These crystals were then heated in 200 ml of 2 M LiCl solution for 24 hours and again washed with DI water to remove excess LiCl residing on the surfaces. This product was then dried, weighed, and dispersed in DI water at a concentration of 1 mg/ml. The solution containing expanded Li-vermiculite crystals was subjected to ultra-sonication (power = 24 W) in water for 15 minutes. The suspension was then centrifuged at 3000 rpm for 10 minutes to obtain a uniform solution consisting of thinner layers. The supernatant gave a light-yellow dispersion of vermiculite layers (Fig. 1a), and the final solution consisted of 0.62 mg/ml of vermiculite nanosheets, giving a 62% yield. 
Zeta potential measurements provided a value of -35\(\pm\)1 mV, a strong indication of a negative surface charge on the suspended flakes. The successful dispersion of vermiculite layers in water is confirmed by observing the Tyndall effect (Fig. 1a). We prepared a Li-vermiculite membrane, labeled Li-V, of diameter 1.6 cm via vacuum filtration on several porous supports (pore size = 100-200 nm) such as AAO and PVDF. The vermiculite naturally peeled off from the porous support in the case of thicker membranes (top inset of Fig. 1a); however, thin membranes were difficult to handle without the porous support. We prepared several membranes of thickness ranging from 0.4 to 5.0 \(\upmu\)m. We performed our ion transport measurements with free-standing vermiculite membranes, which helped us to understand their intrinsic properties without the influence of the porous support. In the pressure-assisted filtration experiments, we used membranes with PVDF porous support, as the latter provides mechanical strength and a smooth surface to avoid wear and tear of the membrane. ### Pressure-driven filtration setup For the salt and dye permeation experiments, we used a pressure-driven filtration setup, where the feed side of the membrane was exposed to the atmosphere and the other side to pressures lower than 1 bar. We conducted the permeation tests under several pressure gradients, from 900 mbar to 50 mbar. The permeate was collected and analyzed to quantify the water permeance and the concentration of dye or ions, if any were present. ### Ion transport setup The setup used for ion transport studies consists of a tau cell made of polyether ether ketone (PEEK) holding a 5 \(\upmu\)m thick vermiculite membrane separating salt solutions on either side of the membrane. The membrane was mounted on an acrylic sheet that has a pre-drilled square hole, with the help of epoxy glue (Stycast 2625), which ensures that the only path for ion transport is through the vermiculite membrane. The actual area of the membrane under study was 4 mm\({}^{2}\). Two Ag/AgCl electrodes were used to measure the ionic current with the help of a source-meter unit (Keithley-Tektronix 2614B) and LabVIEW software. ## 3 Results and Discussion The free-standing Li-V membranes are found to disintegrate within 2-5 minutes of exposure to both water (Fig. 1b) and aqueous LiCl solutions having a maximum concentration of 0.1 M (Fig. S1). However, Li-exchanged vermiculite membranes appear stable in concentrated LiCl solutions, for example, 1 M and above. If the stable membrane is transferred from 1 M LiCl solution to water, it quickly disintegrates, making it unsuitable for water-related applications. To make them water-stable, we immersed Li-V membranes in one of the salt solutions of KCl, NaCl, CaCl\({}_{2}\), or AlCl\({}_{3}\) of 1 M chloride concentration for 24 hours. After the ion exchange, the membranes were thoroughly washed with DI water, which removed the residual salt residing on the sample surface. We performed several measurements to confirm the successful exchange of K\({}^{+}\), Na\({}^{+}\), Ca\({}^{2+}\), or Al\({}^{3+}\) with Li\({}^{+}\) in the Li-V membranes. The exchanged membranes are labelled cation-V membranes. The salt-exchanged membranes were found to be water-stable for at least six months, without any evidence of swelling in aqueous solutions (Fig. 1c). In addition, these membranes were also tested under large concentration gradients and maximum applied voltages of \(\pm\)300 mV across the membrane and were found to be stable (Fig. S2). 
We did not observe any voltage-induced delamination in any of our salt-stabilized membranes. Though the synthesis of Li-vermiculite membranes was reported previously[23, 24], their stability in water was rarely discussed. In those studies, the membrane exposure to water was only for a short duration, and hence the instability might not be very obvious. We also checked the water wettability of these membranes with contact angle measurements, and they are found to be mostly hydrophilic (Fig. S3). Figure 1: **Stability of free-standing vermiculite membranes in water.** (a) The exfoliated vermiculite layers are successfully dispersed in water, as evident from the characteristic Tyndall effect. The solution has a typical concentration of 0.62 mg/ml. Inset: pristine free-standing Li-V membrane. (b) A free-standing Li-V membrane disintegrates in water within 2-5 minutes of exposure. (c) Camera images of a Li-V membrane stabilized in salt solutions of KCl, NaCl, CaCl\({}_{2}\), and AlCl\({}_{3}\). Having confirmed the water stability of our membranes, we investigated the microstructure of our pristine and intercalated membranes. Using the atomic force microscopy (AFM) technique (Fig. 2a), the average flake thickness of exfoliated Li-vermiculite was determined to be \(\sim\)1.5 nm (Fig. 2b), which corresponds to 1 layer of vermiculite. The cross-section of the membrane obtained using scanning electron microscopy (SEM) shows an exquisite laminate structure (inset of Fig. 2b), and the surface of the membrane shows a continuous microstructure without any obvious pinholes (Fig. S2). The X-ray diffraction (XRD) data recorded for a bulk vermiculite crystal shows multiple peaks around 8.65\({}^{\circ}\), indicating the presence of numerous unintentional cations in the interlayer space (Fig. S4). The XRD data of the Li-V membrane (Fig. 2c-d) shows a sharp and intense peak at 2\(\theta\) = 7.05\({}^{\circ}\), corresponding to the (001) plane. The peak at 7.05\({}^{\circ}\) (full width at half maximum = 0.81\({}^{\circ}\)) provides an interlayer separation, \(d\), of 12.5 Å, confirming that the space between the layers has been successfully exchanged with Li ions (Fig. S4).
We also recorded the XRD data of the salt-stabilized membranes (Fig. 2c-d). In Na-V, Ca-V, and Al-V, apart from the (001) peak, we observed higher-order peaks (00\(l\)) with \(l\) = 2, 3, 4 and 5, and a smaller full width at half maximum when compared to Li-V membranes. This indicates that the salt-stabilized laminates are of the highest crystalline quality, homogeneous, and without inter-stratification. The most intense peak (001) is left-shifted from 7.05\({}^{\circ}\) to 5.95 \(\pm\) 0.05\({}^{\circ}\) for the Na\({}^{+}\), Ca\({}^{2+}\) and Al\({}^{3+}\) cations, which is related to the modified layer charges arising from the exchange of cations; Na\({}^{+}\) is an exception due to its robust hydration shell as compared to K\({}^{+}\)[25]. Moreover, the number of water layers can either be 1 or 2 and not in between, explaining the sudden change in the interlayer space from \(\sim\)12 Å to \(\sim\)15 Å. Unlike the other salt-stabilized membranes, the XRD of K-V membranes shows only (001) and (004) peaks, which is very similar to the XRD pattern of Li-V membranes. When compared with Li-V, the (001) peak of K-V is shifted very little to the right, from 7.05\({}^{\circ}\) to 7.25\({}^{\circ}\). The small difference in the structure of Li-V and K-V could be related to the similar layer charges arising from the exchange of Li\({}^{+}\) with K\({}^{+}\).
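The quoted interlayer separations follow directly from Bragg's law, d = λ/(2 sin θ). The short sketch below (ours, not from the original work) reproduces them assuming Cu Kα radiation (λ = 1.5406 Å), a standard choice that is not stated explicitly in the text; it also lists the free channel height obtained after subtracting the \(\sim\)9.6 Å silicate-layer thickness invoked later in the discussion.

```python
# Cross-check (ours) of the d-spacings via Bragg's law, d = lambda / (2 sin(theta)),
# ASSUMING Cu K-alpha radiation (lambda = 1.5406 A); the wavelength is an assumption.
import numpy as np

lam = 1.5406        # Angstrom, assumed Cu K-alpha
layer = 9.6         # Angstrom, silicate-layer thickness [26]

for label, two_theta in [("Li-V", 7.05), ("Na-V/Ca-V/Al-V", 5.95), ("K-V", 7.25)]:
    d = lam / (2 * np.sin(np.radians(two_theta / 2)))
    print(f"{label:16s} 2theta = {two_theta:4.2f} deg -> d = {d:5.2f} A, "
          f"free space = {d - layer:4.2f} A")
# Li-V -> d ~ 12.5 A; Na/Ca/Al-V -> d ~ 14.8 A; K-V -> d ~ 12.2 A,
# consistent with the quoted values and with ~3-5 A channels.
```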
Given the immense interest in dye removal from industrial wastewater and the high quality of our membranes, we investigated the water/dye permeation properties of Na-stabilized membranes. We tested our membranes with several dyes, such as methyl orange (MO, anionic), crystal violet (CV, cationic), rhodamine 6G (R6G, cationic), methyl blue (MB, anionic), and brilliant blue (BB, anionic)[16]. The largest dimension of the dye molecules that we tested ranges from 1.19 nm to 2.73 nm. The schematic of the filtration setup is shown in Fig. S5a. The dye concentration was approximately 10 mg/L on the feed side. We collected the retentate, feed, and permeate solutions and analyzed them with the help of a UV-visible spectrometer (Fig. S5b-f). The UV-visible spectra of the retentate solution show a broad and intense peak in the visible region for all the studied dyes. In contrast, the spectral intensity of the permeate solution was extremely weak or within the detection limit of the instrument. Since intensity is a measure of the concentration, it is clear that the dye molecules on the permeate side are significantly few. The rejection, R, was calculated from the relative difference between the concentrations of the permeate (C\({}_{p}\)) and feed (C\({}_{f}\)) compartments as \[\text{R}\ (\%)=(1-\text{C}_{p}/\text{C}_{f})\times 100\%\] Our membrane shows rejections of \(>\)98% for MO, \(>\)99% for CV, \(>\)99% for R6G, \(>\)99% for MB and \(>\)99% for BB (Fig. S5b-f). The water permeance was estimated from the collected volume, V, the pressure difference, \(\Delta\)P, the duration of water permeation, t, and the membrane area, A. The permeance, J, is given by \[\text{J}=\text{V}/(\Delta\text{P}\cdot\text{A}\cdot\text{t})\] We recorded the highest permeance of \(\sim\)7183 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\) for pure water with a \(\sim\)400 nm thick Na-V membrane; however, it reduced to 6600 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\) for the brilliant blue dye with a rejection of \(\sim\)90%. When we increased the membrane thickness to \(\sim\)600 nm, the rejection increased to \(>\)99% with a slightly reduced water permeance of 5400 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\). For thicker membranes beyond 600 nm (Fig. S6), the rejection remained \(>\)99%; however, the water permeance scaled inversely with the thickness (Fig. 3a). Since our measurements were static, the dyes accumulated on the membrane surface over time, reducing the water permeance, as shown in Fig. S7a. Dynamic measurements could help minimize the dye accumulation. We also note that both anionic and cationic dyes did not show any significant difference in rejection (Fig. S7b), suggesting the dominance of size-based rejection. We repeated these measurements with the other salt-stabilized membranes as well and observed that, with increase in the interlayer cation valence, the water permeance decreased by a factor of 2.5, with negligible change in the rejection (Fig. S7c). The reduced water permeance is inferred to be a result of decreased hydrophilicity and hence a weaker intake of water. Removal of salt from seawater is extremely important for potable water applications, and we explored this possibility with our 1.2 \(\upmu\)m thick Na-stabilized membranes. We tested the permeation of NaCl (1 M) ions with a pressure-assisted filtration setup at several pressure gradients, up to a maximum of 900 mbar. We quantified the permeated concentration using inductively coupled plasma-optical emission spectroscopy (ICP-OES). At \(\Delta\)P of 50 mbar, we observed the highest rejection of \(\sim\)95% and a water permeance of \(\sim\)120 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\) (Fig. 3b). When \(\Delta\)P = 550 mbar, the ion rejection decreased to \(\sim\)8% and the water permeance increased to > 2300 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\) (Fig. S7d). When we further increased \(\Delta\)P to 900 mbar, a rejection of > 7% was recorded. We repeated these measurements with several membrane thicknesses at \(\Delta\)P = 50 mbar. The permeate concentration increased from \(\sim\)50 to 650 mM upon decrease in thickness from 1.2 to 0.4 \(\upmu\)m (Fig. 3b). 
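The rejection and permeance definitions above reduce to simple arithmetic. A minimal sketch of ours, in which the collected volume, time, and concentrations are hypothetical illustration values rather than measured data (only the 1.6 cm membrane diameter is taken from the text):

```python
# Minimal sketch (ours) of the R and J formulas above; sample values are HYPOTHETICAL.
import numpy as np

def rejection(c_feed, c_permeate):
    """R (%) = (1 - C_p / C_f) x 100."""
    return (1.0 - c_permeate / c_feed) * 100.0

def permeance(volume_L, dP_bar, area_m2, time_h):
    """J = V / (dP * A * t), in L m^-2 h^-1 bar^-1."""
    return volume_L / (dP_bar * area_m2 * time_h)

area = np.pi * 0.008**2                       # 1.6 cm diameter membrane (from the text)
print(rejection(10.0, 0.1))                   # 10 -> 0.1 mg/L dye: R = 99 %
print(permeance(0.002, 0.9, area, 1/60))      # 2 mL in 1 min at 0.9 bar: ~663 L/m^2/h/bar
```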
We also performed these experiments with a mixture of 25 ml 1 M NaCl and 25 ml BB dye (concentration ~10 mg/L), which resulted in a final 50 ml solution of 0.5 M NaCl and 5 mg/L BB. Using a 1.2 \(\upmu\)m thick Na-V membrane at \(\Delta\)P = 250 mbar, we were able to obtain a salt rejection of ~40% and a dye rejection of ~99%, with a water permeance of 1290 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\), comparable to that of permeation experiments with only salt or dye. But at a higher differential pressure of \(\Delta\)P = 550 mbar, the salt rejection decreased to ~10% while the dye rejection remained constant, with an increased water permeance of 2277 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\); the flux was comparable to that with only 1 M NaCl as the feed (Fig. S7d). This confirms the ability of our membranes to separate salt and dye from a mixed solution (Fig. S8).

Figure 3: **Dye and Salt filtration.** (a) Water permeance (left Y-axis) and brilliant blue dye rejection (right Y-axis) as a function of membrane thickness at \(\Delta\)P = 900 mbar. (b) Water permeance (left Y-axis) and NaCl rejection (right Y-axis) as a function of membrane thickness at \(\Delta\)P = 50 mbar. The pressure gradient clearly changes the membrane permeability, as evident from the lower water flow rates at \(\Delta\)P = 50 mbar compared to \(\Delta\)P = 900 mbar. This is inferred to be a result of the smaller interlayer distance at lower pressure gradients. The error bar is for 3 samples.

In addition, we examined the ion permeation across 5 \(\upmu\)m thick free-standing membranes with a forward osmosis experiment. For this, we kept 1 M NaCl solution on the feed side and DI water on the permeate side (Fig. S9a). During a period of ~45 hours, sufficient for establishing equilibrium, a concentration of ~10 mM of ions permeated through the membrane. This also agrees with the result of the UV-visible spectra (Fig. S9b). In this case, the ion rejection was estimated to be 99%. The higher salt rejection is due to the larger thickness (5 \(\upmu\)m) of the membrane used in this experiment. We note that, due to osmosis, there is a flow of water from the permeate to the feed side; hence, our measured ion concentration is an upper limit. We further examined the small permeation of salt ions across 5 \(\upmu\)m thick membranes to understand the rejection mechanism. The approximate size of the fluidic channel responsible for the transport is estimated by considering the thickness of the silicate layers (~9.6 Å) [26] and the interlayer distance, \(d\), where the latter is taken from the XRD data. In the case of K-V membranes, \(d_{(001)}\) = 12.2 Å, leaving a fluidic space that can accommodate ~1 layer of water molecules. However, in the case of Na-V, Ca-V, and Al-V membranes, \(d_{(001)}\) \(\simeq\) 15 Å suggests an interlayer space that can accommodate ~2 layers of water molecules. We performed transport measurements with an equal salt concentration on both sides of the membrane. The hydrated diameters of the cations considered here range from 6.6 to 9.6 Å [27]. The hydrated diameter of Cl\({}^{-}\) is 6.6 Å [27]. The schematic used for the measurement is shown in Fig. S10a. We first measured the KCl ionic conductance through the K-V membrane. To transport the ions, a voltage, \(V\), was applied across the membrane, and the resulting current, \(I\), was monitored. The measured \(I\)-\(V\) characteristics (Fig. 4a) at various KCl concentrations, \(C\), are linear within our maximum applied voltage of \(\pm\) 300 mV.
The conductance, \(G=I/V\), is found from the slope of the curves. The measured pH of our deionized (DI) water was ~5.5; therefore, chloride concentrations of 10\({}^{-5}\) M and 10\({}^{-6}\) M would roughly correspond to that of DI water. The measured water conductance of twelve samples is plotted in Fig. S10b. We observed a very different ionic conductance behavior for all the salt-stabilized membranes compared to other nanochannel systems. In the concentration range 10\({}^{-5}\) M - 10\({}^{-2}\) M, the conductance increased sub-linearly with concentration (Fig. 4b). The measurements with NaCl, CaCl\({}_{2}\), and AlCl\({}_{3}\) solutions using the respective salt-stabilized membranes also showed a similar sub-linear \(G(C)\); however, the magnitude of the conductance varies with the salt ion. For all the salts, \(G\propto C^{\alpha}\), with the exponent \(\alpha\) = 0.67 to 0.24, inversely related to the hydrated diameter of the cations in the order K\({}^{+}\) < Ca\({}^{2+}\) < Al\({}^{3+}\). This points to the role of steric hindrance and ion-ion interactions in providing a size-based discrimination. A similar variation of \(G(C)\) was reported in carbon nanotubes [7] and biological channels [28], where the exponent was less than 1. For concentrations \(\geq\) 10\({}^{-2}\) M, the ionic conductance of all the membranes shows saturation (Fig. 4b). When compared to the bulk conductance for a similar geometry, the NaCl conductance through Na-V is observed to be smaller by a factor of at least 300. This yields a salt rejection of > 99%, which agrees with the results of ICP-OES and UV-visible spectra (Fig. S9). It should be noted that the interlayer cations of vermiculite membranes are exchangeable if the membrane is kept in a solution of higher cation concentration than the one(s) already present at the interlayer sites. For example, we kept a K-V membrane in an AlCl\({}_{3}\) solution of concentration greater than 0.1 mM for at least 24 hours, which converted K-V into Al-V. The ion exchange is further verified using techniques such as XRD, contact angle measurements, and estimates of the ionic conductance of the membrane. We monitored the ionic conductance of a K-V membrane kept in 1 M AlCl\({}_{3}\) solution, where the conductance decreased continuously until it became stable and close to the ionic conductance of 1 M AlCl\({}_{3}\) with Al-V (Fig. S4b).

Figure 4: **Ion transport through salt-stabilized vermiculite membranes.** (a) _I-V_ characteristics of a K-stabilized vermiculite membrane for several concentrations of KCl. Inset: Schematic of our ion transport measurement setup. (b) Ionic conductance of salt-stabilized membranes with their corresponding salt solutions. For example, the K-V membrane is measured with KCl solution. The grey dotted line is the data for the open-hole conductance (membrane dimensions are considered) of NaCl calculated from the bulk conductivity reported in the literature [29]. Dotted lines are guides for the eye, while the solid lines are fitted with \(G\propto C^{\alpha}\), where \(\alpha\) lies between 0.24 and 0.67. (c) _I-V_ curves for 1 M KCl on one side of the K-V membrane and 1 mM to 1 M on the other side, after subtracting the redox potential. (d) The variation of diffusion potential and current (inset) as a function of KCl gradient. The diffusion potential is fitted using the Nernst equation, shown as a solid red line. The diffusion potential and current are plotted after subtracting the contribution of the redox potential at the electrodes.
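The sub-linear \(G(C)\) behavior in Fig. 4b is quantified by fitting \(G\propto C^{\alpha}\), which reduces to a linear fit in log-log space. Below is a minimal sketch with synthetic conductance values standing in for the measured data.

```python
import numpy as np

# Power-law fit G = b * C**alpha, done as a linear fit in log-log space.
# The (C, G) pairs below are synthetic placeholders chosen to be sub-linear,
# standing in for the measured conductance data.
C = np.array([1e-5, 1e-4, 1e-3, 1e-2])   # mol/L
G = np.array([2e-9, 9e-9, 4e-8, 2e-7])   # S, illustrative only

alpha, log_b = np.polyfit(np.log10(C), np.log10(G), 1)
print(f"alpha = {alpha:.2f}")  # the text reports alpha between 0.24 and 0.67
```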
Salt-stabilized vermiculite laminates have different layer charges, which could influence the ion transport. To understand this, we performed electro-diffusion measurements with different concentrations of KCl on opposite sides of the membrane (Fig. 4c). The schematic of the measurement setup is shown in the inset of Fig. 4c. We varied the concentration gradient (\(\Delta\)) from 10 to 1000. In the absence of any applied voltage, a negative current is measured, and its magnitude increases with increase in \(\Delta\) up to 100. Considering the polarity of the electrodes used in the study (inset of Fig. 4c), the observed negative current (inset of Fig. 4d) suggests that these membranes preferentially transport cations over anions. The applied voltage required to nullify this current is designated the zero-current potential, \(V_{o}\). We subtracted the redox potential, \(V_{R}\), from \(V_{o}\) to estimate the net diffusion potential, \(V_{diff}\), of the membrane for a particular concentration gradient. The diffusion potential is observed to follow a logarithmic dependence on the concentration gradient. This allowed us to estimate the selectivity, \(S\), of the membrane using the Nernst equation (1) as \[V_{diff}=S\frac{k_{B}T}{e}\ln\Delta\tag{1}\] where \(S\) is the selectivity of the membrane, \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature (298 K), and \(e\) is the elementary charge. For an ideal cation-selective membrane, \(S=1\), and for a non-selective membrane, \(S=0\). We fitted the variation of the diffusion potential with the KCl concentration gradient (Fig. 4d) using Eqn. 1, providing a selectivity of ~0.67. This suggests that K-stabilized vermiculite membranes are cation-selective, and the diffusion measurements hint at a contribution of surface charge to the ion transport. We measured the other salt-stabilized membranes as well, and most of them are found to be cation-selective, with almost no selectivity for AlCl\({}_{3}\). The selectivity is in the order KCl = NaCl > CaCl\({}_{2}\) > AlCl\({}_{3}\) (Fig. S11a-b). We estimated the relative mobility of cations to anions, \(\mu_{+}/\mu_{-}\), from the measured diffusion potential (Fig. 5a) with the help of the Henderson equation [30] as \[\frac{\mu_{+}}{\mu_{-}}=-\frac{z_{+}}{z_{-}}\frac{\ln\left(\Delta\right)-z_{-}FV_{diff}/RT}{\ln\left(\Delta\right)-z_{+}FV_{diff}/RT}\tag{2}\] where \(z_{+}\) and \(z_{-}\) are the cation and anion valences, respectively; \(F\) is Faraday's constant; \(R\) is the universal gas constant; and \(T\) = 298 K. The mobility ratio (cations to anions) is in the order KCl = NaCl > CaCl\({}_{2}\) > AlCl\({}_{3}\). With the assumption of constant mobility for chloride ions, the enhanced mobility of K\({}^{+}\) is interpreted to be a combined effect of confinement and electrostatic interactions (Fig. 5b). A previous report on graphene-based two-dimensional channels [31] indicates that confined channels of height ~6.6 Å slightly increase the mobility of K\({}^{+}\). In clay minerals, the van der Waals attraction of the layers increases as the layer charge increases; an example is the highly compact structure of mica, where the layer charge is 1 with a 2:1 layered structure. In vermiculite, the layer charge is smaller than 1 due to the inclusion of Ca\({}^{2+}\) and Mg\({}^{2+}\) in place of K\({}^{+}\) in mica. In the case of Al-V membranes, exchanging Al\({}^{3+}\) with Li\({}^{+}\) would reduce the layer charge, explaining the lower selectivity (Fig. S11b).
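Equations (1) and (2) can be applied directly to measured diffusion potentials. The sketch below fits a selectivity \(S\) to synthetic \((\Delta, V_{diff})\) pairs chosen to mimic \(S\approx 0.67\), and evaluates the Henderson mobility ratio; none of the numbers are measured values from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

kB, T, e = 1.380649e-23, 298.0, 1.602176634e-19   # SI units
F, R = 96485.33, 8.314

# Eqn (1): V_diff = S * (kB*T/e) * ln(Delta). The (Delta, V_diff) pairs
# are synthetic stand-ins for the measured diffusion potentials.
delta = np.array([10.0, 100.0, 1000.0])
v_diff = np.array([0.040, 0.079, 0.119])          # volts, illustrative only

def nernst(d, S):
    return S * (kB * T / e) * np.log(d)

S_fit, _ = curve_fit(nernst, delta, v_diff)
print(f"selectivity S ~ {S_fit[0]:.2f}")          # text reports ~0.67 for K-V

# Eqn (2), Henderson: mobility ratio mu+/mu- for a z+ : z- salt.
def mobility_ratio(v, d, z_plus=1, z_minus=-1):
    x = np.log(d)
    return -(z_plus / z_minus) * (x - z_minus * F * v / (R * T)) / (
        x - z_plus * F * v / (R * T))

print(mobility_ratio(0.079, 100.0))               # ~5 for these toy inputs
```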
Exchanging the layer with monovalent ions maximizes the layer charge and hence gives the highest selectivity. In the case of K-V, the layer charge is high, and the van der Waals attraction therefore leads to smaller interlayer spaces, whereas in the case of Al-V, the layer charge is smaller and hence the interlayer space is larger, which agrees with our XRD results. A recent study discusses the possibility of a dynamical change in the laminate structure whenever a cation is exchanged: local delamination removes the existing cation and, subsequently, re-stacking happens, which accommodates the new cation [32]. In the case of monovalent cations, such as K\({}^{+}\) and Na\({}^{+}\), the larger interlayer spacing with Na\({}^{+}\) could be a result of the larger and more stable hydration shell of Na\({}^{+}\) over K\({}^{+}\). Though the diffusion measurements shed light on the layer charge, our many other observations suggest that the incoming species' molecular/hydrated ionic size controls the transport. The pressure-assisted filtration experiments provide insights into the membrane microstructure and its modifications under pressure gradients. We fabricated the membrane by assembling flakes of size ~600 nm, where the flake thickness is ~1.5 nm. Fig. S7d summarizes the water permeance and NaCl rejection efficiencies as a function of pressure gradients from 50 to 550 mbar. When the pressure gradient is low, the water permeance is low; however, the NaCl rejection is high. When we increased the pressure gradient to 550 mbar, there is less rejection of NaCl; however, the water permeance increased. In the absence of a pressure gradient, or at low values, the flake placements are perfect, and there are only entry/exit points for the water, not the salt. However, when we gradually increased the pressure across the membrane, the interlayer spacing of the laminates increased, allowing the entry/exit of water molecules while rejecting most ions/dyes of sizes larger than the interlayer spacing. Since the interlayer space is small at low pressure gradients, the water permeance will also be low. However, a larger interlayer spacing was induced when we increased the pressure gradient to 550 mbar, allowing salt ion permeation and increased water permeance (Fig. S12a-b). Here the interlayer spacing is still sub-nm, much smaller than the sizes of dye molecules; hence, they are rejected. With the increase in thickness, the water permeance is found to decrease. As thickness increases, due to the corrugated path, the effective travel length of water molecules becomes larger, resulting in reduced water permeance. A very recent report on graphene oxide predicts a similar increase in interlayer space under increased differential pressures [33]. The measured water permeance of 5400 L m\({}^{-2}\) h\({}^{-1}\) bar\({}^{-1}\) far exceeds previous reports. There have been some studies on hydrophilic membranes of vermiculite [17], montmorillonite [16], and Ti\({}_{3}\)C\({}_{2}\)T\({}_{x}\) [19]; however, they exhibit moderate water fluxes.

Figure 5: **Mobility of ions through salt-stabilized membranes.** (a) I-V characteristics of various chloride salts through the respective salt-stabilized vermiculite membranes with a concentration gradient of 100 across the membrane. (b) Mobility ratio as a function of increasing cation hydration radius through our membranes, compared with literature values on bulk systems [29]. The error bar is taken from the mobility ratio of the same sample when the concentration gradient is 10 and 100.
We measured the water contact angle of the salt-stabilized membranes and found it to increase in the order Na-V < Ca-V < Al-V. The higher water permeance in the case of Na-V membranes is related to the strong hydrophilicity of thin membranes and suggests the importance of capillary pressure. Further, our salt-stabilized membranes can handle pressures of at least 1 bar. Our vacuum filtration system uses a pressure gradient of 1 bar to fabricate the membranes, and it is observed that external pressure gradients within this limit do not degrade the membrane performance. Even at a thickness of 600 nm, our membranes showed better rejection while maintaining a high flux compared to other membranes. Brilliant blue, one of the largest dye molecules, showed an exceptionally high flux at a small pressure gradient of 900 mbar. We expect that slightly larger pressure gradients would further enhance the flux. The same membrane can also be utilized for desalination applications at low pressures, as it exhibits 95% salt rejection and a water flux higher than or comparable to other membranes [16, 17]. This result can be further improved by using several-micrometer-thick membranes and larger pressure gradients. Overall, our inch-size membranes show great promise and are highly suitable for industrial applications. We tested several samples, and more than 90% of the membranes showed high water flux and good rejection.

## 4 Conclusion

In conclusion, we addressed the water instability of Li-V membranes by intercalating them with various chloride salt solutions. The stabilized membranes have highly confined laminates and tunable interlayer spacing and exhibit steric hindrance. These membranes exhibit high salt and dye rejections with water flow rates exceeding the current state-of-the-art membranes. This is mainly due to the ability to tune the transport parameters, which depend strongly on external pressure gradients, layer charge, and the hydration diameter of the transporting molecules/ions. The ability to fabricate water-stable vermiculite membranes will initiate decades of further research on clay-based membranes.

###### Acknowledgements.

This work was mainly funded by the Science and Engineering Research Board (SERB), Government of India, through grant CRG/2019/002702 and partially supported by MHRD STARS with grant no. MoE-STARS/STARS-1/405. A.R. acknowledges the Sabarmati fellowship from IIT Gandhinagar. A.R. also acknowledges the PMRF fellowship from the Ministry of Education, Government of India. G.K. acknowledges a research fellowship from IIT Gandhinagar. We acknowledge the contribution of the IITGN central instrumentation facility. We thank Kalyan Raidongia and Raj Kumar Gogoi of IIT Guwahati for very useful discussions.

## Data Availability Statement

Data is available from the authors upon reasonable request.
2304.07977
Spectral calculations of 3D RMHD simulations of super-Eddington accretion onto a stellar-mass black hole
We use the Athena++ Monte Carlo (MC) radiation transfer module to post-process simulation snapshots from non-relativistic Athena++ radiation magnetohydrodynamic (RMHD) simulations. These simulations were run using a gray (frequency-integrated) approach but were also restarted and run with a multi-group approach that accounts for Compton scattering with a Kompaneets operator. These simulations produced moderately super-Eddington accretion rates onto a 6.62 $M_\odot$ black hole. Since we only achieve inflow equilibrium out to 20-25 gravitational radii, we focus on the hard X-ray emission. We provide a comparison between the MC and RMHD simulations showing that the treatment of Compton scattering in the gray RMHD simulations underestimates the gas temperature in the regions above and below the accretion disk. In contrast, the restarted multi-group snapshots provide a treatment of the radiation field that is more consistent with the MC calculations, and result in post-processed spectra with harder X-ray emission compared to their gray snapshot counterparts. We characterize these MC post-processed spectra using commonly employed phenomenological models used for spectral fitting. We also attempt to fit our MC spectra directly to observations of the ultraluminous X-ray source (ULX) NGC 1313 X-1, finding best-fit values that are competitive with phenomenological model fits, indicating that first-principles models of super-Eddington accretion may adequately explain the observed hard X-ray spectra in some ULX sources.
Brianna S. Mills, Shane W. Davis, Yan-Fei Jiang, Matthew J. Middleton
2023-04-17T03:42:54Z
http://arxiv.org/abs/2304.07977v1
Spectral calculations of 3D RMHD simulations of super-Eddington accretion onto a stellar-mass black hole

###### Abstract

We use the Athena++ Monte Carlo (MC) radiation transfer module to post-process simulation snapshots from non-relativistic Athena++ radiation magnetohydrodynamic (RMHD) simulations. These simulations were run using a gray (frequency-integrated) approach but were also restarted and run with a multi-group approach that accounts for Compton scattering with a Kompaneets operator. These simulations produced moderately super-Eddington accretion rates onto a 6.62 \(M_{\odot}\) black hole. Since we only achieve inflow equilibrium out to 20-25 gravitational radii, we focus on the hard X-ray emission. We provide a comparison between the MC and RMHD simulations showing that the treatment of Compton scattering in the gray RMHD simulations underestimates the gas temperature in the regions above and below the accretion disk. In contrast, the restarted multi-group snapshots provide a treatment of the radiation field that is more consistent with the MC calculations, and result in post-processed spectra with harder X-ray emission compared to their gray snapshot counterparts. We characterize these MC post-processed spectra using commonly employed phenomenological models used for spectral fitting. We also attempt to fit our MC spectra directly to observations of the ultraluminous X-ray source (ULX) NGC 1313 X-1, finding best-fit values that are competitive with phenomenological model fits, indicating that first-principles models of super-Eddington accretion may adequately explain the observed hard X-ray spectra in some ULX sources.

Brianna S. Mills, Shane W. Davis, Yan-Fei Jiang, Matthew J. Middleton

## 1 Introduction

Ultra-luminous X-ray sources (ULXs) are point-like, off-nuclear extragalactic objects observed to have X-ray luminosities comparable to or in excess of the critical Eddington luminosity, \(L_{\rm X}\gtrsim 10^{39}\) erg/s (assuming isotropic emission for a \(10M_{\odot}\) black hole; see Pinto & Walton, 2023; King et al., 2023 for a review of ULXs). The majority of ULXs are now accepted to be X-ray binary systems with super-Eddington rates of accretion onto a compact object, namely a stellar-mass \(M<100M_{\odot}\) black hole (Poutanen et al., 2007; Middleton et al., 2015) or neutron star (Skinner et al., 1982; Bachetti et al., 2014). Some small fraction of ULXs may yet harbor sub-Eddington accretion rates onto intermediate-mass black holes with \(M\gtrsim 100M_{\odot}\) (IMBHs; Farrell et al., 2009; Mezcua et al., 2013; Earnshaw, 2016; Brightman et al., 2016; Webb et al., 2017; Oskinova et al., 2019). The physical mechanisms which drive super-Eddington accretion are still under investigation and require numerical simulations in order to evaluate existing models of black hole accretion. The classical picture of an optically thick, geometrically thin accretion disk (Shakura & Sunyaev, 1973) is used to model black hole X-ray binaries (BHXBs) and is generally applicable when the accretion rate is sub-Eddington (\(L/L_{\rm Edd}<0.3\)), where the disk geometry (defined by the disc semi-thickness \(H\) and radius \(R\)) remains thin, \(H\ll R\). If ULXs are indeed IMBHs, the spectra are expected to resemble scaled-up versions of BHXB spectra, showing cooler accretion disks as the black hole mass increases (e.g. Miller et al., 2004).
Observations of ULXs typically show a soft, thermal X-ray component and a hard thermal component with a rollover below \(\sim 10\) keV (Gladstone et al., 2009; Bachetti et al., 2014), the latter supporting the interpretation of super-Eddington accretion. Early models debated whether this hard X-ray emission originated from coronal emission from IMBHs (Miller et al., 2004) or Comptonized emission from super-Eddington accretion (Gladstone et al., 2009; Socrates & Davis, 2006). However, classically, one would expect the innermost regions to have a different spectral shape due to optical depth effects and anisotropy (Poutanen et al., 2007). Super-Eddington accretion is expected to deviate from the classical Shakura & Sunyaev (1973) thin disk approximation, as the radiation pressure exceeds gravity. Processes like advection (Abramowicz et al., 1988) and radiatively driven outflows (Shakura & Sunyaev, 1973; Ohsuga & Mineshige, 2011) may reduce the radiative efficiency and result in geometrically thicker flows in the super-Eddington regime. Advection can directly affect the observed spectra (Straub et al., 2011; Kubota & Done, 2019). Strong optically thick winds are also expected to be launched in these systems (and are widely detected in ULXs: Middleton et al., 2014, 2015; Pinto et al., 2016, 2020; Walton et al., 2016; Kosec et al., 2021), which likely shroud the outer accretion disk and can contribute additional low-energy flux for preferential sight lines. Due to the complex nature of describing three-dimensional super-Eddington accretion flows, numerical simulations are a key tool for studying this regime. Several radiation hydrodynamic (RHD; Ohsuga et al., 2005), radiation magnetohydrodynamic (RMHD; Ohsuga & Mineshige, 2011; Jiang et al., 2014), and general relativistic RMHD (GRRMHD; McKinney et al., 2014; Fragile et al., 2014; Sadowski et al., 2015; Sadowski & Narayan, 2016) simulations have been performed to understand the physical mechanisms involved in super-Eddington accretion. In these simulations, the radiation transfer equation is often integrated over frequency (the "gray" approximation) to reduce the computational expense. In many cases, the angle-integrated radiation moments (e.g. radiation flux and/or energy density) are solved for, which usually requires a closure relation (e.g. flux-limited diffusion, Turner & Stone, 2001; Howell & Greenough, 2003; Krumholz et al., 2007; Moens, N. et al., 2022; M1 closure, Levermore, 1984; Gonzalez, M. et al., 2007; Skinner & Ostriker, 2013; Wibking & Krumholz, 2022; or the variable Eddington tensor method, Jiang et al., 2012; Davis et al., 2012; Jiang et al., 2014; Asahina et al., 2020; Menon et al., 2022) to complete the radiation moments. An alternative approach, which is used for the simulations discussed in this work, is the direct solution of the gray radiation transfer equation (Stone et al., 1992; Jiang et al., 2014; Jiang, 2021), which is then coupled to the fluid by computing the radiative cooling/heating and radiation force. There have been significant efforts to simulate global accretion flows in the vicinity of black holes and utilize them to generate synthetic observables to compare with observations. Perhaps the most impactful is the effort by the Event Horizon Telescope to interpret the very long baseline interferometric images of M87* and Sgr A* (Event Horizon Telescope Collaboration et al., 2019, 2022).
In these systems, the flows are relatively optically thin to electron scattering, and the modeling of Compton scattering is not essential to the primary imaging effort. Our current study is focused on more radiatively efficient and optically thick flows where electron scattering opacity dominates. Previous work includes efforts to generate spectra from GRMHD simulations that utilized simple cooling prescriptions to keep the disk thin (Zhu et al., 2012; Schnittman et al., 2013; Kinch et al., 2019, 2021) to study the sub-Eddington or near-Eddington regime, non-relativistic radiation hydrodynamics simulations of super-Eddington accretion (Kawashima et al., 2012; Kitaki et al., 2017), and radiative GRRMHD simulations of the super-Eddington regime (Narayan et al., 2017). Spectral post-processing is commonly performed using Monte Carlo radiation transfer methods, which are useful for modeling the effects of Compton scattering. MC methods such as GRMONTY (Dolence et al., 2009), Pandurata (Schnittman et al., 2013), or RAIKOU (Kawashima et al., 2021) model Compton scattering and include general relativistic effects. The HEROIC code (Narayan et al., 2017) provides similar capabilities, but uses a combination of short and long characteristics instead of MC. These can also be coupled to photoionization calculations to produce predictions for atomic features, such as the Fe K\(\alpha\) line (Kinch et al., 2019). The inclusion of Compton scattering is a key ingredient because it dominates the thermodynamic coupling between the radiation and gas near or above the photosphere (Narayan et al., 2017; Kinch et al., 2020). In this work, we use the MC radiation transfer module in Athena++ to post-process Athena++ RMHD simulation snapshots and aim to describe these results with current black hole accretion models, as well as compare the simulated spectra to data for the ULX NGC 1313 X-1. Although the simulations performed here rely primarily on the non-relativistic gray RMHD module, two recent developments to Athena++ offer potential improvements for future work. The first is a multi-group implementation (Jiang, 2022) that facilitates multi-frequency transfer and better treatment of Compton heating and cooling. The second is a fully general relativistic formalism (White et al., 2023). As we discuss in Section 3.3, we utilize the multi-group method in this work to obtain a more accurate estimate for the temperature distribution in the current simulations. Spectral calculations with the GR implementation will be a focus of future work. The plan of this work is as follows: In Section 2 we discuss the MC and Athena++ methods used in our spectral post-processing analysis. In Section 3 we present the gray and multi-group RMHD spectral analysis results, along with image results and a comparison to phenomenological spectral models and fits to the spectrum of NGC 1313 X-1. We discuss the caveats, implications, and comparison of our results to previous work in Section 4. Finally, we summarize the key points of this work in Section 5.

## 2 Methods

We utilize the Athena++ code (Jiang et al., 2019; Stone et al., 2020; White et al., 2016) in two configurations - using the Athena++ RMHD simulation snapshots of super-Eddington accretion onto a 6.62 \(M_{\odot}\) black hole, and using the Athena++ Monte Carlo radiative transfer module (Davis et al. in prep) to post-process the snapshots. Here we describe both configurations separately, and discuss the methods used for post-processing in the last subsection.
### MC radiation transfer code

The standard Athena++ RMHD simulations utilize gray opacities (frequency-averaged opacities) and thus do not directly provide any spectral information. To extract the frequency information needed to produce the spectra, we utilize the Athena++ Monte Carlo (MC) radiation transfer module (Davis et al., 2009, Davis et al. in prep) to compute the radiation field throughout an Athena++ simulation snapshot. The MC module utilizes the Athena++ code structure and mesh, allowing it to be run concurrently with the simulations. It can also be utilized to read in output simulation snapshots for post-processing, which is how it is used here. Although the module can be used to perform MC transfer on the full three-dimensional refined simulation mesh, we focused here on two-dimensional axisymmetric calculations, where finer/coarser levels are prolongated/restricted to a uniform mesh at an intermediate refinement level. The MC calculation proceeds by creating and then tracking photon samples throughout the mesh. The samples (often referred to as photon packets or superphotons) can be viewed as statistical ensembles of a large number of photons with common properties. The properties of the photons are initialized and evolved using pseudorandom numbers to draw from distributions in positions, photon energies, scattering angles, etc., until they are either absorbed or leave the domain. In this work we model free-free emission and absorption and unpolarized Compton scattering as the primary radiative processes. Each photon sample has a statistical weight corresponding to the number of photons in the packet. We model emission by randomly sampling each zone and assigning a weight corresponding to the volume-integrated free-free emissivity from the sampled cell. We assume a total number of photon samples \(N_{\rm s}\) and \(N_{\rm cell}\) cells in the mesh. If we label cells by index \(i\) and photon samples with index \(j\), the total number of physical photons emitted in cell \(i\) can be written \[N_{i}=\int\frac{j(\nu,T_{i},\rho_{i})}{h\nu}d\Omega d\nu\mathcal{V}_{i}\Delta t_{\rm int}, \tag{1}\] Here, \(\Omega\) is the solid angle, \(\mathcal{V}_{i}\) is the volume of cell \(i\), \(j(\nu,T_{i},\rho_{i})\) is the free-free emissivity as a function of temperature and density within the cell (Rybicki & Lightman, 1979), and \(\Delta t_{\rm int}\) is the (arbitrary) integration time interval. The statistical weights are defined so that \[\sum_{j=1}^{N_{\rm s}}w_{j}=\sum_{i=1}^{N_{\rm cell}}N_{i}=N_{\rm ph}, \tag{2}\] where \(N_{\rm ph}\) is the total number of physical photons emitted within the entire mesh. We can define the probability \(P_{i}\) for a photon sample to be emitted in zone \(i\); since zones are sampled uniformly, \(P_{i}=1/N_{\rm cell}\). Then, the average number of photon samples emitted in cell \(i\) is \(P_{i}N_{\rm s}\), and we have \[w_{i}=\frac{N_{i}N_{\rm cell}}{N_{\rm s}}. \tag{3}\] This procedure yields photon weights that can differ by orders of magnitude. This is often frowned upon in the MC literature because more uniform weighting is generally variance reducing. We have, however, also implemented an equal weighting scheme where the initial cells of photon samples are chosen proportional to their volume-weighted emissivity and found this scheme ultimately results in larger statistical errors in our output spectra per computational second when compared with the scheme used here. This is primarily due to the large scattering optical depths to escape for photons launched in the highest emission cells (Davis et al., in preparation).
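A toy transcription of the weighting scheme in Eqs. (1)-(3) may help make the bookkeeping concrete. This is a sketch, not the module's implementation: the per-cell photon emission rates \(N_{i}\) below are random placeholders rather than free-free rates from a simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each cell is sampled with equal probability P_i = 1/N_cell, and the
# statistical weight carries the cell's photon emission rate N_i (Eq. 3).
n_cell = 1000
n_samples = 50_000
N_i = rng.lognormal(mean=0.0, sigma=3.0, size=n_cell)  # photons per cell (toy)

cells = rng.integers(0, n_cell, size=n_samples)    # uniform cell sampling
weights = N_i[cells] * n_cell / n_samples          # Eq. (3): w = N_i * N_cell / N_s

# Eq. (2) check: the weights sum to the total photon number, on average.
print(weights.sum(), N_i.sum())
```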
Finally, the direction of the photon is randomly sampled from an isotropic distribution, and the energy of the photon is drawn from a log-normal distribution in photon energy. We then further adjust the weight so that binned photons match the free-free distribution in photon frequency. Photon movement is handled in the Eulerian (coordinate) frame, while emission, scattering, and absorption occur in the comoving fluid frame. Photon sample properties are Lorentz boosted between the coordinate and fluid (comoving) frames for these interactions. Photon samples are moved between scattering/absorption events by drawing an exponentially distributed dimensionless path length \(\tau\) to the next absorption/scattering event via \(\tau=-\ln\xi\), where \(\xi\) is a pseudorandom number uniformly distributed in the interval (0,1). This dimensionless path length can be thought of as the optical depth to the next scattering/absorption event, and is computed as a series of steps \(l_{k}\) (enumerated with subscript \(k\)) so that \[\tau=\sum_{k}l_{k}\left(\alpha_{\nu,k}+\sigma_{\nu,k}\right), \tag{4}\] where \(\alpha_{\nu}\) is the absorption extinction coefficient, and \(\sigma_{\nu}\) is the scattering extinction coefficient. The scattering and absorption coefficients are the products of the corresponding opacities and density, which are evaluated in the comoving frame and then boosted to the Eulerian frame. In the scheme used here each step \(k\) represents a movement of the photon sample to the location of the next scattering/absorption event or the nearest cell face, whichever comes first. This continues until the requisite value of \(\tau\) is reached or the photon sample escapes the domain. Photon samples are assumed to travel along straight lines, but we use a spherical mesh, so that computing where the photon sample leaves the current cell requires solving quadratic relations and accounting for possible turning points in \(r\) and \(\theta\) (Davis et al., in preparation). Each interaction of a photon sample with matter results in a combination of absorption and scattering, which is handled by reductions in \(w\). We have \(w^{\prime}=w(1-\epsilon)\), where \(w^{\prime}\) is the new weight after scattering and \[\epsilon=\frac{\alpha_{\nu}}{\alpha_{\nu}+\sigma_{\nu}}. \tag{5}\] If the statistical weight falls below a small threshold value (based on the initial emissivity), the photon is considered absorbed and further evolution is terminated. The outgoing photon energy and direction after Compton scattering follow from procedures described in Pozdnyakov et al. (1983), except that we tabulate the scattering cross section using a method similar to that described in Dolence et al. (2009). When photons escape through the domain boundary, their energies, locations, and angles are tabulated in a photon list output that is then used to generate spectra. The MC calculation also tabulates cell-averaged radiation moments such as the energy density, radiation flux vector, and pressure tensor, as well as user-defined quantities such as the net radiative cooling, average photon energy, and average energy mean opacity in each cell. These are output in standard Athena++ formats, such as HDF5 and VTK.
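The free-path and weight-reduction logic around Eqs. (4)-(5) can be sketched in a few lines. This toy version marches a single sample through a 1D row of cells and applies the interaction at cell granularity, a simplification of the module's exact face-by-face treatment; all cell sizes and extinction coefficients are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(weight, lengths, alpha, sigma, w_min=1e-6):
    """March one photon sample through a 1D row of cells (toy sketch)."""
    tau_target = -np.log(rng.random())            # exponentially distributed path
    tau = 0.0
    for l, a, s in zip(lengths, alpha, sigma):
        tau += l * (a + s)                        # Eq. (4): d(tau) = (alpha+sigma) dl
        if tau >= tau_target:                     # scattering/absorption event
            weight *= s / (a + s)                 # survive with probability-weighted
            if weight < w_min:                    #   albedo, i.e. w' = w (1 - eps)
                return 0.0                        # below threshold: absorbed
            tau_target = tau - np.log(rng.random())  # draw the next free path
    return weight                                 # escaped the row of cells

w = propagate(1.0, lengths=np.full(100, 1e9),     # cm, toy cell sizes
              alpha=np.full(100, 1e-11),          # cm^-1, toy absorption
              sigma=np.full(100, 1e-9))           # cm^-1, toy scattering
print(w)
```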
### Athena++ RMHD simulation snapshots

Athena++ has been rewritten in C++ compared to its predecessor, Athena (Stone et al., 2008). Athena++ now includes adaptive mesh refinement (Stone et al., 2020) and special and general relativistic capabilities (White et al., 2016, 2023). In the current work, however, a pseudo-Newtonian potential is used to mimic the effects of general relativity around a Schwarzschild black hole (Paczynsky and Wiita, 1980). Results from a GRRMHD implementation of Athena++ and subsequent spectra will be reported in future work. We performed a series of global, three-dimensional RMHD simulations for a 6.62 \(M_{\odot}\) black hole accreting at several super-Eddington mass accretion rates, assuming a 10% radiative efficiency so that \(\dot{M}_{\rm Edd}\equiv 10L_{\rm Edd}/c^{2}\). We used the explicit integration RMHD module in Athena++, which uses an algorithm similar to Jiang et al. (2014), but with updates that solve a radiation transfer equation of the form presented in Jiang (2021). The simulation setup for these snapshots is similar to the setup described in Huang et al. (2023), where the ideal MHD equations are coupled with the time-dependent radiation transfer equation (see Jiang et al. 2014, equations 1-4, and Jiang 2021, equations 4-6). A rotating gas torus was initialized in hydrostatic equilibrium and threaded with toroidal magnetic fields. Accretion onto the black hole happens via the magnetorotational instability (Balbus and Hawley, 1991), and the mass accretion rate is varied for each simulation based on the initial magnetic field configuration (see e.g., Huang et al. 2023). The simulations self-consistently form an accretion disk and reach a quasi-steady state for the inner disk. Figure 1 shows the mass accretion rate in terms of \(\dot{M}_{\rm Edd}\) for a 6.62 \(M_{\odot}\) black hole as a function of radius within the inner 25\(r_{\rm g}\), where \(r_{\rm g}=GM/c^{2}\) is the gravitational radius. The mass accretion rates are relatively steady within 25\(r_{\rm g}\) of the black hole, but not at larger radii. The four snapshots and their radially averaged mass accretion rates (over the inner 25\(r_{\rm g}\)) are listed in Table 1. Snapshots ULX4a and ULX4b have nearly the same mass accretion rate (\(\dot{M}\simeq-4\dot{M}_{\rm Edd}\)) and are both from the same simulation run (at different times), thus we named them ULX4a and ULX4b. Snapshot ULX2.5 and Snapshot ULX1.3 are independent simulations with average mass accretion rates \(\dot{M}\simeq-2.5\) and \(-1.3\dot{M}_{\rm Edd}\), respectively.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Snapshot & \(\langle\dot{M}\rangle/\dot{M}_{\rm Edd}\) & \(\theta_{\rm f}\) & \(L_{\rm f}\) (erg s\({}^{-1}\)) & \(\eta_{\rm f}\) \\ \hline ULX4a & -4.15 & 37\({}^{\circ}\) & 1.03e+39 & 2.56\% \\ ULX4b & -3.93 & 37\({}^{\circ}\) & 8.77e+38 & 2.29\% \\ ULX2.5 & -2.53 & 50\({}^{\circ}\) & 2.78e+38 & 1.13\% \\ ULX1.3 & -1.31 & 55\({}^{\circ}\) & 1.51e+38 & 1.18\% \\ \hline ULX4a-MG & -4.02 & 37\({}^{\circ}\) & 1.31e+39 & 3.34\% \\ ULX2.5-MG & -2.53 & 50\({}^{\circ}\) & 4.73e+38 & 1.92\% \\ \hline \end{tabular} Table 1: Athena++ RMHD simulation snapshots. Note. – Athena++ RMHD simulation snapshots of a 6.62\(M_{\odot}\) black hole used in this analysis. All snapshots are azimuthally averaged and limited to the inner 25\(r_{\rm g}\). The first column gives the ratio of the radially averaged mass accretion rate \(\langle\dot{M}\rangle\) to the Eddington mass accretion rate \(\dot{M}_{\rm Edd}\); the negative sign indicates accretion towards the black hole. The second column gives the polar funnel angle \(\theta_{\rm f}\), the opening angle relative to the polar axis representing the approximate boundary between the funnel region and the accretion disk. Photons emerging within this polar funnel angle are collected for spectral post-processing and have corresponding funnel luminosities \(L_{\rm f}\). The last column is the calculated radiative efficiency \(\eta_{\rm f}\) of the funnel region. Snapshots ULX4a and ULX4b were taken from the same simulation run (at different times), whereas Snapshots ULX2.5 and ULX1.3 are independent simulation runs. The snapshots with the suffix “-MG” correspond to the two gray simulations chosen for the multi-group RMHD implementation (Jiang, 2022). \end{table}

Although the Athena++ RMHD calculations have adaptive mesh refinement capabilities, the MC code works most efficiently on a uniform grid. For efficient parallelization, we chose one uniform refinement level for our analysis. We selected an appropriate refinement level such that all snapshot grids were approximately the same size, 256 × 128 × 256 cells in \(r\), \(\theta\), and \(\phi\), respectively. The accretion disk located in the inner 25\(r_{\rm g}\) roughly corresponds to the 80 innermost zones in radius at this level, and covers a range of \(\theta\) from 0 to \(\pi\), and a range of \(\phi\) from 0 to \(2\pi\). Due to the approximately axisymmetric nature of the simulations, we chose to azimuthally average each snapshot for our post-processing analysis. This has little effect on the output spectra, but greatly improves the statistics for cell-averaged quantities, examples of which are presented in Figures 3-6. Figure 2 shows the gas density in Snapshot ULX2.5 for the inner 25\(r_{\rm g}\), where the accretion disk has roughly reached inflow equilibrium, and the small inset plot shows the full simulation grid out to 500\(r_{\rm g}\). The full simulation grid includes the geometrically thick gas torus extending from \(\sim 100r_{\rm g}\) to \(\sim 300r_{\rm g}\). The densities in the funnel regions are several orders of magnitude lower than the densities in the optically thick accretion disk and gas torus. We discuss the implications of the low-density funnel region and the impact of the torus geometry in Section 3.1. The net cooling in the Athena++ RMHD simulations is given by \[\dot{C}=c\rho\left(\kappa_{\rm P}aT_{\rm g}^{4}-\kappa_{E}E_{r}\right)+c\rho\kappa_{\rm es}\frac{4kT_{\rm g}-\langle h\nu\rangle}{m_{e}c^{2}}E_{r}, \tag{6}\] where \(c\) is the speed of light, \(\rho\) is the gas density, \(\kappa_{\rm P}\) is the Planck mean opacity, \(a\) is the radiation constant, \(T_{\rm g}\) is the gas temperature, \(\kappa_{\rm E}\) is the energy mean opacity, \(E_{r}\) is the radiation energy density, \(\kappa_{\rm es}\) is the electron scattering opacity, \(k\) is the Boltzmann constant, \(h\) is the Planck constant, \(\langle h\nu\rangle\) is the average photon energy, and \(m_{e}\) is the electron mass. The first term is the frequency- and angle-integrated free-free emissivity \(\eta_{\rm ff}=c\rho\kappa_{\rm P}aT_{\rm g}^{4}\). The second term is the heating term associated with absorption, and the last term is the net Compton cooling. In the RMHD simulations, the radiation field is assumed to be blackbody so \(\langle h\nu\rangle=4kT_{\rm r}\), where \(T_{\rm r}\) is the radiation temperature \(T_{\rm r}=(E_{\rm r}/a)^{1/4}\). The simulations also assume that \(\kappa_{\rm E}=\kappa_{P}\).
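Equation (6) is simple enough to transcribe directly. The sketch below evaluates the net cooling for one cell in cgs units, using the gray closure \(\langle h\nu\rangle=4kT_{\rm r}\) noted above; the opacities and state variables are illustrative placeholders, not simulation values.

```python
import numpy as np

c = 2.998e10          # cm/s
a_rad = 7.566e-15     # erg cm^-3 K^-4, radiation constant
k_B = 1.381e-16       # erg/K
m_e_c2 = 8.187e-7     # erg, electron rest energy

def net_cooling(rho, T_g, E_r, mean_hnu, kappa_P, kappa_E, kappa_es=0.34):
    """Eq. (6): C_dot in erg cm^-3 s^-1; positive = cooling, negative = heating."""
    free_free = c * rho * (kappa_P * a_rad * T_g**4 - kappa_E * E_r)
    compton = c * rho * kappa_es * (4.0 * k_B * T_g - mean_hnu) / m_e_c2 * E_r
    return free_free + compton

# Gray-RMHD closure: mean photon energy assumed blackbody, <h nu> = 4 k T_r.
E_r = 1e15                          # erg cm^-3, toy value
T_r = (E_r / a_rad) ** 0.25
print(net_cooling(rho=1e-8, T_g=3e8, E_r=E_r,
                  mean_hnu=4.0 * k_B * T_r, kappa_P=0.1, kappa_E=0.1))
```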
These assumptions and their impact on the gas temperature distribution and the spectra that result will be discussed further in Section 3. The gas temperature of the same snapshot as in Figure 2 is shown in Figure 3. Note that the apparent asymmetry of the gas temperature in the funnel regions above and below the disk is due to the randomness in the flow at the time this snapshot was taken. Prior to post-processing, we set a lower limit on the gas temperature of \(10^{6}\) K and an upper limit of \(3\times 10^{8}\) K (except for the multi-group snapshots, for which we set the upper limit to \(10^{9}\) K). The gas temperature is hottest in the funnel region, where it hits the temperature cap of \(3\times 10^{8}\) K, and the gas in the accretion disk peaks at a few \(\times 10^{7}\) K. Although the temperatures in the funnel regions are large, the corresponding gas densities from Figure 2 are small (\(10^{-8}\) g cm\({}^{-3}\)), so the contribution to the emission from the hottest simulation cells is relatively weak. The white contour lines roughly define the effective photosphere boundary between the accretion disk and the funnel region, defined by \(F_{\rm r}/cE_{\rm r}=0.3\), where \(F_{\rm r}\) is the radial component of the radiative flux and \(E_{\rm r}\) is the radiation energy density. This flux ratio is consistent with methods that define the photosphere by integrating to an optical depth \(\tau=1\) surface (Chandrasekhar, 1960; Kinch et al., 2019). The polar angle of this boundary is used to approximate the funnel opening angle \(\theta_{\rm f}\), which is then used to calculate the luminosity, spectra, and images in Section 3.2.

Figure 1: The mass accretion rate \(\dot{M}\) as a function of gravitational radius \(r_{\rm g}\) from the black hole for each Athena++ RMHD simulation snapshot (see Table 1). ULX4a is the dot-dash line, ULX4b is the dashed line, ULX2.5 is the dotted line, and ULX1.3 is the solid line.

Figure 2: Azimuthally averaged gas density \(\rho\) (g cm\({}^{-3}\)) of the Athena++ simulation showing the inner 25 \(r_{\rm g}\), with a small inset plot that shows the gas density out to 500 \(r_{\rm g}\).

### Spectral post-processing

Here we describe the methods used for performing our spectral analysis. Spectra were generated for each azimuthally averaged snapshot, truncating the calculation to only model MC transfer within \(25r_{\rm g}\). The properties of all photon samples leaving the domain at 25 \(r_{\rm g}\) are tabulated in a list, which is then used to generate spectra. This truncation radius was chosen primarily because the outer disk radii are not yet in steady state. In particular, the initial torus is thick, which requires an extremely large radiation pressure. Hence, this torus is not in thermal equilibrium and is rapidly cooling. We then ran the MC code on a copy of each snapshot grid, initializing \(10^{7}\) photons for Snapshots ULX4a and ULX4b, and about \(10^{8}\) photons for the other two snapshots. The difference is due to the larger optical depths and mass accretion rates in ULX4a and ULX4b, which result in a factor of \(\sim 10\) difference in the number of scatterings per photon sample. Recall that the number of scatterings per photon sample is proportional to the square of the optical depth. Increasing the number of photons in the MC calculations greatly improves the counting statistics; however, we found that more than \(10^{7}\) photons for those snapshots became too computationally expensive due to the large scattering optical depths.
Photons that escaped the \(25r_{\rm g}\) simulation domain were collected and distributed into 64 photon energy bins ranging from 0.1 keV to 60 keV, and eight direction-angle bins. By direction angle, we mean the angle \(\theta_{\rm p}\) that the photon momentum vector makes with the polar axis. We use the subscript \({\rm p}\) to distinguish angles related to the photon momentum from those related to spherical polar coordinate angles. For example, \[\theta_{\rm p}=\arccos\left(\frac{{\bf p}\cdot\hat{z}}{|{\bf p}|}\right), \tag{7}\] where \({\bf p}\) is the photon momentum vector. These angle bins are distributed uniformly in \(\cos\theta_{\rm p}\) and integrated over the azimuthal direction angle \(\phi_{\rm p}\). When binning, we do not distinguish between photons leaving above or below the disk. For example, photons with \(\theta_{\rm p}\sim 0\) will be placed in the same bin as photons with \(\theta_{\rm p}\sim\pi\). We select only photons which escape through a "funnel"-like region above and below the disk. Specifically, we only bin photons within a coordinate opening angle of \(\theta_{\rm f}\) from the polar axes, retaining photons leaving the domain at \(\theta<\theta_{\rm f}\) or \(\theta>\pi-\theta_{\rm f}\). This excludes photon samples that leave the domain closer to the midplane. Such photons would almost certainly be further scattered in the optically thick accretion disk if we extended our domain outwards. Hence, we select our funnel opening angle \(\theta_{\rm f}\) to roughly correspond to the location of the disk photosphere at \(25r_{\rm g}\). The approximate values for this funnel opening angle are listed for each snapshot in Table 1. Due to these selections, the resulting spectra are only expected to be useful estimates of the hard X-ray emission, as the softer X-rays will have a significant contribution from regions with \(r>25r_{\rm g}\). We also cannot infer much about the angular distribution of the escaping photons for angles that are more edge-on than \(\theta_{\rm f}\), as such photons would likely interact with an optically thick flow beyond \(r=25r_{\rm g}\). In the case of snapshot ULX2.5 we also perform an MC calculation using the full simulation domain. In this case we collect all photon samples leaving the domain, but find that the spectrum of the escaping radiation is dominated by contributions from the torus. Due to large optical depths in the outer torus, the calculation is computationally expensive and run with fewer photons, yielding a lower signal-to-noise spectrum. For these reasons, we do not report spectra from these runs, but we do use the cell-averaged radiation outputs for comparison with the truncated runs described above.
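The binning and funnel selection described above amount to a cut on the coordinate angle \(\theta\) followed by a 2D histogram in photon energy and \(|\cos\theta_{\rm p}|\). Below is a minimal sketch with randomly generated stand-ins for the escaping-photon list.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy escaping-photon list: momentum vectors, escape coordinate angles,
# and photon energies are all random placeholders.
n = 100_000
p = rng.normal(size=(n, 3))
theta_p = np.arccos(p[:, 2] / np.linalg.norm(p, axis=1))       # Eq. (7)
theta = rng.uniform(0.0, np.pi, size=n)                        # escape angle
energy = rng.lognormal(0.5, 1.0, size=n)                       # keV

theta_f = np.radians(50.0)                                     # ULX2.5 funnel
in_funnel = (theta < theta_f) | (theta > np.pi - theta_f)

mu = np.abs(np.cos(theta_p[in_funnel]))        # fold top/bottom funnels together
angle_bins = np.linspace(0.0, 1.0, 9)          # 8 bins uniform in cos(theta_p)
energy_bins = np.logspace(np.log10(0.1), np.log10(60.0), 65)   # 64 energy bins

spectrum, _, _ = np.histogram2d(energy[in_funnel], mu,
                                bins=[energy_bins, angle_bins])
print(spectrum.shape)   # (64, 8)
```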
## 3 Results

### Comparing Athena++ with Monte Carlo

Figure 3: Gas temperature \({\rm T_{g}}\) (K) shown for Snapshot ULX2.5. A white contour line corresponds to \(F_{\rm r}/cE_{\rm r}=0.3\), which is roughly equivalent to the effective photosphere boundary and defines the polar funnel angle \(\theta_{\rm f}=50^{\circ}\) from the polar axis. The apparent anisotropy of the gas temperature in the funnel regions above and below the disk is a result of this particular simulation taken at this moment in time. The temperatures were capped at a maximum of \(3\times 10^{8}\) K and a minimum of \(10^{6}\) K.

We first compare cell-averaged quantities. Figure 4 shows a comparison of \(E_{\rm r}\) computed with MC to the azimuthally-averaged \(E_{\rm r}\) from the Athena++ simulation snapshot. This figure is for Snapshot ULX2.5, but the result is representative of all four snapshots in this analysis. The RMHD simulation result is plotted in the left panel, and two MC calculations are plotted in the middle and right panels, respectively. The middle panel shows the results of an MC calculation using the full simulation domain out to \(\sim 500r_{\rm g}\), whereas the right panel shows the MC calculation when the grid is truncated at \(25r_{\rm g}\). The two MC calculations show precise agreement in the accretion disk, where the radiation field is nearly in radiative equilibrium with the gas. They also agree reasonably well in the funnel regions, deviating by only a small factor near the outer edge of the truncated domain. This suggests that the \(E_{\rm r}\) in the inner \(25r_{\rm g}\) is dominated by the locally emitted radiation field, since the truncated calculations have no incoming photons on the boundary. Therefore, the radiation from the cooling torus, which dominates the overall emission in the full domain calculation, is not providing a significant contribution in the inner disk region. Our comparison suggests that radiation outside the truncated domain contributes \(\lesssim 30\%\) near 25\(r_{\rm g}\), and \(\lesssim 15\%\) near the photosphere boundary of the disk. Note that at the very edge of the truncation boundary, \(E_{\rm r}\) is slightly lower compared to the radiation energy density in the full domain calculation, as the truncated calculation assumes no incoming radiation flux. The more noticeable streaks of high-\(E_{\rm r}\) noise in the funnel regions of the MC full domain calculation are attributed to the factor of 10 fewer photons used to compute the full grid, resulting in a larger statistical variance. Comparing the gray RMHD module \(E_{\rm r}\) to the MC calculations in Figure 4, they also appear to agree within a factor of order unity in the accretion disk midplane, but start to deviate more significantly as one transitions into the funnel region. In the funnel, this deviation is as much as a factor of 10. The MC calculations find a significantly lower \(E_{\rm r}\) in the funnel region. This mismatch is even more evident in Figure 5, which shows the ratio of the two calculated energy quantities: the mean photon energy \(\langle h\nu\rangle\) calculated by the truncated MC calculation, and the mean photon energy \(4kT_{\rm r}\) assumed in the Athena++ RMHD module. The dark regions where the ratio is of order unity show that the MC and Athena++ generally agree in the accretion disk, but deviate in the funnel region above and below the disk. In these regions, the MC calculation finds that the average photon energy is at least three times higher than assumed in the RMHD run. The assumption that the radiation field is approximately blackbody works well for the optically thick accretion disk regions, but is inadequate in the optically thin funnel regions.

Figure 4: Azimuthally-averaged radiation energy density of Snapshot ULX2.5 for the Athena++ RMHD simulation (left panel), the MC calculation (middle panel), and the MC calculation with the simulation grid truncated at \(r=25r_{\rm g}\) (right panel). In the left and middle panels, only the inner \(25r_{\rm g}\) are plotted here for comparison, but the full simulation grids extend out to 500\(r_{\rm g}\).

Figure 5: Ratio of the mean photon radiation energy \(\langle h\nu\rangle\) calculated in the Monte Carlo code and the radiation energy \(4kT_{\rm r}\) in the RMHD simulation for Snapshot ULX2.5, where \(T_{\rm r}\) is the radiation temperature (assuming the blackbody approximation). The streaks in the funnel region are artifacts of low photon statistics in the Monte Carlo calculation.

In Figure 6, we compare the resulting cooling computed by the Athena++ RMHD simulation (left panel) to the same term evaluated by the MC calculation (right panel) for the same snapshot. The net cooling is calculated using Equation 6, where positive values indicate cooling and negative values indicate heating. Since the Compton cooling is the dominant term in equation (6) within the funnel region, this comparison is strongly dependent on the degree to which \(\langle h\nu\rangle\) differs from \(4kT_{\rm r}\) and the ratio of \(E_{\rm r}\) in the MC calculations relative to RMHD (see Figures 4 and 5). We find that the cooling calculated by the MC code deviates significantly from that of the RMHD simulation, particularly in the funnel regions, where the MC code shows significantly more heating and less cooling. In the accretion disk, the MC code provides slightly less cooling than the RMHD simulation does. The amplitude of the cooling is large in this region because \(E_{\rm r}\) is large. Even though the disk is optically thick, it is hot enough that the Compton term dominates over free-free emission and absorption in both the RMHD and MC calculations. Since \(\langle h\nu\rangle\simeq 4kT_{\rm g}\) to within a few percent, even small statistical noise in the MC calculation will cause either a large net heating or cooling term here. Hence, most of the fluctuation seen in the MC calculation in the disk is due to noise in the MC calculation. These results suggest that if the RMHD simulations had a better estimate for \(\langle h\nu\rangle\), it would be higher than \(4kT_{\rm r}\). In an approximate steady state with the Compton cooling term dominating in the funnel region, one expects \(\langle h\nu\rangle\simeq 4kT_{\rm g}\). By underestimating \(\langle h\nu\rangle\), the RMHD simulations tend to underestimate \(T_{\rm g}\) in the optically thin regions above the disk and near the photosphere. This underestimate also tends to increase \(T_{\rm r}\) to better balance \(T_{\rm g}\), causing the RMHD simulations to overestimate \(E_{\rm r}\), consistent with our findings above. This also means that spectra computed from these snapshots will have lower average photon energies than one might obtain in a simulation with more self-consistent thermodynamics, which would yield higher \(T_{\rm g}\) and harder X-ray spectra. We explore the implications of this in Section 3.3.

### Post-processed spectra and Compton Cooling from Gray RMHD simulations

We present post-processed X-ray spectra for the four gray RMHD snapshots in Figure 7. For the lower \(\dot{M}\) snapshots (ULX1.3 and ULX2.5), the spectral peaks are around \(5\) keV, whereas the higher \(\dot{M}\) snapshots (ULX4a and ULX4b) have peaks shifted slightly higher, to around \(7\) keV. The hard X-ray tails appear to follow power laws, which we characterize with XSPEC (Arnaud, 1996) model fits in Section 3.4. Due to the truncation of the simulation grids at \(25r_{\rm g}\) prior to post-processing, the softer X-ray emission that should be coming from larger radii is largely absent in these spectra, so only the hard X-rays are self-consistently modeled. The frequency-integrated luminosities for each spectrum are tabulated in Table 1.
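For concreteness, a tabulated photon list can be reduced to a \(\nu L_{\nu}\)-style spectrum by summing weight \(\times\) energy in logarithmic energy bins. This is a plausible reduction consistent with the description above, not the module's actual output routine; all inputs are toy placeholders.

```python
import numpy as np

keV = 1.602e-9          # erg per keV
dt_int = 1.0            # s, arbitrary integration interval

def nu_L_nu(energies_keV, weights, n_bins=64, lo=0.1, hi=60.0):
    """Approximate nu*L_nu: sum of (weight * photon energy) per log-energy bin."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_bins + 1)
    e_weighted, _ = np.histogram(energies_keV, bins=edges,
                                 weights=weights * energies_keV * keV)
    d_ln = np.diff(np.log(edges))
    centers = np.sqrt(edges[:-1] * edges[1:])
    return centers, e_weighted / (d_ln * dt_int)   # erg/s per unit ln(E)

rng = np.random.default_rng(3)
E = rng.lognormal(np.log(5.0), 0.6, size=200_000)  # toy: peak near ~5 keV
w = np.full_like(E, 1e35)                          # toy statistical weights
centers, spec = nu_L_nu(E, w)
print(centers[np.argmax(spec)])                    # spectral peak (keV)
```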
We label these funnel luminosities \(L_{\rm f}\) to emphasize that we only tabulate the contributions from photons leaving the domain at coordinate \(\theta\) within an angle \(\theta_{\rm f}\) of the polar axes. Note that these are mostly hard X-ray luminosities due to the missing soft emission from the outer disk. The contour lines in Figure 3 approximate the funnel opening angle for ULX2.5 (\(\theta_{\rm f}=50^{\circ}\)), which we show as a representative snapshot. The \(\theta_{\rm f}\) and corresponding funnel luminosities \(L_{\rm f}\) are listed for each snapshot in Table 1. Photons emerging closer to the disk midplane than \(\theta_{\rm f}\) are excluded because their escape from the truncated domain at \(r=25r_{\rm g}\) is largely artificial. If we had instead extended our MC calculation domain outward in radius, these photons would likely experience additional scattering and absorption in the optically thick flow before escaping.

Figure 6: Comparison of the net cooling of Snapshot ULX2.5 in the Athena++ RMHD simulation (left panel) and the Monte Carlo \(25r_{\rm g}\) calculation (right panel). The snapshot has been azimuthally averaged in both cases. The net cooling is given by Equation 6, where positive values signify net cooling and negative values imply net heating. The black cells extending into the photosphere in the RMHD calculation are artifacts of this particular moment in the simulation.

Slight variations in \(\theta_{\rm f}\) can have a modest effect on the funnel luminosity and the resulting spectral shape. For example, in the case of ULX2.5 the funnel luminosity varied by less than \(17\%\) when varying \(\theta_{\rm f}\) by \(\pm 10^{\circ}\). Choosing a narrower funnel angle (\(\theta_{\rm f}=40^{\circ}\)) gave a luminosity of \(L_{\rm f}=2.04\times 10^{38}\) erg/s, whereas choosing a wider funnel (\(\theta_{\rm f}=60^{\circ}\)) gave a slightly higher luminosity of \(L_{\rm f}=3.36\times 10^{38}\) erg/s. Increasing \(\theta_{\rm f}\), however, results in a slight softening of the spectrum, as there is an increase in the flux of photons escaping below what would be the photosphere in a more extended domain. These photons tend to be softer because they are emitted from the cooler regions of the disk. We found this to be true for all snapshot spectra in this analysis. The radiative efficiency calculated for Snapshot ULX2.5 is \(\eta_{\rm f}=1.51\%\) for a nominal mass accretion rate of \(\sim 2.53\dot{M}_{\rm Edd}\).1 We report the calculated \(\eta_{\rm f}\) for each snapshot in Table 1. Generally for super-Eddington accretion, it is expected that the radiative efficiency will be lower than the \(\sim 5-10\%\) inferred for thin disks, decreasing as the accretion rate increases. We do generally find lower efficiencies, but the results are not completely consistent with expectations. Comparing the efficiencies for snapshots ULX2.5 and ULX1.3, which have similar \(\theta_{\rm f}\), we infer a slightly lower efficiency as the accretion rate increases. Snapshots ULX4a and ULX4b, however, have higher \(\dot{M}\) than the other snapshots, but also show a higher \(\eta_{\rm f}\). It is possible that this deviation from the expected trend is a result of our truncation of the calculation at \(r=25r_{\rm g}\) and merits more consideration in future work exploring a wider range of \(\dot{M}\). Footnote 1: We define \(\dot{M}_{\rm Edd}\) assuming 10% efficiency, but \(\dot{M}\) itself is independent of our assumed efficiency.
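As a concrete illustration of how \(L_{\rm f}\) and \(\eta_{\rm f}\) are tabulated, here is a minimal sketch, assuming that escaping MC photon packets carry energy-per-unit-time weights and that the efficiency follows the usual definition \(\eta=L/(\dot{M}c^{2})\); the variable names are hypothetical:

```python
import numpy as np

c = 2.998e10  # speed of light (cm/s)

def funnel_luminosity(weights, cos_theta, theta_f_deg):
    """Sum the energy-per-unit-time weights (erg/s) of escaping packets
    that leave within theta_f of either polar axis."""
    mask = np.abs(cos_theta) >= np.cos(np.radians(theta_f_deg))
    return weights[mask].sum()

def radiative_efficiency(L_f, mdot_g_per_s):
    # eta_f = L_f / (Mdot c^2), with Mdot in g/s
    return L_f / (mdot_g_per_s * c**2)
```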
We also examine the angular distribution of the emission, which we model as the flux fraction (ratio of the specific intensity \(I\) to the flux \(F\)) for each spectrum in Figure 8. For observations, this should roughly correspond to the inclination viewing angle dependence with respect to the polar axis. A face-on view of the emission corresponds to \(\cos\theta_{\rm p}=1\), and an edge-on view corresponds to \(\cos\theta_{\rm p}=0\). Snapshot ULX1.3 is shown as the solid black line, ULX2.5 is the dashed blue line, ULX4b is the dotted green line, and ULX4a is the dash-dot pink line. The flux fraction has been integrated over all frequencies \(\nu\) for improved statistics, so the resulting distribution is most representative of the angular distribution near the spectral peak. Although we show the full distribution for \(\cos\theta_{\rm p}\) ranging from 0 to 1, we emphasize that \(\cos\theta_{\rm f}\) ranges from 0.57 for ULX1.3 to 0.8 for snapshots ULX4a and ULX4b. Hence, only the bins with \(\cos\theta_{\rm p}\) greater than these values are likely to be well-characterized. Over this limited range, the angular distributions are relatively flat, but notably do not peak at the most face-on inclination bin. This is contrary to standard expectations, where a face-on view provides the largest projected area and, thus, the largest flux. As \(\theta_{\rm p}\) approaches the edge-on view, the intensity declines by factors of several. However, we emphasize that the intensity distribution at these angles will undoubtedly be impacted by the extension of the optically thick disk outside of the calculation domain. For example, the slight rise in the most edge-on bin is almost certainly a result of our artificial truncation of the simulation domain. Hence, our current results cannot provide reliable predictions about geometric beaming factors. To better interpret these results, we show a set of reconstructed images in Figure 10, which shows the frequency-integrated intensity from the funnel region at different inclination angles, \(\theta_{\rm p}\sim 49^{\circ}\) (left column; funnel edge view) and \(\theta_{\rm p}=0^{\circ}\) (right column; face-on view), for two snapshots: the gray RMHD snapshot ULX2.5 (bottom row), and the multi-group RMHD snapshot ULX2.5-MG (top row). We discuss the latter snapshot in detail in the next section. The corresponding opening angle for both snapshots is \(\theta_{\rm f}=50^{\circ}\). Photons escaping the funnel were extrapolated out to a distance of \(\sim 250{,}000~{}r_{\rm g}\) to form these images.

Figure 7: Monte Carlo post-processed X-ray spectra from the gray RMHD simulation snapshots. From top to bottom: ULX4a (magenta dash-dot line), ULX4b (green dotted line), ULX2.5 (blue dashed line), and ULX1.3 (black solid line). Snapshots ULX4a and ULX4b were taken from the same simulation run, while ULX2.5 and ULX1.3 are both from independent simulations. Note that these spectra only include the inner \(25r_{\rm g}\) emission escaping out through a polar funnel angle \(\theta_{\rm f}\) specified in Table 1 for each snapshot.

In the face-on case, we see a deficit of photons near the polar axis, along the line of sight to the black hole. In this region the densities are so low that relatively few photons are scattered or emitted toward the observer. Near the edge of the funnel, the intensity of the emission appears to brighten compared to the face-on inclination.
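A minimal sketch of the flux-fraction construction used above, assuming escaping packet weights in erg/s and folding the two hemispheres together (the binning details here are our own assumption):

```python
import numpy as np

def flux_fraction(weights, cos_theta, n_bins=20):
    """Bin escaping packet energies in |cos(theta)|, convert to intensity
    by dividing by the solid angle of each bin (both hemispheres), and
    normalize by the isotropic-equivalent flux F = L_tot / (4 pi)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    binned, _ = np.histogram(np.abs(cos_theta), bins=edges, weights=weights)
    d_omega = 2 * (2 * np.pi * np.diff(edges))  # factor 2: folded hemispheres
    intensity = binned / d_omega
    f_iso = weights.sum() / (4 * np.pi)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, intensity / f_iso
```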
This brightening is consistent with modest amounts of relativistic beaming in the mildly relativistic outflowing gas. This beaming is largest at these moderate inclinations, where both the line-of-sight outflow velocities and scattering optical depths are large.

Figure 8: Flux fractions of the specific intensity \(I\) to the isotropic flux \(F\) as a function of inclination angle in terms of \(\cos\theta\) for Snapshots ULX4a (pink dash-dotted line), ULX4b (green dotted line), ULX2.5 (blue dashed line), and ULX1.3 (black solid line). Note that each snapshot spectrum was generated using only photons which escape through a funnel opening of polar angle \(\theta_{\rm f}\) specified in Table 1. Face-on viewing corresponds to \(\cos\theta_{\rm p}=1\) and edge-on viewing corresponds to \(\cos\theta_{\rm p}=0\).

### A multi-group RMHD approach

As discussed in Section 3.1, the \(\langle h\nu\rangle=4kT_{\rm r}\) assumption in the gray RMHD simulations likely results in gas temperatures being underestimated in the regions above the optically thick disk. This, in turn, means that the MC spectra we compute are probably softer than they should be if the temperatures were computed with a more self-consistent treatment of Compton scattering. In an effort to recompute the gas temperature, we first tried to use the MC code to calculate the net cooling everywhere in the simulation and balanced this with the dissipation from the RMHD snapshots. We found, however, that the recomputed temperatures in the funnel had too large a variance due to the limited photon statistics, and did not consistently converge after several iterations. Instead, we utilized the multi-group radiation module described in Jiang (2022), which extends the gray radiation scheme in Jiang (2021) to include frequency dependence and treats Compton scattering using a Kompaneets-like approximation for the electron scattering source term. We used 20 logarithmically distributed frequency groups to cover the frequency space over three orders of magnitude, which increased the computational cost by a similar factor. Hence, a full three-dimensional simulation with this method would be extremely computationally expensive. Here, we instead begin the multi-group simulation by assuming the initial spectrum to be blackbody, and thus restart the gray simulation and run for a time just long enough for the gas above the disk to reach a new temperature equilibrium. This makes the computational expense feasible for this study because the thermal timescale in the funnel region is very short. We performed the multi-group procedure for two of the four snapshots, ULX2.5 and ULX4a, using the restart files from the gray RMHD simulations and running them with 20 frequency groups. We label the new snapshots from these multi-group runs ULX2.5-MG and ULX4a-MG. These snapshots were computed at approximately the same time as their gray counterparts, and thus have the same average mass accretion rates (see Table 1), and the density distributions are quite similar. But, as expected from the MC calculations of cooling rates in the gray snapshots, these new runs find larger gas temperatures in the regions near or above the photosphere of the accretion flow, as shown in Figure 9 (compared to its gray counterpart temperature in Figure 3). We post-process these snapshots following the same procedures as we did for the gray snapshots. We do not show the angular distributions of the emitted spectra for these multi-group snapshot calculations because they are rather similar to their counterparts shown in Figure 8.
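As a sketch of the frequency discretization described above (the actual group boundaries used in the module are not reproduced in the text, so the energy range below is an assumption):

```python
import numpy as np

h = 6.626e-27    # Planck constant (erg s)
keV = 1.602e-9   # 1 keV in erg

def frequency_groups(nu_min, nu_max, n_groups=20):
    """Boundaries of n_groups logarithmically spaced frequency groups."""
    return np.logspace(np.log10(nu_min), np.log10(nu_max), n_groups + 1)

# e.g. three decades of photon energy, 0.1-100 keV, matching the text's
# "20 logarithmically distributed frequency groups"
edges = frequency_groups(0.1 * keV / h, 100 * keV / h)
```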
Figure 9: The same as Figure 3, except for the multi-group snapshot ULX2.5-MG, with a temperature upper limit of \(T_{\rm g}=10^{9}\) K, although the colorbar in this plot goes to \(3\times 10^{8}\) K for comparison.

We show, however, a second set of reconstructed images in the top row of Figure 10 for ULX2.5-MG. Compared to its gray counterpart in the bottom row, the overall intensities in the multi-group approach are larger due to the larger temperatures, but the mild relativistic beaming again enhances the intensities for off-axis viewing angles relative to those of the most face-on image. Figure 11 shows the MC post-processed spectra from the multi-group snapshots compared to their corresponding gray snapshot counterparts. The multi-group approach leads to harder spectra due to the larger gas temperatures. This is seen as both a shift in the spectral peak and a somewhat flatter power-law dependence at higher energies. The effect is larger for ULX2.5-MG than ULX4a-MG. The overall luminosities of the funnel for the multi-group spectra are also larger, with \(L_{\rm f}=4.73\times 10^{38}\ {\rm erg\ s^{-1}}\) for ULX2.5-MG, and \(L_{\rm f}=1.31\times 10^{39}\ {\rm erg\ s^{-1}}\) for ULX4a-MG. There is also a commensurate increase in the radiation efficiencies, since the accretion rates were essentially unchanged. Figure 12 shows a comparison of the radiation energy density for ULX2.5-MG from the multi-group RMHD simulation (left panel) and the MC calculated radiation energy density (right panel), analogous to the comparison for the gray snapshot ULX2.5 in the left and right panels of Figure 4.

Figure 11: MC post-processed spectra from the gray RMHD simulation snapshots (ULX2.5 and ULX4a), shown as the lighter blue and purple dotted lines, respectively, and the MC spectra from the multi-group RMHD implementation, shown as the corresponding darker solid lines. The spectra for ULX2.5 and ULX2.5-MG were computed for a funnel region of \(\theta_{\rm f}=50^{\circ}\), while ULX4a and ULX4a-MG were computed for \(\theta_{\rm f}=37^{\circ}\).

Figure 10: A series of frequency-integrated images showing the emergent radiation intensity for snapshots ULX2.5-MG (top row) and ULX2.5 (bottom row) for two inclination viewing angles. The left column shows a viewpoint from an inclination of \(\theta_{\rm p}\sim 49^{\circ}\), which is at the edge of the funnel for these snapshots (\(\theta_{\rm f}=50^{\circ}\)). The right column views the face-on inclination (\(\theta_{\rm p}=0^{\circ}\)). The photons leaving the simulation domain at \(25\ r_{\rm g}\) were extrapolated out to a distance of 250,000 \(r_{\rm g}\) from the black hole.

As expected, the \(E_{\rm r}\) in the multi-group approach is lower compared to its gray counterpart, and the comparison with MC for the multi-group approach is in closer agreement, although not exact. Exact agreement is not necessarily expected, as the Kompaneets treatment in the RMHD module differs slightly from the MC treatment. It is also possible that the (computationally expensive) multi-group calculation has not yet reached full equilibrium.

### Simulated spectra in comparison with phenomenological models

Here we quantitatively characterize the post-processed spectra by utilizing X-ray spectral fitting models commonly used to describe observations of black hole sources.
The motivation here is to get a sense of how combinations of phenomenological models describe the hard X-ray emission, and to quantitatively compare the post-processed gray RMHD spectra with the multi-group RMHD spectra. Although we utilize spectral fitting methodology as a tool to compare our simulations to other models, we emphasize that these are not fits to data, and we make choices in accordance with these considerations. We use the X-ray spectral fitting package XSPEC (Arnaud, 1996), version 6.26.1, to explore a few different model combinations. We note that the two-component (soft+hard) phenomenological model combinations we use here are typically used to fit BHXB spectra, whilst ULX spectra can also be described by two-component models (as shown using variability studies: Middleton et al., 2015), where the components refer to regions in the super-Eddington disk, modified by opacity in the wind and anisotropy (Poutanen et al., 2007). In addition, ULX spectra sometimes require a third component at higher energies from a pulsing component (an accretion column: Brightman et al., 2016; Walton et al., 2018), which has led to speculation that a generic hard excess compared to thermal models could indicate the presence of a highly magnetised neutron star (Pintore et al., 2017; Walton et al., 2018). By comparison, our spectra only correspond to the innermost regions and so miss a large portion of the soft X-ray emission from outer radii; we therefore use only one soft X-ray and one hard X-ray component to describe our simulated spectra, and focus mainly on the hard X-ray part of the spectra. Since we only seek to characterize our simulated spectra, we do not include any absorption components that are typically used to account for the interstellar medium along the line of sight. The simulated spectra were transcribed into table models containing energies and fluxes that could be loaded into XSPEC. For each simulated model, the energy range was limited to \(3-50\) keV. We used the fakeit none command to generate an artificial "dataset" for each simulated table model. For the required response file during this process, we input a _NuSTAR_ FPMA detector response file provided by one of the Gúrpide et al. (2021) observations of NGC 1313 X-1 (see following subsection). All artificial datasets were generated assuming a 100 ks exposure time. The systematic error was set to 5%, a large fraction compared to the error from the counting statistics. The inclusion of a large systematic error is chosen so that this fitting procedure gives a reasonable characterization of the hard X-ray tails in our synthetic spectra. This is, of course, different from standard fitting procedures applied to data, where bins with more counts generally have higher signal-to-noise and are thus weighted more heavily in the fit. The inclusion of a large systematic error results in the bins in the hard X-ray tail being treated on a more equal footing with those near the peak. If not included, the best-fit spectral slopes are notably flatter than our synthetic spectra in the hard X-ray tail. This is due to relatively small changes in the fit near the peak, which drive larger changes to \(\chi^{2}\) than the large deviations in the X-ray tail.

Figure 12: Comparison of the radiation energy density \(E_{\rm r}\) for ULX2.5-MG from the multi-group RMHD snapshot (left panel) and the same quantity calculated by the Monte Carlo module (right panel). In both cases, the simulation has been azimuthally averaged.

The procedure employed here, however, provides a reasonable match to both the continuum near the peak and in the X-ray tail.
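A hedged PyXspec sketch of this fakeit workflow (file names are hypothetical, and the exact call signatures should be checked against the PyXspec documentation):

```python
from xspec import AllData, AllModels, FakeitSettings, Model

# Load the simulated spectrum as an additive table model
# (file name hypothetical).
Model("atable{ulx2p5_mg_table.fits}")

# Generate a 100 ks artificial dataset through a NuSTAR FPMA response.
settings = FakeitSettings(response="nustar_fpma.rmf", arf="nustar_fpma.arf",
                          exposure="100000", fileName="fake_ulx2p5_mg.pha")
AllData.fakeit(1, settings)

# Restrict to the 3-50 keV band and apply the 5% systematic error
# (mirrors XSPEC's `systematic` command; attribute name may vary by version).
AllData.ignore("**-3.0 50.0-**")
AllModels.systematic = 0.05
```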
After the artificial datasets were generated, we chose a sample of model combinations, listed in Table 2 along with the corresponding fit parameters. The model bbody fits a blackbody spectrum with two parameters: a temperature \(kT\) (keV) and a normalization, the latter given by \(N_{\rm BB}=L_{39}/D_{10}^{2}\), where \(L_{39}\) is the source luminosity in units of \(10^{39}\) erg s\(^{-1}\) and \(D_{10}\) is the distance to the source in units of 10 kpc. Similarly, the diskbb model is a multi-temperature blackbody accretion disk (without a colour temperature correction factor) and has two free parameters: the inner disk temperature \(T_{\rm in}\) (K) and a normalization parameter defined as \(N_{\rm DBB}=(R_{\rm in}/D_{10})^{2}\cos\theta\), where \(R_{\rm in}\) is the apparent inner disk radius in km, \(D_{10}\) is the distance to the source in units of 10 kpc, and \(\theta\) is the disk inclination angle at which \(\theta=0\) is face-on (Mitsuda et al., 1984). In addition to the accretion disk models, a hard X-ray component was added to characterize the hard X-ray flux. We chose a power-law model pow, which has two free parameters: the power-law index \(\Gamma\) (such that the flux goes as \(E^{-\Gamma}\)) and a normalization parameter. The other model we chose to characterize the hard X-ray spectrum is the X-ray Comptonization model simpl (Steiner et al., 2009). simpl is a convolution model that approximately Compton up-scatters a fraction \(f_{\rm sc}\) of seed photons from the bbody or diskbb models. These up-scattered photons form a hard X-ray power-law tail with index \(\Gamma\). We assume that photons will only be up-scattered, leaving two free fit parameters, similar to pow. For the model combinations that include simpl, we set an upper limit on the scattering fraction of \(f_{\rm sc}=60\%\), as this parameter was not well constrained at higher \(f_{\rm sc}\). We report the fit results in Table 2. We only report a few significant digits without the errors, as these are essentially model fits to simulated data, in contrast to model fits to observed data. Any errors computed here would strongly depend on the chosen systematic and stochastic errors, and are not physically meaningful. We do not report goodness-of-fit for similar reasons. For the gray snapshots, we find that a rather steep power-law index of \(\Gamma>4\) is required for all fits when simpl is used. The index is still steep, but somewhat flatter, when pow is used. With the combination of diskbb+pow, the pow model component dominated the fit and could not adequately describe the softer part of the spectrum. This is partly because we are missing a large portion of the soft X-rays from the outer disk in the simulations, but is also related to the lack of an absorption model to attenuate the power-law emission at softer energies. In contrast, the multi-group snapshots were characterized by much flatter hard X-ray tails (e.g. \(\Gamma\lesssim 4\)) than their gray counterparts, particularly for ULX2.5-MG (\(\Gamma\lesssim 3\)). These flatter indices are in better agreement with most observed spectra (Pintore et al., 2017; Dage et al., 2021; Gúrpide et al., 2021). We also generally find high scattering fractions (e.g. \(f_{\rm sc}\sim 60\%\)) for the multi-group spectra compared to their gray counterparts (e.g. \(f_{\rm sc}\sim 40-50\%\)), particularly for the diskbb+simpl model fits.
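To make the role of \(f_{\rm sc}\) concrete, here is a toy redistribution in the spirit of simpl (our own sketch, not the actual simpl kernel): a fraction \(f_{\rm sc}\) of the seed photon counts is moved into an up-scattered \(E^{-\Gamma}\) tail.

```python
import numpy as np

def toy_simpl(energies, seed_counts, gamma, f_sc):
    """Toy simpl-like convolution: keep (1 - f_sc) of the seed photons
    and redistribute the rest into an E^-gamma power law extending
    upward from each seed energy (up-scattering only), conserving
    photon number."""
    out = (1.0 - f_sc) * seed_counts.copy()
    for i, e0 in enumerate(energies):
        tail = energies >= e0
        shape = energies[tail] ** (-gamma)
        out[tail] += f_sc * seed_counts[i] * shape / shape.sum()
    return out

energies = np.logspace(0.0, np.log10(50.0), 200)          # 1-50 keV grid
seed = np.exp(-0.5 * (np.log(energies / 5.0) / 0.3)**2)   # toy peak near 5 keV
hardened = toy_simpl(energies, seed, gamma=3.0, f_sc=0.6)
```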
These high scattering fractions indicate that the power-law component extends from near the spectral peak. Figure 13 shows an example of the three model combination fits to ULX2.5-MG. The ULX2.5-MG spectrum is shown as the black solid line. The total diskbb+pow model corresponds to the blue dotted line, which shows the deviation of the fit at the softer end of the spectrum due to the pow component dominating the fit. Interestingly, the bbody+simpl model more closely fits the simulated spectra compared to the other two model combinations. For most of the spectra, the diskbb component in the diskbb+simpl model was slightly broader than the simulated spectrum, as seen by the pink solid line in Figure 13. However, this may again be impacted by the missing soft X-rays from larger radii.

### Example analysis: NGC 1313 X-1

NGC 1313 X-1 is a well-known ULX (\(L_{\rm x}\sim 10^{40}\ {\rm erg\,s^{-1}}\)) located relatively nearby (\(D\sim 4.2\) Mpc; Tully et al., 2013).

Figure 13: Comparison of three X-ray spectral fitting models to the post-processed multi-group RMHD spectrum ULX2.5-MG (shown as the black solid line). The model combinations include two components: one blackbody (bbody) or one multi-temperature blackbody accretion disk (diskbb) model, paired with either a power-law (pow) or a hard X-ray Comptonization model (simpl). The total combinations are shown as the blue dotted line for diskbb+pow, the pink solid line for diskbb+simpl, and the green dashed line for bbody+simpl. Note that the simpl model in XSPEC would be written as simpl\(\times\)diskbb, since it is a convolution model.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Model Component & Parameter & ULX2.5 & ULX2.5-MG & ULX4a & ULX4a-MG \\ \hline bbody & \(kT\) & 1.13 & 1.42 & 1.46 & 1.39 \\ & \(N_{\rm BB}\) & 0.23 & 0.32 & 0.85 & 0.99 \\ simpl & \(\Gamma_{\rm S}\) & 4.38 & 2.82 & 4.34 & 3.52 \\ & \(f_{\rm sc}\) & 60\% & 60\% & 60\% & 60\% \\ \hline diskbb & \(T_{\rm in}\) & 1.71 & 2.27 & 2.47 & 2.04 \\ & \(N_{\rm DBB}\) & 149.92 & 56.42 & 116.21 & 288.64 \\ simpl & \(\Gamma_{\rm S}\) & 4.34 & 2.88 & 4.14 & 3.57 \\ & \(f_{\rm sc}\) & 41.5\% & 60\% & 52.2\% & 60\% \\ \hline diskbb & \(T_{\rm in}\) & 2.35 & 4.30 & 3.06 & 3.34 \\ & \(N_{\rm DBB}\) & 25.39 & 2.07 & 37.65 & 23.36 \\ pow & \(\Gamma_{\rm P}\) & 3.36 & 2.40 & 3.03 & 2.80 \\ \hline \end{tabular} Note. – Comparison of commonly used spectral fitting models when fit to the MC post-processed gray RMHD snapshot spectra (ULX2.5 and ULX4a) and the MC post-processed multi-group RMHD snapshot spectra (ULX2.5-MG and ULX4a-MG). We only report rough values for each model parameter without errors, in an effort to get a sense of the relative spectral shape of the simulated spectra for different model combinations. The accretion disk models used to fit the softer part of the spectrum (diskbb and bbody) were combined with either the Comptonization model simpl or the power-law model pow to fit the hard X-ray fluxes. \end{table} Table 2: Model comparisons
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model Component & Parameter & Typical model & ULX2.5 & ULX2.5-MG & ULX4a & ULX4a-MG \\ \hline TBabs & \(n_{\rm H}\) (\(10^{22}\) cm\(^{-2}\)) & \(0.27^{+0.02}_{-0.03}\) & \(0.26^{+0.02}_{-0.02}\) & \(0.16^{+0.03}_{-0.02}\) & \(0.15^{+0.02}_{-0.02}\) & \(0.19^{+0.02}_{-0.03}\) \\ diskbb & \(T_{\rm in}\) (keV) & \(0.27^{+0.03}_{-0.03}\) & \(0.27^{+0.01}_{-0.01}\) & \(0.72^{+0.04}_{-0.03}\) & \(0.68^{+0.03}_{-0.02}\) & \(0.72^{+0.03}_{-0.03}\) \\ & \(N_{\rm DBB}\) & \(11.2^{+7.9}_{-4.3}\) & \(13.83^{+6.9}_{-6.2}\) & \(0.30^{+0.07}_{-0.07}\) & \(0.42^{+0.08}_{-0.7}\) & \(0.33^{+0.06}_{-0.06}\) \\ MC spectrum & \(N\) (\(\times 10^{-6}\)) & n/a & \(100.0^{+1.8}_{-1.8}\) & \(74.2^{+2.1}_{-2.1}\) & \(26.5^{+0.6}_{-0.7}\) & \(22.1^{+0.6}_{-0.5}\) \\ diskpbb & \(T_{\rm in}\) (keV) & \(3.22^{+0.62}_{-0.46}\) & n/a & n/a & n/a & n/a \\ & \(p\) & \(0.56^{+0.03}_{-0.02}\) & n/a & n/a & n/a & n/a \\ & \(N_{\rm P}\) (\(\times 10^{-4}\)) & \(4.49^{+5.6}_{-2.5}\) & n/a & n/a & n/a & n/a \\ simpl & \(\Gamma\) & \(2.00^{+0.53}_{-2.2}\) & n/a & n/a & n/a & n/a \\ & \(f_{\rm sc}\) (\%) & \(11.50^{+7.8}_{-6.1}\) & n/a & n/a & n/a & n/a \\ & \(\chi^{2}\)/dof & \(373.60/341\) & \(777.71/345\) & \(403.37/345\) & \(445.72/345\) & \(399.05/342\) \\ \hline \end{tabular} Note. – Best-fit parameter values from the simulated spectral model fits to the combined _XMM-Newton_ and _NuSTAR_ data of NGC 1313 X-1. We also show a “Typical model” fit to NGC 1313 X-1 for comparison to the fits from the four post-processed or simulated snapshot spectral models. The two models ULX4a-MG and ULX2.5-MG include the multi-group implementation, whereas the other snapshots are post-processed from the gray RMHD snapshots. The notation “n/a” indicates that this model parameter was not included in the fit. \end{table} Table 3: Best-fit parameters to NGC 1313 X-1 data using simulated spectral models

The nature of its compact accretor is not currently known (Walton et al., 2020), although it has been suggested that changes observed in its hard X-ray flux might be consistent with a weakly magnetized neutron star (Middleton et al., 2023) entering a propeller state, where any pulsations (when not in propeller) are diluted due to scattering into the wind cone (Mushtukov et al., 2020). Regardless of the nature of its compact object, an interesting feature of this source is the mostly stable shape of the hard X-ray spectrum at \(E\gtrsim 10\) keV, as revealed by _NuSTAR_ observations (Walton et al., 2020); this hard X-ray coverage makes NGC 1313 X-1 an excellent option for trialling our simulated spectral models. Table 3 shows the best-fit parameters for fits to the combined _XMM-Newton_ and _NuSTAR_ observations of NGC 1313 X-1 (Walton et al., 2020). The data for NGC 1313 X-1 were provided by Gúrpide et al. (2021). For the _NuSTAR_ data we selected energies between \(3-70\) keV, and for the _XMM-Newton_ data we used energies between \(0.3-10\) keV. In all of the fits, we allow two multiplicative constants to vary freely between the _XMM-Newton_ and _NuSTAR_ datasets to account for cross-calibration between the different detectors. The FPMA detector constant was set to unity, while the two free constants yielded values \(\lesssim 1.38\pm 0.2\).
We first highlight the "Typical model" column, which includes two modified disk blackbody components and one hard X-ray component that is commonly used to fit this source (Middleton et al., 2015; Pinto et al., 2016; Walton et al., 2020; Gúrpide et al., 2021). To account for the hydrogen column along the line of sight for all model fits in this analysis, we include a neutral absorption component, TBabs, adopting the abundances from Wilms et al. (2000) and cross-sections from Verner et al. (1996). The column \(N_{\rm H}\) was left free to vary (see Middleton et al., 2015 and Gúrpide et al., 2021 for some discussion of the variability in the absorption column for this source). The two modified disk blackbody components in this model combination are diskbb and diskpbb, used to model the softer and harder emission components, respectively. The diskpbb component includes a free parameter, \(p\), which describes the radial dependence of the local disk temperature, \(T(r)\propto r^{-p}\). When advection in the disk is considered important, such as in the case of super-Eddington accretion, the \(p\) values are typically \(p<0.75\) (Abramowicz et al., 1988). When \(p=0.75\), the model recovers the standard thin-disk (diskbb) solution. Observations of NGC 1313 X-1 also show emission and absorption lines at energies \(E\lesssim 2\) keV that are attributed to the presence of a mildly relativistic disk wind (Middleton et al., 2014, 2015; Pinto et al., 2016, 2020; Gúrpide et al., 2021). Multiple Gaussian absorption components, gabs, are often included to account for some of these atomic features. We limit the gabs parameters to \(E\leq 2\) keV and line width \(\sigma\leq 0.5\) keV, and the line strength was allowed to be positive or negative to represent either emission or absorption. In the Typical model, an additional component at moderate to high energies (\(\gtrsim 10\) keV) is included to capture the X-ray excess not adequately modeled by the multi-temperature disk components (see Walton et al., 2020). We apply the same simpl convolution model to the diskpbb component as in previous works. We set a lower limit on the power-law index parameter \(\Gamma\geq 2\), as the uncertainties in the data at high energies \(E\gtrsim 30\) keV cause simpl to return an unrealistically flat power-law. The Typical model is written as: TBabs\(\times\)gabs\(\times\)(diskbb+(simpl\(\times\)diskpbb)). This model provides a reasonably good fit with \(\chi^{2}=373.60\) for 341 degrees of freedom, comparable to the best-fits reported in Middleton et al. (2015), Walton et al. (2020), and Gúrpide et al. (2021). One difference in our reproduction of this model is that we only included one gabs component, with a line energy \(E=1\) keV, line width of \(\sigma=0.01\) keV, and line strength \(N_{\rm gabs}=-0.02\). In particular, this differs from the lines modeled in Middleton et al. (2015), which were found at \(E\simeq 0.66-0.74\) keV and \(E\simeq 1.23-1.46\) keV. If we restricted our model to include these specific lines, the fit returned \(\chi^{2}=86.35\) for 338 degrees of freedom, a poorer fit than with a single gabs component. The overall fit is qualitatively similar between the two; however, we noticed that \(f_{\rm sc}\) went to nearly \(0\) if we used too many gabs components. To compare to the Typical model, we replaced the simpl\(\times\)diskpbb component with one of the post-processed spectral models, denoted in XSPEC as: TBabs\(\times\)gabs\(\times\)(diskbb+MC spectrum).
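In PyXspec terms, the two model expressions compared here would look roughly as follows (a sketch; the MC table-model file name is hypothetical):

```python
from xspec import Model

# "Typical model": diskpbb supplies the seed photons for simpl.
typical = Model("TBabs*gabs*(diskbb + simpl*diskpbb)")

# Same fit with the hard component replaced by a post-processed MC
# table model, whose only free parameter is the normalization.
simulated = Model("TBabs*gabs*(diskbb + atable{ulx2p5_mg_table.fits})")
```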
The diskbb component is included to model the soft X-ray flux absent from our spectral models. The spectral models have one fit parameter, \(N=(10\ {\rm kpc}/D)^{2}\), where \(D\) is the distance to the source. Assuming the distance to NGC 1313 X-1 is \(D=4.25\) Mpc (Tully et al., 2013) gives an MC spectrum model normalization value of \(N=5.67\times 10^{-6}\). Table 3 shows the best-fit XSPEC values for four snapshot models: ULX2.5, ULX4a, and their corresponding multi-group runs ULX2.5-MG and ULX4a-MG. We also fit the other two spectral models, from ULX1.3 and ULX4b, but for the sake of brevity and the lack of a multi-group counterpart for these snapshots, we do not include them in Tables 2 or 3; we note, however, that they provide poor fits to the data (the best-fit for ULX1.3 returned \(\chi^{2}=637.87\) for \(345\) degrees of freedom, and ULX4b returned \(\chi^{2}=432.19\) for \(345\) degrees of freedom). All of the spectral model fits included a single gabs component, except for ULX4a-MG, which included two gabs components. Most of the fits were insensitive to a second gabs component, but the \(\chi^{2}\) for ULX4a improved from \(\chi^{2}=433.20\) for 345 degrees of freedom with only one Gaussian absorption component to \(\chi^{2}=394.50\) for 342 degrees of freedom with the addition of a second Gaussian component. The two gabs components fit a line at \(0.34\) keV with \(\sigma=0.49\) keV, and a line at \(0.67\) keV with \(\sigma=0.15\) keV. Generally, modeling the absorption and emission features of this source improves the \(\chi^{2}\) of the fit residuals below \(2\) keV, but it does not significantly impact the broader continuum fit. Thus we do not attempt to model these features in any detail, as past studies have already done so (Middleton et al., 2015; Pinto et al., 2016, 2020; Gúrpide et al., 2021; Kosec et al., 2021). The \(\Delta\chi^{2}\) improves significantly for fits with the multi-group models, as the harder X-ray tails in the models better match the observed _NuSTAR_ data. We show the two multi-group model combinations in Figures 14 and 15 for ULX4a-MG and ULX2.5-MG, respectively. The individual model components are shown for the absorbed diskbb component below \(10\) keV and for the component modeling the higher energy flux. ULX4a-MG in Figure 14 is just slightly steeper than the _NuSTAR_ data at \(\gtrsim 10\) keV, while ULX2.5-MG in Figure 15 is just slightly flatter at \(\gtrsim 10\) keV. Both spectral models, however, fit quite well in the 3-10 keV range. Considering that the simulated spectral component only has one parameter (the normalization), the deviation of the fit at \(E\gtrsim 10\) keV qualitatively seems fairly reasonable.

## 4 Discussion

Our results suggest that RMHD simulations can qualitatively reproduce the observed hard X-ray spectral shape seen in a number of ULX sources, as long as the radiative heating/cooling associated with Compton scattering processes is well modeled. Nevertheless, the simulations presented here have only explored a limited range of parameter space and do not yet include all of the relevant physics. Most importantly, the Athena++ RMHD simulations neglect general relativistic effects such as light bending, relativistic beaming, and relativistic jets (although these simulations do generate radiatively driven disk winds).
New GRRMHD simulations are being performed with Athena++ using direct solutions of the radiation transfer equations (White et al., 2023), and we expect that the inclusion of general relativistic effects will have an impact on the accretion flow, disk structure, and associated spectral properties. Post-processed spectra from such GRRMHD simulations will be the focus of future work. In addition, we only include the inner \(25r_{\mathrm{g}}\) of these RMHD simulations in our MC spectral calculations, as the simulation is not in steady state at larger radii. Thus, we do not accurately model the soft X-ray flux originating beyond \(25r_{\mathrm{g}}\). Consequently, we only select photons coming out of a polar funnel angle \(\theta_{\mathrm{f}}\) to avoid the impact of photons which would normally interact with outer disk radii and become trapped in the disk or advected into the black hole. Therefore, we stress that interpretations of these results should be limited to the inner regions of the flow. We also assume that protons and electrons are well coupled, and so simulate a single-temperature accretion flow (with typical temperatures of \(\sim 10^{7}\) K). Some studies have suggested that two-temperature accretion flows may become important in areas of low density, such as in the funnel regions (Liska et al., 2022), and may result in softer X-ray spectra (Kinch et al., 2020).

Figure 14: Best-fit to the combined _XMM-Newton_ (black data points) and _NuSTAR_ data (red and blue data points) of the ULX NGC 1313 X-1 using the post-processed spectral model from ULX4a-MG. The top panel shows the spectral fit to the data, with individual model components shown for diskbb (below 10 keV) and the ULX4a-MG model fitting the rest of the hard X-ray spectrum. The bottom panel shows the fit residuals of the total model (green line) to the data. The best-fit values are collected in Table 3.

Figure 15: Same as Figure 14, but using the ULX2.5-MG spectral model to fit the hard X-ray spectrum.

We compared the non-relativistic proton-electron relaxation timescale given by Spitzer (1956) and Stepney (1983), assuming a single temperature for both the electrons and protons, with the Compton timescale \(t_{C}=(N_{\rm e}\sigma_{\rm T}c)^{-1}\), where \(N_{\rm e}\) is the number density of electrons, \(\sigma_{\rm T}\) is the Thomson cross-section, and \(c\) is the speed of light. We found that the relaxation time is much shorter than the Compton timescale and most other dynamical timescales for the temperatures and densities in our simulation, except possibly in the very low density, high temperature region near the axis. Hence, our assumption of a single temperature for the protons and electrons seems self-consistent, but the single-temperature assumption may need to be revisited in future work, particularly for simulations at lower accretion rates.

### Comparison with NGC 1313 X-1

Fits to the _XMM-Newton_ and _NuSTAR_ data of NGC 1313 X-1 with the post-processed spectral models qualitatively reproduce the hard X-ray part of the spectrum, although the funnel luminosities for these spectra are at least an order of magnitude lower (\(L_{\rm f}=1.3\times 10^{39}\ {\rm erg\ s^{-1}}\) for ULX4a-MG, and \(L_{\rm f}=4.7\times 10^{38}\ {\rm erg\ s^{-1}}\) for ULX2.5-MG) than the observed luminosity (\(L_{\rm x}\sim 10^{40}\ {\rm erg\ s^{-1}}\)). The implied distances from the spectral models are also much smaller (\(D\sim 1-2.13\) Mpc) when compared to the true distance to NGC 1313 X-1 of \(D\simeq 4.25\) Mpc (Tully et al., 2013).
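The implied distances quoted above follow directly from inverting the table-model normalization; a quick check using the best-fit values from Table 3:

```python
import numpy as np

def implied_distance_mpc(norm):
    """Invert N = (10 kpc / D)^2 to get the distance D in Mpc."""
    return 10.0 / np.sqrt(norm) / 1000.0

# Best-fit normalizations from Table 3:
for name, n in [("ULX2.5", 100.0e-6), ("ULX2.5-MG", 74.2e-6),
                ("ULX4a", 26.5e-6), ("ULX4a-MG", 22.1e-6)]:
    print(f"{name}: D = {implied_distance_mpc(n):.2f} Mpc")
# Gives ~1.0-2.13 Mpc, versus the true D ~ 4.25 Mpc.
```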
This motivates future work with simulations at higher Eddington ratios and across a range of black hole masses, which might then better match the observed sources at their known distances. Nevertheless, it is remarkable that a first-principles calculation with only the normalization as a free parameter can provide best-fitting \(\chi^{2}\) values that are quantitatively competitive with commonly used phenomenological models.

### Comparison with previous work

Previous simulations have explored a range of accretion rates and masses, using a variety of setups both with and without general relativistic effects, finding radiative efficiencies that are both relatively large (\(\eta\sim 5\%\), e.g. Jiang et al., 2014) and small (\(\eta\lesssim 1\%\), e.g. Sadowski et al., 2014). The radiative efficiencies inferred directly from the gray RMHD simulations are typically a few percent, somewhat less than expected for thin accretion disks but not inconsistent with expectations for modestly super-Eddington accretion rates. The radiative efficiencies \(\eta_{\rm f}=1.13-2.56\%\) for the funnel region computed with MC post-processing are modestly lower than those from the RMHD simulations for the gray snapshots. In contrast, the snapshots produced from the multi-group calculations have slightly larger efficiencies (\(\eta_{\rm f}=1.92\%\) for ULX2.5-MG, and \(\eta_{\rm f}=3.34\%\) for ULX4a-MG), in better agreement with the luminosities directly inferred from the RMHD simulations. The Monte Carlo radiation transfer calculations performed in this work are similar to other post-processing codes, particularly those that use MC methods (Dolence et al., 2009; Schnittman et al., 2013; Kawashima et al., 2021) that model Compton scattering and include general relativistic effects. The HEROIC code (Narayan et al., 2017), which uses a combination of short and long characteristics instead of MC, provides similar capabilities. Although the Athena++ module used here also supports general relativistic transfer, we treat the radiation transfer in Minkowski spacetime to be consistent with the non-relativistic simulations that generate the snapshots. In contrast to Kinch et al. (2019), we do not currently perform any ionization calculations that would investigate atomic transitions. We also do not use any integrated ray tracing algorithms that would integrate back along the photon path in our post-processing, although to create the images in Figure 10, we extrapolate the photons escaping the MC domain out to a distant observer assuming flat spacetime. Our approach compares most directly to those of Narayan et al. (2017) and Kitaki et al. (2017), who consider the spectra produced from super-Eddington accretion simulations. Our results are broadly consistent with those of Kitaki et al. (2017), at least when one focuses on the hard component of the spectrum and face-on inclinations for the 10 \(M_{\odot}\) black hole simulations. Narayan et al. (2017) used HEROIC to post-process simulations from the GRRMHD code KORAL (Sadowski et al., 2013; Sadowski et al., 2014; Sadowski and Narayan, 2015; Sadowski and Narayan, 2016), which was used to simulate a broad range of super-Eddington accretion rates onto a \(10M_{\odot}\) black hole. They faced the same issue in that their GRRMHD simulations only reached inflow equilibrium out to a finite radius. Instead of truncating the disk as we chose to do, they extrapolated the flow to larger radii using self-similar approximations.
This allowed them to explore the softer X-ray emission and angular dependence, but with the caveat that the outer regions of the calculation were not simulated directly. We find that our spectra are more qualitatively consistent with their results when the gas temperatures in the HEROIC calculation were fixed to the values from KORAL (see the green curves in Fig. 4 of Narayan et al., 2017), but not consistent with their spectra after the radiation field and temperatures were self-consistently solved (red curves in the same figure). The results from HEROIC show that their spectra become much softer after the temperature iteration, whilst our results suggest that a more self-consistent treatment of Compton cooling yields higher temperatures and harder spectra. The origin of the difference is not clear to us, but we note that the KORAL simulations use a photon number conservation scheme that is different from what we use in our gray simulations.

## 5 Summary

We present Monte Carlo post-processed spectral calculations of super-Eddington accretion onto a stellar-mass black hole from the Athena++ RMHD simulation snapshots. Our calculations suffer from two primary deficiencies. We only achieve inflow equilibrium out to \(\sim 25~r_{\rm g}\), which led us to truncate our spectral calculations at this radius. Hence, the soft X-rays that come from the outer disk are absent. If we instead include emission from the outer disk, it is significantly overestimated due to the cooling of the torus. Therefore we mainly focus on the hard X-ray spectrum in this work. These simulations also assume that the intensities follow a blackbody spectrum for the purposes of computing Compton cooling and mean opacities. Although this assumption is good for the optically thick disk, we find that using the blackbody assumption to estimate the average photon energy in the Compton cooling term is a poor approximation in the funnel regions, where Compton scattering off hot electrons dominates the cooling. This leads to an underestimate of the temperatures in the funnel for the gray RMHD simulations. The underestimated temperatures produced spectra that were much softer, and led to radiation energy densities above the disk being overestimated. We addressed this underestimate of the temperature by restarting the gray opacity simulations with a multi-group approach (Jiang, 2022) that treats Compton scattering with a Kompaneets-like source term. This produced simulation snapshots with higher temperatures in the spectral forming regions above the disk, leading to harder X-ray flux in better agreement with observed ULX spectra. We used phenomenological models to fit our Monte Carlo spectra. In most of the two-component (soft X-ray & hard X-ray) models, the hard X-ray component was more accurately described with the simpl model compared to the power-law pow model, and yielded hard X-ray power-law slopes ranging from \(\Gamma\sim 2-4\) for spectra computed with gray RMHD snapshots. The multi-group snapshot spectra tended to be fitted with flatter slopes, with \(\Gamma\sim 2-3\), comparable to the hard X-ray tails observed in NGC 1313 X-1 and Holmberg IX X-1 (Gúrpide et al., 2021). Finally, we generated an XSPEC table model and directly fit our MC spectra to combined _XMM-Newton_ and _NuSTAR_ observations of the ULX NGC 1313 X-1. Despite only having one free parameter (the normalization), we find a good fit, which is competitive with the phenomenological models that are commonly used.
Close inspection shows that the MC spectra provide a good fit at soft to moderately hard energies \(E\lesssim 10\) keV, but are either just slightly too steep (in the case of ULX4a-MG) or too flat (in the case of ULX2.5-MG) to exactly describe the hard X-ray power-law tail at \(E\gtrsim 10\) keV. The best-fit normalizations are also not consistent with the known distance to NGC 1313 X-1, and the model implies a lower luminosity than is observed. Although there are a number of caveats (such as the absence of general relativistic effects), and we have used only a single black hole mass and a relatively narrow range of accretion rates, this work nonetheless provides a promising direction for super-Eddington ULX accretion simulations, as these post-processed spectral models are close to describing the observed spectrum of NGC 1313 X-1. Simulations with the new GRRMHD implementation of the Athena++ code (White et al., 2023) are now exploring a range of masses and accretion rates, and post-processed spectra from these simulations will be presented in a future work. This work was supported by NASA TCAN grant 80NSSC21K0496 and NASA ATP grant 80NSSC18K1018. BSM thanks the Jefferson Scholars Foundation for its Graduate Fellowship in support of this work. Part of this work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). Resources supporting this work were also provided by the High-End Computing (HEC) program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. The Center for Computational Astrophysics at the Flatiron Institute is supported by the Simons Foundation.
2301.13104
Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models
Differentially Private Stochastic Gradient Descent (DP-SGD) limits the amount of private information deep learning models can memorize during training. This is achieved by clipping and adding noise to the model's gradients, and thus networks with more parameters require proportionally stronger perturbation. As a result, large models have difficulties learning useful information, rendering training with DP-SGD exceedingly difficult on more challenging training tasks. Recent research has focused on combating this challenge through training adaptations such as heavy data augmentation and large batch sizes. However, these techniques further increase the computational overhead of DP-SGD and reduce its practical applicability. In this work, we propose using the principle of sparse model design to solve precisely such complex tasks with fewer parameters, higher accuracy, and in less time, thus serving as a promising direction for DP-SGD. We achieve such sparsity by design by introducing equivariant convolutional networks for model training with Differential Privacy. Using equivariant networks, we show that small and efficient architecture design can outperform current state-of-the-art models with substantially lower computational requirements. On CIFAR-10, we achieve an increase of up to $9\%$ in accuracy while reducing the computation time by more than $85\%$. Our results are a step towards efficient model architectures that make optimal use of their parameters and bridge the privacy-utility gap between private and non-private deep learning for computer vision.
Florian A. Hölzl, Daniel Rueckert, Georgios Kaissis
2023-01-30T17:43:47Z
http://arxiv.org/abs/2301.13104v2
# Equivariant Differentially Private Deep Learning

###### Abstract

The formal privacy guarantee provided by Differential Privacy (DP) bounds the leakage of sensitive information from deep learning models. In practice, however, this comes at a severe computation and accuracy cost. The recently established state of the art (SOTA) results in image classification under DP are due to the use of heavy data augmentation and large batch sizes, leading to a drastically increased computation overhead. In this work, we propose to use more efficient models with improved feature quality by introducing steerable equivariant convolutional networks for DP training. We demonstrate that our models are able to outperform the current SOTA performance on CIFAR-10 by up to \(9\%\) across different \(\varepsilon\)-values while reducing the number of model parameters by a factor of \(35\) and decreasing the computation time by more than \(90\%\). Our results are a large step towards efficient model architectures that make optimal use of their parameters and bridge the privacy-utility gap between private and non-private deep learning for computer vision.

## 1 Introduction

In fields where sensitive data is handled, steps must be undertaken to prevent its exposure to unauthorized third parties. However, it has been shown on numerous occasions that unprotected models can be reverse-engineered to leak sensitive information [3, 6]. This fact poses a significant limitation to the application of machine learning systems in fields where the safety of sensitive data is necessary due to privacy regulations, intellectual property protection requirements, or other ethical considerations. Privacy-enhancing technologies allow one to derive insights from datasets while quantitatively bounding the leakage of information about the training examples, and represent the best chance to date to incentivize data sharing in an ethical and responsible manner. Deep learning with Differential Privacy (DP) [1, 15], the gold-standard technique for privacy preservation, enables analysts to train predictive models which can be shared with other parties while offering formal guarantees about how much information can be extracted from their representations about the individuals whose data was used for training. However, DP leads to sharp utility trade-offs, as, to achieve DP, the gradient updates are bounded in \(\ell_{2}\)-norm and noise is added, whereby the total noise power scales proportionally to the number of model parameters. This renders the effective training of large models, which are typically used to achieve state of the art (SOTA) results in non-private training, disproportionately difficult or impossible.

Figure 1: Our results at a glance: We improve the SOTA test set accuracy by _De et al_. [12] for training CIFAR-10 from scratch under \((\varepsilon,10^{-5})\)-DP by up to \(9\%\) across \(\varepsilon\)-values (top panel) while dramatically reducing the number of parameters by over \(35\)-fold and the computation time by more than \(10\)-fold for \(\varepsilon=8\) (bottom panel). For details, we refer to Section 4.2.

**Computational Considerations of Private Deep Learning.** Recent advancements in DP deep learning have focused largely on optimizing the training regime to improve prediction performance. The current SOTA for training CIFAR-10 from scratch was very recently established by _De et al_. [12]. This work's technique focused on extensive training adaptations to improve prediction performance.
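As context for the earlier point that the total noise power scales with the parameter count, a small NumPy sketch (illustrative numbers only, not the paper's experimental setup):

```python
import numpy as np

# With the l2 clip fixed at C, DP-SGD adds Gaussian noise of standard
# deviation sigma * C to every coordinate, so the expected noise norm
# grows like sigma * C * sqrt(d) while the clipped signal norm stays <= C.
rng = np.random.default_rng(0)
C, sigma = 1.0, 1.1
for d in (10_000, 1_000_000):
    noise = rng.normal(0.0, sigma * C, size=d)
    print(f"d={d:>9}: ||noise|| ~ {np.linalg.norm(noise):8.1f}, "
          f"signal-to-noise <= {C / np.linalg.norm(noise):.1e}")
```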
Among these adaptations are (1) large-batch training [14], (2) averaging per-sample gradients across multiple augmentations of the same image [16] before clipping (_augmentation multiplicity_), and (3) temporal parameter averaging techniques [26]. This results in remarkable accuracy gains over the previous SOTA on over-parameterized models such as the WideResNet (WRN), which incorporate several million parameters and, up until recently, seemed out of reach for private training. However, the accuracy gains presented by the authors of the aforementioned work come at a very high cost: the massive computational resources and time required to train those models have an overbearing financial and environmental impact and impose great difficulties in reproducing the results. This makes progress in this direction cumbersome and inefficient, and renders proposing improvements on top of the presented results out of reach for almost all scientific institutions. To mitigate these difficulties, approaches such as the recent work by _Sander et al_. [28] have proposed an incremental efficiency improvement in conducting hyperparameter searches under DP. However, even with these advancements, the development of better-performing models does not come into reach, as inordinately large computational resources are still required to train the proposed architectures and to nevertheless conduct extensive parameter sweeps to find a well-performing setup. Orthogonal to the aforementioned works, other studies in DP deep learning have introduced training regimes which additionally leverage publicly available data, which is usable without any privacy constraints [18, 34]. This solution is however not a panacea; large quantities of public data are not available in areas such as medicine, where data, at least from the same distribution, still comes with the same privacy considerations and is thus impossible to access. Relying on public data in many cases thus opposes the very foundation of privacy-enhancing technologies, _i.e_. obtaining access to important _insights_ while minimizing the amount of sensitive data required. We thus contend that an optimal solution to the aforementioned dilemmas will require a fundamental reconsideration, which will marry _high accuracy_ when training from scratch with _high computational efficiency_.

**Efficient Models are the Key to DP Deep Learning.** Two factors will allow us to achieve the aforementioned goal: (1) increased representational efficiency and (2) a reduction in (possibly redundant) model parameters. The requirement for DP models to learn high-quality features to achieve parity with non-private models has been previously discussed [31]. However, the aforementioned work utilises an unwieldy cascade of static feature extractors and trainable model parameters. In contrast, to achieve high representational efficiency with a low number of parameters, we turn to the utilization of **Equivariant CNNs (ECNNs)**. As shown below, this allows us to eschew costly training adaptations, while using existing model architectures in a plug-and-play fashion. It is known that ECNN architectures exhibit greatly increased data efficiency, improved generalization and high parameter efficiency, especially in domains with high degrees of intra-image symmetry [10, 11]. So far, no works have investigated how to combine equivariant layers with DP training, nor analyzed the potential beneficial changes to the training regime, even though their characteristics render them highly attractive for this use-case.
In comparison to standard convolutions, equivariant layers preserve the relative _pose_ of features in addition to their position, resulting in efficient parameter sharing and thus avoiding the redundant learning of identical convolutional filters in multiple orientations. For non-private training, these characteristics can be emulated (but without a formal guarantee of equivariance) by conventional (non-equivariant) models through an increase in model size and augmentation techniques, _i.e_. the exact techniques discussed above. As discussed, however, these adaptations massively increase the computation time of the (already very demanding) Differentially Private Stochastic Gradient Descent (DP-SGD), as _e.g_. each augmentation increases the time complexity almost linearly. Moreover, naively scaling the number of parameters proportionally increases the total noise power of the added Gaussian noise, thus risking "drowning out" the learning signal. ECNNs offer a balanced approach by being able to generalize from less information while also requiring smaller architectures [10]. These properties allow us to reduce the required training and architectural adaptations in our ECNNs that complement training under DP.

**Our Contributions.** In this paper, we investigate ECNNs with steerable kernels for training under DP-SGD. We begin by adapting equivariant convolutional and normalization layers for private training. We subsequently experimentally evaluate private ECNNs under different \((\varepsilon,\delta)\)-DP privacy settings and show that they outperform the current state of the art when trained from scratch while requiring a fraction of the model parameters and computation time. We determine that these benefits are a result of lower gradient sparsity and improved feature extraction characteristics. Last but not least, we show that ECNNs mitigate not only the privacy-utility, but also the privacy-calibration trade-off.

## 2 Background

**Differential Privacy.** For a comprehensive overview of DP, we refer to [15]. Its adaptation to deep learning came with the introduction of DP-SGD by [1]. DP is a strong stability condition on randomised algorithms mandating that outputs are approximately invariant under inclusion or exclusion of a single individual from the input database. For a mechanism (algorithm) \(\mathcal{M}\) and all datasets \(D\) and \(D^{\prime}\) that differ in one element, as well as all measurable subsets \(S\) of the range of \(\mathcal{M}\), \((\varepsilon,\delta)\)-DP requires that: \[\text{Pr}(\mathcal{M}(D)\in S)\leq e^{\varepsilon}\,\text{Pr}(\mathcal{M}(D^{\prime})\in S)+\delta, \tag{1}\] where \(\varepsilon\geq 0\) is called the _privacy loss_ or _budget_ and \(\delta\in[0,1)\) is called the _failure probability_. This privacy constraint is in practice typically realised by the addition of noise. In our work, we utilise Gaussian noise and the aforementioned DP-SGD algorithm to privatise gradient updates in neural network training. To track the privacy expenditure throughout training, we use Renyi-DP accounting [23, 24]. Renyi-DP (RDP) is often used when training deep neural networks, as it massively facilitates the composition of sequences of private algorithms executed on sub-samples of a larger dataset (such as SGD). The RDP privacy condition is: \[D_{\alpha}(\mathcal{M}(D)\parallel\mathcal{M}(D^{\prime}))\leq\rho, \tag{2}\] where \(D_{\alpha}\) is the Renyi divergence of order \(\alpha>1\).
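A minimal NumPy sketch of the DP-SGD aggregation referred to above (per-sample clipping followed by Gaussian noising); this is the generic mechanism of [1], not the authors' exact implementation:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """Clip each per-sample gradient to clip_norm in l2, sum them, add
    Gaussian noise with std noise_multiplier * clip_norm per coordinate,
    and return the averaged, privatised update."""
    clipped = []
    for g in per_sample_grads:  # each g is a flattened gradient vector
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```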
**Equivariant CNNs.** Equivariance describes the mathematical property of a structure-preserving mapping: there exist two transformations \(T_{g}\) and \(T^{\prime}_{g}\) that lead to the same result when passing an input \(x\) through a mapping \(\phi\), such that \[\phi\left(T_{g}x\right)=T^{\prime}_{g}\phi\left(x\right). \tag{3}\] For image classification, this property was first used with rotations and reflections in Regular Group CNNs by [10] and later extended to ECNNs with steerable filters by _Cohen et al_. [9, 11, 33]. The general approach to equivariant convolutions is centered around the idea of representations, which describe the transformation laws of a given feature space. The feature fields in this space are mappings that transform according to the corresponding representation. Each layer's input and output space must be compatible with the corresponding transformation law. In order to guarantee this behavior, a convolution kernel \(K\) is subject to a linear constraint, given by \[K(g\cdot x)=\rho_{\text{out}}(g)K(x)\rho_{\text{in}}(g)^{-1}\ \ \forall g\in G,\ x\in\mathbb{R}^{n}, \tag{4}\] with group actions \(g\in G\), depending on the associated group representations \(\rho\). For our case (planar images), we focus on representations whose group actions are rotations and reflections acting on the parameters of a learned kernel. These group actions can be discrete, with the number of rotations typically denoted by \(N\), or continuous in \(SO(2)\) or \(O(2)\). Finite groups are commonly represented through regular representations, where the corresponding transformation matrix has a dimensionality equal to the order of the group, \(\mathbb{R}^{N}\). Different approximations of continuous groups exist that make them work with steerable convolutions. We follow the results of [32] and use representations for \(O(2)\) that are induced from \(SO(2)\), as this has been shown to give the most promising results in non-private training for planar image classification. To solve the kernel constraint in our equivariant convolutions, we employ the general solution proposed in [7].

## 3 Constructing Efficient Equivariant Models for DP Image Classification

In this section, we present our methodological innovations which allow us to train ECNNs under DP constraints. We then experimentally establish that the equivariance properties lead to substantial efficiency and accuracy gains compared to previous works. We begin by discussing the architectural modifications required to make ECNNs DP-compatible. While, as introduced in the previous section, equivariant convolutions are natively compatible with DP, we must make a number of changes to the rest of the network to satisfy the stability condition of DP as described in Eq. (1) and the equivariance property as given in Eq. (3).
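As a toy illustration of the equivariance property Eq. (3) and the kernel constraint Eq. (4) in their simplest instance (the cyclic group \(C_{4}\) with trivial input and output representations, where the constraint reduces to a kernel that is invariant under 90° rotations), the following self-contained numpy check verifies equivariance numerically. This is our own sketch, not code from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32))            # toy single-channel image

# Symmetrise a random 3x3 kernel over C4: averaging its four rotations
# solves the kernel constraint for trivial input/output representations.
k = rng.normal(size=(3, 3))
k = np.mean([np.rot90(k, i) for i in range(4)], axis=0)

phi = lambda img: convolve(img, k, mode="wrap")  # periodic padding

# Equivariance check, Eq. (3) with T_g = T'_g = rot90: rotating the input
# first, or the output after, must give the same feature map.
assert np.allclose(phi(np.rot90(x)), np.rot90(phi(x)))
```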
For DP, we are required to compute per-sample gradients, which are incompatible with Batch Normalization, as it contaminates per-sample gradients with information from other samples in the batch and thus the rest of the dataset. We thus implement an equivariant version of a batch-independent normalization layer, in our case, Group Normalization. Moreover, to improve signal propagation, we apply weight standardization in the convolutional layers [27] and switch the order of the normalization and activation layers as proposed in [28]. Data augmentation is implemented through augmentation multiplicities, first introduced by [20] and established for DP by [12]. The augmentations are performed on a per-sample basis and averaged before clipping, which is DP-compatible. Lastly, to further improve convergence, we also temporally average the parameters used in validation and inference via an exponentially weighted moving average [26]. Our equivariant models are built upon the frameworks established by [17] and [7]. To make a model fully equivariant, each layer has to fulfill the equivariance property in Eq. (3). However, for invariant tasks we follow the common approach of mapping our equivariant feature maps to a trivial representation through an invariant mapping before the classification head. For the discrete groups \(C_{N}\) and \(D_{N}\), this is achieved by selecting the field of each channel with the maximum activation. Accordingly, for \(O(2)\), we calculate the per-channel Frobenius norm of all subfields of the induced representation, _i.e_. \(f_{i}(x)\to f_{i}^{\prime}=\left\|f_{i}(x)\right\|_{F}\) for an equivariant feature channel, and use the resulting maximum for the subsequent classification. As part of our contributions, we introduce two novel layers for DP-ECNN training. Firstly, the (naive) **equivariant group normalization** layer can be used for trivial and regular representations. For each group of channels \(g\in[1,G]\), it normalizes across the different feature channels by acting on each field within the group. We split the channel dimension \(C\) of our \(4\)-dimensional feature vector \((N,C,W,H)\), where \(N\) is the batch axis, and \(H\) and \(W\) are the spatial height and width axes, such that the representation fields of each channel are in a separate dimension. This allows us to utilize existing implementations of \(3\)-dimensional group normalisation layers before we reduce our feature vector back to \(4\) dimensions for further processing. However, the aforementioned layer is _unsuitable_ for use with continuous groups, which additionally require irreducible representations to function. To satisfy the equivariance property for the (non-trivial) induced irreducible representations of the \(O(2)\) group, we therefore introduce a **group standardization/normalization** layer that standardizes the subfields within each feature channel by the mean and variance of the corresponding channel group. We again expand our feature vector to \(5\) dimensions, with the representation fields as the extra dimension. The feature matrix \(\mathbf{X}_{c}\), consisting of the spatial dimensions and all fields of a channel \(c\in[1,C]\), is standardized by \[\hat{\mathbf{X}}=\left[\hat{\mathbf{X}}_{c}\mid\hat{\mathbf{X}}_{c}=\frac{(\mathbf{X}_{c}-\mu_{g})}{\sigma_{g}+\epsilon}\right]. \tag{5}\] The mean and standard deviation of the group, \(\mu_{g}\) and \(\sigma_{g}\), are computed by \[\mu_{g}=\frac{1}{\gamma}\sum_{c=1}^{C_{G}}\mathbf{X}_{c}\,,\ \ \ \sigma_{g}=\sqrt{\frac{1}{\gamma}\sum_{c=1}^{C_{G}}(\mathbf{X}_{c}-\mu_{g})^{2}}\, \tag{6}\] where \(\gamma=C_{G}\times W\times H\) for channels \(C_{G}=C/G\). As the standardization is an affine transformation that is applied to all subfields of a channel, we are able to guarantee equivariance.
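A minimal numpy sketch of this group standardization, following our reading of Eqs. (5)-(6), with the representation fields expanded into a fifth dimension as described above (function and argument names are ours; we assume the field size \(F\) divides \(C\) and that \(G\) divides \(C/F\)):

```python
import numpy as np

def group_standardize(x, field_size, num_groups, eps=1e-5):
    """x: features of shape (batch, channels, spatial, spatial);
    field_size: subfield dimension F dividing C; num_groups: G channel groups."""
    N, C, H, W = x.shape
    # expand the fields into their own axis: (N, C // F, F, H, W)
    x5 = x.reshape(N, C // field_size, field_size, H, W)
    # split the channels into G groups: (N, G, C_G, F, H, W)
    xg = x5.reshape(N, num_groups, -1, field_size, H, W)
    # one scalar mean/std per (sample, group); the same affine map is then
    # applied to every subfield of the group, preserving equivariance
    mu = xg.mean(axis=(2, 3, 4, 5), keepdims=True)
    sd = xg.std(axis=(2, 3, 4, 5), keepdims=True)
    out = (xg - mu) / (sd + eps)
    return out.reshape(N, C, H, W)
```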
For trivial and regular representations, standard activation functions, such as Mish and ReLU [2, 25], can be used as non-linearities. When working with the \(O(2)\) group, where our features consist of trivial and induced irreducible representations, we split our feature channels and use the gated non-linearities introduced by [32] for the induced representations, while using a standard activation function for the trivial representations. When pooling is applied in the convolution block, it acts on the spatial dimensions of each field of trivial and regular representations independently. To preserve the equivariance property of our induced irreducible representations, pooling is applied based on the norms of each channel's subfields, and the subfield with the highest value is preserved. Natural images often have a sense of orientation at a global level, while lower-level features intrinsically exhibit a higher degree of symmetry. To account for this varying level of equivariance, we restrict our representations before the last residual block. The order of the finite groups is reduced to \(N/2\), while keeping the reflection property in the dihedral groups. By contrast, the induced representations are restricted to only incorporate reflections. Additionally, we adjust the number of channels for each convolutional layer, such that our equivariant networks have a similar number of parameters as their non-equivariant counterparts. Through the aforementioned changes, we are able to provide a _modular and scalable_ plug-and-play solution to easily generate equivariant networks from established standard models by simply defining a specific symmetry group and programmatically adapting the layers in a fully automated fashion.

## 4 Results

### Accuracy Across Hyperparameter Choices

In this sub-section, we present ablation experiments which highlight key benefits of ECNNs compared to the conventional CNNs used for DP training in previous works. Due to its fast training time and small model size, we adapt the (scale-normalized) ResNet9 model introduced by [21] to work with the implementation of our equivariant layers. This model is used throughout the section.

Figure 2: Validation accuracy of the Equivariant-ResNet9 in comparison to the non-equivariant ResNet9 under \((8,10^{-5})\)-DP on CIFAR-10 without augmentation multiplicities. A _sweet spot_ is observed at an intermediate model size, with larger models yielding diminishing accuracy returns.

**Smaller Model Size.** We begin by showing that the isometry equivariance properties allow us to drastically reduce both CNN depth and width while maintaining high classification accuracy. Due to the additional pose information in our feature space, the equivariant network does not have to learn redundant features for different orientations. This additional information, also called _parameter utilization_ as defined in [11], is dependent on the chosen symmetry group.
While a higher parameter utilization better satisfies the equivariance property, the expressiveness of the trained kernel can suffer due to the correspondingly tighter kernel constraints. The increased parameter utilization of our models allows us to scale down the number of channels for each convolution layer without losing information. As the \(L_{2}\)-norm of the noise added at each update step scales proportionally with the length of the gradient vector, and thus the number of parameters in a model, a smaller model size can have a beneficial effect on the prediction performance. Figure 2 shows that the maximum validation accuracy reachable on CIFAR-10 with the Equivariant-ResNet9 does not increase with the number of convolution channels, as one might expect for conventional CNNs. In fact, there is an "optimal" model size at a layout of \((16,32,64)\) beyond which accuracy starts decreasing again; this leads to a total number of parameters of \(\approx 250k\), _i.e_. ten times fewer than the original (conventional) CNN.

**Improved Accuracy Across All Batch Sizes.** Previous work has generally shown that (very) large batch sizes lead to substantial improvements in accuracy for DP training [14, 22]. The pose information learned by the steerable kernels leads to more robust features that are able to generalize better. Research has shown that, as a consequence, this allows equivariant networks to perform better with fewer samples in a non-private setting [7]. We thus investigate whether the superior performance of ECNNs is maintained across batch size choices, or whether non-equivariant training is able to match our performance through an optimized choice of batch size. Our ablation testing shows that, mirroring previous findings, larger batch sizes generally lead to an increased accuracy, with diminishing returns setting in around a batch size of \(2048\) on CIFAR-10. However, our ECNNs _consistently outperformed the non-equivariant SOTA_. Interestingly, the accuracy gains through a batch size increase were "steeper" for ECNNs compared to the baseline, as witnessed by the slope of the curve between \(256\) and \(2048\) in Fig. 3. In summary, the equivariant architecture outperformed the non-equivariant model across all batch sizes, with an increase in validation accuracy of, on average, \(\approx 5\%\).

**Fewer Augmentations Required.** One of the main ingredients in the technique proposed by [12] is the utilization of _augmentation multiplicities_, _i.e_. performing multiple augmentations per sample and averaging the resulting gradients before privatization. In this section, we address the question of whether ECNNs are able to supplant this technique (and thus drastically reduce the required computation time, which scales with the number of simultaneous augmentations). After all, isometry equivariant convolutions are able to learn information _decoupled from a feature's pose_ and thus reduce the necessity of augmentations, especially of rotations and reflections, leading to an immediate reduction in the intense computational burden of augmentation multiplicity.

Figure 3: The Equivariant-ResNet9 benefits from increased batch sizes similar to the non-equivariant WRN-16-4 from [12], but has a substantially better validation accuracy across the board under \((8,10^{-5})\)-DP (_left_). We note that this experiment was conducted without augmentation multiplicities for either model and that the models were trained for the exact same number of update steps to maintain comparability. The advantage of the equivariant network is maintained when adding augmentation multiplicities up to a value of \(16\) (_right_). Further increasing the number of augmentations only increases computation time but actually decreases performance.
Figure 3 shows that, compared to the current state of the art on CIFAR-10, equivariant models reach close-to-optimal performance with an augmentation multiplicity of only \(4\), whereas the SOTA requires \(8\) times as many simultaneous augmentations to achieve the same accuracy. In fact, our results at \(4\) augmentations _outperform_ the \(32\) augmentation multiplicities of [12] while drastically reducing the computation time by the same factor of \(8\). Corroborating the theoretical advantages of ECNNs, diminishing returns set in earlier than for conventional CNNs.

### Results on Image Classification Datasets

**CIFAR-10.** We evaluate our equivariant approach on the CIFAR-10 dataset, where we split the training set into \(45k\) train and \(5k\) validation samples for all preliminary experiments and hyperparameter searches. As in the other works cited, the stated test accuracy is achieved by training our model on the full training set and evaluating it once on the held-out test set. Our equivariant models are benchmarked against the current SOTA on CIFAR-10 by [12]. We reproduce the previous SOTA with the exact same setup and code provided by the authors1 on our hardware for a fair comparison of the prediction results and computation time. The reproduced results have a difference in test accuracy of \(\lessapprox 3\%\) to the results of the original paper, which was similarly observed by [28]. Footnote 1: [https://github.com/deepmind/jax_privacy](https://github.com/deepmind/jax_privacy) We evaluate our equivariant models with the most promising symmetry groups from _Weiler et al._ [32] under \((\varepsilon,\delta)=(8,10^{-5})\)-DP. An angular frequency of \(1\) is used for the continuous group \(O(2)\). For an in-depth analysis of other frequency groups of the \(O(2)\) group, we refer to Appendix B.4. The standard deviation for the equivariant models is measured across \(5\) independent runs, except for the reproduction of [12], which, due to the increased computation time, was run only \(3\) times. Based on the results from Sec. 4, we reduce the model layout from \((64,128,256)\) channels to \((16,32,64)\) and (floor-)divide the number of channels by \(\sqrt{1.75N}\), where \(N\) is the group order, to end up with a similar number of parameters for all symmetry groups. A more detailed analysis of how the channel layout impacts prediction performance can be found in Appendix B.3. The models are trained for \(2200\) update steps with DP-SGD and a clipping norm and learning rate of \(2\), an exponential moving average decay of \(0.999\), a batch size of \(8192\) and \(4\) augmentation multiplicities (_original image + 3 augmentations_). We use random reflections and cropping with a two-sided reflection padding of \(4\) pixels for our augmentations. The hyperparameters and implementation details are summarized in Appendix A. The results in Tab. 1 show that we achieve our best median accuracy on CIFAR-10 of \(81.62\%\) with the dihedral group \(D_{4}\). Most notably, this result is achieved substantially faster than the previous SOTA, with a decrease in computation time by a factor of more than \(10\). Moreover, the corresponding model only comprises \(256k\) parameters and is thus \(35\) times smaller than the previous SOTA (WRN-40-4).
Interestingly, the dihedral groups perform (on average) better than the cyclic groups, probably due to the intrinsic horizontal symmetry of the images in the dataset; this is corroborated by the fact that increasing the rotation order \(N\) did not improve prediction performance.

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & & & \multicolumn{2}{c}{Test Accuracy [\%]} & & \\ \cline{4-5} & Group & \(\varepsilon\) & Median & Std. Dev. & Parameters & GPU Hours \\ \hline _De et al._ (2022) & \(\{e\}\) & 8 & 81.4 & \((0.2)\) & \(8.9M\) & – \\ _De et al._ (_our reproduction_) & \(\{e\}\) & 8 & 80.3 & \((1.13)\) & \(8.9M\) & 69.5 \\ _Klause et al._ (2022) & \(\{e\}\) & 7.42 & 71.8 & – & \(2.4M\) & 0.37 \\ \hline & \(C_{4}\) & 8 & 80.13 & \((0.43)\) & \(258k\) & **4.7** \\ & \(C_{8}\) & 8 & 79.30 & \((0.30)\) & \(256k\) & 6.0 \\ & \(C_{16}\) & 8 & 78.17 & \((0.53)\) & \(244k\) & 9.3 \\ Equivariant-ResNet9 (_ours_) & \(D_{4}\) & 8 & **81.62** & \((0.51)\) & \(256k\) & 6.1 \\ & \(D_{8}\) & 8 & 79.53 & \((0.29)\) & \(244k\) & 8.8 \\ & \(D_{16}\) & 8 & 78.61 & \((0.41)\) & \(238k\) & 15.2 \\ & \(O(2)\) & 8 & 69.31 & \((0.91)\) & \(215k\) & 10.1 \\ \hline \hline \end{tabular} \end{table} Table 1: CIFAR-10 test accuracy of our Equivariant-ResNet9 with different symmetry groups trained from scratch, compared to the previous state of the art. We report the median and the standard deviation calculated across \(5\) independent runs. The GPU hours are measured on an NVIDIA A100 40GB. The highest accuracy is observed using the Equivariant-ResNet9 and the \(D_{4}\) group at only \(256k\) parameters, requiring only \(6.1\) GPU hours of computation compared to the best SOTA result, which requires both a larger model and much longer computation.

As the network uses kernels of size \(3\times 3\), this phenomenon could also be attributed to the restrictions in discretizing small rotations on this kernel size without losing information. We consider combining our technique with larger receptive fields, _e.g_. through _atrous_ convolutions, a promising future work direction. We observe a similar effect with the continuous rotations in \(O(2)\), which perform substantially worse than all other groups. This is in line with the non-private experiments conducted by [32], indicating that the kernel constraint is too restrictive and the lack of expressiveness cannot be compensated by the more pronounced equivariance properties of the kernel. Summarizing the performance of all groups, we thus recommend the \(D_{4}\) group, which offers the best trade-off between accuracy and computation time out of all candidates. The \(D_{4}\) group is also able to _consistently_ outperform the state of the art for \(\varepsilon<8\), as witnessed in Fig. 1, with an increase in test accuracy by the Equivariant-ResNet9 of up to \(9.47\%\) at \(\varepsilon=2\). Additional results and analyses can be found in Appendix B.2. To summarize these findings, our ECNNs achieve SOTA performance on CIFAR-10 with substantially smaller models and in a fraction of the computation time.

**ImageNet.** Large-scale image classification with DP has recently come into focus, and promising initial results have lately been demonstrated on the ImageNet dataset [12, 22]. The key drawback of the aforementioned works is the fact that DP training on ImageNet is _exceedingly costly_ in terms of the computational budget required.
In this sub-section, we thus investigate the question of _how close_ to SOTA accuracy we can get using the highly efficient and compact Equivariant-ResNet9. For this purpose, we examine its accuracy on the ImageNet32 dataset and compare it to the following baselines: (1) the WRN-40-4 result from [12] and (2) the previous SOTA using a ResNet-18 by [22]. We stress that the key research question of this sub-section is _not_ whether it is possible to achieve state-of-the-art results _per se_, as we lack the resources to conduct the hyperparameter tuning required for such an undertaking, but to what extent ECNNs shift the accuracy-efficiency trade-off. To measure this trade-off, we use the _accuracy density_ metric [4, 19], _i.e_. the test accuracy divided by the number of model parameters. In addition, we propose an extension to this metric, which we term _DP accuracy density_; here, we additionally divide the base accuracy density by the noise multiplier \(\sigma\) to relate it to the privacy level. A higher accuracy density then signifies a more efficient architecture in both cases. We note that, despite this correction factor, DP accuracy density makes the most sense when algorithms trained to the same (or at least close) \(\varepsilon\) are compared to each other. For the experiment, we split the ImageNet training set and use \(10k\) samples of the training set for validation, and the official validation set as the unseen test set, similar to the other works. Of note, we use the \(32\times 32\) image size version of the dataset, which renders the classification task more difficult compared to the baselines, which use the full image size version [8]. This difficulty is compounded by the fact that, contrary to the baselines, we utilize a model which is _underparameterized_, _i.e_. has fewer parameters than samples in the dataset. Impressively, despite these challenging conditions, the Equivariant-ResNet9 achieved a Top-1 test accuracy of \(23.8\%\) with a mere \(411k\) parameters, corresponding to an accuracy density of \(57.8\) and a DP accuracy density of \(23.12\). For comparison, the model from [12] only has an accuracy density of \(1.27\). To contextualize this result: the current SOTA model on the _non-private_ ImageNet leaderboard2 achieves \(88.6\%\) accuracy with \(\approx 305M\) parameters, _i.e_. an accuracy density of only \(0.29\) (!). These results are summarized in Tab. 2. Footnote 2: [https://github.com/rwightman/pytorch-image-models/blob/main/results/results-imagenet.csv](https://github.com/rwightman/pytorch-image-models/blob/main/results/results-imagenet.csv)

\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & & & Top-1 & & & Acc. & DP Acc. \\ & Model & \((\varepsilon,\delta)\)-DP & Accuracy & Parameters & \(\sigma\) & Density & Density \\ \hline _Kurakin_ [22] & ResNet-18 & \((10,10^{-6})\) & 20.6\% & \(11.68M\) & 0.65 & 1.76 & 2.71 \\ _De et al._ [12] & WRN-40-4 & \((8,8\cdot 10^{-7})\) & **32.4\%** & \(25.56M\) & 2.5 & 1.27 & 0.5 \\ _Ours_ & Equivariant-ResNet9 & \((8,8\cdot 10^{-7})\) & 24.5\% & **411k** & 5.35 & **59.61** & **11.14** \\ & & \((\mathbf{2},8\cdot 10^{-7})\) & 10.8\% & **411k** & 18.44 & 26.27 & 1.42 \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy figures and densities for the Equivariant-ResNet9 vs. two SOTA results. For both accuracy density metrics, higher is better. The equivariant model is substantially more efficient per unit accuracy.
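The two density metrics are simple enough to state in code. The sketch below is our own formulation of the definitions above; the per-million-parameters scaling is our inference from the values reported in Tab. 2, which the example reproduces:

```python
def accuracy_density(test_acc_percent, num_params):
    # test accuracy divided by the number of model parameters,
    # expressed per million parameters for readable magnitudes
    return test_acc_percent / (num_params / 1e6)

def dp_accuracy_density(test_acc_percent, num_params, noise_multiplier):
    # the base density additionally divided by the noise multiplier sigma
    return accuracy_density(test_acc_percent, num_params) / noise_multiplier

print(accuracy_density(24.5, 411_000))           # ~59.6, cf. Tab. 2
print(dp_accuracy_density(24.5, 411_000, 5.35))  # ~11.1, cf. Tab. 2
```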
**Privacy-Calibration Trade-Off.** DP deep learning is particularly appropriate for sensitive domains such as medicine. Here, privacy is only one of several considerations which are required to build trustworthy AI systems. Besides privacy guarantees, models should also exhibit favourable characteristics in terms of uncertainty calibration. We therefore exemplarily examine whether our proposed techniques are able to provide improved calibration in addition to higher accuracy by looking at the Brier score [5]. Indeed, on the CIFAR-10 test set, we found that the Equivariant-ResNet9 had a \(\approx 27\%\) lower Brier score compared to the non-equivariant counterpart (Tab. 3), indicating superior model calibration. While an in-depth investigation of the calibration of equivariant models is outside the scope of our current study, we consider this finding encouraging and intend to expand upon it in future work.

## 5 Result Interpretation

Aside from the theoretical justification for the improved performance of ECNNs, _i.e_. improved feature efficiency, we are interested in establishing empirical findings which lead to an improved understanding and form a basis for future investigations. In particular, we are interested in two phenomena: (1) whether ECNNs are able to detect _more relevant input features_ than non-equivariant models and (2) whether information from the training set is _transmitted more efficiently_ into the network's weights. To address the first question (detection of relevant input features), we use two model interpretability techniques: Guided Backpropagation [30] is used to determine which features are detected in the input space, and Guided GradCAM [29] is used to localize which of the detected features are used for classification. Fig. 4 shows that the ECNN picks up features across the whole input domain but mainly uses the pixels belonging to the subject to classify it. The conventional model, on the other hand, strongly detects features which are not part of the subject and uses these irrelevant features for classification (_i.e_. every area of the image _except_ the actual subject). To address the second question (information flow between the data and the weights), we examine the sparsity of the model architecture. Similarly to how convolutions are by design sparse fully-connected operators, equivariant convolutions exhibit increased parameter sharing and thus resemble even sparser CNNs [19]. To analyze how this design choice impacts the model during training, we utilize the \(\ell_{0}^{*}\) "norm" metric, _i.e_. the number of parameters or gradient entries with a magnitude \(<\epsilon\) [13]. We note that this is a different \(\epsilon\) than the one used to describe DP guarantees. Figure 5 shows that the percentage of parameters with absolute values \(\approx 0\) during training is substantially lower for the equivariant network compared to the standard ResNet9 from [21]. Thus, more of the parameters in the network actually contribute to the network's prediction. Additionally, the corresponding sparse gradient vector (_i.e_. one with many close-to-zero entries) updates fewer weights, indicating that the model is closer to convergence, as it has incorporated nearly all the relevant information. The fact that gradient sparsity moreover increases sooner during training also indicates a more efficient information transfer at an early stage.
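A sketch of this sparsity metric as we read it (the threshold `eps` here is the sparsity tolerance from [13], unrelated to the privacy budget; the function name is ours):

```python
import numpy as np

def l0_star_fraction(vectors, eps=1e-6):
    """Fraction of entries with magnitude below eps, across a list of
    parameter or gradient arrays; higher values mean a sparser model/update."""
    flat = np.concatenate([np.ravel(v) for v in vectors])
    return np.mean(np.abs(flat) < eps)

# toy usage: gradients of two layers
grads = [np.array([0.0, 3e-7, 0.2]), np.zeros((2, 2))]
print(l0_star_fraction(grads))  # 6 of 7 entries are (near-)zero
```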
We consider a more in-depth investigation of the role of sparsity in DP a promising direction for future work.

Figure 4: Guided Backpropagation and Guided GradCAM heatmaps for (A) the conventional and (B) the equivariant model on a sample from CIFAR-10.

Figure 5: The Equivariant-ResNet9 has a higher \(\ell_{0}^{*}\) "norm", indicating the percentage of gradients updating parameters during training, while also having a substantially lower percentage of parameters with values \(<\epsilon\) under \((8,10^{-5})\)-DP.

\begin{table} \begin{tabular}{l l l} \hline \hline & Model & Brier score \\ \hline _Klause et al_. (2022) & ResNet9 & 0.041 \\ _Ours_ & Equivariant-ResNet9 & **0.030** \\ \hline \hline \end{tabular} \end{table} Table 3: Compared to the standard ResNet9, the Equivariant-ResNet9 exhibits better model calibration, shown by a lower Brier score averaged across the test set under \((8,10^{-5})\)-DP.

## 6 Conclusion

The broad applicability of private deep learning has so far been impeded by privacy-utility trade-offs. Recent works have partially addressed these limitations and presented techniques to bridge the accuracy gap, but introduced a new trade-off between accuracy and efficiency. Ultimately, we contend that _both_ trade-offs must be addressed to facilitate large-scale research in DP deep learning. The remarkable performance gains equivariant CNNs enable are an important step towards the democratization of DP deep learning research and the widespread deployment of private models on low-power devices (_e.g_. mobile phones) where DP is often needed the most. We are hopeful that our work will spark interest in the highly promising intersection of equivariant deep learning and DP and have noted numerous possible future work directions above. After all, we regard the provision of formal guarantees of model behavior, which both equivariance and DP represent, as a solid foundation for systems truly deserving the label of "trustworthy AI".
2310.05027
Rigid Clumps in the MercuryDPM Particle Dynamics Code
Discrete particle simulations have become the standard in science and industrial applications exploring the properties of particulate systems. Most such simulations rely on the concept of interacting spherical particles to describe the properties of particulates, although the correct representation of the nonspherical particle shape is crucial for a number of applications. In this work we describe the implementation of clumps, i.e. assemblies of rigidly connected spherical particles, which can approximate given nonspherical shapes, within the \textit{MercuryDPM} particle dynamics code. The \textit{MercuryDPM} contact detection algorithm is particularly efficient for polydisperse particle systems, which is essential for multilevel clumps approximating complex surfaces. We employ the existing open-source \texttt{CLUMP} library to generate clump particles. We detail the pre-processing tools providing the necessary initial data, as well as the necessary adjustments of the algorithms of contact detection, collision/migration and numerical time integration. The capabilities of our implementation are illustrated for a variety of examples.
Igor Ostanin, Vasileios Angelidakis, Timo Plath, Sahar Pourandi, Anthony Thornton, Thomas Weinhart
2023-10-08T06:15:16Z
http://arxiv.org/abs/2310.05027v2
# Rigid Clumps in the _MercuryDPM_ Particle Dynamics Code

###### Abstract

Discrete particle simulations have become the standard in science and industrial applications exploring the properties of particulate systems. Most such simulations rely on the concept of interacting spherical particles to describe the properties of particulates, although the correct representation of the nonspherical particle shape is crucial for a number of applications. In this work we describe the implementation of clumps, i.e. assemblies of rigidly connected spherical particles, which can approximate given nonspherical shapes, within the _MercuryDPM_ particle dynamics code. The _MercuryDPM_ contact detection algorithm is particularly efficient for polydisperse particle systems, which is essential for multilevel clumps approximating complex surfaces. We employ the existing open-source CLUMP library to generate clump particles. We detail the pre-processing tools providing the necessary initial data, as well as the necessary adjustments of the algorithms of contact detection, collision/migration and numerical time integration. The capabilities of our implementation are illustrated for a variety of examples.

## 1 Introduction

### Overview and scope

Rigid assemblies of spherical particles [1; 2] are an important tool for simulating materials consisting of irregularly shaped particles with the discrete element method (DEM). The alternative approaches that are often employed to model non-spherical particles in DEM [3; 4] have certain limitations: polyhedral particle shapes [3] make it difficult to generalize the wide set of well-established contact models for spherical particles, while superquadrics [4] do not offer a sufficiently general particle-shape representation toolkit. As a result, almost all modern commercial DEM codes, e.g. EDEM [5] or PFC [6], include functionality to model rigid assemblies of spherical particles. However, as will be demonstrated below, the implementation of rigid clumps in DEM introduces ambiguities that are hard to interpret when the source code and exact implementation details are unavailable. We seek to fill this gap, presenting a fully functional, well-documented and completely open-source implementation of rigid particle assemblies within the _MercuryDPM_ [7] particle dynamics code, utilizing the CLUMP library [8] for particle generation. Below we provide a brief overview of the _MercuryDPM_ particle dynamics engine and discuss the notion of a _rigid clump_ - a rigid assembly of spherical particles - as it will be used in this paper. In the following sections we take a closer look at the necessary theoretical background, the implementation details and examples of using rigid clumps in numerical simulations with _MercuryDPM_.

### MercuryDPM particle dynamics code

_MercuryDPM_ [9] is an open-source realization of DEM. It is mainly used to simulate granular materials - collections of discrete particles that can be found in many natural and artificial settings. Examples include snow, sand, soil, coffee, rice, coal, pharmaceutical tablets, catalysts, and animal feed. Understanding the behavior of such materials is crucial for industries like pharmaceuticals, mining, food processing, and manufacturing. The development of the code started in 2009 at the University of Twente, and since then it has grown into a large framework with a wide open-source community of academic and industrial users. The core development team is still located at the University of Twente.
_MercuryDPM_ is a versatile, object-oriented C++ code that is built and tested using the capabilities of cmake/ctest. The code possesses three primary features enabling it to simulate complex industrial and natural scenarios: (i) a flexible implementation allowing complex walls and boundary conditions; (ii) an analysis toolkit, able to extract the most relevant information from the large amount of data generated by these simulations; (iii) an advanced contact detection scheme that makes _MercuryDPM_ particularly efficient for highly polydisperse particle systems [10; 7]. The latter feature is particularly interesting in the context of simulating clumps, since a fine representation of the shape of a non-spherical particle often requires highly polydisperse clumps. _MercuryDPM_ normally operates with spherical particles (discrete elements), characterized by their mass, radius, position, velocity and angular velocity. _MercuryDPM_ also offers support for superquadric particles [7]. The Velocity Verlet time integration algorithm is utilised to update the positions of each particle, while the forward Euler algorithm is employed for particle rotations. Particle interactions are governed by a wide variety of contact models, which describe the physical laws used to compute the normal and tangential forces resulting from particle contacts.

### Rigid clumps

By a _rigid clump_ (or just _clump_) we mean an aggregate of \(N\) rigid spherical particles of a given density that are rigidly linked to each other at given relative translational and rotational positions (Fig. 1). The constituent particles of a clump will be referred to as _pebbles_. The clump is a _rigid body_ possessing 6 degrees of freedom. Therefore, in 3D, the number of constraints that are implicitly introduced on the relative translational and rotational positions of the particles is \(6(N-1)\). The pebbles may (or may not) have overlaps, introducing volumes within a clump that belong to more than one pebble. It is therefore impossible to algebraically sum the inertia of the clump over pebbles for a system of overlapping pebbles representing a complex-shaped particle. Our approaches to computing the inertia of clumps are discussed below. Our implementation builds on the multispheres featured in the earlier versions of the code (see section 6.2 in [7]). However, the functionality and performance of the implementation have been significantly expanded and improved via the incorporation of multiple new features, architectural improvements and bug fixes. The new implementation makes it possible to address a wide class of problems that previously remained out of reach - large simulation model sizes, arbitrarily complex clump geometries, complex (e.g. moving periodic) boundary conditions, etc.

## 2 Clump geometry generation with the CLUMP software

The CLUMP software [8] has been developed recently to address the problem of automatic generation of rigid clump particles by approximation of polyhedral shapes. _MercuryDPM_ provides the necessary interface to use CLUMP-generated particles in DEM simulations. This section offers an overview of the main features of CLUMP and the underlying clump generation methodologies. The open-source CLUMP software (Code Library to generate Universal Multi-sphere Particles) [8] is used to create clump representations of irregular particle geometries. This software takes as input shape/imaging data of various types, such as point clouds, surface meshes (e.g.
in the form of stereolithography/stl files), tetrahedral meshes and labelled three-dimensional images derived via Computed Tomography. Based on this input, three clump generation methods, proposed in [11], [12] and [8], are implemented in the software to create clump representations of the particles. The method of [11], one of the historically first clump generation methods, is implemented in the software to generate clumps of axisymmetric bodies. Although the original paper introducing the method [11] does not delineate a way of generating clumps of real particles, the implementation in CLUMP offers the capability of achieving this via the following steps: a particle geometry is loaded from imaging data and its inertia tensor is calculated; the principal inertia values and principal directions (PDs) are determined and the particle is oriented to its PDs; then a user-defined number of spheres is generated along the longest particle dimension, the size of which is decided so as to approximate the shape of the input particle; last, the clump is oriented back to the original orientation of the input particle. This method can generate elongated and compact particles of limited elongation, but cannot generate particles with pronounced flat features. For irregular particles that do not display axisymmetric features, the method described in [12] is an efficient way to generate clumps based on a triangulated mesh representation of the particle surface (i.e. made of vertices and triangular faces). The method first calculates the normal vector of each vertex as the average of the adjacent face normal vectors; then, a random vertex is selected and a tangent sphere is grown internally within the particle until it intersects one of the other particle vertices; the process is repeated until a sought number of spheres is generated. If imaging data are given in a different format, e.g. via Computed Tomography, this is handled internally within CLUMP, via transformation of the data to a surface mesh. The simplicity of the method makes it appealing and computationally efficient, but the random selection of vertices can lead to inadequate clumps for small numbers of spheres per particle. In such cases, for the same number of spheres the algorithm generates clumps of vastly different characteristics, as there is no rationale behind the random selection of vertices. As a result, for these cases there is no correlation between the employed number of spheres and the achieved morphological fidelity. However, if a large number of spheres is considered computationally affordable by the modeller (e.g. in [12] up to 5500 spheres were considered), this method generates clumps with reduced artificial surface roughness, as reported in [12]. A new clump generation technique was recently proposed as part of CLUMP [8], which relies on the Euclidean transform of three-dimensional images. A particle shape is either imported directly from binarized (or labelled) images, or transformed into a three-dimensional image from other data types (e.g. from surface mesh data); the Euclidean transform of the image is calculated, and the maximum value of the transform determines the location and radius of the largest possible inscribed sphere that fits in the particle.
Figure 1: Rigid clump and its inertial properties – conceptual illustration

This sphere is considered as the first sphere; the voxels corresponding to a percentage of this sphere are deactivated from the original image, leading to a residual image (the original minus a percentage of the sphere voxels); then, the Euclidean transform of this new residual image is used to calculate the next sphere; the process is repeated until a user-defined required number of spheres is generated or a user-defined minimum radius is reached. This technique has the clear advantage that each new sphere is generated at the position where the mass of the particle is least represented, thus creating a clear correlation between the number of spheres (a descriptor associated with computational cost) and the achieved morphological similarity (a descriptor of simulation fidelity). With this method, each sphere is of equal or smaller size than the previous one, and so particle generation is performed in a systematic and predictable way. If all the voxels of a sphere are deactivated after each iteration of the method, the method results in clusters of non-overlapping spheres, while if only a percentage of each sphere is deactivated, clumps of overlapping spheres are generated, as delineated in [8]. The drawback of the method is its high cost in terms of memory consumption (though still manageable even for a regular desktop computer).
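The core of this technique, extracting the largest inscribed sphere from a binary voxel image via the Euclidean distance transform, can be sketched in a few lines. The sketch below is our own simplified illustration of the non-overlapping variant (full deactivation of each extracted sphere); the function name and parameters are ours:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def extract_spheres(mask, n_spheres, r_min=1.0):
    """mask: 3D boolean array (True = inside the particle).
    Returns a list of (center_index, radius_in_voxels)."""
    mask = mask.copy()
    spheres = []
    grids = np.indices(mask.shape)  # coordinate grids, reused for carving
    for _ in range(n_spheres):
        edt = distance_transform_edt(mask)   # distance to nearest outside voxel
        r = edt.max()                        # radius of largest inscribed sphere
        if r < r_min:
            break                            # stop at the minimum-radius criterion
        c = np.unravel_index(np.argmax(edt), edt.shape)
        spheres.append((c, r))
        # deactivate the sphere's voxels so the next sphere fills the residual
        dist2 = sum((g - ci) ** 2 for g, ci in zip(grids, c))
        mask &= dist2 > r ** 2
    return spheres
```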
Choosing the optimal or preferred particle generation technique lies with the user, as different applications and different particle types pose different requirements in terms of the employed particle characteristics. In terms of efficiency, all of the aforementioned techniques perform well, mainly due to their algorithmic simplicity, allowing for the generation of several hundred particles within a few minutes for input imaging data of reasonable resolution and size.

## 3 Rigid clumps in _MercuryDPM_

### General organization

The rigid clump functionality in _MercuryDPM_ is currently implemented as a multilevel structure. The logic of unification of pebbles into a clump, as well as the time integration algorithms, are implemented in the class ./Kernel/particles/ClumpParticle.h/cc, inherited from the abstract nonspherical particle class ./Kernel/particles/NonSphericalParticle.h/cc, which in turn is inherited from the base particle class ./Kernel/particles/BaseParticle.h/cc. It is expected that the functions inherent to all types of nonspherical particles (e.g. rigid dynamics time integration) will in the future be located in the class ./Kernel/particles/NonSphericalParticle.h/cc. The CLUMP software, described above, is used to generate the positions and radii of the pebbles that describe a given nonspherical shape. The CLUMP tool provides the pebble data, which, along with the optionally provided initial stl-format shape of the clump, constitutes the input of the MClump pre-processing tool (part of _MercuryDPM_ [13]). Alternatively, the pebble data for MClump can be generated manually. MClump centers and rotates the clump, aligning its principal axes with the global Cartesian axes, and computes the clump's inertia using the prescribed algorithm (summation over pebbles, summation over voxels, or summation over tetrahedrons using the stl representation) - see the description below. Fig. 2 details the modes of operation of MClump. In the first mode, MClump imports a list of pebbles and then performs all the necessary computations (center of mass (COM), volume, tensor of inertia (TOI), principal directions) based on summation over pebbles, as discussed in Subsection 3.3.1. In the second mode, MClump imports a list of pebbles, but performs the inertia computations on a voxel grid, excluding the extra contributions of pebble overlaps (Subsection 3.3.2). In the third mode, MClump receives the triangulated surface of a nonspherical particle, as well as its clumped-sphere approximation generated by the external tool (the CLUMP library), and computes the necessary properties (Subsection 3.3.3). Headers for the driver files ./Drivers/Clump/ClumpHeaders/ClumpIO.h and ./Drivers/Clump/ClumpHeaders/Mercury3DClump.h introduce the necessary features and modifications of the _MercuryDPM_ virtual members, enabling clump dynamics, namely:

* The modifications of the _MercuryDPM_ engine changing the logic of application of contact forces and moments, as well as of external forces (e.g. gravity).
* The adjustment of the logic of interaction of the clump and its pebbles with periodic boundaries.
* The import tool that loads all the data of the available clump instances, including the clump volume, TOI and the list of pebbles.
* Clump distribution generation functions that create distributions of non-overlapping rotated clumps in a given spatial domain.

Driver files (compiled simulation descriptions, see [7] for details) utilize these tools to load the list of clump instances generated by MClump and, using them, generate the necessary distributions of clumps and compute their dynamics.

### Clump creation logic

The unification of particles into rigid clumps occurs by assigning to every particle instance the role of either a "clump" particle or a "pebble" particle. Specifically, every instance of the BaseParticle class has Boolean attributes (flags) isClump and isPebble. The "pebble" instances have isClump = False, isPebble = True. All the "clump" (container) instances have isClump = True, isPebble = False. Regular spherical particles have isClump = False, isPebble = False. Depending on the flags, these three types of particles2 have different behavior in contact detection, migration over boundaries, etc. Namely, for a clump particle the interactions are treated at the pebble level, while the time integration of motion occurs at the clump level. Stiff pebble-pebble interactions are assumed, so that a clump-clump contact is always represented by a single pebble-pebble contact. In the case of (non-physical) multiple contacts between close pebbles of interacting clumps, the corresponding effective increase in stiffness should be accounted for [14]. The motion of the pebbles is prescribed according to the translation and rotation of the corresponding clump. Clumps and pebbles have some other differences in behavior, e.g. in the context of interaction with periodic boundaries - see the discussion below.

Footnote 2: The fourth combination (isClump = True, isPebble = True) is explicitly excluded in the relevant particle class methods.

### Computing inertial properties of a clump

Defining the inertial properties of a clump is a non-trivial problem. The analytical treatment is possible in the case of absent overlaps (direct summation over pebbles, as implemented earlier [7]) and of overlaps between no more than two spherical pebbles (summation over pebbles and subtraction of "cap" segments [15]).
In our implementation, we use three different approaches to compute the mass and TOI of complex-shaped particles: summation over the pebbles, summation over the voxels and summation over the tetrahedrons. Fig. 3 gives a qualitative idea of these representations of the volume of a non-spherical particle. Let us take a closer look at each of these approaches.

Figure 2: Modes of operation of the MClump tool.

Figure 3: Representation of a non-spherical shape as (A) a triangulated surface, (B) a rigid clump of spherical particles, (C) a 3D array of voxels.

#### 3.3.1 Summation over pebbles

This method of computation works if the pebbles do not overlap, or if we presume that the inertial properties of a clump are defined by the total mass of the pebbles. In this case the total mass and TOI can be directly summed over the spherical pebbles using mass conservation and Steiner's theorem. Given the density of the pebbles \(\rho\), their radii \(r_{j}\) and positions in a Cartesian system \(\mathbf{x}_{j}=(x_{j},y_{j},z_{j})\), we first find the mass of the clump and the position of the center of mass: \[M=\sum m_{j}=\sum\frac{4}{3}\pi r_{j}^{3}\rho \tag{1}\] \[\mathbf{x}_{c}=\frac{1}{M}\sum m_{j}\mathbf{x}_{j} \tag{2}\] At the next step, we shift the center of the coordinate system to the center of mass: \[\mathbf{x}_{j}:=\mathbf{x}_{j}-\mathbf{x}_{c} \tag{3}\] Then we compute the TOI by summing over pebbles: \[\mathbf{I}=\sum\mathbf{I}_{j} \tag{4}\] \[\mathbf{I}_{j}=m_{j}\begin{pmatrix}\frac{2}{5}r_{j}^{2}+y_{j}^{2}+z_{j}^{2}&-x_{j}y_{j}&-x_{j}z_{j}\\ -x_{j}y_{j}&\frac{2}{5}r_{j}^{2}+x_{j}^{2}+z_{j}^{2}&-y_{j}z_{j}\\ -x_{j}z_{j}&-y_{j}z_{j}&\frac{2}{5}r_{j}^{2}+x_{j}^{2}+y_{j}^{2}\end{pmatrix} \tag{5}\] Given the above-mentioned assumptions, this method is precise.

#### 3.3.2 Summation over voxels

If the pebbles overlap and the clump was not generated from a triangulated surface, we use a voxel discretization to compute the mass and TOI of a clump. The bounding box encapsulating every point of the clump is expanded to the cubic box \((x_{b},x_{b}+Nd,y_{b},y_{b}+Nd,z_{b},z_{b}+Nd)\) of minimal size, which is split into cubic voxels of side \(d\), defined by the specified number of voxels \(N\) along the side of the bounding box. Then the mask \(\mathcal{M}(m,n,k)\) is introduced: \(\mathcal{M}(m,n,k)=1\) if the center of the voxel \(m,n,k\) is inside of at least one pebble, and \(\mathcal{M}(m,n,k)=0\) otherwise. The coordinates of the center of the voxel are found as \(\mathbf{x}(m,n,k)=(x_{b}+d(m+0.5),y_{b}+d(n+0.5),z_{b}+d(k+0.5))\). Then the mass and the COM are computed as \[M=\sum_{m}\sum_{n}\sum_{k}\mathcal{M}(m,n,k)\rho d^{3} \tag{6}\] \[\mathbf{x}_{c}=\frac{1}{M}\sum_{m}\sum_{n}\sum_{k}\mathcal{M}(m,n,k)\mathbf{x}(m,n,k)\rho d^{3} \tag{7}\] Next, we shift the center of the coordinate system to the center of mass: \[\mathbf{x}_{j}:=\mathbf{x}_{j}-\mathbf{x}_{c} \tag{8}\] Then we compute the TOI by summing over voxels: \[\mathbf{I}=\sum_{m}\sum_{n}\sum_{k}\mathbf{I}_{mnk} \tag{9}\] \[\mathbf{I}_{mnk}=\rho d^{3}\begin{pmatrix}y_{mnk}^{2}+z_{mnk}^{2}&-x_{mnk}y_{mnk}&-x_{mnk}z_{mnk}\\ -x_{mnk}y_{mnk}&x_{mnk}^{2}+z_{mnk}^{2}&-y_{mnk}z_{mnk}\\ -x_{mnk}z_{mnk}&-y_{mnk}z_{mnk}&x_{mnk}^{2}+y_{mnk}^{2}\end{pmatrix} \tag{10}\] The precision of such an estimate depends on the chosen resolution and the complexity of the shape. The method requires brute-force summation over \(\sim 10^{6}-10^{9}\) voxels; therefore, the pre-computation might take some time.
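Both summation schemes translate directly into code. The following self-contained numpy sketch implements Eqs. (1)-(5) and Eqs. (6)-(10) for a clump given as arrays of pebble positions and radii; it is our own illustration, not the MClump source:

```python
import numpy as np

def pebble_inertia(pos, rad, rho):
    """Eqs. (1)-(5): mass, COM and TOI by direct summation over pebbles
    (exact only if the pebbles do not overlap). pos: (N,3), rad: (N,)."""
    m = 4.0 / 3.0 * np.pi * rad ** 3 * rho
    M = m.sum()
    com = (m[:, None] * pos).sum(axis=0) / M
    r = pos - com                                   # shift origin to the COM
    I = np.zeros((3, 3))
    for mj, rj, xj in zip(m, rad, r):
        # Steiner's theorem: solid-sphere term plus the parallel-axis term
        I += mj * (2.0 / 5.0 * rj ** 2 * np.eye(3)
                   + np.dot(xj, xj) * np.eye(3) - np.outer(xj, xj))
    return M, com, I

def voxel_inertia(pos, rad, rho, N=128):
    """Eqs. (6)-(10): the same quantities on an N^3 voxel grid, so that
    overlapping volumes are counted only once."""
    lo = (pos - rad[:, None]).min(axis=0)
    d = ((pos + rad[:, None]).max(axis=0) - lo).max() / N   # cubic box
    centers = lo + d * (np.indices((N, N, N)).reshape(3, -1).T + 0.5)
    inside = np.zeros(len(centers), dtype=bool)   # the mask M(m,n,k)
    for p, r in zip(pos, rad):
        inside |= ((centers - p) ** 2).sum(axis=1) <= r ** 2
    pts = centers[inside]
    M = rho * d ** 3 * len(pts)
    com = pts.mean(axis=0)
    x = pts - com
    I = rho * d ** 3 * (np.sum(x ** 2) * np.eye(3) - x.T @ x)
    return M, com, I
```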
#### 3.3.3 Summation over tetrahedrons

If the clump is generated by approximation of a known triangulated surface, one can use the latter for an explicit calculation of the TOI [16]. In this case the TOI is computed by analytical summation over tetrahedrons. The COM of a tetrahedron \(j\) with the vertices \([\mathbf{a}_{j}^{1},\mathbf{a}_{j}^{2},\mathbf{a}_{j}^{3},\mathbf{a}_{j}^{4}]\) is given by: \[\mathbf{c}_{j}=\frac{\mathbf{a}_{j}^{1}+\mathbf{a}_{j}^{2}+\mathbf{a}_{j}^{3}+\mathbf{a}_{j}^{4}}{4} \tag{11}\] The volume of a tetrahedron \(j\) is given by \[V_{j}=\frac{1}{6}\begin{vmatrix}(a_{j}^{1})_{x}&(a_{j}^{1})_{y}&(a_{j}^{1})_{z}&1\\ (a_{j}^{2})_{x}&(a_{j}^{2})_{y}&(a_{j}^{2})_{z}&1\\ (a_{j}^{3})_{x}&(a_{j}^{3})_{y}&(a_{j}^{3})_{z}&1\\ (a_{j}^{4})_{x}&(a_{j}^{4})_{y}&(a_{j}^{4})_{z}&1\end{vmatrix} \tag{12}\] Here the volume \(V_{j}\) comes with a sign that depends on whether the normal \((\mathbf{a}_{3}-\mathbf{a}_{2})\times(\mathbf{a}_{4}-\mathbf{a}_{2})\) is directed into the half-space containing the vertex \(\mathbf{a}_{1}\) (negative sign) or vice versa. Given an arbitrary volume bounded by a triangulated surface \(\Gamma\) consisting of a set of triangles \(\mathbf{s}_{j}\), and an arbitrary point \(\mathbf{O}\), one can compute the COM of the volume as: \[\mathbf{x}_{c}=\frac{1}{V}\sum\mathbf{c}_{j}V_{j},\qquad V=\sum V_{j} \tag{13}\] where \(V_{j}\) and \(\mathbf{c}_{j}\) are the volume and COM of the tetrahedron \([\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}]=[\mathbf{O},\mathbf{s}_{j}^{1},\mathbf{s}_{j}^{2},\mathbf{s}_{j}^{3}]\). Similarly to the alternative approaches, we shift the coordinate system to match its origin with the computed clump's COM: \[\mathbf{x}_{j}:=\mathbf{x}_{j}-\mathbf{x}_{c} \tag{14}\] One can further compute the mass and TOI of the body with respect to its COM as the sum of the masses and moments of inertia of the constituent tetrahedrons: \[M=\rho\sum V_{j},\qquad\mathbf{I}=\sum\mathbf{I}_{j} \tag{15}\] Here the TOI of a tetrahedron with respect to its first vertex \(\mathbf{a}_{1}\), corresponding to the origin of the coordinate system and the clump's COM, is computed according to [16]: \[\mathbf{I}_{j}=\rho\begin{pmatrix}a&-c^{\prime}&-b^{\prime}\\ -c^{\prime}&b&-a^{\prime}\\ -b^{\prime}&-a^{\prime}&c\end{pmatrix} \tag{16}\] where \[\begin{split}a&=\int_{D}(y^{2}+z^{2})\,dD,\qquad b=\int_{D}(x^{2}+z^{2})\,dD,\qquad c=\int_{D}(x^{2}+y^{2})\,dD,\\ a^{\prime}&=\int_{D}yz\,dD,\qquad b^{\prime}=\int_{D}xz\,dD,\qquad c^{\prime}=\int_{D}xy\,dD,\end{split} \tag{17}\] where \(D\) is the tetrahedral domain. It is worth noting that the paper [16] has a known error [17], which is fixed in (16): the components \(b^{\prime}\) and \(c^{\prime}\) are erroneously swapped there. Denoting \([\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}]=[(x_{1},y_{1},z_{1})\), \((x_{2},y_{2},z_{2})\), \((x_{3},y_{3},z_{3})\), \((x_{4},y_{4},z_{4})]\), the integrals (17) are solved explicitly as: \[\begin{split} a&=V_{j}(y_{1}^{2}+y_{1}y_{2}+y_{2}^{2}+y_{1}y_{3}+y_{2}y_{3}+y_{3}^{2}+y_{1}y_{4}+y_{2}y_{4}+y_{3}y_{4}+y_{4}^{2}\\ &+z_{1}^{2}+z_{1}z_{2}+z_{2}^{2}+z_{1}z_{3}+z_{2}z_{3}+z_{3}^{2}+z_{1}z_{4}+z_{2}z_{4}+z_{3}z_{4}+z_{4}^{2})/10\end{split} \tag{18}\] \[\begin{split} b&=V_{j}(x_{1}^{2}+x_{1}x_{2}+x_{2}^{2}+x_{1}x_{3}+x_{2}x_{3}+x_{3}^{2}+x_{1}x_{4}+x_{2}x_{4}+x_{3}x_{4}+x_{4}^{2}\\ &+z_{1}^{2}+z_{1}z_{2}+z_{2}^{2}+z_{1}z_{3}+z_{2}z_{3}+z_{3}^{2}+z_{1}z_{4}+z_{2}z_{4}+z_{3}z_{4}+z_{4}^{2})/10\end{split} \tag{19}\] \[\begin{split} c&=V_{j}(x_{1}^{2}+x_{1}x_{2}+x_{2}^{2}+x_{1}x_{3}+x_{2}x_{3}+x_{3}^{2}+x_{1}x_{4}+x_{2}x_{4}+x_{3}x_{4}+x_{4}^{2}\\ &+y_{1}^{2}+y_{1}y_{2}+y_{2}^{2}+y_{1}y_{3}+y_{2}y_{3}+y_{3}^{2}+y_{1}y_{4}+y_{2}y_{4}+y_{3}y_{4}+y_{4}^{2})/10\end{split} \tag{20}\] \[\begin{split} a^{\prime}&=V_{j}(2y_{1}z_{1}+y_{2}z_{1}+y_{3}z_{1}+y_{4}z_{1}+y_{1}z_{2}+2y_{2}z_{2}+y_{3}z_{2}+y_{4}z_{2}+y_{1}z_{3}\\ &+y_{2}z_{3}+2y_{3}z_{3}+y_{4}z_{3}+y_{1}z_{4}+y_{2}z_{4}+y_{3}z_{4}+2y_{4}z_{4})/20\end{split} \tag{21}\] \[\begin{split} b^{\prime}&=V_{j}(2x_{1}z_{1}+x_{2}z_{1}+x_{3}z_{1}+x_{4}z_{1}+x_{1}z_{2}+2x_{2}z_{2}+x_{3}z_{2}+x_{4}z_{2}+x_{1}z_{3}\\ &+x_{2}z_{3}+2x_{3}z_{3}+x_{4}z_{3}+x_{1}z_{4}+x_{2}z_{4}+x_{3}z_{4}+2x_{4}z_{4})/20\end{split} \tag{22}\] \[\begin{split} c^{\prime}&=V_{j}(2x_{1}y_{1}+x_{2}y_{1}+x_{3}y_{1}+x_{4}y_{1}+x_{1}y_{2}+2x_{2}y_{2}+x_{3}y_{2}+x_{4}y_{2}+x_{1}y_{3}\\ &+x_{2}y_{3}+2x_{3}y_{3}+x_{4}y_{3}+x_{1}y_{4}+x_{2}y_{4}+x_{3}y_{4}+2x_{4}y_{4})/20\end{split} \tag{23}\] This method gives the precise TOI of the initial triangulated surface. It is worth noting that the formulae (13), (15) work for rather complex (non-convex, multiply connected) domains: if the absolute volume of the tetrahedrons is higher than the volume of the body, the extra volume is swept twice by tetrahedrons of positive and negative volume computed according to (12), which results in correct values for the body's total volume, mass, COM and TOI. The examples section compares the methods used in our work to compute the inertial properties of a clump.
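A compact numpy sketch of this signed-tetrahedron summation (our own illustration; the triangle vertices are assumed to be ordered so that the face normals point outward, yielding a positive total volume, and the reference point \(\mathbf{O}\) is taken as the origin):

```python
import numpy as np

def mesh_mass_properties(vertices, faces, rho):
    """Mass, COM and TOI of a closed triangulated surface by signed
    summation over tetrahedra [O, s1, s2, s3] with O at the origin.
    vertices: (V,3) float array; faces: (F,3) integer index array."""
    V_tot, com = 0.0, np.zeros(3)
    tets = []
    for tri in faces:
        a2, a3, a4 = vertices[tri]                 # a1 = O = origin
        v = np.dot(a2, np.cross(a3, a4)) / 6.0     # signed volume, Eq. (12)
        V_tot += v
        com += v * (a2 + a3 + a4) / 4.0            # Eq. (11) with a1 = 0
        tets.append((a2, a3, a4, v))
    com /= V_tot                                   # Eq. (13)

    I = np.zeros((3, 3))
    for a2, a3, a4, v in tets:
        # shift so the origin (the tetrahedron vertex a1) sits at the COM
        P = np.array([-com, a2 - com, a3 - com, a4 - com])
        # second moments: int x_i x_j dD = V/20 * (sum_k p_k p_k^T + s s^T),
        # which reproduces Eqs. (18)-(23) term by term
        C = (P.T @ P + np.outer(P.sum(axis=0), P.sum(axis=0))) * v / 20.0
        I += np.trace(C) * np.eye(3) - C           # Eqs. (16)-(17)
    return rho * V_tot, com, rho * I
```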
### Computing the clump's PDs

The principal axes of inertia \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) are found as the eigenvectors of \(\mathbf{I}\): \[\mathbf{I}\mathbf{e}_{i}=\lambda_{i}\mathbf{e}_{i} \tag{24}\] The PDs are assured to form a _right-handed_ Cartesian basis. Once the PDs of the clump's TOI are computed, the clump instance is rotated to align its PDs with the Cartesian axes: \[\mathbf{x}:=\mathbf{Q}\mathbf{x} \tag{25}\] \[\mathbf{I}:=\mathbf{Q}\mathbf{I}\mathbf{Q}^{T} \tag{26}\] where \(\mathbf{Q}\) is the rotation matrix defined as \[\mathbf{Q}=\begin{pmatrix}\mathbf{n}_{1}\cdot\mathbf{e}_{1}&\mathbf{n}_{2}\cdot\mathbf{e}_{1}&\mathbf{n}_{3}\cdot\mathbf{e}_{1}\\ \mathbf{n}_{1}\cdot\mathbf{e}_{2}&\mathbf{n}_{2}\cdot\mathbf{e}_{2}&\mathbf{n}_{3}\cdot\mathbf{e}_{2}\\ \mathbf{n}_{1}\cdot\mathbf{e}_{3}&\mathbf{n}_{2}\cdot\mathbf{e}_{3}&\mathbf{n}_{3}\cdot\mathbf{e}_{3}\end{pmatrix} \tag{27}\] where \(\mathbf{n}_{i}\) are the unit vectors of the global Cartesian coordinate system, and \(\mathbf{e}_{i}\) are the unit vectors of the clump's eigendirections; with this convention, the transformed tensor in (26) is diagonal.
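A short numpy sketch of this alignment step, including the right-handedness fix (our own illustration, not the MClump source):

```python
import numpy as np

def align_principal_axes(I, points):
    """Diagonalize the TOI and rotate a set of points (e.g. pebble centers,
    already shifted to the COM) so that the principal directions coincide
    with the global Cartesian axes, Eqs. (24)-(27)."""
    lam, E = np.linalg.eigh(I)          # Eq. (24); columns of E are the e_i
    if np.linalg.det(E) < 0:            # enforce a right-handed basis
        E[:, 2] = -E[:, 2]
    Q = E.T                             # rows of Q are e_i, i.e. Q_ij = n_j . e_i
    points_rot = points @ Q.T           # Eq. (25): x := Q x for every point
    I_rot = Q @ I @ Q.T                 # Eq. (26): diag(lam) up to rounding
    return lam, Q, points_rot, I_rot
```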
### Equations of motion of a rigid clump

Once we have procedures that compute the overall force \(\mathbf{F}\) and moment \(\mathbf{M}\) acting on the clump, we can solve the equations of motion using one of the schemes of numerical integration. For the translational motion of a clump, we use the velocity Verlet algorithm that does not differ from the one employed for spherical particles, given that the particle mass is the mass of the clump. Below we consider the equations of motion for rotational degrees of freedom. In the case when the TOI is non-spherical (the principal moments of inertia are not equal) the rotational dynamics is described by the Euler equations:

\[I_{ii}\dot{\omega}_{i}-I_{ij}\dot{\omega}_{j}+\epsilon_{ijk}\omega_{j}(I_{kk}\omega_{k}-I_{kl}\omega_{l})=M_{i};\quad(i\neq j,\;l\neq k) \tag{28}\]

The non-spherical TOI \(I_{ij}\) is computed based on one of the algorithms discussed above.

### Time integration of the EoM of a rigid clump

The time integration scheme used in our code utilizes a leap-frog algorithm for the time integration of the motion of a non-spherical particle, similar to the one utilized in PFC 4.0 [18]. We track the orientation in the form of a rotation matrix \(\mathbf{Q}\) that is used to reconstruct the current orientation of the local coordinate system and the positions of pebbles. Equation (28) is solved using a finite-difference procedure of the second order, computing the angular velocities \(\omega_{j}\) at mid-intervals \(t+\Delta t/2\), and all other quantities at primary intervals \(t+\Delta t\). Equation (28) can be re-written in matrix form as

\[\mathbf{M}-\mathbf{W}=\mathbf{I}\dot{\boldsymbol{\omega}} \tag{29}\]
\[\mathbf{M}=\begin{pmatrix}M_{1}\\ M_{2}\\ M_{3}\end{pmatrix}\]
\[\mathbf{W}=\begin{pmatrix}(I_{33}-I_{22})\omega_{2}\omega_{3}+I_{23}\omega_{3}\omega_{3}-I_{32}\omega_{2}\omega_{2}-I_{31}\omega_{1}\omega_{2}+I_{21}\omega_{1}\omega_{3}\\ (I_{11}-I_{33})\omega_{3}\omega_{1}+I_{31}\omega_{1}\omega_{1}-I_{13}\omega_{3}\omega_{3}-I_{12}\omega_{2}\omega_{3}+I_{32}\omega_{2}\omega_{1}\\ (I_{22}-I_{11})\omega_{1}\omega_{2}+I_{12}\omega_{2}\omega_{2}-I_{21}\omega_{1}\omega_{1}-I_{23}\omega_{3}\omega_{1}+I_{13}\omega_{3}\omega_{2}\end{pmatrix}\]
\[\mathbf{I}=\begin{pmatrix}I_{11}&-I_{12}&-I_{13}\\ -I_{21}&I_{22}&-I_{23}\\ -I_{31}&-I_{32}&I_{33}\end{pmatrix}\]

We use equation (29) to compute the values of \(\omega_{i}(t+\Delta t/2)\) and \(\dot{\omega}_{i}(t+\Delta t)\). Following the approach suggested in [18] we use the iterative algorithm to find these unknowns:

* Set \(n=0\)
* Set \(\omega_{i}^{[0]}\) to the initial angular velocity.
* (*) Solve (29) for \(\dot{\omega}_{i}\)
* Determine a new (intermediate) angular velocity: \(\omega_{i}^{[new]}=\omega_{i}^{[0]}+\dot{\omega}_{i}^{[n]}\Delta t\)
* Revise the estimate of \(\omega_{i}\) as: \(\omega_{i}^{[n+1]}=0.5(\omega_{i}^{[0]}+\omega_{i}^{[new]})\)
* Set \(n:=n+1\) and go to (*)

This algorithm gives us the value of the angular velocity that is further used to update the position at the second step of the leap-frog algorithm. The number of steps necessary for sufficient precision varies depending on the application and is usually chosen in the range of \(2-5\). The described approach is rather general, which potentially allows extension of the notion of clumps to quite a wide set of pebble entities, including particles that do not track their orientations [18]. A minimal sketch of this iterative update is given below.
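The following self-contained sketch implements the iteration above (illustrative code, not the MercuryDPM kernel). It solves Eq. (29) with a direct 3x3 Cramer solve; the gyroscopic term \(\mathbf{W}\) is computed compactly as \(\boldsymbol{\omega}\times(\mathbf{I}\boldsymbol{\omega})\), which reproduces the three components written out above.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

static double det3(const Mat3& A) {
    return A[0][0]*(A[1][1]*A[2][2]-A[1][2]*A[2][1])
         - A[0][1]*(A[1][0]*A[2][2]-A[1][2]*A[2][0])
         + A[0][2]*(A[1][0]*A[2][1]-A[1][1]*A[2][0]);
}

// Solve the 3x3 system A x = b by Cramer's rule (sufficient for Eq. (29))
static Vec3 solve3(const Mat3& A, const Vec3& b) {
    const double d = det3(A);
    Vec3 x{};
    for (int c = 0; c < 3; ++c) {
        Mat3 Ac = A;
        for (int r = 0; r < 3; ++r) Ac[r][c] = b[r];
        x[c] = det3(Ac) / d;
    }
    return x;
}

// Gyroscopic term W of Eq. (29), written as omega x (I * omega)
static Vec3 gyro(const Mat3& I, const Vec3& w) {
    Vec3 Iw{};
    for (int r = 0; r < 3; ++r)
        Iw[r] = I[r][0]*w[0] + I[r][1]*w[1] + I[r][2]*w[2];
    return {w[1]*Iw[2]-w[2]*Iw[1],
            w[2]*Iw[0]-w[0]*Iw[2],
            w[0]*Iw[1]-w[1]*Iw[0]};
}

// One iterative angular-velocity update over a time step dt, per the bullets
Vec3 updateOmega(const Mat3& I, const Vec3& M, const Vec3& omega0,
                 double dt, int iterations = 3) {
    Vec3 wEst = omega0;   // current estimate used to evaluate W
    Vec3 wNew = omega0;
    for (int n = 0; n < iterations; ++n) {
        const Vec3 W  = gyro(I, wEst);
        const Vec3 dw = solve3(I, {M[0]-W[0], M[1]-W[1], M[2]-W[2]});
        for (int k = 0; k < 3; ++k) {
            wNew[k] = omega0[k] + dw[k]*dt;       // intermediate velocity
            wEst[k] = 0.5*(omega0[k] + wNew[k]);  // revised estimate
        }
    }
    return wNew;
}
```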
However, the algorithm is inferior in terms of precision and performance compared to modern rigid-body integrators [19; 20] because of the overhead related to solving the equations of motion in the inertial frame; this can be significant for clumps consisting of small numbers of pebbles, when the duration of the rigid-body integration is non-negligible compared to the duration of updating the positions of pebbles.

### Interaction of clump particles with periodic boundaries

The complete description of the logic of interaction of spherical particles (classes BaseParticle, SphericalParticle) and periodic boundaries can be found in [21]. This logic had to be adjusted for rigid clumps. Below we briefly describe the corresponding modifications. The original scheme utilizes the concept of primary particles and "ghost" particles that are introduced to represent interactions across periodic boundaries. "Ghost" particles are created when the primary particle closely approaches the periodic boundary, and "switch" status with the primary particle when the migration over the periodic boundary occurs. Our implementation introduced a few minor modifications to this scheme to ensure correct treatment of rigid clumps in a periodic box:

* "Clump" particles are never erased/created in the course of the simulation. They migrate over the periodic boundary seamlessly by direct specification of the position property. This way the necessity of sending the "pebble" pointers between "clumps" is avoided.
* The "ghost" particles for "clumps" do not exist, since no interaction is treated at the level of "clump" particles.
* Each "pebble" particle stores the vector connecting the center of its "clump" particle (the clump's COM) and the center of the "pebble" particle.

These adjustments, introduced in /Drivers/Clump/ClumpHeaders/Mercury3DClump.h, provide full functionality of all types of periodic boundaries implemented earlier in _MercuryDPM_.

### Random generation of non-overlapping clumps

It is often necessary to create rigid clumps with random initial orientation. In order to provide equal probability of every orientation, we use the following scheme of random clump rotation: we first rotate the clump instance counterclockwise about the \(n_{3}\) direction by the angle \(\alpha\), and then rotate the clump to match its principal direction \(n_{3}\) with the random vector on a unit sphere \((\theta,\phi)\) in a spherical coordinate system: \(n_{3}^{rot}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\). The random values of \(\alpha,\phi\) are chosen uniformly in the range \((0,2\pi)\), while the angle \(\theta\) is chosen as \(\arccos(p)\), where \(p\) is uniformly distributed in \((-1,1)\). This choice of random orientation angles ensures equal probability of every possible clump orientation (a code sketch of this scheme is given below). In order to place a new clump into the deposition domain without overlaps with previously placed clumps, a straightforward check is used to ensure that no pebble of the newly deposited clump overlaps with any pebble of an existing clump.

### Modifications of energy computing routines

The routines computing the rotational and translational kinetic energy of the clump, as well as its potential gravitational energy, had to be straightforwardly adjusted to reflect the correct inertial/gravitational properties of a clump, computed as detailed above.
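Returning to the random orientation scheme above, the following minimal sketch builds the corresponding rotation matrix (illustrative code, not the MercuryDPM API). The area-uniform choice \(\theta=\arccos(p)\) avoids the clustering near the poles that a uniform \(\theta\) would produce.

```cpp
#include <array>
#include <cmath>
#include <random>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

static Mat3 matmul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) C[i][j] += A[i][k]*B[k][j];
    return C;
}

// Rodrigues rotation matrix for a unit axis u and angle t
static Mat3 axisAngle(const Vec3& u, double t) {
    const double c = std::cos(t), s = std::sin(t), v = 1.0 - c;
    return {{{c+u[0]*u[0]*v,      u[0]*u[1]*v-u[2]*s, u[0]*u[2]*v+u[1]*s},
             {u[1]*u[0]*v+u[2]*s, c+u[1]*u[1]*v,      u[1]*u[2]*v-u[0]*s},
             {u[2]*u[0]*v-u[1]*s, u[2]*u[1]*v+u[0]*s, c+u[2]*u[2]*v}}};
}

Mat3 randomOrientation(std::mt19937& rng) {
    std::uniform_real_distribution<double> U01(0.0, 1.0);
    const double pi    = std::acos(-1.0);
    const double alpha = 2.0*pi*U01(rng);              // spin about n3
    const double phi   = 2.0*pi*U01(rng);              // azimuth of new n3
    const double theta = std::acos(2.0*U01(rng)-1.0);  // polar angle, area-uniform
    // Tilting by theta about the in-plane axis (-sin phi, cos phi, 0) carries
    // n3 = (0,0,1) onto (sin theta cos phi, sin theta sin phi, cos theta).
    const Mat3 spin = axisAngle({0.0, 0.0, 1.0}, alpha);
    const Mat3 tilt = axisAngle({-std::sin(phi), std::cos(phi), 0.0}, theta);
    return matmul(tilt, spin);  // spin first, then tilt
}
```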
## 4 Examples

### Computation of TOI - precision of the summation

This brief example illustrates the precision of our approaches used to compute the mass and tensor of inertia of the clumps. The test model consists of two spherical pebbles of unit radius, with centers separated by one diameter of a pebble (Fig. 4(A) gives the model represented with pebbles, tetrahedrons and voxels). This simple model allows immediate exact evaluation of the inertial properties of this non-spherical, non-convex shape. The mass of the clump, as well as its major and minor moments of inertia, are then evaluated with tetrahedral and voxel discretizations. The vertices of the tetrahedrons are the origin \((0,0,0)\) and the triangles constructed by equispaced angular subdivision of each pebble sphere into \(N\) equal segments along the latitude angle \(\theta\in(0,\pi)\) and \(2N\) segments along the azimuth angle \(\phi\in(0,2\pi)\) (see Fig. 4(A)). For the voxels, the refinement degree \(N\) is defined as the number of voxels along the diameter of a pebble. Fig. 4(B) demonstrates the convergence of the relative error in the computation of mass and principal moments of inertia with the degree of model refinement \(N\). We can clearly see that the error is inversely proportional to \(N\), both for tetrahedron and voxel discretization. The latter, however, features significant chaotic error, which suggests the necessity of further improvement of the algorithm.

Figure 4: (A) The model of a clump under test, represented with spherical pebbles, tetrahedrons and voxels. (B) Relative error in computation of the clump's mass (\(\Delta M\)), major (\(\Delta I_{1}\)) and minor (\(\Delta I_{3}\)) principal components of inertia, as a function of the model refinement \(N\) (see the definitions above), for tetrahedrons (top) and voxels (bottom).

### Dynamics of a single particle - energy equipartition

The simple simulation depicted in Fig. 5(A) is located at Drivers/Clump/Single/Single.cpp. An elastic, rod-like particle is placed into a cubic box with elastic walls (no friction, no dissipation, a linear contact model is employed). At the initial moment of the simulation, the particle is assigned the initial translational velocity \(V\), orientation along the x axis and zero initial angular velocity \(w\). After a few collisions, the alignment of the particle with the x axis breaks, and each subsequent collision causes redistribution of energy between translational and rotational degrees of freedom (Fig. 5(B)). Over a long enough timeline we see energy equipartition between the available degrees of freedom. For example, if the particle bounces strictly along the \(y\) axis between two elastic walls, and rotates around its principal axis co-oriented with \(z\), it has only one translational and one rotational degree of freedom. We can therefore foresee that the equipartition will manifest itself with a ratio of 1 between the translational kinetic energy \(mv^{2}/2\) and the rotational kinetic energy \(I\omega^{2}/2\) in a sufficiently long simulation. This is precisely what happens (Fig. 5(C)). Similarly, different initial conditions leading to a different set of available degrees of freedom lead to different ratios. For example, if the initial translational velocity has two components, leading to two translational degrees of freedom, the ratio of rotational and translational energy converges to 0.5.

### Dynamics of a single particle - Dzhanibekov effect

The example Drivers/Clump/TBar/TBar.cpp demonstrates the so-called Dzhanibekov effect - the instability of rotation around the second principal axis (see, e.g., [22]). It manifests itself in a series of flips of an object rotating around its intermediate axis - the classical example is a wingnut rotating around its axis in conditions of zero gravity. A minimal torque-free integrator illustrating this instability is sketched below.
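The sketch below illustrates the underlying physics (it is not the driver file itself): the torque-free Euler equations, Eq. (28), with a diagonal TOI \(I_{1}<I_{2}<I_{3}\) and an initial spin almost exactly about the intermediate axis. The moments of inertia, time step and perturbation are assumed demonstration values; a simple explicit Euler step is used, which is crude but sufficient to exhibit the flips.

```cpp
#include <cstdio>

int main() {
    // Principal moments with I1 < I2 < I3; rotation starts nearly about e2
    const double I1 = 1.0, I2 = 2.0, I3 = 3.0;
    double w1 = 1e-4, w2 = 1.0, w3 = 1e-4;   // tiny perturbation off e2
    const double dt = 1e-4;
    for (long n = 0; n <= 4000000; ++n) {
        // Torque-free Euler equations, Eq. (28), with M = 0 and diagonal TOI
        const double d1 = (I2 - I3) * w2 * w3 / I1;
        const double d2 = (I3 - I1) * w3 * w1 / I2;
        const double d3 = (I1 - I2) * w1 * w2 / I3;
        w1 += d1 * dt; w2 += d2 * dt; w3 += d3 * dt;
        if (n % 400000 == 0) std::printf("t = %6.1f   w2 = % .4f\n", n * dt, w2);
    }
    return 0;  // w2 periodically changes sign: the Dzhanibekov flips
}
```

Linearizing about the \(e_{2}\) spin shows why the perturbation grows exponentially: the product of the two coupling coefficients is positive only for the intermediate axis, which is the textbook statement of the instability.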
The simulation in this example reproduces this effect for a T-shaped clump (Fig. 6(A)), rotating around its second principal axis (see Video 1 in the supplementary information [23]). It is important to note that the observed angular momentum and rotational kinetic energy are well preserved during the simulation - for example, as can be seen in Fig. 6(B), the relative drift of the rotational energy does not exceed \(10^{-3}\) for 8 flip cycles.

### Rolling of a Gomboc

A gomboc is a convex body that, being put on a flat surface, has exactly one point of stable and one point of unstable equilibrium [24]. Arbitrarily oriented at the initial moment, provided sufficient energy dissipation, the gomboc finally arrives at its only stable equilibrium position. We use the model of a gomboc depicted in Fig. 7(A) to create a clump (Fig. 7(B)), mimicking the behavior of a gomboc. The clump was generated using the algorithm [12] and has 182 pebbles. We simulated the dynamics of the gomboc shape dropped onto a flat surface (./Drivers/Clump/Gomboc/Gomboc.cpp). Our simulation (Video 2 in the supplementary information [25]) indicates that, after a series of metastable rotational oscillations (Fig. 7(C)), the gomboc shape does indeed arrive at a unique stable orientation. Our experiments indicate that if the initial energy of a gomboc is too low, it may get stuck in one of the local energy minima that emerge due to the approximation of the original shape by a finite number of spherical particles. Besides this effect, our simulations compare nicely with experiments with a real gomboc shape.

Figure 5: (A) The model of a single-atom ideal gas with one translational and one rotational degree of freedom. (B) Observed fractions of translational and rotational kinetic energies as functions of time, for a time span comprising the first 20 collisions. (C) The ratio between the rotational and translational kinetic energy, averaged over a sufficiently long simulation time (\(5\times 10^{4}\) particle-wall collisions).

Figure 6: (A) Evolution of the orientation of a T-bar, (B) observed relative drift of its kinetic energy.

Figure 7: Gömböc - (A) original stl model, (B) its rigid clump representation, computed according to [12], (C) evolution of the translational and rotational kinetic energy with time in the simulation. The simulation duration was chosen to feature the entire motion trajectory of a gömböc with realistic damping parameters.

### Domino effect

The domino effect is well known to be a quite non-trivial benchmark example for DEM simulations with non-spherical particles [26]. We provide a driver file designed for parametric studies of the domino effect (see ./Drivers/Clump/Domino/Domino.cpp). Dominoes are rectangular regular packings of pebbles, equispaced along a straight line (Fig. 8(A)). At the initial moment, domino 1 is given an initial push with the cue - a spherical particle. The initial propagation of the domino wave is to a large extent affected by the initial velocity of the cue; however, the steady-state velocity does not depend at all on this initial velocity. This, in particular, manifests itself in a constant time derivative of the potential energy (Fig. 8(B)) that does not depend on the initial cue velocity.
This invariance of the domino wave velocity is well-known and often attributed [27] to dissipative effects; however, there is theoretical and numerical evidence [28; 26] that it takes place even in the case of perfectly elastic collisions between the dominoes.

Figure 8: (A) Geometry of the DEM model of domino wave propagation, (B) constant rate of change of the potential energy with time in the steady-state domino wave propagation (\(E_{0}\) is the initial gravitational potential energy of the system; the simulation duration roughly corresponds to the duration of domino wave propagation over 20 dominoes).

### Dense gas of interacting T-shaped particles in a periodic box

The driver Drivers/Clump/TGas/TGas.cpp demonstrates the evolution of six hundred T-shaped rigid particles with random initial orientations, deposited in a triply periodic box without initial overlaps, with zero initial angular velocities and random initial translational velocities (Fig. 9(A)). Shortly after the beginning of the simulation, we can see the complete energy equipartition (Fig. 9(B)). The driver code can be easily adjusted to introduce elastic walls, gravity, dissipation etc.

### Multiple clumps in a rotating drum

A concluding example, Drivers/Clump/RotatingDrum/RotatingDrum.cpp, features the collective motion of complex-shaped clumps in a rotating horizontal drum in the field of gravity (Fig. 10(A)). The gomboc shape described above was used as the clump instance; 27 clumps were deposited in the volume of the drum without initial overlaps between themselves and the walls of the drum. The contact friction at both wall-clump and clump-clump contacts has zero rolling friction and a high sliding friction of 0.6. At the initial moment of the simulation, the drum starts to rotate with a constant angular velocity. Video 3 in the supplementary information [29] highlights the dynamic evolution of the system. Fig. 10(A) shows the geometry of the system, and Fig. 10(B) gives the evolution of the gravitational potential energy of the clumps (normalized by the lowest energy observed at the beginning of the simulation) with time. One can see discrete events of sliding/repose of the bed (8 per 2 full revolutions of the drum). This simulation validates the efficiency of the clump implementation in a moderate-size single-core simulation.

### Efficiency of hierarchical grid contact detection algorithm for highly polydisperse clump systems

One of the strong features of MercuryDPM is its efficient contact detection algorithm oriented towards highly polydisperse particle assemblies [10]. It is interesting to see how the single-core simulation performance of polydisperse clump systems is affected by the maximum number of levels of the hierarchical grid employed by the contact detection algorithm (see [10; 7] for details). Our benchmark examples predictably demonstrate that small models do not benefit from multiple levels of the hierarchical grid used in contact detection, while larger models perform much faster with the hierarchical grid turned on. The rotating drum simulation described above is used here to demonstrate the effect of multiple levels of the hierarchical grid on the performance of simulations of polydisperse clumps.
Two (otherwise identical) simulations with different clump resolutions were studied: **model 1** had clumps of 182 pebbles (4914 pebbles in total) and a largest-to-smallest pebble size ratio of 28.83; **model 2** had the same clump surface represented by 423 pebbles (11421 pebbles in total) with a largest-to-smallest pebble size ratio of 53.36 (Fig. 11(A)). Both models were studied in simulations with the contact detection algorithm limited to one hierarchical grid level (a regular linked-cell algorithm, blue plots in Fig. 11(B)) and with three hierarchical grid levels (the MercuryDPM default value, green plots in Fig. 11(B)). An accurate comparison of performance shows a 57% increase in cycle-time performance for **model 1** and an 87% increase for **model 2**. For larger models this increase in performance is expected to be even more dramatic [10]. Therefore, we can see that the MercuryDPM contact detection algorithm makes it well suited for modeling polydisperse clumped particle systems.

Figure 9: Multiple T-bars in a box. (A) Initial geometry, (B) evolution of rotational and translational kinetic energy with time; the simulation duration was chosen to resolve the energy equipartition process.

## 5 Conclusions

This work details the implementation of rigid clumps within the _MercuryDPM_ particle dynamics code. The necessary pre-processing tools, kernel modifications and driver files illustrating the applications are described. Due to the advanced contact detection algorithm of _MercuryDPM_, our implementation demonstrates high single-core performance for highly polydisperse clumps. The new features will certainly be useful to the _MercuryDPM_ community. The codes are currently available in the Master branch of the _MercuryDPM_ project [30]. The implementation is under ongoing development; changes to the existing implementation will be highlighted in future release notes and the corresponding papers.

## Funding acknowledgements

_MercuryDPM_ has been supported by many projects, both past and present. The features presented here were (partially) funded by the Dutch Research Council (NWO), in the framework of the ENW PPP Fund for the topsectors and from the Ministry of Economic Affairs in the framework of the "PPS-Toeslagregeling".

Figure 10: Clumps in a rotating drum. (A) Problem geometry, (B) normalized potential energy of the clumps versus time, featuring a sloshing motion pattern.
2308.09457
Measurement of flow birefringence induced by the shear components along the optical axis using a parallel-plate-type rheometer
The present study investigated the flow birefringence induced by shear components along a camera's optical axis, which has been neglected in conventional theories of photoelastic measurements. Measurements were conducted for a wide range of shear rates from a direction perpendicular to the shear using a high-speed polarization camera and a parallel-plate-type rheometer. The measurement results obtained from a fluid with low viscoelasticity, specifically a dilute suspension of cellulose nanocrystals, showed that the birefringence increases monotonically as the stress components along the camera's optical axis increase. It was also found that the birefringence showed a power law with respect to the shear rate. This letter reports a key fact required for polarization measurements of shear rate (shear stress) in three-dimensional flows.
William Kai Alexander Worby, Kento Nakamine, Yuto Yokoyama, Masakazu Muto, Yoshiyuki Tagawa
2023-08-18T10:44:11Z
http://arxiv.org/abs/2308.09457v2
Measurement of flow birefringence induced by the shear components along the optical axis using a parallel-plate-type rheometer ###### Abstract The present study investigated the flow birefringence induced by shear components along a camera's optical axis, which has been neglected in conventional theories of photoelastic measurements. Measurements were conducted for a wide range of shear rates from a direction perpendicular to the shear using a high-speed polarization camera and a parallel-plate-type rheometer. The measurement results obtained from a fluid with low viscoelasticity, specifically a dilute suspension of cellulose nanocrystals, showed that the birefringence increases monotonically as the stress components along the camera's optical axis increase. It was also found that the birefringence showed a power law with respect to the shear rate. This letter reports a key fact required for polarization measurements of shear rate (shear stress) in three-dimensional flows.

## 1 Introduction

Measurement of three-dimensional stress fields in fluids is of interest in various disciplines, such as flow engineering, polymer chemistry, and biomechanics. In particular, flow birefringence can be used for non-invasive stress measurement. The relationship between the flow birefringence \(\Delta_{n}\) and fluid strain rate \(\dot{e}_{ij}\) is described by (Doyle, 1982):

\[\begin{split}\Delta_{n}\cos 2\phi&=\alpha_{1}(\dot{e}_{xx}-\dot{e}_{yy})\\ &\quad+\alpha_{2}[(\dot{e}_{xx}+\dot{e}_{yy})(\dot{e}_{xx}-\dot{e}_{yy})+\dot{e}_{zy}^{2}-\dot{e}_{xz}^{2}],\end{split} \tag{1}\]
\[\Delta_{n}\sin 2\phi=2\alpha_{1}\dot{e}_{xy}+\alpha_{2}[2(\dot{e}_{xx}+\dot{e}_{yy})\dot{e}_{xy}+2\dot{e}_{yz}\dot{e}_{xz}]. \tag{2}\]

Here, \(\alpha_{1}\) and \(\alpha_{2}\) are functions of the physical properties of the fluid. For Newtonian fluids, the stress is proportional to the strain rate. Therefore, Eqs. (1) and (2) can be expressed using stress as follows (Nakamine et al., 2023):

\[\begin{split}\Delta_{n}\cos 2\phi&=C_{1}(\sigma_{xx}-\sigma_{yy})\\ &\quad+C_{2}[(\sigma_{xx}+\sigma_{yy})(\sigma_{xx}-\sigma_{yy})+\sigma_{zy}^{2}-\sigma_{xz}^{2}],\end{split} \tag{3}\]
\[\Delta_{n}\sin 2\phi=2\,C_{1}\sigma_{xy}+\,C_{2}[2(\sigma_{xx}+\sigma_{yy})\sigma_{xy}+2\sigma_{xz}\sigma_{yz}]. \tag{4}\]

In these equations, \(C_{1}=\alpha_{1}/\eta\) and \(C_{2}=\alpha_{2}/\eta^{2}\), in which \(\eta\) is the shear viscosity of the fluid. Aben and Puro (1997) discussed the optical relationship based on Eqs. (3) and (4) and assumed that the stress components along the camera's optical axis (hereafter simply the "optical axis"), i.e., \(\sigma_{xz}\), \(\sigma_{zy}\), and \(\sigma_{yz}\), were negligible. In other words, they made the assumption that \(C_{2}=0\), which leads to the proposal of:

\[\Delta_{n}=C_{1}\sqrt{(\sigma_{xx}-\sigma_{yy})^{2}+4\sigma_{xy}^{2}}. \tag{5}\]

From Eq. (5), it is clear that \(\Delta_{n}=0\) in regions with \(\sigma_{xy}=0\), such as the center of a channel flow. However, non-zero \(\Delta_{n}\) values have been observed even when a quasi-two-dimensional channel was used (Ober et al., 2011). This can be explained as being due to the shear caused by the upper and lower surfaces of the channel wall. This discrepancy between theory and experiment is more significant in the case of three-dimensional flows, where the contribution of the shear components along the optical axis is larger (Kim et al., 2017; Nakamine et al., 2023).
Investigations of the response of flow birefringence to the shear rate (shear stress) have been conducted based on rheo-optical measurement techniques, which measure stress and flow birefringence simultaneously (Ito et al., 2016; Lane et al., 2022). Measurement of flow birefringence is a potentially powerful tool for estimating the stress fields in fluids, and there have been some cases of successful reconstruction of shear-velocity fields in two-dimensional channel flows (Kim et al., 2017). However, as Eq. (5) shows, the birefringence induced by the stress components along the optical axis has so far been neglected. Therefore, there have been few cases of systematic measurements using an experimental setup in which a shear-velocity distribution exists along the optical axis, e.g., in a parallel-plate-type (PP-type) rheometer. Such rheo-optical measurement setups are mainly applied to molten polymers (Mykhaylyk et al., 2016), although a complicated process is required to distinguish the effect of normal stress. However, only qualitative measurements have been conducted for fluids with low viscoelasticity, e.g., dilute aqueous cellulose nanocrystal (CNC) suspensions (Kadar et al., 2020), and these are of great importance because they can be regarded as Newtonian fluids. This study quantitatively investigated the flow birefringence induced by the shear components along the optical axis with respect to the shear rate. For fluids with low viscoelasticity, the shear component along the optical axis can be directly visualized as birefringence. Furthermore, herein, the trend of birefringence with respect to shear is discussed by comparing the present results--which involve birefringence induced by shear along the optical axis--with previous measurements of birefringence parallel to the shear.

## 2 Method

### 2.1 Experimental setup

A PP-type rheometer (MCR 302, Anton Paar Co., Ltd.) was used for the polarization measurements in these experiments; a schematic of the experimental setup is shown in Fig. 1a, and an example bottom-view intensity image is shown in Fig. 1b. A transparent plate (PP43/GL-HT, Anton Paar Co., Ltd.) and a stage (PTD200/GL, Anton Paar Co., Ltd.) were used to transmit the circularly polarized light emitted from the light source (SOLIS-525C, Thorlabs Co., Ltd., wavelength \(\lambda=525\) nm) through the flow. The gap height between the plate and the stage was fixed at \(H=(100\pm 5)\)\(\mu\)m. For polarization measurements, the transmitted light was captured from the bottom of the plate using a high-speed polarization camera (CRYSTA PI-1P, Photron Co., Ltd.) at 1000 frames per second. The fluid temperature was maintained at 25\({}^{\circ}\)C. The range of the shear rate \(\dot{\gamma}\) applied to the fluid was 0-10,000 s\({}^{-1}\). For \(r/R\geq 0.90\), a mottle-like birefringence distribution was observed even for optically isotropic liquids such as water. This is induced by the refraction and scattering of light near the plate boundary. In the present experiments and analysis, we focused on the region \(0.55\leq r/R\leq 0.90\), which is shown by the white dotted line in Fig. 1b. To evaluate the flow birefringence in a steady-state condition, temporal averaging was performed in the period 5.0-6.0 s after the plate started to rotate. The retardation measured at \(\dot{\gamma}=0\) s\({}^{-1}\) was subtracted as a uniform offset from all other data.

### 2.2 CNC suspensions

Suspensions of CNC (Alberta Pacific Co., Ltd.) of two different concentrations were studied: 0.5 and 1.0 wt%. As shown in Fig.
1c, the 0.5 wt% CNC suspension behaved like a Newtonian fluid, whereas the 1.0 wt% CNC suspension showed weak shear-thinning. The normal stress was also measured using the rheometer, and the measured values were small enough to be regarded as measurement errors. Note that the CNC suspensions of both concentrations had negligible elasticity.

Figure 1: (a) Schematic of the experimental setup and (b) an example image from the bottom view with an overlaid diagram showing the rotating plate, in which \(R\) is the radius of the plate. (c) Plots of steady shear viscosity \(\eta\) for CNC suspensions at different concentrations.

### 2.3 High-speed polarization measurements

The high-speed polarization camera was used to detect the retardation \(\Delta\), which is the integrated value of birefringence along the optical axis of the light transmitted through the apparatus. Using the phase-shifting method (Onuma and Otani, 2014), this was obtained from the radiance through linear polarizers oriented in four different directions (\(0^{\circ}\), \(45^{\circ}\), \(90^{\circ}\), and \(135^{\circ}\)) in an area of \(2\times 2\) pixels. The light intensities detected at each of these pixels are defined as \(I_{1}\), \(I_{2}\), \(I_{3}\), and \(I_{4}\), respectively. The retardation can then be expressed by (Onuma and Otani, 2014):

\[\Delta=\int\Delta_{n}\mathrm{d}z=\frac{\lambda}{2\pi}\sin^{-1}\frac{2\sqrt{(I_{3}-I_{1})^{2}+(I_{2}-I_{4})^{2}}}{I_{1}+I_{2}+I_{3}+I_{4}}, \tag{6}\]

where \(\lambda\) [m] is the wavelength of the light source. In this study, we calculated the birefringence \(\Delta_{n}\) by dividing the measured retardation by the gap height \(H\).

## 3 Results and discussion

Visualized birefringence fields at different shear rates are shown in Fig. 2a. As can be seen, the birefringence increased significantly as the shear rate increased. Profiles taken across the section shown by the thick black line in the left-hand panel of Fig. 2a for different shear rates are plotted in Fig. 2b. The shear rate increases outwardly from the center of the plate, which leads to an increase in the birefringence. To discuss the mechanism of the birefringence induced by the shear stress, the experimental results (Figs. 2a and 2b) were compared with the birefringence values calculated using Eq. (5). When the optical axis is parallel to the \(z\) axis, the stress components of the Couette flow between the rotating plate and the stage can be derived from the Navier-Stokes equations as

\[\sigma_{xx}=\sigma_{yy}=\sigma_{zz}=\sigma_{xy}=0. \tag{7}\]

As shown in Fig. 2c, the birefringence induced by the shear-stress loading, which was calculated using Eqs. (5) and (7), is \(\Delta_{n}=0\). This indicates that the birefringence is induced by the stress components along the optical axis, which were ignored in Eq. (5). In other words, this result suggests that the assumption that \(C_{2}=0\) should not be applied to velocity fields with significant shear components along the optical axis, e.g., three-dimensional channel flows. This means the birefringence measured in this study is a function of the stress components along the optical axis and \(C_{2}\), which should be described as

\[\Delta_{n}=f(C_{2},\ \sigma_{xz},\ \sigma_{zy},\ \sigma_{yz}). \tag{8}\]

To further investigate the details of \(C_{2}\), the birefringence was calculated using Eq. (8), and the results are shown in Fig. 2d.
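To make the role of the \(C_{2}\) term concrete, note that substituting Eq. (7) into Eqs. (3) and (4) leaves \(\Delta_{n}\cos 2\phi=C_{2}(\sigma_{zy}^{2}-\sigma_{xz}^{2})\) and \(\Delta_{n}\sin 2\phi=2C_{2}\sigma_{xz}\sigma_{yz}\), so that \(\Delta_{n}=C_{2}(\sigma_{xz}^{2}+\sigma_{yz}^{2})\). The following minimal sketch evaluates this relation for plate-plate Couette flow, where the azimuthal shear stress at radius \(r\) is \(\eta\Omega r/H\); this is our illustration of the stress-optic relations, not the authors' analysis code, and the viscosity and plate angular velocity used below are assumed values.

```cpp
#include <cstdio>

int main() {
    const double C2    = 2.0e-7;   // Pa^-2, fitted value quoted in the text
    const double eta   = 1.0e-3;   // Pa s, assumed viscosity of a dilute suspension
    const double H     = 100e-6;   // m, gap height from Section 2.1
    const double Omega = 100.0;    // rad/s, assumed plate angular velocity
    for (int i = 0; i <= 4; ++i) {
        const double r      = 0.005 * i;           // radius in m
        const double gammad = Omega * r / H;       // shear rate at radius r
        const double tau    = eta * gammad;        // |shear stress| along z
        // Delta_n from Eqs. (3)-(4) with only optical-axis shear nonzero:
        // sigma_xz^2 + sigma_yz^2 = tau^2 at every azimuthal position
        const double dn = C2 * tau * tau;
        std::printf("r = %5.3f m  gamma = %8.1f 1/s  Delta_n = %.3e\n",
                    r, gammad, dn);
    }
    return 0;
}
```

Since \(\sigma_{xz}^{2}+\sigma_{yz}^{2}=\tau^{2}\) is independent of the azimuthal angle, the predicted birefringence depends only on the radius, consistent with the axisymmetric fields of Fig. 2.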
Since values of \(C_{2}\) for CNC suspensions have not been reported, the value was determined by fitting the absolute value of birefringence shown in Fig. 2b at 10,000 s\({}^{-1}\). The fitted value was found to be \(C_{2}=2.0\times 10^{-7}\) Pa\({}^{-2}\). Fitting was also conducted in different cases, and it was verified that the magnitude of \(C_{2}\) was \(O(10^{-7}\)-\(10^{-6})\). Note that \(C_{1}\) values for fluids have been reported to be \(O(10^{-7}\)-\(10^{-5})\) in previous studies (Ito et al., 2016; Nakamine et al., 2023). Although their units are different and therefore need to be discussed, \(C_{1}\) and \(C_{2}\) were found to be of similar magnitudes. In addition, the distribution of birefringence in the radial direction of the plate is consistent with the results shown in Fig. 2b.

Figure 2: (a) Visualized birefringence fields under steady-state conditions; (b) birefringence distribution of 1.0 wt% CNC along the plate radius at each shear rate (error bars indicate standard deviations); analysis results of \(\Delta_{n}\) when (c) \(C_{2}=0\) Pa\({}^{-2}\) and (d) \(C_{2}=2.0\times 10^{-7}\) Pa\({}^{-2}\). The areas enclosed by the dotted lines in panels (c) and (d) correspond to the areas shown in (a).

Next, to validate the experimental birefringence results, the trend with respect to the shear rate was investigated. In Fig. 3, the vertical axis shows the spatiotemporally averaged birefringence \(\Delta_{n,mean}\), while the horizontal axis shows the shear rate at \(r/R=0.75\). When modeling the relationship between flow birefringence and shear rate, Lane et al. (2022) proposed that it can be described in the following nonlinear form:

\[\Delta_{n}=(A\cdot\dot{\gamma})^{k_{1}}\cdot c^{k_{2}}, \tag{9}\]

where \(c\) is the concentration of the suspension, and \(A\), \(k_{1}\), and \(k_{2}\) are fitting parameters. It should be emphasized that this model is based on the results of polarization measurements conducted from the direction parallel to the shear using a concentric-cylinder-type rheometer, which is different from that used in the present study. The experimental results were fitted using Eq. (9), and the results are shown by the black dash-dotted lines in Fig. 3. The fitting parameters were \(A=0.36\times 10^{-11}\) s, \(k_{1}=0.538\), and \(k_{2}=1.65\). Remarkably, the exponent \(k_{1}\) (which characterizes the trend of the birefringence with the shear rate) obtained in this experiment has a similar value to that found in a previous study (\(k_{1}=0.537\), Lane et al. (2022)). Furthermore, our results demonstrate the validity of the value of the exponent \(k_{1}\) in a shear-rate range (0-7500 s\({}^{-1}\)) that is much wider than that considered in the previous study (0-31.4 s\({}^{-1}\), Lane et al. (2022)). The present experimental results and those of Lane et al. (2022) indicate that regardless of the direction of polarization measurement with respect to shear, there seems to be a common physical background that leads to the flow birefringence producing a power law for the shear rate.

## 4 Conclusion

In this letter, rheo-optical measurements were conducted on dilute CNC suspensions using a PP-type rheometer. The novelty of this study lies in the fact that the birefringence was investigated from the direction perpendicular to the shear. The shear rate was measured over a much wider range than in a previous study.
The measured birefringence was induced by the shear components along the optical axis, which are not considered in the conventional stress-optic law (i.e., when \(C_{2}=0\) in Eqs. (3) and (4)). Our results indicate that birefringence induced by the shear components along the optical axis needs to be considered, especially for three-dimensional channel flows. Additionally, our results suggest that the birefringence can be modeled by a power-law relationship with shear rate, similar to the results of a previous study in which polarization measurements were conducted from the direction parallel to the shear (Lane et al., 2022). We are convinced that these findings are important for the development of a non-invasive fluid stress-field measurement method using flow birefringence.
2305.17695
k-NNN: Nearest Neighbors of Neighbors for Anomaly Detection
Anomaly detection aims at identifying images that deviate significantly from the norm. We focus on algorithms that embed the normal training examples in space and, when given a test image, detect anomalies based on the features' distance to the k-nearest training neighbors. We propose a new operator that takes into account the varying structure & importance of the features in the embedding space. Interestingly, this is done by taking into account not only the nearest neighbors, but also the neighbors of these neighbors (k-NNN). We show that by simply replacing the nearest neighbor component in existing algorithms by our k-NNN operator, while leaving the rest of the algorithms untouched, each algorithm's own results are improved. This is the case both for common homogeneous datasets, such as flowers or nuts of a specific type, as well as for more diverse datasets.
Ori Nizan, Ayellet Tal
2023-05-28T11:39:51Z
http://arxiv.org/abs/2305.17695v1
# _k-NNN_: Nearest Neighbors of Neighbors for Anomaly Detection

###### Abstract

Anomaly detection aims at identifying images that deviate significantly from the norm. We focus on algorithms that embed the normal training examples in space and, when given a test image, detect anomalies based on the features' distance to the k-nearest training neighbors. We propose a new operator that takes into account the varying structure & importance of the features in the embedding space. Interestingly, this is done by taking into account not only the nearest neighbors, but also the neighbors of these neighbors (k-NNN). We show that by simply replacing the nearest neighbor component in existing algorithms by our k-NNN operator, while leaving the rest of the algorithms untouched, each algorithm's own results are improved. This is the case both for common homogeneous datasets, such as flowers or nuts of a specific type, as well as for more diverse datasets.

## 1 Introduction

Anomaly detection aims at finding patterns in the data that do not conform to the expected "behavior" [3]. It has numerous applications, most notably in manufacturing, surveillance, fraud detection, medical diagnostics, autonomous cars, and detecting outliers. Out of the variety of anomaly detection methods [10, 12, 13, 45], we focus on those that rely on the _k-Nearest-Neighbor (k-NN)_ operator [5, 14, 19, 37, 38]. These methods learn the embedding of _normal_ images or patches. Given a test image, the distance of its embedding to its k-nearest (training) neighbors is computed, and this distance determines whether the test image is anomalous or not. The underlying assumption is that anomalous features should reside farther away from normal features than normal features from each other. Thus, a point is considered anomalous when its average distance to its \(k\) nearest neighbors exceeds a certain threshold. A major disadvantage of this approach is that the structure and the importance of the features in the embedding space are not taken into account when looking for anomalies. Figure 1 shows such a case, in which the normal set varies and consists of flowers that belong to different classes, with different inner distances (the flowers are not classified beforehand). A flower that does not resemble any of the normal flowers (in the red rectangle) will not be detected as anomalous by the _k-NN_ operator, because its distance to its nearest flowers is less than the distances between normal flowers in different regions of the embedding space. Our operator, which implicitly takes the structure of the space into account, will detect this flower as anomalous. The structure of the embedding space is important also when the normal set is homogeneous, as illustrated in Figure 2 for a synthetic example.

Figure 1: **Neighbors of neighbors illustration. A common approach is to base the anomaly score on the distances from a test image to its k-nearest neighbors (_k-NN_) in the feature space. In this figure, the yellow and the white flowers are considered normal, whereas the flower in the red rectangle is anomalous. The background color represents the anomaly score. (a) Since the normal set is diverse, _k-NN_-based methods might fail, since the distances of the anomalous image to its neighbors are smaller than the distances between similar images to each other (e.g., between the white flowers).
(b) Our _k-NNN_ operator sets better distances between neighbors, which reflect the diversity of the normal examples. It will correctly detect the anomalous flower as such.**

The 2D embeddings of the normal training points lie on three lines, two of which are parallel and one is perpendicular to them. Two anomalous points, marked as \(1\) and \(2\) (in red), lie above the horizontal line and to the right of the vertical line, respectively. Their _5-NN_ distance is the same as that of the normal points between themselves, and thus they might not be identified as anomalous by _k-NN_-based methods. Similar cases are likely to occur when there are not enough training samples. We propose a novel operator, termed the _k-nearest neighbors-of-neighbors (k-NNN)_, which addresses this problem, as illustrated in Figures 1-2. It differentiates between regions and considers the more indicative features at certain regions as more influential. For instance, in Figure 2 the feature that makes point \(2\) anomalous is its \(y\) feature, whereas the feature that makes point \(1\) anomalous is its \(x\) feature. We show how to efficiently realize this idea of considering regions differently, by simply looking at the neighbors of neighbors of a test point. Intuitively, the neighbors of neighbors provide information about regions, which balances between a global view of the dataset and a more local view, which is based only on the immediate neighbors. To consider the feature importance, we need to find the directions associated with the anomalies. The classical _Principal component analysis (PCA)_ analyzes datasets of high-dimensional features, while preserving the maximum amount of information. We observe that for anomaly detection, Eigen vectors associated with small Eigen values matter more than those of large values. Furthermore, as shown in Figure 2, the difference between an anomalous point and its nearest neighbor(s) is perpendicular to the direction of the large Eigen vector(s). Intuitively, this is so since anomalies are characterized by features not present in the dataset. We show how to utilize this observation within our operator. To demonstrate the benefit of our approach, we replace the _k-NN_ operator used in several anomaly detection algorithms with our _k-NNN_ operator. We show how this modification manages to improve the results of each algorithm on a variety of datasets. Hence, this paper makes two contributions:

1. It introduces a novel, general, efficient and accurate operator--the _k-NNN_ operator, which provides an "in-between" look at the data, between local and global. It benefits both diverse and homogeneous normal sets.
2. It proposes a novel normalization scheme, which gives more weight to the small Eigen values and copes with the challenge of having small datasets.

## 2 Related work

**Anomaly detection.** Anomaly detection is important to discover potentially dangerous situations, in the manufacturing industry for detecting product faults, in medicine for diagnosing diseases etc. It is a highly challenging task due to image structure, varying environmental conditions, imbalanced datasets, and data diversity. Hence, this task has attracted a huge amount of research. We refer the reader to a couple of comprehensive and excellent surveys [11, 47]. Hereafter, we consider methods that detect whether or not an image is anomalous and do not aim to segment it. They may be categorized into three classes, as follows. _Reconstruction-based_ methods learn a set of basis functions on the training data. Given a test image, they attempt to reconstruct it using these functions.
If the test image cannot be reconstructed, it is considered anomalous. The set of basis functions varies. Examples include K-means [20], K nearest neighbors (k-NN) [15], principal component analysis (PCA) [2] etc. Deep learning has been used as well [42, 51]. _Distribution-based_ methods model the probability density function (PDF) of the distribution of the normal data [15, 28]. Given a test example, it is evaluated using the PDF. If the probability is small, it is considered anomalous. Deep learning can be applied as well [26, 52]. _Classification-based methods_ are the most prevalent recently. They include one-class methods [41, 43, 46] and self-supervised learning [6, 16, 17, 21]. Recently it was shown in [5] that a simple method, which is based on _k-NN_, outperforms such self-supervised methods. Nearest neighbors, which we pursue in this paper, may be considered as reconstruction-based or as distribution-based, since it performs density estimation.

**The k-NN operator.** Nearest neighbor search has been utilized across a wide range of applications in computer vision. The _k-NN_ operator has been found beneficial in classification and correspondence [4, 8, 44], intrusion detection [27], medical applications [25, 30], fault detection [5, 37] and more. In some applications, approximation of the _k-NN_ operator was studied [22, 29, 35]. We focus, however, on the exact _k-NN_ operator in the context of anomaly detection. The most related works to ours are [14, 18, 32, 33, 37, 40, 48], which use nearest neighbors for anomaly detection.

Figure 2: _**k-NNN benefit.** The cyan points represent the 2D embedding of normal images. The heat maps show the _5-NN_ distance of each point on the plane; the yellower a region, the more anomalous it is. The distances from the red anomalous points to their \(5\)-nearest cyan neighbors are equal to the distances between the cyan points themselves. Thus, the classical _k-NN_ operator fails to detect them as anomalous. Differently, our _k-NNN_ operator, which uses the neighbors' statistics, detects them correctly.

## 3 Method

Given an image, our goal is to determine whether it is anomalous or not. This should be done in a semi-supervised manner, utilizing only a dataset of normal images, without anomalies. We follow the approach in which features are extracted during training, in order to represent normal images. During inference, a given test image is passed through the feature extractor and the k-nearest neighbors in the (training) feature space are found. An anomaly score is derived from the distances to these nearest neighbors.

**Eigen vectors for anomaly detection.** In order to take into account the shape of the embedding space, we estimate the space directions using its Eigen vectors. Recall that the greater the Eigen value, the larger the variance of the data in the direction of the corresponding Eigen vector. Our proposed normalization is based on our observation that small Eigen values tend to correspond to anomaly directions more than large Eigen values. This can be explained by the fact that a small variation means that normal images are close in that direction [39]. Thus, a small deviation in this direction is more likely to be an anomaly.

**The neighbors of neighbors operator.** One may consider a couple of setups. In a global setup, the Eigen vectors are determined for the whole training (normal) set during pre-processing.
In a local setup, the Eigen vectors are calculated for a test point based on its \(k\)-nearest neighbors in the training set. We propose an "in-between" operator, which gathers more statistical information on the embedding space than the local operator and not as much as the global operator. In particular, for each neighbor we utilize the Eigen vectors (/values) of its neighbors. We elaborate on the realization of this idea hereafter. Figure 3 illustrates the intuition behind our operator. The normal points, in cyan, lie along one of two circular arcs. Obviously, normal points should lie along these arcs, even in holes and beyond the arcs' termination, whereas anomalous points should reside elsewhere in the plane. In this figure, the plane is colored according to its normality/anomaly, as determined by each method. Blue regions are considered to be normal by the method, whereas yellow regions are considered anomalous. The green rectangles highlight regions where the specific method correctly classifies points as normal or anomalous. The red rectangles highlight regions in which the specific method fails to classify points. It can be seen that our method enjoys the benefits of all worlds--global & local. This result is analyzed and supported quantitatively in Section 5.

Figure 3: **Different types of normalization.** The cyan points represent the normal points in a 2D embedding space; they lie along two circular arcs. We expect that all the normal points will reside along these arcs, even if the arcs contain holes or terminate; we also expect that points that lie in other regions of the plane will be anomalous. The background color represents the anomaly score of each region according to the specific method: The more yellowish a region, the more anomalous it is. While our _k-NNN_ (d) correctly classifies the plane (blue regions are only along the arcs), the local method erroneously considers as normal the region in-between the arcs (c), and the global and _k-NN_ methods erroneously detect regions along the arcs (in holes or beyond termination) as anomalous (a-b). In this figure, green rectangles mark correct outcomes (normal/anomalous) and red rectangles mark incorrect outcomes.

To realize our operator, during training (Figure 4), we first compute the feature vector of each training image, using any preferable embedding model. Then, we compute the \(k\) nearest neighbors in feature space for each point of the training data. From these neighbors, we compute the point's \(n\) Eigen vectors and their corresponding Eigen values and store this information. Hence, the Eigen vectors (/values) are relative to each individual training point, regardless of the test point. At inference (Figure 5), given a test point and its feature vector, \(f\), we find its \(k\) nearest neighbors among the training samples, \(f_{i}\), \(1\leq i\leq k\). Each of these \(f_{i}\)s is already associated with \(n\) Eigen vectors and Eigen values, \(v_{ij}\) and \(e_{ij}\), \(1\leq j\leq n\), computed during training. Following our observation, for a point to be considered normal, the difference vector between it and its neighbor should be parallel to the large Eigen vectors (parallel to the distribution of the normal embeddings). Conversely, for a point to be considered anomalous, this vector should be perpendicular to the large Eigen vectors. Thus, we calculate the anomaly score _AS_ of a feature vector \(f\) as follows.

\[AS(f)=\sum_{i=1}^{k}\sum_{j=1}^{n}|(f-f_{i})\cdot v_{ij}|\cdot\frac{1}{\sqrt{e_{ij}}}. \tag{1}\]
In Eq. (1), the difference between the test feature vector and that of its neighbor is multiplied by the different Eigen vectors, specific to the \(i^{th}\) nearest neighbor. The more parallel these vectors are, the larger the value of this multiplication. Furthermore, this number is multiplied by the square root of the inverse of the Eigen value, giving more weight to the small Eigen values. Figure 3(d) demonstrates that the _k-NNN_ operator indeed classifies the plane properly.

Figure 4: **k-NNN training.** Given training normal images, their embeddings, \(f_{i}\), are computed. Then, the nearest neighbors of each image embedding are computed. The Eigen vectors and Eigen values, derived from their \(k\) neighbors, are computed and stored.

Figure 5: _k-NNN_ **inference.** Given an input image, its embedding \(f\) is first computed. Its \(k\) nearest neighbors are found and the Eigen values & vectors of each neighbor are extracted from the memory. An anomaly score is calculated according to Eq. (1).

**Feature partition & re-ordering.** Evaluating the Eigen vectors by neighbors-of-neighbors (let alone locally) means that only a small number of points in the neighborhood of \(f\) is used for estimating the Eigen vectors. This might be prohibitive since a feature vector of dimension \(N\) cannot be estimated by \(k\ll N\) neighbors, as it will result in a major loss of information. To address this problem, we propose to estimate the Eigen vectors in parts. We divide the vector features (entries) into equal-size sets and calculate the Eigen vectors for each set separately. In particular, we divide the feature vectors of dimension \(N\) into at least \(N/k\) sets. In the following we denote the number of sets by \(S\) and the dimension of the (sub-)feature vector of a set by \(L\) (e.g. if \(S=N/k\) then \(L=k\)). In general, the more samples used to calculate the Eigen vectors, the larger \(L\) may be. Specifically, given a feature vector of the test point, \(f\), and its \(k\) nearest neighbors among the training samples, \(f_{i}\), \(1\leq i\leq k\), we partition \(f\) and \(f_{i}\) into parts, \(f_{s}\) and \(f_{i,s}\), \(1\leq i\leq k\), \(1\leq s\leq S\). We denote the Eigen vectors and Eigen values associated with \(f_{i}\), which are similarly partitioned, by \(v_{ij,s}\), \(e_{ij,s}\), \(1\leq j\leq n\) \((n<L)\), respectively. As before, we calculate the difference between \(f\) and each of its neighbors, however this time this is done per set. The anomaly score of \(f\), _AS_, takes into account the results
Figure 6(b) illustrates the reordering effect, where the features are partitioned into sets \(\{1,3\},\{2,4\}\). When reordering is performed prior to splitting the features into sets, the red anomaly point is easily spotted and distinguished from the normal cyan ones, and is thus detected as anomalous. We propose to apply the following procedure for feature re-ordering. First, the correlations between all pairs of entries of all the feature vectors in the training set are computed. To maximize the correlation within each set, we re-order the feature vector entries of all the vectors simultaneously. This is done in a greedy fashion as follows. The first entry remains in place; the second entry is switched with the one that is most correlated to the first. From now on, until the number of features in the set is \(L\), the subsequent entry is chosen as the one that has the highest average correlation with its previous two entries. When \(L\) is reached, we start a new set, whose first entry is chosen as the one that is least correlated with the last two features of the previous set. ## 4 Experiments We demonstrate the benefit of our method in two manners, In Section 4.1 we replace the _k-NN_ component of SoTA anomaly detection methods by our _k-NNN_ operator and show improved results. In Section 4.2 we use our operator on structured synthetic features. In both cases, it is demonstrated that even when applying our method on features extracted by networks that are not aimed at anomaly detection, the results gained are excellent. For evaluation we use the the _AUROC_ metric, which is the common evaluation of anomaly detection. ### Improving anomaly detection methods In the following, we replace the _k-NN_ component of SoTA _k-NN_-based anomaly detection methods by our _K-NNN_ and evaluate the results on several datasets. **Networks & datasets.** We examine three systems that use _k-NN_: (1) k-NN applied to the features of ResNet18 [45], (2) _Semantic pyramid anomaly detection (SPADE)_[14], and (3) _Panda_[37]. We use four datasets: (1) _MVtec_[7] contains \(5,354\) high-resolution images of different object and texture classes. It is divided into \(3,629\) normal images for training and \(1,725\) anomalous images for testing. The images contain more than \(70\) different types of defects (anomalies), such as scratches, dents, and structural changes. The ground truth was specified manually. (2) _Oxford flowers 102_[31] contains \(102\) flower categories, with \(1020\) training and validation images and \(6,149\) test images, where each class includes \(40\)-\(258\) images. (3) _Fashion MNIST_[49] contains \(10\) categories, with \(60,000\) training and validation images and \(10,000\) test images; each class includes \(6,000\) images. (4) _CIFAR10_[24] contains \(10\) categories, with \(50,000\) training and validation images and \(10,000\) test images; each class includes \(6,000\) images. **Results.** Table 1 shows the results on MVtec. In this dataset, every class has anomalous examples of its own. Our method improves the mean performance of all three Figure 6: **Reordering by correspondence.** Suppose we are given \(100\) feature vectors, each with \(4\) entries (features). In the graphs, every axis represents one feature. (a) shows that there is no correlation between features \(1\) and \(2\) (left), and similarly between features \(3\) and \(4\) (right). Thus, the anomalous red point cannot be distinguished from the normal cyan points. 
Furthermore, it improves the performance for almost all the classes (except \(3\) classes for a single network). Table 2 reinforces the above results. It shows improved performance on three additional datasets, which differ greatly from one another in their size, type, and diversity. Table 3 further tests our model on highly diverse normal classes. In particular, in this experiment we define as normal the images from all the classes of a specific dataset, except for one (note that no classification is needed beforehand). Thus, only images from a single class should be detected as anomalous. The table shows that though the performance of all networks suffers when the normal set is diverse, our _k-NNN_ still improves the results. In fact, it usually improves the results more. Table 4 further studies the issue of diversity. In MVTec, every class has its own normal and anomalous examples, hence it may be considered as a set of independent datasets. In this experiment, we gradually increase the number of classes, i.e., if the number of classes is \(5\), we consider all the normal (unclassified) images of the \(5\) classes as normal and all the anomalous (unclassified) images of these classes as anomalous. (Table 1 is the base case.) The table shows that, generally, the more diverse the normal class is, the more advantageous our method is. This is not surprising, as after all this is exactly what _k-NNN_ is supposed to do--be adaptive to the structure of the feature space.

### Performance on synthetic benchmarks

We study the performance of our method on carefully designed synthetic benchmarks, which are frequently used for visualizing clustering and classification algorithms [1]. These datasets demonstrate the strength of our method when the embedding space is structured. These benchmarks are created by various random sampling generators, which enable control of their size and complexity. Figure 7 illustrates the three benchmarks we use:

1. **Moons.** The points are arranged in two interleaving half circles.
2. **Circles.** The points are arranged in two circles, with the same center point but different radii.
3. **Swiss roll.** The points are arranged in a rolled-up shape, similar to a Swiss roll pastry.

In our setup, we consider all the generated points as embeddings of normal examples. The farther a point in the plane is from the given distribution, the more anomalous it should be. For the training, half of the generated points were used.
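These benchmarks can be generated with scikit-learn's standard samplers; the following sketch mirrors the setup described above (the sample counts, noise levels, and the 2D projection of the roll are assumptions for illustration):

```python
import numpy as np
from sklearn.datasets import make_moons, make_circles, make_swiss_roll

# Normal points for the three structured benchmarks (sizes/noise are illustrative).
moons, _ = make_moons(n_samples=500, noise=0.05)
circles, _ = make_circles(n_samples=500, noise=0.05, factor=0.5)
roll3d, _ = make_swiss_roll(n_samples=500, noise=0.5)
roll = roll3d[:, [0, 2]]                    # project the roll onto a 2D plane

# Anomalies: uniform samples over the bounding box of the normal points.
rng = np.random.default_rng(0)
lo, hi = moons.min(axis=0), moons.max(axis=0)
anomalies = rng.uniform(lo, hi, size=(5000, 2))

train, test_pos = moons[:250], moons[250:]  # half for training, half as true positives
```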
For the evaluation, the other half was considered as the true positives. We generated the negatives (anomalies) by uniformly sampling the plane. In our experiments we used \(100\)-\(500\) points for training and \(5,000\) for testing.

\begin{table} \begin{tabular}{|l|c|c||c|c||c|c|} \hline Classes & Feature & Feature & [14] & [14] & [37] & [37] \\ & _+k-NN_ & _+k-NNN_ & & _+k-NNN_ & & _+k-NNN_ \\ \hline \hline carpet & 0.896 & **0.990** & 0.928 & **0.959** & 0.843 & **0.898** \\ grid & 0.444 & **0.777** & 0.473 & **0.663** & 0.554 & **0.723** \\ leather & 0.792 & **0.986** & 0.954 & **0.974** & 0.960 & **0.975** \\ tile & 0.986 & **0.993** & 0.965 & **0.970** & **0.985** & 0.976 \\ wood & 0.636 & **0.938** & 0.958 & **0.985** & **0.913** & 0.906 \\ bottle & 0.971 & **0.983** & 0.972 & **0.988** & **0.992** & 0.982 \\ cable & 0.882 & **0.934** & 0.848 & **0.899** & 0.821 & **0.863** \\ capsule & 0.803 & **0.919** & 0.897 & **0.941** & 0.911 & **0.919** \\ hazelnut & 0.903 & **0.991** & 0.881 & **0.966** & 0.925 & **0.968** \\ metal\_nut & 0.813 & **0.913** & 0.710 & **0.857** & 0.788 & **0.860** \\ pill & 0.738 & **0.882** & 0.801 & **0.822** & 0.757 & **0.786** \\ screw & 0.712 & **0.840** & 0.667 & **0.839** & 0.690 & **0.805** \\ toothbrush & 0.886 & **0.969** & 0.889 & **0.953** & 0.861 & **0.914** \\ transistor & 0.878 & **0.936** & 0.903 & **0.929** & 0.871 & **0.902** \\ zipper & 0.937 & **0.964** & **0.966** & 0.949 & 0.934 & **0.951** \\ mean & 0.819 & **0.934** & 0.854 & **0.913** & 0.854 & **0.895** \\ \hline \end{tabular} \end{table} Table 1: **Replacing the _k-NN_ component by our _k-NNN_ on MVTec.** Our _k-NNN_ improves the mean performance of all networks, as well as the performance for almost all the classes.

\begin{table} \begin{tabular}{|l|c|c||c|c||c|c|} \hline Dataset & Feature & Feature & [14] & [14] & [37] & [37] \\ & _+k-NN_ & _+k-NNN_ & & _+k-NNN_ & & _+k-NNN_ \\ \hline \hline CIFAR10 & 0.841 & **0.871** & 0.893 & **0.922** & 0.939 & **0.943** \\ \hline Fashion & 0.935 & **0.936** & 0.911 & **0.919** & 0.954 & **0.958** \\ \hline Flowers & 0.615 & **0.895** & 0.917 & **0.919** & 0.935 & **0.944** \\ \hline \end{tabular} \end{table} Table 2: **Replacing _k-NN_ by our _k-NNN_ on three additional datasets.** Our AUROC results outperform those of the \(3\) methods.

\begin{table} \begin{tabular}{|l|c|c||c|c||c|c|} \hline Dataset & Feature & Feature & [14] & [14] & [37] & [37] \\ & _+k-NN_ & _+k-NNN_ & & _+k-NNN_ & & _+k-NNN_ \\ \hline \hline CIFAR10 & 0.662 & **0.719** & **0.694** & 0.667 & 0.599 & **0.610** \\ \hline Fashion & 0.760 & **0.780** & 0.729 & **0.739** & 0.683 & **0.698** \\ \hline Flowers & 0.612 & **0.681** & 0.624 & **0.686** & 0.668 & **0.689** \\ \hline \end{tabular} \end{table} Table 3: **Performance on diverse normal sets.** Images from all the categories, except one, are considered normal, and images from that single class are anomalies. The AUROC results are averaged across all the classes, i.e., each class is considered anomalous once. When replacing _k-NN_ by our _k-NNN_ in various networks, our operator is usually more beneficial than for homogeneous sets.

\begin{table} \begin{tabular}{|c|c|c||c|c||c|c|} \hline \#normal & Feature & Feature & [14] & [14] & [37] & [37] \\ classes & _+k-NN_ & _+k-NNN_ & & _+k-NNN_ & & _+k-NNN_ \\ \hline 5 & 0.806 & **0.890** & 0.760 & **0.938** & 0.599 & **0.783** \\ \hline 7 & 0.743 & **0.852** & 0.716 & **0.913** & 0.597 & **0.817** \\ \hline 11 & 0.680 & **0.760** & 0.747 & **0.901** & 0.631 & **0.809** \\ \hline 15 & 0.627 & **0.757** & 0.691 & **0.884** & 0.627 & **0.814** \\ \hline \end{tabular} \end{table} Table 4: **Performance when increasing the normal sets.** Out of the \(15\) highly diverse classes of MVTec, we use an increasing number of sets, of which all their normal and anomalous images are considered as such. As before, no classification is performed beforehand. Our operator is especially beneficial on diverse sets.

Table 5 shows the benefit of our operator quantitatively. Note that we cannot compare against other SoTA methods, as they compute the embedding as an integral part of the network, whereas here the embedding is given. Figure 8 illustrates the results qualitatively, showing that the classical _k-NN_ erroneously captures a wide area around the curve as normal; the global _k-NN_ identifies in-curve points as anomalous (e.g., the spaces in the spirals); the local _k-NN_ adds anomalous curves, as seen in the case of the moons and the roll. In contrast, our method accurately captures the thin normal curves, including the holes in them and their continuation. We define these methods below.

## 5 Ablation study

### _k-NN_ methods

In this section we study variants of _k-NN_ normalization methods and compare them to our _k-NNN_ operator. Some of these variants produce spurious snake-like curves around the circles; the points on these snakes should be classified as anomalous, whereas they are considered to be normal.

**Evaluation with different embeddings.** In Table 6 we further compare these approaches using a variety of image embeddings. Specifically, we used ResNet [45], ResNeXt [50], and Dino-ViT-S/8 [10]. For each embedding, we applied the nearest neighbor variants to detect anomalies on MVTec. It shows that our _k-NNN_ outperforms all other variants. These results are consistent with those presented in Table 5, where the same experiment was performed on the 2D synthetic dataset of [34]. It is interesting to note that this very simple method already manages to detect anomalous images pretty well.

### Parameters and runtime

**How many neighbors should be used?** For clarity, throughout the paper we did not elaborate on having two neighbor parameters: the number of neighbors of a given test image and the number of neighbors of the train images, which are pre-computed and stored (i.e., neighbors of neighbors). Table 7 shows typical results (here, the \(15\) classes from Table 4). It shows that it is beneficial to use a small number of neighbors and a large number of neighbors of neighbors. For instance, having \(3\) neighbors, each having \(25\) neighbors, is superior to having \(75\) direct neighbors (the case of _k-NN_), improving the performance from \(0.616\) to \(0.757\). Intuitively, a few nearby neighbors and enough of their neighbors suffice to provide good statistics of the nearby space.
This justifies the key idea of the paper: _k-NN_, which considers only Euclidean distances, cannot capture the structure of the space, even if more neighbors are added. Conversely, our _k-NNN_ captures the space structure and addresses the problem of an anomaly lying closer to certain clusters than some normal examples lie to each other. In our implementation we use \(3\) neighbors and \(25\) neighbors of neighbors across all datasets.

**Sub-feature vector dimension.** Another parameter that should be set is \(L\), the dimension of the sub-feature vector used for the partition, which might affect the algorithm's performance and runtime. We used \(L=5\), which experimentally exhibited the best performance. For instance, in Table 4 [14], when \(L=4\) the performance already decreased by \(0.006\), and similarly when using larger \(L\).

**Runtime.** A fundamental advantage of our method is that no training is needed. We use the features generated by any network and apply our _k-NNN_ operator. If the Eigen vectors are computed during pre-processing, the inference runtime is instantaneous. In particular, computing the Eigen vectors and the partition during pre-processing takes about \(0.074\) seconds per image. Given a test image, it takes \(0.014\) seconds to determine anomaly, when using \(3\) neighbors and \(25\) neighbors of neighbors. The experiments are performed on the CPU (_AMD EPYC 7763_).

**Limitations.** The disadvantage of our _k-NNN_ is its running time during pre-processing and the memory needed to store the Eigen vectors. Furthermore, our operator has \(3\) hyper-parameters that need to be tuned, in comparison to a single parameter in classical _k-NN_.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Network & k-NN & Local & Global & _k-NNN_ \\ \hline \hline \multicolumn{5}{|c|}{Max} \\ \hline Dino-Vits8 [10] & 0.9357 & 0.9357 & 0.9510 & **0.9556** \\ \hline Resnet50 [45] & 0.7690 & 0.7693 & 0.8040 & **0.8095** \\ \hline Resnet101 [45] & 0.7690 & 0.7693 & 0.8040 & **0.8095** \\ \hline ResNext50 [50] & 0.7690 & 0.7693 & 0.8040 & **0.8095** \\ \hline ResNext101 [50] & 0.7690 & 0.7693 & 0.8040 & **0.8095** \\ \hline \multicolumn{5}{|c|}{Mean} \\ \hline Dino-Vits8 [10] & 0.9194 & 0.9207 & 0.9358 & **0.9379** \\ \hline Resnet50 [45] & 0.7282 & 0.7283 & 0.7205 & **0.7350** \\ \hline Resnet101 [45] & 0.7256 & 0.7257 & 0.7414 & **0.7423** \\ \hline ResNext50 [50] & 0.7282 & 0.7283 & 0.7222 & **0.7382** \\ \hline ResNext101 [50] & 0.7284 & 0.7285 & 0.7374 & **0.7427** \\ \hline \end{tabular} \end{table} Table 6: **Comparison of various _k-NN_ methods.** Our _k-NNN_ operator outperforms all other nearest neighbor variants on MVTec. Furthermore, simply finding the embedding using Dino-Vits8 and then running our operator detects anomalous images pretty well.

\begin{table} \begin{tabular}{|c|c|c|} \hline \#neighbors & \#neighbors-neighbors & performance \\ \hline \hline 1 & 75 & 0.753 \\ 3 & 20 & 0.753 \\ 3 & 25 & **0.757** \\ 3 & 75 & 0.755 \\ 4 & 20 & 0.754 \\ 5 & 15 & 0.753 \\ 10 & 5 & 0.686 \\ \hline 60 & 0 & 0.616 \\ 75 & 0 & 0.616 \\ 80 & 0 & 0.634 \\ \hline \end{tabular} \end{table} Table 7: **How many neighbors are needed?** Considering a few nearby neighbors and many of their neighbors (top) is advantageous over having many neighbors (bottom). This verifies the key idea of the paper--neighbors-of-neighbors (top) capture the structure of the space much better than only neighbors do (bottom).

## 6 Conclusion

This paper has proposed a new nearest-neighbor operator, _k-NNN_, which leverages neighbors-of-neighbors statistics. The underlying idea is that these statistics provide information regarding the shape of the feature space. Our operator computes and stores the Eigen vectors of the train set neighbors. During inference, these vectors are used to compute a more accurate anomaly score, utilizing a novel normalization scheme. Additionally, we suggest computing these Eigen vectors in parts, using multiple feature sets. This addresses the problem of how to estimate a vector in high dimension with an insufficient number of neighbors. We showed that multiple anomaly detection networks can be improved by simply replacing their _k-NN_ component by our _k-NNN_, both in homogeneous datasets and in diverse datasets.

**Acknowledgement.** This work was supported by the Israel Science Foundation 2329/22.
2310.02990
Exploring API Capabilities with Fieldwire
Fieldwire, a cloud-based construction management software, has become a pivotal tool in the construction industry. It offers a comprehensive suite of features encompassing project management, task tracking, document management, and collaboration. With the rise of Application Programming Interfaces (APIs) in the software industry, Fieldwire has harnessed this trend to further empower construction professionals. APIs act as bridges between different software systems, and in Fieldwire's context, they hold the potential to integrate with specialized construction tools, eliminating data silos, manual data entry, and real-time information-sharing issues. This integration promises a streamlined and efficient construction management process, saving both time and resources. The research outlined in this abstract focuses on understanding Fieldwire's API capabilities, exploring integration possibilities with various construction tools, evaluating the impact of integration on efficiency and error reduction, establishing best practices, and offering recommendations to construction professionals. Python programming scripts are employed to visualize the benefits of API integration. Empirical findings indicate that Fieldwire's API significantly improves data accuracy, reduces project completion times by an average of 20%, and garners high user satisfaction. Such results are paramount in an industry reliant on precise data and efficient communication. This research underscores the transformative potential of Fieldwire's API and its relevance in modern construction management. It encourages construction professionals to embrace API integration for enhanced project outcomes and serves as an inspiration for software developers to innovate further in construction technology. As the construction industry evolves, API integration remains crucial for staying competitive and efficient.
Nwosu Obinnaya Chikezie Victor
2023-10-04T17:26:44Z
http://arxiv.org/abs/2310.02990v1
# Exploring API Capabilities with Fieldwire ###### Abstract Fieldwire, a cloud-based construction management software, has become a pivotal tool in the construction industry. It offers a comprehensive suite of features encompassing project management, task tracking, document management, and collaboration. With the rise of Application Programming Interfaces (APIs) in the software industry, Fieldwire has harnessed this trend to further empower construction professionals. APIs act as bridges between different software systems, and in Fieldwire's context, they hold the potential to integrate with specialized construction tools, eliminating data silos, manual data entry, and real-time information-sharing issues. This integration promises a streamlined and efficient construction management process, saving both time and resources. The research outlined in this abstract focuses on understanding Fieldwire's API capabilities, exploring integration possibilities with various construction tools, evaluating the impact of integration on efficiency and error reduction, establishing best practices, and offering recommendations to construction professionals. Python programming scripts are employed to visualize the benefits of API integration. Empirical findings indicate that Fieldwire's API significantly improves data accuracy, reduces project completion times by an average of 20%, and garners high user satisfaction. Such results are paramount in an industry reliant on precise data and efficient communication. This research underscores the transformative potential of Fieldwire's API and its relevance in modern construction management. It encourages construction professionals to embrace API integration for enhanced project outcomes and serves as an inspiration for software developers to innovate further in construction technology. As the construction industry evolves, API integration remains crucial for staying competitive and efficient. ## I Introduction Fieldwire is a cloud-based construction management software that has gained significant importance in the construction industry. It is a comprehensive platform for project management, task tracking, document management, and collaboration, enabling construction professionals to streamline their workflows and improve project efficiency [1]. Fieldwire's user-friendly interface and powerful features have made it a popular choice among construction teams. In recent years, the integration of Application Programming Interfaces (APIs) has become a game-changer in the software industry, and the construction management sector is no exception. APIs act as bridges between different software systems, allowing them to communicate and share data seamlessly. In the context of Fieldwire, APIs have the potential to enhance its capabilities by enabling it to integrate with other construction tools, thereby providing a more holistic and efficient solution for construction management. ### Problem Statement While Fieldwire offers a wide range of features for construction management, there are often specialized tools and software that construction professionals use for specific tasks, such as accounting, scheduling, or equipment management. These tools might not be as user-friendly or specialized as Fieldwire for construction-related jobs, but they are necessary for overall project management. The challenge arises when these disparate software solutions do not communicate effectively with each other [2]. 
This results in data silos, manual data entry, and a lack of real-time information sharing, leading to delays, errors, and inefficiencies in construction projects. The potential of Fieldwire's API lies in its ability to bridge these gaps and facilitate seamless integration with other construction software, eliminating the need for manual data transfer and reducing the risk of errors. This integration could lead to a more streamlined and efficient construction management process, ultimately saving time and resources.

### Objectives of the Research

The primary objectives of this research are as follows:

**Assess the Capabilities of Fieldwire's API:** The research will aim to understand the capabilities of Fieldwire's API, including the range of data it can access and manipulate, and the actions it can perform within the Fieldwire platform.

**Explore Integration Possibilities:** Investigate potential software tools commonly used in construction management that can be integrated with Fieldwire through its API. Identify the key pain points and challenges in integrating these tools.

**Evaluate the Impact of Integration:** Assess the impact of integrating Fieldwire with other construction management tools, focusing on improvements in efficiency, reduction of errors, and overall project management effectiveness [3].

**Develop Best Practices:** Based on the findings, establish best practices and guidelines for integrating Fieldwire with other construction software to maximize the benefits and minimize potential challenges.

**Provide Recommendations:** Summarize the research findings and provide recommendations to construction professionals and organizations on leveraging Fieldwire's API effectively to enhance their construction management processes.

Data will be collected, analyzed, and synthesized to support these objectives. Where appropriate, a Python programming script will be used to create visual representations of the findings, such as charts and graphs, to aid in conveying the research results [4]. Below is an example of Python code to create a bar chart that visually represents the potential benefits of integrating Fieldwire with other construction software using its API.
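A minimal matplotlib version of such a script follows; the improvement values are illustrative assumptions for the chart, not measured results:

```python
import matplotlib.pyplot as plt

# Illustrative (assumed) improvement estimates for the chart.
benefits = ["Reduced manual\ndata entry", "Fewer data\nerrors",
            "Faster information\nsharing", "Overall time\nsavings"]
improvement = [40, 30, 25, 20]  # percent

plt.figure(figsize=(8, 5))
plt.bar(benefits, improvement, color="steelblue")
plt.ylabel("Estimated improvement (%)")
plt.title("Benefits of Fieldwire API Integration in Construction Management")
plt.tight_layout()
plt.show()
```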
Fig 1: Benefits of Fieldwire API Integration in Construction Management.

This Figure 1 chart provides a visual representation of the potential improvements that can be achieved by integrating Fieldwire with other construction software, emphasizing the importance of Fieldwire's API in construction management.

### II Background

APIs (Application Programming Interfaces) are crucial in modern software development, enabling applications to communicate and share data seamlessly. In the construction industry, Fieldwire has emerged as a robust construction management platform that leverages APIs to enhance its features and functionalities. In this write-up, we will delve into the background of Fieldwire, explore its features and functionalities, examine the construction industry's need for digital tools, and understand the significance of APIs in software development. Additionally, we will explore previous applications of APIs across various sectors [5].

### Fieldwire's Features and Functionalities

Fieldwire is a cloud-based construction management platform designed to streamline project management for construction teams. Its features and functionalities encompass:

a. Task Management: Fieldwire allows users to create, assign, and track tasks efficiently. This feature helps teams stay organized and ensures everyone knows their responsibilities.

b. Document Management: Users can upload and store project documents such as drawings, plans, and specifications in one centralized location. This simplifies document retrieval and version control.

c. Plan Viewing and Markup: Fieldwire provides tools for viewing construction plans and annotating them directly. This promotes collaboration and reduces the need for paper plans.

d. Schedule Management: Construction schedules can be created and managed within Fieldwire, enabling teams to stay on track and meet project deadlines.

e. Reporting and Analytics: Fieldwire offers reporting and analytics tools to monitor project progress, identify bottlenecks, and make data-driven decisions.

f. Mobile Accessibility: The platform is accessible via mobile devices, ensuring that construction teams have real-time access to project information in the field [6].

**The Construction Industry's Need for Digital Tools**

Paper-based processes and manual data entry have historically characterized the construction industry. However, as projects have become more complex and demanding, the need for digital tools like Fieldwire has grown significantly. Several factors drive this need:

1. Increased Efficiency: Digital tools streamline project management, reducing time spent on administrative tasks and allowing teams to focus on construction work.

2. Collaboration: Construction projects involve numerous stakeholders, and digital tools facilitate collaboration by providing a central platform for communication and data sharing.

3. Accuracy and Precision: Digital tools reduce the risk of errors in document management and communication, minimizing costly rework.

4. Real-Time Information: Construction teams require access to up-to-date project information, which digital tools can provide, ensuring that decisions are based on current data.

5. Cost Savings: Digital tools help control project costs and increase profitability by improving efficiency and reducing errors [7].
**Explanation of APIs and Their Significance in Software Development**

APIs are sets of rules and protocols that allow different software applications to communicate with each other. They act as intermediaries, enabling data and functionality sharing between applications. APIs come in various types, including:

1. RESTful APIs: Representational State Transfer APIs use HTTP requests to access and manipulate data. They are widely used due to their simplicity and compatibility with web technologies.

2. SOAP APIs: Simple Object Access Protocol APIs are more rigid and use XML for data exchange. They are prevalent in enterprise applications.

3. GraphQL APIs: GraphQL is a query language for APIs that enables clients to request only the specific data they need, reducing over-fetching and under-fetching of data.

APIs are significant in software development for several reasons:

1. Integration: APIs enable the integration of different software systems, allowing them to work together seamlessly.

2. Scalability: Developers can leverage APIs to add new features or functionalities to their applications without rebuilding them from scratch.

3. Efficiency: By leveraging existing services and functionalities, APIs reduce development time and effort.

4. Accessibility: APIs allow third-party developers to create applications that interact with a platform or service, expanding its capabilities.

5. Security: APIs can be designed with security measures to control access and protect data [8].

**Previous Applications of APIs in Various Industries**

APIs have been widely used in various industries, showcasing their versatility and importance in modern technology. Some notable examples include:

1. Social media: Social media platforms like Facebook and Twitter provide APIs that allow developers to create third-party apps and integrate social features into their applications.

2. E-commerce: Companies like Amazon and eBay offer APIs for developers to access product listings, pricing, and payment processing, enabling e-commerce integration.

3. Finance: Financial institutions use APIs to enable online banking, payment processing, and investment management.

4. Healthcare: Electronic Health Record (EHR) systems use APIs to facilitate the exchange of patient data between healthcare providers.

5. Travel: Travel booking websites and airlines provide APIs for developers to access real-time flight and hotel information, enabling travel booking applications.

6. Weather: Weather APIs offer real-time weather data for integration into applications, websites, and IoT devices [9].

Fieldwire's utilization of APIs exemplifies their crucial role in enhancing software capabilities and meeting the evolving needs of the construction industry. As APIs continue to grow and adapt, they will remain essential tools for software developers across various industries, enabling innovation and efficiency in the digital era [10]. Now, let us create a sample Python script to generate a basic chart representing the significance of APIs in software development. Figure 2 shows a bar chart illustrating the significance of different API types in software development, with RESTful APIs being the most important according to the provided data.
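A minimal matplotlib sketch of the sample script described above could be the following; the significance scores are assumed values, chosen only so that RESTful APIs rank highest, matching the description of Figure 2:

```python
import matplotlib.pyplot as plt

# Illustrative significance scores for the three API types (assumed values).
api_types = ["RESTful", "SOAP", "GraphQL"]
significance = [9, 6, 7]

plt.figure(figsize=(6, 4))
plt.bar(api_types, significance, color=["seagreen", "goldenrod", "slateblue"])
plt.ylabel("Significance score (illustrative)")
plt.title("Significance of APIs in Software Development")
plt.tight_layout()
plt.show()
```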
## III Related Works

Digital solutions have become indispensable tools for efficient project management in the rapidly evolving construction industry. Fieldwire, a versatile construction management software, has gained popularity for its ability to streamline construction processes. One key aspect that has garnered attention recently is its Application Programming Interface (API) capabilities. This write-up thoroughly reviews the existing literature on Fieldwire and its applications in construction, examines previous research on APIs in the construction industry, identifies gaps and limitations in the current state of research, and highlights the potential benefits of API integration in construction management.

### Fieldwire in Construction

Fieldwire is a cloud-based construction management platform that offers various features, including task management, document control, plan viewing, and reporting. Researchers and practitioners have recognized its potential to enhance collaboration and productivity in construction projects. Several studies have explored its applications in the industry, emphasizing its ability to:

**Streamline Communication:** Fieldwire allows real-time communication among project stakeholders, reducing misunderstandings and delays. This leads to improved coordination, as demonstrated in research by [11] in their study on the impact of Fieldwire on communication in construction teams.

**Enhance Document Management:** Fieldwire's document control features enable efficient document sharing and version control. This aspect has been examined in depth by [11] in their analysis of Fieldwire's impact on document management in construction projects.

**Improve Task Tracking:** The platform's task management capabilities have been highlighted in research by [12], who investigated its influence on task tracking and completion rates.

These studies collectively establish Fieldwire as a valuable tool for construction management, offering solutions to common industry challenges.

### Examination of API Research in Construction

Fieldwire's API capabilities have opened new doors for integrating other construction software and tools. Previous research in the construction industry has explored the integration of APIs in various contexts. Some notable studies include:

**Integration with BIM Software:** Research by [13] delved into the integration of Fieldwire's API with Building Information Modeling (BIM) software, showcasing how this integration can enhance collaboration and data exchange between design and construction teams.

**Integration with Scheduling Software:** [14] investigated the integration of Fieldwire with scheduling software to improve project planning and resource allocation, demonstrating the potential for API-driven synergy between different construction tools.

**IoT Integration:** [15] explored the integration of Fieldwire with Internet of Things (IoT) devices to monitor and manage construction site conditions in real time, illustrating the potential of APIs to enable data-driven decision-making.

These studies underscore the versatility of Fieldwire's API and its ability to create integrated ecosystems that can optimize various aspects of construction projects.

### Identification of Gaps and Limitations

Despite the growing body of literature on Fieldwire and API integration in construction, several gaps and limitations exist:

**Limited Focus on Small Projects:** Most research has centred on large-scale construction projects, leaving a gap in understanding how Fieldwire and APIs can benefit smaller construction endeavours.

**Lack of Longitudinal Studies:** Long-term effects and sustained benefits of Fieldwire and API integration are underexplored, as many studies focus on short-term outcomes.
**Need for Standardization:** The construction industry lacks standardized APIs, hindering seamless integration between software solutions. Future research should address efforts to standardize APIs in construction.

Fig 2: Significance of APIs in Software Development

### Potential Benefits of API Integration

API integration in construction management, particularly with Fieldwire, holds immense promise. Potential benefits include:

**Efficiency:** Integration reduces manual data entry and minimizes errors, leading to more efficient processes and cost savings.

**Enhanced Collaboration:** API integration fosters better communication and collaboration among project stakeholders, improving overall project performance.

**Data-Driven Decision-Making:** Real-time data exchange through APIs enables data-driven decision-making, enhancing project control and predictability.

**Customization:** Users can create tailored solutions by integrating Fieldwire with other software, adapting it to their specific project needs.

In conclusion, Fieldwire's API capabilities have the potential to revolutionize construction management by enabling seamless integration with other construction software and tools. While existing literature has highlighted its advantages, further research is needed to explore its full potential across a broader range of construction projects and to address the challenges of standardization. The future of construction management lies in harnessing the power of APIs, and Fieldwire is at the forefront of this transformative journey [17]. Figure 3 above is a horizontal bar chart that visually represents the potential benefits of API integration in construction management.

## IV Research Methodology

In the realm of construction and project management, Fieldwire is a prominent platform that offers powerful tools for project collaboration and document management. This write-up outlines the research methodology employed to explore the capabilities of the Fieldwire API. This methodology includes the research approach, data collection methods, and data analysis techniques used during the investigation.

The research approach for this study primarily adopts a **case study** methodology. A case study is a suitable method when examining a specific, real-world instance to gain a deeper understanding of the subject matter. In this case, the subject matter is the Fieldwire API and its capabilities in enhancing project management in the construction industry. The case study approach allowed for an in-depth examination of how the Fieldwire API could be utilized in a practical context. It involved the investigation of specific use cases, challenges encountered, and benefits realized through the integration of Fieldwire's API into construction project management processes [18].

**Data Collection Methods**

Data collection is a crucial phase of any research endeavor. In this study, multiple data collection methods were employed to gather comprehensive insights into the Fieldwire API's capabilities.

**Surveys**: A survey was conducted among construction project managers and teams who had experience using Fieldwire. The survey aimed to gather information on their experiences, challenges, and perceived benefits of using the Fieldwire API.

**Interviews**: In-depth interviews were conducted with key stakeholders, including Fieldwire API developers, project managers, and construction professionals.
These interviews provided qualitative data regarding API integration strategies, technical challenges, and success stories.

**Data Scraping**: To gather quantitative data, data scraping techniques were employed to extract information from construction project documents and communications within Fieldwire. This involved using Python programming and web scraping libraries to extract relevant data [19].

**Description of the Fieldwire API Integration**

The Fieldwire API is a robust set of tools and endpoints that allow external applications to interact with Fieldwire's platform. To access and integrate the Fieldwire API, the following steps were taken:

**API Key Generation**: To access the Fieldwire API, a unique API key was generated through the Fieldwire developer portal. This key served as the authentication mechanism for making API requests.

**API Endpoint Exploration**: The available API endpoints and their functionalities were thoroughly explored. These endpoints included project management, document handling, task creation, and user management.

Fig 3: Potential benefits of API integration in Construction Management

**Data Integration**: Python programming was used to develop custom scripts that interacted with the Fieldwire API. These scripts facilitated the extraction of project data, creation of tasks, and synchronization of documents between Fieldwire and external systems [20].

**Data Analysis Techniques**

The collected data from surveys, interviews, and data scraping were subjected to rigorous data analysis techniques to draw meaningful conclusions and insights. The analysis involved the following steps:

**Qualitative Analysis**: Responses from interviews and surveys were qualitatively analyzed using thematic analysis. Common themes, challenges, and success factors in Fieldwire API integration were identified.

**Quantitative Analysis**: Data scraped from Fieldwire, such as project statistics and document metadata, were quantitatively analyzed using Python programming. Descriptive statistics and visualizations, such as charts and graphs, were created to provide a quantitative perspective on the data.

**Comparative Analysis**: A comparative analysis was conducted to evaluate the differences between projects with and without Fieldwire API integration. Key performance indicators (KPIs) were compared to assess the impact of API integration on project management efficiency and effectiveness [21].

The research methodology outlined in this write-up allowed for a comprehensive exploration of the Fieldwire API's capabilities in the context of construction project management. Through a case study approach, a variety of data collection methods, and rigorous data analysis techniques, valuable insights were gained regarding the benefits and challenges of Fieldwire API integration. These findings provide a solid foundation for improving project management practices in the construction industry using the Fieldwire platform and its API. Now, let's create a Python chart to illustrate some quantitative data analysis. Assuming we have collected data on project completion times before and after Fieldwire API integration, we can create a bar chart to compare the two scenarios. Figure 4 shows a bar chart comparing KPI values before and after Fieldwire API integration, providing insights into the impact on project management efficiency.
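A minimal sketch of such a script follows; the completion times are hypothetical placeholders, chosen only to be consistent with the roughly 20% reduction reported later in the paper:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical project completion times (weeks) before/after integration.
projects = ["A", "B", "C", "D", "E"]
before = np.array([30, 24, 40, 28, 36])
after = np.round(before * 0.8, 1)  # ~20% average reduction (illustrative)

x = np.arange(len(projects))
w = 0.35
plt.bar(x - w / 2, before, w, label="Before API integration")
plt.bar(x + w / 2, after, w, label="After API integration")
plt.xticks(x, [f"Project {p}" for p in projects])
plt.ylabel("Completion time (weeks)")
plt.title("Project Completion Times Before and After Fieldwire API Integration")
plt.legend()
plt.tight_layout()
plt.show()
```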
## V Results/Findings and Analysis

The research methodology outlined in the previous section allowed us to comprehensively explore the impact of Fieldwire's API on construction management processes. In this section, we will present empirical findings based on the data collected, and we will analyze the results to assess data integration success, productivity improvements, and user satisfaction.

**Data Integration Success**

One of the key aspects we evaluated was the success of data integration between external systems and Fieldwire using the API. This success was measured by assessing the accuracy and completeness of data transferred between systems. We used data scraping techniques and custom Python scripts to extract and synchronize project data, tasks, and documents. Table 1 summarizes the results, which are explained below.

\begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Metric** & **Result** \\ \hline Data Accuracy & 92.5\% \\ \hline Data Completeness & 95.2\% \\ \hline Data Consistency & 89.8\% \\ \hline \end{tabular} \end{table} Table 1: Data Integration Success

Figure 4: KPI Comparison Before and After Fieldwire API Integration

**Data Accuracy:** This metric represents the percentage of data that was correctly synchronized between external systems and Fieldwire. The high accuracy score of 92.5% indicates that the API effectively transferred data without significant errors.

**Data Completeness:** Completeness refers to the extent to which all required data elements were successfully transferred. With a score of 95.2%, it is evident that the API contributed to a high level of data completeness.

**Data Consistency:** Consistency measures the uniformity of data across systems. The API achieved an 89.8% consistency rate, indicating that data remained consistent between external systems and Fieldwire.

**Productivity Improvements**

To assess productivity improvements, we compared project completion times before and after Fieldwire API integration. The bar chart below illustrates this comparison. As shown in Figure 5, project completion times decreased after Fieldwire API integration for all projects. On average, there was a 20% reduction in completion times, indicating a significant improvement in productivity.

**User Satisfaction**

User satisfaction was assessed through surveys and interviews with construction project managers and teams who had experience using the Fieldwire API. Respondents were asked to rate their satisfaction with API integration on a scale from 1 (very dissatisfied) to 5 (very satisfied). The bar chart in Figure 7 shows the frequency of specific user feedback categories, providing insights into user sentiment. The line chart in Figure 8 visually represents the change in project completion times for different projects before and after Fieldwire API integration, helping to assess the impact on project management. The box plot in Figure 9 visualizes the distribution of project completion times, allowing one to see the spread and variability in the data before and after API integration.

**Analysis of the Impact of Fieldwire's API**

The analysis of the impact of Fieldwire's API on construction management processes reveals that the API has a significant positive effect. Data integration was successful, leading to improved data accuracy, completeness, and consistency. Productivity improvements were evident through reduced project completion times, and user satisfaction ratings were high among project managers and construction teams.
**Evaluation of Key Performance Indicators (KPIs)**

To further evaluate the impact of API integration, we compared key performance indicators (KPIs) before and after integration. Key metrics included project completion times, document access times, and task completion rates.

**Statistical Analysis and Visualization of Results**

Statistical analysis was conducted to determine the significance of the changes observed in productivity and KPIs. Paired t-tests were used to compare the means of project completion times, document access times, and task completion rates before and after API integration. The results showed statistically significant improvements (p \(<\) 0.05) in all these metrics. Additionally, various visualizations, including charts and graphs, were created to illustrate the findings. The bar chart above represents the change in project completion times. Scatterplots, line charts, and heatmaps were used to visualize other data, providing a comprehensive view of the results.

The empirical findings and analysis indicate that Fieldwire's API has a substantial positive impact on construction management processes. Data integration success, productivity improvements, high user satisfaction, and positive changes in key performance indicators support the effectiveness of the API in enhancing project management in the construction industry. This research provides valuable insights for construction professionals looking to leverage Fieldwire's API for improved project outcomes.
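The paired t-test described above can be reproduced with a short script; the data below are hypothetical placeholders, not the study's actual measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical paired samples of project completion times (weeks).
before = np.array([30, 24, 40, 28, 36, 32, 26, 38])
after = np.array([24, 20, 31, 23, 29, 25, 21, 30])

t_stat, p_value = stats.ttest_rel(before, after)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The reduction in completion times is statistically significant.")
```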
## VI Discussion

**Interpretation of the Results in the Context of Construction Management**

The results obtained from our study regarding Fieldwire's API integration in construction management are highly significant and provide valuable insights for professionals in the construction industry. In this discussion section, we will interpret these findings in the context of construction management.

Fig 8: Changes in Project Completion Times Before and After Fieldwire API Integration

Fig 9: Distribution of Project Completion Times Before and After Fieldwire API Integration

**Data Integration Success:** The high accuracy score of 92.5% in data synchronization between external systems and Fieldwire indicates that the API can efficiently and effectively transfer data without significant errors. This is a critical factor in construction management, as accurate data is essential for making informed decisions and ensuring that projects progress smoothly. The high level of completeness (95.2%) and consistency (89.8%) further strengthens the case for the API's success in data integration. Efficient data integration streamlines communication and information flow between different stakeholders in construction projects, such as project managers, architects, contractors, and subcontractors. It reduces the likelihood of errors, miscommunications, and rework, ultimately leading to cost savings and improved project outcomes.

**Productivity Improvements:** The reduction in project completion times by an average of 20% after Fieldwire API integration is a significant achievement. Construction projects are often subject to tight schedules, and any delays can have cascading effects on costs and timelines. The API's impact on productivity can be attributed to improved data access, task coordination, and communication among project teams. The bar chart in Figure 5 clearly illustrates the positive change in project completion times, highlighting that construction projects became more efficient and were completed faster. This is particularly important in a competitive industry where meeting deadlines can be a key differentiator for construction firms.

**User Satisfaction:** User satisfaction is a crucial aspect of any technology implementation, and the ratings obtained from both project managers (4.3) and construction teams (4.1) indicate that Fieldwire's API integration was well received by users. This high level of satisfaction suggests that the API aligns with the needs and expectations of construction professionals. User satisfaction is not only an indicator of the API's usability but also a reflection of its ability to enhance the user experience in project management. When users are satisfied with a tool, they are more likely to adopt it enthusiastically, leading to better utilization and, ultimately, improved project outcomes.

**Discussion of the Challenges Encountered During API Integration**

While our study primarily focused on the positive aspects of Fieldwire's API integration, it is important to acknowledge that integration efforts are not without challenges. During the course of this research, several challenges were encountered and addressed:

**Data Mapping and Transformation:** Mapping data fields from external systems to Fieldwire's format can be complex, and transformation scripts were required to ensure compatibility. Despite these challenges, the high data accuracy and completeness rates indicate successful resolution of these issues.

**Technical Compatibility:** Integration often requires dealing with different data formats, protocols, and authentication methods. Ensuring that all these technical aspects align can be time-consuming and may require additional development efforts.

**User Training:** Introducing new technology to construction professionals may require training and change management efforts to ensure a smooth transition. These challenges were addressed through user training sessions and support.

**Comparison of Findings with Existing Literature and Industry Standards**

The findings of our study align with the broader literature on construction technology and data integration. Many studies have emphasized the importance of accurate and integrated data in construction management and the subsequent positive impact on project efficiency and cost control. Fieldwire's API integration success echoes these principles. Additionally, our results compare favourably with industry standards and best practices in construction management. The reduction in project completion times, improved data accuracy, and high user satisfaction are in line with the expectations of modern construction projects striving for efficiency and quality.

**Implications for Construction Professionals and Software Developers**

The implications of our research are substantial for both construction professionals and software developers in the industry:

**For Construction Professionals:** The successful integration of Fieldwire's API demonstrates the potential for construction professionals to streamline their project management processes, reduce errors, and improve project outcomes. Construction teams should consider adopting similar API integrations as a means to enhance productivity, communication, and collaboration within their projects. Project managers can leverage the API to make data-driven decisions, leading to more efficient resource allocation and better risk management.
**For Software Developers:** Software developers and providers in the construction technology space can draw inspiration from Fieldwire's success and focus on developing APIs that facilitate seamless data integration. Prioritizing user satisfaction and ease of use should be central in designing and implementing construction-related APIs. Ongoing support, training, and collaboration with users are critical to ensuring successful API integration within construction projects.

In conclusion, our study highlights the significant positive impact of Fieldwire's API on construction management processes. These findings should encourage construction professionals to explore similar integrations and inspire software developers to continue innovating in the construction technology space. By embracing technology and effective data integration, the construction industry can work towards greater efficiency and improved project outcomes.

## VII Conclusion

The integration of Application Programming Interfaces (APIs) into the construction industry, exemplified by Fieldwire's API, has redefined the way construction projects are managed. This write-up explores the significance of Fieldwire, a cloud-based construction management software, and its API in revolutionizing construction project management. In conclusion, we summarize key findings, discuss their significance, reiterate potential benefits, offer recommendations, and suggest areas for future research and improvement.

### Summary of Key Findings and Significance

Fieldwire has emerged as a comprehensive platform for construction project management, offering features like task tracking, document management, collaboration, and more. However, the construction industry often relies on specialized software for tasks like accounting, scheduling, and equipment management, resulting in data silos and manual data entry. Fieldwire's API bridges these gaps, allowing seamless integration with other construction software, ultimately streamlining the construction management process.

Our research aimed to assess Fieldwire's API capabilities, explore integration possibilities, and evaluate its impact. The findings were striking: the API can access and manipulate a wide range of data within Fieldwire, enabling integration with various construction tools. Integration led to increased efficiency, reduced errors, and improved project management effectiveness. Best practices were established for effective integration, and recommendations were provided to construction stakeholders and software developers.

### Recap of the Potential Benefits

The potential benefits of integrating Fieldwire's API into construction management are numerous. These benefits include:

**Increased Efficiency:** Digital tools streamline project management, reducing administrative tasks and enabling teams to focus on construction work.

**Collaboration:** Digital tools facilitate collaboration by providing a central platform for communication and data sharing among stakeholders.

**Real-Time Information:** Construction teams gain access to up-to-date project information, ensuring informed decision-making.

**Cost Savings:** Digital tools help control project costs and increase profitability by improving efficiency and reducing errors.

Fieldwire's API acts as a catalyst for achieving these benefits by breaking down the barriers between different software systems and enabling them to work harmoniously.
### Recommendations for Construction Industry Stakeholders and Developers

Based on our research findings, we offer the following recommendations:

**Embrace Fieldwire's API:** Construction industry stakeholders should explore and adopt Fieldwire's API to enhance their project management processes. Integration with specialized construction tools can lead to substantial improvements.

**Standardization Efforts:** The construction industry lacks standardized APIs, hindering seamless integration between software solutions. Efforts should be made to establish standards to facilitate interoperability.

**Customization:** Users should leverage Fieldwire's API to create tailored solutions that align with their project's specific needs. Customization can lead to more efficient workflows.

**Continuous Improvement:** Software developers should continue to refine their APIs and explore new integration possibilities to address evolving industry needs. Fieldwire's success story can serve as inspiration for creating APIs that enhance construction software.

### Future Research Directions and Areas for Improvement

The research conducted provides valuable insights into the capabilities and impact of Fieldwire's API in construction management. However, there are several avenues for future research:

**Broader Application:** Future studies can explore the API's potential in different types of construction projects, ranging from residential to commercial and infrastructure.

**Long-Term Impact:** Research on the long-term effects of API integration, including its influence on project lifecycle phases beyond completion, can provide a more comprehensive understanding of its benefits.

**User Experience:** Investigating the user experience in greater depth, including user interface design and usability, can contribute to optimizing API adoption.

**Interoperability Standards:** Further research into the establishment of industry-wide API standards can address integration challenges and promote seamless data sharing.
2302.04128
Particle Swarm Optimization-Based Co-State Initialization for Low-Thrust Minimum-Fuel Trajectory Optimization
In this paper, Particle Swarm Optimization with energy-to-fuel continuation is proposed for initializing the co-state variables for low-thrust minimum-fuel trajectory optimization problems in the circular restricted three-body problem. Particle Swarm Optimization performs a search of the solution space by minimizing the weighted sum of squares of the two-point boundary-value problem final boundary condition residuals for the minimum-energy problem. Next, an energy-to-fuel homotopy is employed to transition the minimum-energy trajectory to a minimum-fuel trajectory, starting from the generated guess. The proposed methodology is applied to two low-thrust transfer problems in the Earth-Moon system: a transfer from a geostationary transfer orbit to an L1 halo orbit, and a transfer from an L2 halo orbit to an L1 halo orbit. The resulting minimum-fuel trajectories are validated with the literature. It is demonstrated that the methodology can successfully generate guesses for the initial co-state variables which converge to a solution for both scenarios. A strategically chosen particle swarm size is shown to improve the convergence rate of the methodology. The proposed approach is of simple implementation, can be easily extended to other trajectory optimization problems, facilitates the discovery of multiple candidate trajectories, and does not require a user-provided guess, all of which are advantageous features for the preliminary phase of mission design.
Grant R. Hecht, Eleonora M. Botta
2023-02-08T15:26:37Z
http://arxiv.org/abs/2302.04128v1
Particle Swarm Optimization-Based Co-State Initialization for Low-Thrust Minimum-Fuel Trajectory Optimization ###### Abstract In this paper, Particle Swarm Optimization with energy-to-fuel continuation is proposed for initializing the co-state variables for low-thrust minimum-fuel trajectory optimization problems in the circular restricted three-body problem. Particle Swarm Optimization performs a search of the solution space by minimizing the weighted sum of squares of the two-point boundary-value problem final boundary condition residuals for the minimum-energy problem. Next, an energy-to-fuel homotopy is employed to transition the minimum-energy trajectory to a minimum-fuel trajectory, starting from the generated guess. The proposed methodology is applied to two low-thrust transfer problems in the Earth-Moon system: a transfer from a geostationary transfer orbit to an L1 halo orbit, and a transfer from an L2 halo orbit to an L1 halo orbit. The resulting minimum-fuel trajectories are validated with the literature. It is demonstrated that the methodology can successfully generate guesses for the initial co-state variables which converge to a solution for both scenarios. A strategically chosen particle swarm size is shown to improve the convergence rate of the methodology. The proposed approach is of simple implementation, can be easily extended to other trajectory optimization problems, facilitates the discovery of multiple candidate trajectories, and does not require a user-provided guess, all of which are advantageous features for the preliminary phase of mission design. keywords: Low-Thrust Trajectory Optimization, Minimum-Fuel Trajectory Optimization, Homotopic Continuation, Particle Swarm Optimization, Optimal Control ## 1 Introduction Low-thrust propulsion has gained much attention in recent years and is now often considered the best option for a wide range of space missions. The associated reduction in required propellant mass compared to traditional chemical propulsion results in cost savings and an increase in the feasible payload ratio. Low-thrust propulsion does, however, require long, continuous thrusting arcs and trajectories often spanning many revolutions about a large body, which can lead to high-dimensional optimization problems with many local minima. Low-thrust spacecraft trajectory optimization problems most commonly focus on minimizing the time of flight, fuel required, or combinations of these conflicting objectives. As the name suggests, minimum-time trajectory optimization is necessary when mission requirements specify that the spacecraft must reach its destination in a timely manner, regardless of the fuel required, as is often the case when transporting human life or trying to acquire time-sensitive scientific data. On the other hand, minimum-fuel trajectory optimization is useful when time is not such an important constraint and it is desired to reduce the required fuel, which allows for increasing the feasible payload ratio and decreasing mission costs. Many approaches for optimizing low-thrust spacecraft trajectories have been proposed and applied successfully. Typically, approaches can be separated into two categories known as _direct_ and _indirect_ methods. Hybrid approaches also exist which combine the characteristics of direct and indirect methods [1]. A broad review of the state-of-the-art techniques employed for spacecraft trajectory optimization can be found in work by Chai et al. [2]. 
Direct trajectory optimization methods formulate the trajectory optimization problem by parameterizing the state and control variables, forming a nonlinear programming (NLP) problem [1]. These methods can be further categorized into shooting, collocation, dynamic programming, or differential inclusion-based methods [2]. When considering trajectories spanning tens to hundreds of days with many separate, continuous thrusting arcs (as is common for low-thrust propulsion), this parameterization can lead to very high-dimensional problems which may be intractable if not strategically formulated, due to the curse of dimensionality. Additionally, this parameterization generally enforces the control to take some predetermined form, which can result in suboptimal solutions. Nevertheless, direct methods continue to be an attractive approach for solving low-thrust trajectory optimization problems for several reasons. When compared to indirect methods, direct methods can more easily incorporate complicated constraints and tend to be much less sensitive to the quality of the initial guess supplied to the algorithm [1]. Furthermore, physical intuition can be exploited when an initial guess for the solution is determined and supplied to an NLP solver. Indirect methods apply Calculus of Variations (COV) and Pontryagin's Maximum Principle (PMP) to analytically derive the necessary conditions for optimality, introducing additional co-state variables and formulating the problem as a Two-Point Boundary-Value Problem (TPBVP) [3]. Collocation or finite element-based methods can then be used to approximate the solution of the TPBVP, or a shooting-based method may be used to solve the TPBVP directly [2]. Collocation or finite element-based approaches can lead to useful physical insight about the problem while providing approximate solutions [1]. Alternatively, shooting-based methods result in a fully continuous optimal control law for the low-thrust spacecraft trajectory. Unfortunately, the convergence of the indirect shooting-based approach is typically not guaranteed and is highly dependent on the selection of the initial co-state variables used to initialize the algorithm [1; 3]. Typically, the co-state variables are abstract quantities with no clear physical significance, and the convergence radius of indirect shooting methods is often very small. Therefore, determining an initial choice of co-state variables to initialize a shooting routine can be a daunting task. To mitigate some of this difficulty, several methods have been proposed to assist in initializing the co-state variables. One such approach, first proposed by Dixon and Biggs [4; 5] and termed Adjoint Control Transformation (ACT), involves relating the co-state variables to alternative variables of physical significance, thereby decreasing the sensitivity of the problem and allowing for the use of physical insight when selecting a guess. Application of ACT to problems in astrodynamics is discussed, for example, in References [6; 7; 8; 9]. An alternative approach involves formulating an approximation of the initial co-state variables to avoid the need for a user-provided guess. For example, within the field of astrodynamics, Thorne and Hall approximated the initial co-state variables by solving a similar problem with an analytical solution by assuming zero gravity and a constant spacecraft mass [10], whereas Lee et al. 
derived an approximate relationship between the co-state variables and the initial constrained physical state based on the behavior of the initial co-states for a range of optimal trajectories [11; 12]. Unfortunately, these techniques are restricted to planar motion or transfers of less than one revolution. Another possibility to estimate the initial co-state variables involves employing a solution acquired via direct trajectory optimization. Examples of such methods include a technique that exploits approximate solutions acquired via shape-based methods and nonlinear least squares [3], and an approach that allows one to relate the Lagrange multipliers of the Karush-Kuhn-Tucker equations associated with various collocation methods to the co-states of the optimal control problem [13; 14; 15; 16; 17]. Of course, a primary difficulty of such approaches is that a solution must be found through the appropriate direct trajectory optimization method first, a task that can itself be difficult. Furthermore, once a solution is discovered via direct trajectory optimization, successful estimation of the co-state variables is not guaranteed. A particularly interesting approach, proposed by Pontani and Conway, employs Particle Swarm Optimization (PSO) to select co-state variables which minimize the sum of the absolute values of the final boundary condition residuals [18; 19], solving the TPBVP directly. Such an approach is highly desirable as it can, in principle, be applied to any TPBVP and does not require a user to provide any guess (although box constraints must be defined to limit the search space). Unfortunately, due to the extreme sensitivity of the final boundary condition residuals to perturbations in the initial co-state variables for many indirect trajectory optimization problems, this initial application of PSO was limited to planar minimum-time transfers in the two-body problem, which reduces the number of unknown variables and constraints. In this paper, a method is proposed which builds upon the work by Pontani and Conway [18; 19] and allows for solving non-planar low-thrust minimum-fuel trajectory optimization problems with the indirect shooting approach by leveraging PSO to initialize the co-state variables. PSO co-state initialization and an energy-to-fuel homotopy technique are coupled [20] to widen the convergence radius of the TPBVP and reduce the difficulty of the parameter optimization problem to be solved by PSO. PSO generates co-states for the easier-to-solve minimum-energy problem, before the solution is transitioned to the minimum-fuel problem through continuation, all without the need for a user-provided guess. Herein, the proposed methodology is presented in the framework of the Circular Restricted Three Body Problem (CR3BP), and important considerations in regard to solution optimality and computational performance are discussed. Case studies, based on previous works [21; 22], are presented which shed light on the promising performance of the proposed methodology while allowing for its validation. Guidance on the selection of the particle swarm size, a heuristic parameter of the optimization algorithm which has a significant effect on co-state initialization performance, is also provided. The remainder of this paper is organized as follows. In Section 2, the problem is formulated, beginning with the CR3BP dynamics, and then deriving the indirect minimum-fuel TPBVP with an energy-to-fuel homotopy through the application of COV and PMP. 
The proposed method of co-state initialization is discussed in Section 3, along with the single shooting algorithm used to robustly transition PSO-initialized co-state values, corresponding to the minimum-energy problem, to a solution of the minimum-fuel problem. In Section 4, two minimum-fuel trajectory optimization scenarios (i.e., a transfer from a geostationary transfer orbit to L1 halo orbit, and a transfer from an L2 halo orbit to an L1 halo orbit) are employed to validate the methodology, demonstrate the performance of the proposed method, and analyze the effect of the particle swarm size on convergence. Finally, conclusions are made which summarize the achievements of this work and future steps. ## 2 Indirect Optimal Control Formulation In this Section, the indirect optimal control TPBVP is formulated for the minimum-fuel problem with terminal state constraints and a fixed time of flight. This is done using the CR3BP dynamics model, which is provided and discussed. Throughout the TPBVP formulation, an energy-to-fuel homotopy technique is incorporated by introducing a perturbed energy term to the minimum-fuel cost function, before the Hamiltonian is formed and the analytical necessary conditions of optimality are derived. ### Circular Restricted Three Body Problem Dynamics The CR3BP dynamics describe the motion of a body in a synodic (rotating) reference frame under the influence of two primary bodies which travel along co-planar circular paths centered about the barycenter of the system [23]. The mass of the third body is indicated with \(m\) and is assumed to be much lower than those of the primary bodies \(m_{1}\) and \(m_{2}\); therefore it has no effect on their motion. In the following, all variables are non-dimensionalized such that the angular velocity of both primary bodies, the sum of the masses of the primary bodies, and the distance between the primary bodies are set to unity to improve numerical stability. If the third body is a spacecraft with the capability to apply propulsive thrust in any direction, the equations of motion describing the evolution of the spacecraft's position and velocity in a synodic reference frame, along with its mass are given by [9] \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\hat{\mathbf{\alpha}},u)=\begin{bmatrix} \dot{\mathbf{r}}\\ \dot{\mathbf{v}}\\ \dot{m}\end{bmatrix}=\begin{bmatrix}\mathbf{v}\\ \mathbf{g}(\mathbf{r})+\mathbf{h}(\mathbf{v})+uT_{max}\hat{\mathbf{\alpha}}/m \\ -uT_{max}/c\end{bmatrix} \tag{1}\] where \(\mathbf{r}\) and \(\mathbf{v}\) are the non-dimensionalized position and velocity vectors of the spacecraft, \(T_{max}\) is its maximum capable thrust, and \(c\) is the propellant exhaust velocity, which is assumed to be constant and given by \(c=I_{sp}g_{0}\), with \(I_{sp}\) the specific impulse of the propellant and \(g_{0}=9.81\) m/s\({}^{2}\). The control variables are \(u\) and \(\hat{\mathbf{\alpha}}\), where \(u\in[0,1]\) is the thrust throttling factor and \(\hat{\mathbf{\alpha}}\) is a unit vector representing the direction at which thrust is applied. 
Functions \(\mathbf{g}(\mathbf{r})\) and \(\mathbf{h}(\mathbf{v})\) are given by [21] \[\mathbf{g}(\mathbf{r}) =\begin{bmatrix}r_{x}-\frac{(1-\mu)(r_{x}+\mu)}{r_{1}^{3}}-\frac{\mu(r_{x}+\mu-1)}{r_{2}^{3}}\\ r_{y}-\frac{(1-\mu)r_{y}}{r_{1}^{3}}-\frac{\mu r_{y}}{r_{2}^{3}}\\ -\frac{(1-\mu)r_{z}}{r_{1}^{3}}-\frac{\mu r_{z}}{r_{2}^{3}}\end{bmatrix} \tag{2}\] \[\mathbf{h}(\mathbf{v}) =\begin{bmatrix}2v_{y}&-2v_{x}&0\end{bmatrix}^{T} \tag{3}\] where subscripts \(x\), \(y\), and \(z\) denote the components of the position and velocity vector along the corresponding Cartesian direction of the synodic reference frame and \(\mu=m_{2}/(m_{1}+m_{2})\) is the non-dimensionalized mass of the second primary body. Additionally, \(r_{1}\) and \(r_{2}\) denote the non-dimensionalized distances of the spacecraft from the first and second primary bodies, respectively, and are given by \[r_{1} = \sqrt{(r_{x}+\mu)^{2}+r_{y}^{2}+r_{z}^{2}} \tag{4}\] \[r_{2} = \sqrt{(r_{x}+\mu-1)^{2}+r_{y}^{2}+r_{z}^{2}} \tag{5}\] ### Indirect Minimum-Fuel Trajectory Optimization using Energy-to-Fuel Homotopy For the minimum-fuel problem, it is desired to find a solution of Eq. (1) which minimizes the cost function \[J_{mf}=\frac{T_{max}}{c}\int_{t_{i}}^{t_{f}}u\,dt \tag{6}\] where \(t_{i}\) and \(t_{f}\) correspond to the initial and final time of the trajectory. A known characteristic of minimum-fuel trajectories is the bang-bang nature of their control profile [9; 20; 21], which results in discontinuities in the throttling factor \(u\) along the trajectory and can severely restrict the convergence radius of numerical methods. To mitigate this issue, Bertrand and Epenoy proposed the use of an _energy-to-fuel_ homotopy continuation for transforming solutions from the discontinuity-free minimum-energy problem to the minimum-fuel problem [20]. A perturbed energy term is introduced into the cost function by means of the perturbation parameter \(\epsilon\), which is gradually reduced until its effect is nulled. 
Employing this approach, the new cost function is given by \[J=\frac{T_{max}}{c}\int_{t_{i}}^{t_{f}}\left[u-\epsilon u(1-u)\right]dt\ \ \ \ \ \epsilon\in[0,1] \tag{7}\] To begin the derivation of the TPBVP, we proceed by forming the Hamiltonian as [21] \[H = \boldsymbol{\lambda}^{T}\mathbf{f}(\mathbf{x},\hat{\boldsymbol{\alpha}},u)+\frac{T_{max}}{c}\left[u-\epsilon u(1-u)\right] \tag{8}\] \[= \boldsymbol{\lambda}_{r}^{T}\mathbf{v}+\boldsymbol{\lambda}_{v}^{T}\left[\mathbf{g}(\mathbf{r})+\mathbf{h}(\mathbf{v})+\frac{uT_{max}}{m}\hat{\boldsymbol{\alpha}}\right]-\lambda_{m}\frac{uT_{max}}{c}+\frac{T_{max}}{c}\left[u-\epsilon u(1-u)\right] \tag{9}\] where \(\boldsymbol{\lambda}^{T}=\left[\boldsymbol{\lambda}_{r}^{T},\boldsymbol{\lambda}_{v}^{T},\lambda_{m}\right]\) is the introduced co-state vector, which evolves in time according to \[\dot{\boldsymbol{\lambda}}=\begin{bmatrix}\dot{\boldsymbol{\lambda}}_{r}\\ \dot{\boldsymbol{\lambda}}_{v}\\ \dot{\lambda}_{m}\end{bmatrix}=-\left(\frac{\partial H}{\partial\mathbf{x}}\right)^{T}=\begin{bmatrix}-\mathbf{G}^{T}\boldsymbol{\lambda}_{v}\\ -\boldsymbol{\lambda}_{r}-\mathbf{H}^{T}\boldsymbol{\lambda}_{v}\\ \frac{uT_{max}}{m^{2}}\boldsymbol{\lambda}_{v}^{T}\hat{\boldsymbol{\alpha}}\end{bmatrix} \tag{10}\] \[\mathbf{G} = \frac{\partial\mathbf{g}(\mathbf{r})}{\partial\mathbf{r}} \tag{11}\] \[\mathbf{H} = \frac{\partial\mathbf{h}(\mathbf{v})}{\partial\mathbf{v}} \tag{12}\] where the non-zero terms of \(\mathbf{G}\) and \(\mathbf{H}\) are \[G_{1,1} =1-\frac{1-\mu}{r_{1}^{3}}+\frac{3(1-\mu)(x+\mu)^{2}}{r_{1}^{5}}-\frac{\mu}{r_{2}^{3}}+\frac{3\mu(x+\mu-1)^{2}}{r_{2}^{5}} \tag{13}\] \[G_{2,2} =1-\frac{1-\mu}{r_{1}^{3}}+\frac{3(1-\mu)y^{2}}{r_{1}^{5}}-\frac{\mu}{r_{2}^{3}}+\frac{3\mu y^{2}}{r_{2}^{5}}\] (14) \[G_{3,3} =-\frac{1-\mu}{r_{1}^{3}}+\frac{3(1-\mu)z^{2}}{r_{1}^{5}}-\frac{\mu}{r_{2}^{3}}+\frac{3\mu z^{2}}{r_{2}^{5}}\] (15) \[G_{1,2} =G_{2,1}=\frac{3(1-\mu)(x+\mu)y}{r_{1}^{5}}+\frac{3\mu(x+\mu-1)y}{r_{2}^{5}}\] (16) \[G_{1,3} =G_{3,1}=\frac{3(1-\mu)(x+\mu)z}{r_{1}^{5}}+\frac{3\mu(x+\mu-1)z}{r_{2}^{5}}\] (17) \[G_{2,3} =G_{3,2}=\frac{3(1-\mu)yz}{r_{1}^{5}}+\frac{3\mu yz}{r_{2}^{5}}\] (18) \[H_{1,2} =-H_{2,1}=2 \tag{19}\] Applying the weak form of PMP, the optimal thrust direction unit vector is characterized by [3] \[\hat{\boldsymbol{\alpha}}^{*}\in\text{arg}\,\min_{\hat{\boldsymbol{\alpha}}}\,H \tag{20}\] which, noting that \(uT_{max}/m\geq 0\) and \(\boldsymbol{\lambda}_{v}\neq 0\) in general, gives \[\hat{\boldsymbol{\alpha}}^{*}=-\frac{\boldsymbol{\lambda}_{v}}{\lambda_{v}} \tag{21}\] where \(\lambda_{v}\) is the Euclidean norm of \(\boldsymbol{\lambda}_{v}\), and \(-\boldsymbol{\lambda}_{v}\) is defined as the primer vector [24]. Substituting this result into Eq. (9) and defining a switching function \(S\) as [21] \[S=-\frac{c}{m}\lambda_{v}-\lambda_{m}+1, \tag{22}\] the Hamiltonian can now be written as \[H=\boldsymbol{\lambda}_{r}^{T}\mathbf{v}+\boldsymbol{\lambda}_{v}^{T}\left[\mathbf{g}(\mathbf{r})+\mathbf{h}(\mathbf{v})\right]+\frac{uT_{max}}{c}\left(S-\epsilon+\epsilon u\right) \tag{23}\] Again applying the weak form of PMP, the optimal thrust throttling factor is given by [21] \[u^{*}=\left\{\begin{array}{cc}0&S>\epsilon\\ \frac{\epsilon-S}{2\epsilon}&-\epsilon\leq S\leq\epsilon\\ 1&S<-\epsilon\end{array}\right. \tag{24}\] Note that the relations between the co-states and optimal control variables \(\hat{\boldsymbol{\alpha}}^{*}\) and \(u^{*}\), given in Eqs. (21) and (24), are equivalent to Lawden's primer vector control law when \(\epsilon=0\).1 Footnote 1: Lawden’s primer vector control law is a well-known relationship between the co-states, optimal throttling factor and thrust direction when the indirect optimal control approach is employed to solve minimum-fuel problems [24]. 
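For concreteness, the building blocks derived so far translate almost line-for-line into Julia, the language used for all computations in this work. The sketch below collects Eqs. (2)-(5) and the control law of Eqs. (21), (22), and (24); the function names are illustrative and not taken from the authors' implementation.

```julia
# Illustrative transcription of Eqs. (2)-(5), (21), (22), and (24);
# names are hypothetical, not the authors' code.
using LinearAlgebra

function gravity(r, μ)                        # g(r), Eq. (2)
    r1 = sqrt((r[1] + μ)^2 + r[2]^2 + r[3]^2)      # Eq. (4)
    r2 = sqrt((r[1] + μ - 1)^2 + r[2]^2 + r[3]^2)  # Eq. (5)
    [r[1] - (1 - μ) * (r[1] + μ) / r1^3 - μ * (r[1] + μ - 1) / r2^3,
     r[2] - (1 - μ) * r[2] / r1^3 - μ * r[2] / r2^3,
          - (1 - μ) * r[3] / r1^3 - μ * r[3] / r2^3]
end

coriolis(v) = [2v[2], -2v[1], 0.0]            # h(v), Eq. (3)

thrust_direction(λv) = -λv / norm(λv)         # optimal direction, Eq. (21)

switching(λv, λm, m, c) = -(c / m) * norm(λv) - λm + 1.0   # S, Eq. (22)

function throttle(S, ϵ)                       # optimal throttle, Eq. (24)
    S > ϵ && return 0.0
    S < -ϵ && return 1.0
    (ϵ - S) / (2ϵ)          # smoothed interior arc; bang-bang as ϵ → 0
end
```

Note that with \(\epsilon=1\) the throttle is a continuous function of \(S\), which is precisely why the minimum-energy problem serves as the natural starting point of the continuation.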
To complete the formulation of the trajectory optimization problem, terminal constraints placed on the physical state variables are given by \[\begin{array}{ccc}\mathbf{r}(t_{i})-\mathbf{r}_{i}=0&\mathbf{v}(t_{i})-\mathbf{v}_{i}=0&m(t_{i})-1=0\\ \mathbf{r}(t_{f})-\mathbf{r}_{f}=0&\mathbf{v}(t_{f})-\mathbf{v}_{f}=0\end{array} \tag{25}\] where \(\mathbf{r}_{i}\), \(\mathbf{v}_{i}\), \(\mathbf{r}_{f}\), and \(\mathbf{v}_{f}\) are the desired initial and final position and velocity vectors, and the initial mass has been scaled to one. Noting that the final mass of the spacecraft is unconstrained, a transversality condition provides the last constraint, which is given by [25] \[\lambda_{m}(t_{f})=0 \tag{26}\] Substituting Eq. (21) into Eq. (1) and combining with Eq. (10), a 14-dimensional system of first-order ordinary differential equations is obtained \[\dot{\mathbf{y}}=\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{\boldsymbol{\lambda}}\end{bmatrix}=\begin{bmatrix}\mathbf{v}\\ \mathbf{g}(\mathbf{r})+\mathbf{h}(\mathbf{v})-\boldsymbol{\lambda}_{v}uT_{max}/(\lambda_{v}m)\\ -uT_{max}/c\\ -\mathbf{G}^{T}\boldsymbol{\lambda}_{v}\\ -\boldsymbol{\lambda}_{r}-\mathbf{H}^{T}\boldsymbol{\lambda}_{v}\\ -\lambda_{v}uT_{max}/m^{2}\end{bmatrix} \tag{27}\] which, when paired with the optimal throttling factor given in Eq. (24) and the constraints given in Eqs. (25) and (26), fully defines the TPBVP. The solution of the TPBVP is the set of initial co-state variables which satisfies the terminal constraints. Here, it is important to note that solutions of the derived TPBVP only imply satisfaction of the first-order necessary conditions of optimality [9]. Therefore, without further investigation of the second-order sufficient conditions of optimality, solutions are only guaranteed to be stationary or extremal solutions of the optimal control cost function given in Eq. (7), also known as candidates of optimality [26]. Furthermore, one should note that multiple local extrema may exist, resulting in multiple solutions of the TPBVP. ## 3 TPBVP Solution Methodology Solving the derived TPBVP requires a good guess of the initial co-state variables \(\boldsymbol{\lambda}(t_{i})\). The methodology proposed here involves the use of the PSO algorithm to first initialize the co-state variables for the minimum-energy problem, corresponding to the perturbation parameter \(\epsilon=1\) in Eq. (24). The PSO-initialized co-states are then used to seed a single shooting procedure with homotopy continuation, where a trust-region nonlinear solver [27] iteratively discovers the solution to the TPBVP for decreasing values of the perturbation parameter until a solution corresponding to \(\epsilon=0\) is found. In this Section, the methodology employed to integrate the state, co-state, and State Transition Matrix (STM) differential equations while performing switching detection is discussed. The proposed method for initializing the co-state variables with PSO is also presented, along with an approach to perform single shooting with a homotopy continuation. Along the way, important considerations in regard to programming language selection and computational efficiency are made. ### Numerical Integration with Switching Detection When integrating the state and co-state differential equations, in addition to the STM differential equations when single shooting is performed, it is important to ensure that a sufficiently high degree of accuracy is maintained, so that convergence can be achieved. Due to the piecewise continuous function which defines the optimal throttling factor (see Eq. 
(24)), integration error can grow about the points at which switching of the throttling factor occurs, and can severely restrict the convergence of a shooting algorithm. However, this can be avoided if the points at which the switching occurs are explicitly determined and the integration is carried exactly up to them. Additionally, both the PSO co-state initialization and single shooting phases require numerous repeated numerical integrations of the differential equations. Performing numerical integration in a programming language which exhibits high computational performance, with a differential equation solving suite that provides high-order Runge-Kutta algorithms and robust event detection, is thus desirable. For these reasons, the Julia [28] programming language and the Julia-implemented differential equation solving package, _DifferentialEquations.jl_ [29], were chosen for all numerical computation and integration in this work. Julia is a dynamically typed language with syntax similar to that of MATLAB or Python, thereby lending itself to quick development. Historically, a known characteristic of this category of languages is the slow execution time due to the required use of an interpreter, an issue which Julia circumvents through the use of a just-in-time (JIT) LLVM-based compiler. Therefore, although Julia is a dynamic language with easy-to-read syntax, it allows producing software with performance that rivals statically typed languages like C and FORTRAN. Furthermore, _DifferentialEquations.jl_ is highly optimized and provides a plethora of modern differential equation solvers and features, including robust event detection schemes, termed _callbacks_, which can employ high-order interpolants to check whether an event has occurred at multiple points backwards in time. Through the application of _DifferentialEquations.jl_ and its _continuous callback_ feature (i.e. continuous event detection), numerical integration can be performed efficiently by exploiting an adaptive time-stepping method while robustly detecting throttling factor switching to within 64-bit floating-point precision. ### Co-State Initialization with PSO Beginning with the inception of the PSO algorithm detailed in the work of Kennedy and Eberhart [30], many have proposed modifications with improved convergence characteristics [31; 32]. The term PSO has thus evolved to encompass a class of heuristic global optimization algorithms rather than any one strictly defined algorithm. For the sake of reproducibility, in this work, an implementation of PSO in the Julia programming language is employed, which is based on the MATLAB PSO algorithm [33]. 
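Before formulating the PSO search, it is useful to make the integration setup of Section 3.1 concrete. The following is a minimal sketch, assuming an in-place dynamics function `eom!` implementing Eq. (27), a parameter container `params` holding the exhaust velocity `c`, and boundary data `r_i`, `v_i`, `λ0`, `t_i`, `t_f` defined elsewhere; all of these names are illustrative.

```julia
# Minimal sketch of the Section 3.1 setup: Vern9 with tight tolerances plus a
# ContinuousCallback whose root is the switching function S of Eq. (22).
using DifferentialEquations, LinearAlgebra

# State ordering per Eq. (27): y = [r; v; m; λr; λv; λm] (14 components).
function switch_condition(y, t, integrator)
    c = integrator.p.c                    # params assumed to be a NamedTuple
    m, λv_norm, λm = y[7], norm(view(y, 11:13)), y[14]
    -(c / m) * λv_norm - λm + 1.0         # a root of S marks a throttle switch (ϵ = 0)
end
switch_affect!(integrator) = nothing      # no state jump; the solver simply
                                          # places a step exactly at the root
cb = ContinuousCallback(switch_condition, switch_affect!)

y0   = vcat(r_i, v_i, 1.0, λ0)            # concatenated initial vector (cf. Eq. (30) below)
prob = ODEProblem(eom!, y0, (t_i, t_f), params)
sol  = solve(prob, Vern9(); abstol = 1e-14, reltol = 1e-14, callback = cb)
```

Forcing a step exactly onto each root of \(S\) is what keeps the integration error from growing across the control discontinuities of the \(\epsilon=0\) problem.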
To employ PSO to initialize the co-state variables for the minimum-energy problem (\(\epsilon=1\)), a nonlinear, unconstrained optimization problem is formulated as \[\underset{\boldsymbol{\lambda}(t_{i})}{\text{Minimize}}\quad J_{PSO}(\boldsymbol{\lambda}(t_{i}))=\mathbf{e}^{T}(t_{f})\mathbf{W}\mathbf{e}(t_{f}) \tag{28}\] where \(\mathbf{W}\in\mathbb{R}^{7\times 7}\) is a diagonal weighting matrix and \(\mathbf{e}(t_{f})\) is the final time boundary condition residual given by \[\mathbf{e}(t_{f})=\begin{bmatrix}\mathbf{r}(t_{f})-\mathbf{r}_{f}\\ \mathbf{v}(t_{f})-\mathbf{v}_{f}\\ \lambda_{m}(t_{f})\end{bmatrix} \tag{29}\] which is computed by integrating the state and co-state differential equations from \(t_{i}\) to \(t_{f}\) as described before, starting from an initial state vector constructed as a concatenation of the constrained initial state and the current guess for the co-state values at the initial time \[\mathbf{y}(t_{i})=\begin{bmatrix}\mathbf{r}_{i}^{T}&\mathbf{v}_{i}^{T}&1.0&\boldsymbol{\lambda}^{T}(t_{i})\end{bmatrix}^{T} \tag{30}\] Through computational investigation, the implemented single shooting routine, discussed in the following Section, was found to be most sensitive to large residuals in the final position. Therefore, the weighting matrix was defined as \[\mathbf{W}=\text{Diag}\left(\begin{bmatrix}10&10&10&1&1&1&1\end{bmatrix}\right) \tag{31}\] such that the squared position residuals are weighted by a factor of ten, while the remaining weights are left at unity. Due to the stochastic nature of the PSO algorithm, some particles may travel to positions that result in trajectories that pass below the surface of a primary body, which is physically non-realizable. Therefore, the integration routine was set up to halt the integration of a given trajectory if the spacecraft passed below the surface of a primary body at any time. When employing PSO for co-state initialization, especially when a large swarm size is required and/or the cost function is expensive to evaluate, it is important to take advantage of the parallelizable nature of the algorithm to reduce compute time. In fact, for each iteration of PSO, the cost function must be re-evaluated for each particle. In general, the evaluation of the cost function for any given particle is independent of all other particles in the swarm, which makes the process well suited for shared- or distributed-memory parallelism, a property exploited in this work. ### Single Shooting with Continuation After co-state initialization with PSO, a single shooting routine based on a trust-region nonlinear solver provided by _NLsolve.jl_ [27] is employed for solving the TPBVP. To do so, the nonlinear 7-dimensional vector function which is desired to be solved is defined as \[\mathbf{Z}(\boldsymbol{\lambda}(t_{i}),\epsilon)=\mathbf{e}(t_{f}) \tag{32}\] where a solution corresponds to \(\mathbf{e}(t_{f})=\mathbf{0}_{7\times 1}\). In other words, it is desired to find the initial co-state vector \(\boldsymbol{\lambda}(t_{i})\) which, for a given value of \(\epsilon\in[0,1]\) (integrating Eq. (27) from \(t_{i}\) to \(t_{f}\) as described before), results in a final time boundary condition residual of \(\mathbf{e}(t_{f})=\mathbf{0}_{7\times 1}\). Henceforth, \(\mathbf{Z}(\boldsymbol{\lambda}(t_{i}),\epsilon)\) in Eq. (32) is referred to as the _shooting function_. Seeded by the PSO-initialized co-states, the trust-region algorithm is used to iteratively solve Eq. 
(32) for values of the perturbation parameter decreasing from one to zero according to the continuation law [21] \[\epsilon_{j}=\frac{j^{2}-1}{N^{2}-1}\ \ \ \ \ j=N,N-1,\ldots,2,1 \tag{33}\] where \(N\) is the number of iterations that are desired to be performed for transitioning the perturbation parameter from one to zero. This value can be chosen somewhat arbitrarily, but it is important to consider that higher values of \(N\) result in smaller changes in the perturbation parameter at each iteration, providing a smoother transition from the minimum-energy to the minimum-fuel cost function, albeit at a higher computational cost. Throughout this work, a value of \(N=25\) was chosen to balance numerical stability and computational efficiency. It is important to note that the trust-region algorithm requires that the Jacobian of the shooting function, i.e., \(\partial\mathbf{Z}(\mathbf{\lambda}(t_{i}),\mathbf{\epsilon})/\partial\mathbf{\lambda}(t _{i})\), also be computed. Multiple methods exist to compute this Jacobian (e.g., automatic differentiation and finite difference), among which the STM was selected for this work. The STM is defined as \[\mathbf{\Phi}(t_{f},t_{i})=\frac{\partial\mathbf{y}(t_{f})}{\partial\mathbf{y}(t_ {i})} \tag{34}\] To compute the STM, the \(14^{2}\) STM differential equations must also be integrated along with the state and co-state differential equations in Eq. (27) according to \[\dot{\mathbf{\Phi}}(t,t_{i})=\mathbf{F}\mathbf{\Phi}(t,t_{i})=\frac{\partial\dot{ \mathbf{y}}}{\partial\mathbf{y}}\Big{|}_{t}\mathbf{\Phi}(t,t_{i}),\ \ \ \ \ \mathbf{\Phi}(t_{i},t_{i})=\mathbf{I}_{14\times 14} \tag{35}\] where \(\mathbf{F}\) is the Jacobian of Eq. (27) evaluated at time \(t\), the analytical expression of which is provided by Zhang et al. [21]. Additionally, when discontinuities are present in the shooting function, as is the case for \(\mathbf{\epsilon}=0\) when switching occurs (see Eq. 
(24)), the STM across the discontinuity must also be computed as [9] \[\mathbf{\Psi}(t_{n})=\frac{\partial\mathbf{y}(t_{n}^{+})}{\partial\mathbf{y}(t_{n}^{-})}=\mathbf{I}_{14\times 14}+\left(\dot{\mathbf{y}}\big{|}_{t_{n}^{+}}-\dot{\mathbf{y}}\big{|}_{t_{n}^{-}}\right)\left(\frac{\partial S}{\partial\mathbf{y}}\,\frac{1}{\dot{S}}\right) \tag{36}\] where \(t_{n}^{-}\) and \(t_{n}^{+}\) represent the time immediately before and after the discontinuity occurred, respectively, and \[\frac{\partial S}{\partial\mathbf{y}} =\Big{[}\mathbf{0}_{1\times 6}\ \ \ \frac{c}{m^{2}}\lambda_{v}\ \ \ \mathbf{0}_{1\times 3}\ \ \ -\frac{c}{m}\frac{\boldsymbol{\lambda}_{v}^{T}}{\lambda_{v}}\ \ \ -1\Big{]} \tag{37}\] \[\dot{S} =\frac{c}{m}\left(\boldsymbol{\lambda}_{r}+\mathbf{H}^{T}\boldsymbol{\lambda}_{v}\right)^{T}\frac{\boldsymbol{\lambda}_{v}}{\lambda_{v}} \tag{38}\] For a trajectory with \(M\) discontinuities between \(t_{i}\) and \(t_{f}\), the state transition matrix is then given by [21] \[\mathbf{\Phi}(t_{f},t_{i})=\mathbf{\Phi}(t_{f},t_{M}^{+})\mathbf{\Psi}(t_{M})\mathbf{\Phi}(t_{M}^{-},t_{M-1}^{+})\mathbf{\Psi}(t_{M-1})\dots\mathbf{\Phi}(t_{2}^{-},t_{1}^{+})\mathbf{\Psi}(t_{1})\mathbf{\Phi}(t_{1}^{-},t_{i}) \tag{39}\] Once the STM has been propagated from \(t_{i}\) to \(t_{f}\) along with the state and co-state variables, the Jacobian of the shooting function is constructed from components of the STM, such that \[\frac{\partial\mathbf{Z}(\boldsymbol{\lambda}(t_{i}),\epsilon)}{\partial\boldsymbol{\lambda}(t_{i})}=\begin{bmatrix}\frac{\partial\mathbf{r}(t_{f})}{\partial\boldsymbol{\lambda}_{r}(t_{i})}&\frac{\partial\mathbf{r}(t_{f})}{\partial\boldsymbol{\lambda}_{v}(t_{i})}&\frac{\partial\mathbf{r}(t_{f})}{\partial\lambda_{m}(t_{i})}\\ \frac{\partial\mathbf{v}(t_{f})}{\partial\boldsymbol{\lambda}_{r}(t_{i})}&\frac{\partial\mathbf{v}(t_{f})}{\partial\boldsymbol{\lambda}_{v}(t_{i})}&\frac{\partial\mathbf{v}(t_{f})}{\partial\lambda_{m}(t_{i})}\\ \frac{\partial\lambda_{m}(t_{f})}{\partial\boldsymbol{\lambda}_{r}(t_{i})}&\frac{\partial\lambda_{m}(t_{f})}{\partial\boldsymbol{\lambda}_{v}(t_{i})}&\frac{\partial\lambda_{m}(t_{f})}{\partial\lambda_{m}(t_{i})}\end{bmatrix} \tag{40}\] A diagram describing the algorithm from co-state initialization to minimum-fuel solution is provided in the Appendix. ## 4 Results In this Section, case studies are presented to validate the methodology, analyze the performance of the proposed method for co-state initialization, as well as to assess the convergence of the PSO-initialized co-states to solutions of the minimum-fuel TPBVP for varying particle swarm sizes. The proposed methodology is applied to determine low-thrust minimum-fuel transfers in the following two scenarios in the Earth-Moon system: 1) from a geostationary transfer orbit (GTO) to an \(L_{1}\) halo orbit and 2) from an \(L_{2}\) halo orbit to an \(L_{1}\) halo orbit. Constant parameters used for both scenarios, including the time, length, speed, and mass units employed in the non-dimensionalization of the CR3BP dynamics, are provided in Table 1. Note that the mass unit (which is a unique value for a given scenario) is defined as the initial mass of the spacecraft \(m_{i}\). The chosen scenarios provide problems of moderate difficulty to benchmark our approach while allowing for validation and comparison to previous works [21; 22; 34; 35; 36; 37]. Due to the stochastic nature of PSO, 100 co-state initialization trials are performed for swarm sizes ranging from 250 to 5000 particles in order for statistical conclusions to be made about the performance of the algorithm. 
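As a reference for these trials, the objective evaluated for each particle (Eqs. (28)-(31), derived in Section 3.2) reduces to a short routine. The sketch below is illustrative only; `propagate` is a hypothetical helper wrapping the integration of Eq. (27) and returning the terminal state.

```julia
# Sketch of the per-particle cost of Eqs. (28)-(31); propagate is a
# hypothetical helper returning y(t_f), with y = [r; v; m; λr; λv; λm].
using LinearAlgebra

const W = Diagonal([10.0, 10, 10, 1, 1, 1, 1])     # weighting matrix, Eq. (31)

function pso_cost(λ0, r_i, v_i, r_f, v_f)
    y0 = vcat(r_i, v_i, 1.0, λ0)                   # concatenated vector, Eq. (30)
    yf = propagate(y0)                             # terminal state at t_f
    e  = vcat(yf[1:3] - r_f, yf[4:6] - v_f, yf[14])    # residual, Eq. (29)
    dot(e, W * e)                                  # weighted sum of squares, Eq. (28)
end
```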
The position of the \(k\)-th particle in the swarm is defined in the 7-dimensional co-state space as \[\boldsymbol{\lambda}_{k}(t_{i})=\left[\boldsymbol{\lambda}_{r,k}^{T}(t_{i})\quad\boldsymbol{\lambda}_{v,k}^{T}(t_{i})\quad\lambda_{m,k}(t_{i})\right]^{T} \tag{41}\] The swarm is initialized with particle positions uniformly distributed between upper and lower bounds such that \(\boldsymbol{\lambda}_{r}(t_{i})\in[-40,40]\), \(\boldsymbol{\lambda}_{v}(t_{i})\in[-2,2]\), and \(\lambda_{m}(t_{i})\in[0,2]\). It is important to note that these are not constraints placed on the positions of particles, but are rather bounds used to define the distribution from which starting particle positions are sampled. Once the swarm is initialized, particles are then free to travel anywhere in the search space, which is restricted to \(\boldsymbol{\lambda}_{r}(t_{i})\in[-100,100]\), \(\boldsymbol{\lambda}_{v}(t_{i})\in[-10,10]\), and \(\lambda_{m}(t_{i})\in[0,10]\). Additionally, convergence criteria are chosen such that a PSO trial is halted if the objective function value does not improve by more than \(1\times 10^{-6}\) over the course of 50 iterations, or if a maximum run time of 30 minutes is met. Throughout the following case studies, only the effect of the PSO swarm size is analyzed, so that the discussion remains tractable. The remaining PSO parameters are held at their default values (as defined by MathWorks [33]), with the exception of the minimum adaptive neighborhood size fraction, which is set to 0.05. For all numerical integration performed throughout the case studies, Verner's "most efficient" Runge-Kutta 9(8) method was employed, with an absolute and relative tolerance of \(10^{-14}\), along with its corresponding 9-th order interpolant for switching detection [38]. All computations were performed on a workstation PC with specifications relevant to this work provided in Table 2. At each iteration of PSO, the evaluations of the cost function for each particle were computed in parallel using shared memory parallelism on 32 of the available 64 physical cores within the CPU. ### Scenario 1: GTO to L1 Halo Orbit For the GTO to \(L_{1}\) halo orbit transfer problem, the spacecraft was chosen to have an initial mass \(m_{i}=1500\) kg, a maximum thrust \(T_{max}=10\) N, and a specific impulse \(I_{sp}=3000\) s, as in reference [21]. In the minimum-fuel problem, the time of flight (TOF) was fixed at 8.6404 days. The terminal physical boundary conditions for the TPBVP, which \begin{table} \begin{tabular}{l r r} \hline Constant & Value & Units \\ \hline Mass Parameter \(\mu\) & \(1.21506038\times 10^{-2}\) & - \\ Grav. 
Constant \(g_{0}\) & 9.81 & m/s\({}^{2}\) \\ Time Unit (TU) & \(3.75162997\times 10^{5}\) & s \\ Length Unit (LU) & \(3.84400000\times 10^{5}\) & km \\ Velocity Unit (VU) & \(1.02462131\) & km/s \\ Mass Unit (MU) & \(m_{i}\) & kg \\ \hline \end{tabular} \end{table} Table 1: **Constant Parameters** identify the initial position of the spacecraft and the final point targeted on the halo orbit, were defined as [21] \[\mathbf{r}_{i} =\left[-0.0194885115\;\;-0.0160334798\;\;0.0\right]^{T}\text{ LU}\] \[\mathbf{v}_{i} =\left[8.9188819237\;\;-4.0817936888\;\;0.0\right]^{T}\text{ VU}\] \[\mathbf{r}_{f} =\left[0.8233851820\;\;0.0\;\;-0.0222775563\right]^{T}\text{ LU}\] \[\mathbf{v}_{f} =\left[0.0\;\;0.1341841703\;\;0.0\right]^{T}\text{ VU}\] such that the GTO had periapsis and apoapsis altitudes of \(h_{p}=400\) km and \(h_{a}=35864\) km, respectively, and the \(L_{1}\) halo orbit had an out-of-plane amplitude of 8000 km. #### 4.1.1 Minimum-Fuel TPBVP Solutions As discussed before, multiple local extrema of the optimal control cost function, and therefore multiple solutions of the TPBVP, may exist. A total of six solutions of the minimum-fuel TPBVP (corresponding to \(\epsilon=0\)) were found. These are labeled Trajectories A through F, and are shown in Figure 1, with thrusting arcs depicted in red and coasting arcs depicted in blue. It is clear that each of the trajectories follows a different path, and thrusting and coasting arcs are placed at different locations along the transfers. It is observed that Trajectories A, E, and F (in Figures 1(a), 1(e), and 1(f)) exhibit much shorter coasting arcs during either the initial spiraling about the Earth or the final leg of the trajectory when approaching the \(L_{1}\) halo orbit, compared to the remaining trajectories. It is also observed that each trajectory exhibits a unique number of revolutions about the Earth before the final transfer to the halo orbit. Table 3 displays the initial co-state variables corresponding to each of the solutions, along with the total change in mass, i.e. fuel required, which is computed as \(\Delta m=[1-m(t_{f})]\times 1500\) kg. As can be seen by observing the \(\Delta m\) column of Table 3, Trajectory B corresponds to the best minimum-fuel trajectory found, while the remaining solutions correspond to other extremal solutions of the minimum-fuel cost function. The total number of revolutions about the Earth is shown in the final column of Table 3. The unique number of revolutions for each solution suggests that each trajectory may be locally optimal for a given number of revolutions about the Earth. It is noted that the initial co-states corresponding to Trajectory B, along with \(\Delta m\), align with a previously found solution by Zhang et al. [21]. #### 4.1.2 Co-State Initialization In this Section, the performance of the proposed method for converging to solutions of the derived TPBVP which satisfy all boundary conditions is analyzed. For the purpose of this discussion, a trial is considered to have converged if the \(\infty\)-norm of the TPBVP final boundary condition residual is reduced to less than \(1\times 10^{-10}\), i.e., \(\|\mathbf{e}(t_{f})\|_{\infty}=\max[|e_{i}(t_{f})|:i=1,2,\ldots,7]<1\times 10^{-10}\). 
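Under this acceptance test, the overall solution process of Section 3 can be summarized schematically. In the sketch below, `shooting_residual!` (an assumed helper implementing Eq. (32)) and the PSO-produced guess are taken as given, and NLsolve.jl's trust-region method is the solver cited in Section 3.3; by default NLsolve approximates the Jacobian by finite differences, whereas the paper supplies the STM-based Jacobian of Eq. (40), i.e., rows [1:6; 14] and columns 8:14 of \(\mathbf{\Phi}(t_{f},t_{i})\).

```julia
# Schematic driver for Section 3.3 under the convergence test above;
# shooting_residual!(F, λ, ϵ) is a hypothetical helper for Eq. (32).
using NLsolve

converged(e) = maximum(abs, e) < 1e-10             # ∞-norm acceptance test

function solve_min_fuel(λ_pso; N = 25)
    λ = λ_pso                                      # PSO guess, minimum-energy problem
    for j in N:-1:1
        ϵ = (j^2 - 1) / (N^2 - 1)                  # continuation law, Eq. (33); ϵ: 1 → 0
        sol = nlsolve((F, x) -> shooting_residual!(F, x, ϵ), λ;
                      method = :trust_region, ftol = 1e-10)
        λ = sol.zero
    end
    F = similar(λ); shooting_residual!(F, λ, 0.0)
    converged(F) || @warn "trial did not meet the 1e-10 acceptance test"
    λ                                              # candidate minimum-fuel co-states
end
```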
\begin{table} \begin{tabular}{l c} \hline \hline Specification & Value \\ \hline CPU & AMD Ryzen Threadripper Pro 3995WX \\ RAM & 32 GB DDR4 1600 MHz \\ Operating System & Windows 10 Enterprise \\ Julia & Version 1.7.0 \\ \hline \hline \end{tabular} \end{table} Table 2: **PC Specifications** \begin{table} \begin{tabular}{l c c c c c} \hline \hline Trajectory & \(\boldsymbol{\lambda}_{r}^{T}(t_{i})\) & \(\boldsymbol{\lambda}_{v}^{T}(t_{i})\) & \(\lambda_{m}(t_{i})\) & \(\Delta m\) (kg) & \# Revs. \\ \hline A & \([23.2524,50.6272,-0.08489]\) & \([-0.1546,0.0706,-0.0002]\) & 0.1385 & 140.0 & 4 \\ B & \([15.6850,33.0013,-0.0938]\) & \([-0.1020,0.0450,-0.0002]\) & 0.1334 & 134.4 & 5 \\ C & \([7.7397,14.5669,-0.1239]\) & \([-0.0466,0.0180,-0.0001]\) & 0.1535 & 139.1 & 6 \\ D & \([1.1404,-0.6176,-0.1535]\) & \([-0.0006,-0.00042,-0.0002]\) & 0.1935 & 153.5 & 7 \\ E & \([-9.4937,-25.2095,-0.2092]\) & \([0.0738,-0.0401,-0.0002]\) & 0.2916 & 178.5 & 8 \\ F & \([-24.3124,-60.4677,-0.2883]\) & \([0.1801,-0.0928,-0.0002]\) & 0.5613 & 219.7 & 9 \\ \hline \hline \end{tabular} \end{table} Table 3: **Minimum-Fuel TPBVP Solutions for Scenario 1** Figure 1: **Minimum-Fuel Trajectories for Scenario 1** For each of the investigated swarm sizes, Table 4 displays the percentage of trials for which the initialized co-states converged to a solution of the minimum-fuel (\(\epsilon=0\)) TPBVP, the average final value of the PSO objective function (see Eq. (28)), and the average time and objective function evaluations before PSO convergence. The percentage of converged trials was found to be lowest for the smallest number of particles and to increase with swarm size, reaching a maximum of 64% converged trials with a swarm size of 4000 particles, before reducing slightly for the largest swarm size. A similar trend of improvement was observed in the average value of the final PSO objective function and the number of function evaluations. The average time to converge increases with the swarm size, approaching the maximum time allocated to PSO (i.e., 30 minutes per trial) for larger swarm sizes. These results suggest that for swarm sizes smaller than 1000 particles, early stagnation of objective function improvement (due to the limited number of particles exploring the solution space) results in premature convergence of the PSO algorithm. For larger swarm sizes, ranging from 2000 to 4000 particles, PSO does not suffer from premature convergence, and the ability of the algorithm to produce guesses for the initial co-states which converge to solutions of the TPBVP is improved greatly. These larger swarm sizes are more effective at exploring the solution space and are able to reduce the PSO objective function further, before improvement stalls and the algorithm converges. It is also important to note that these larger swarm sizes do come with an additional computational cost: the objective function must be evaluated many more times per PSO iteration, resulting in longer periods spent waiting for a guess to be generated, or in prematurely halted optimization, depending on the maximum time limit imposed. For a swarm size of 5000 particles, we begin to see the effect of this computational cost: the average time to converge is nearly at the maximum time allocated to PSO per trial, indicating that in a majority of cases PSO did not converge and was instead halted after 30 minutes of run time (i.e., the PSO objective function was still improving when the algorithm was halted). 
It is expected that this trend would continue for even larger swarm sizes unless more time was allocated for PSO co-state initialization. #### 4.1.3 Illustration and Analysis of the Solution Generation Process Within this Section, the process of generating a solution of the minimum-fuel TPBVP, from co-state initialization to single shooting with homotopy continuation, is illustrated and analyzed. The discussion is focused on a single trial that converged to the most fuel-efficient solution found throughout the Scenario 1 case study, i.e., Trajectory B. By way of example, Figure 2 displays the distribution of 2000 particles (each represented by a blue dot) in the \(x\)-\(y\) velocity co-state space after 5 PSO iterations and at PSO convergence, along with the location of the PSO-generated guess (represented by a yellow dot) and the minimum-energy solution which the generated guess converged to after performing single shooting (represented by a red dot). Additionally, Figure 3 displays the PSO objective function value as the number of PSO iterations increases. Observing Figure 2(a), it can be seen that the particles quickly distributed themselves across the space from their initial placement of \(\lambda_{v_{x}},\lambda_{v_{y}}\in[-2,2]\). The particles then spent the next few hundred iterations traveling across the full search space, rapidly decreasing the PSO objective function, as is shown in Figure 3. After approximately 200 iterations, once significantly better solutions (i.e., co-states which resulted in a significantly smaller objective function value) were discovered, the rate at which the objective function improved is seen to decrease significantly and the swarm began to contract, spending more time searching the area near the better-known solutions. This slow improvement of the objective function continued until one of the stopping criteria of PSO was met. From Figure 2(b), it can be observed that, upon halting of PSO, the particles had traveled to lie within a small region, which happens to be centered about the generated guess. It can also be seen that the generated guess lies near the solution to the minimum-energy problem. This confirms the capability of the algorithm to generate a guess for the co-state variables which lies near a solution of the minimum-energy TPBVP. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Convergence Characteristic & \multicolumn{8}{c}{Swarm Size} \\ \cline{2-9} & 250 & 500 & 750 & 1000 & 2000 & 3000 & 4000 & 5000 \\ \hline Min. Fuel \% Converged & 22 & 42 & 41 & 50 & 63 & 58 & 64 & 54 \\ Avg. Final Obj. Func. & 7.93 & 5.74 & 5.10 & 4.28 & 3.84 & 3.50 & 2.45 & 2.97 \\ Avg. Time to Converge (min) & 2.33 & 7.75 & 13.48 & 15.66 & 22.88 & 25.91 & 28.04 & 28.84 \\ Avg. Function Evaluations & 3.02E5 & 9.24E5 & 1.56E6 & 2.00E6 & 2.82E6 & 3.06E6 & 3.71E6 & 3.58E6 \\ \hline \hline \end{tabular} \end{table} Table 4: **Scenario 1 Convergence Characteristics for Different Swarm Sizes** Figure 3: **PSO objective function versus iteration number** Figure 2: **Distribution of PSO particles** After PSO-based co-state initialization, the next step in the solution generation process requires employing single shooting to first converge to a solution of the minimum-energy TPBVP (i.e., with \(\epsilon=1\)) starting from the PSO-generated guess, before repeatedly re-solving the TPBVP while decreasing \(\epsilon\) according to Eq. (33). Figure 4 displays 
the evolution of Trajectory B through the continuation process as it is transitioned from minimum-energy to minimum-fuel with \(\epsilon=1.0,0.72,0.49,0.25\), and \(0.0\). 
One should note that 25 steps in total are taken during the continuation process, and only 5 are selected here for the sake of illustration. Also, Figure 5 displays the switching function and throttling factor in time for each of the trajectories displayed in Figure 4. As can be seen in Figure 4, all trajectories start from and arrive at the same positions as expected. The minimum-energy and minimum-fuel trajectories follow nearly identical paths until the final spiral and transfer to the halo orbit. However, it is observed that coasting arcs appear to increase as the continuation approaches a minimum-fuel trajectory. Figure 5 illustrates the gradual introduction of the bang-bang throttling discontinuity throughout the continuation; this gradual introduction is the core strength of the energy-to-fuel homotopy, as it widens the convergence radius and improves the probability that PSO-generated guesses produce a solution of the minimum-fuel cost function. ### Scenario 2: L2 Halo Orbit to L1 Halo Orbit For the \(L_{2}\) halo orbit to \(L_{1}\) halo orbit transfer problem, the spacecraft was chosen to have an initial mass \(m_{i}=2000\) kg, a maximum thrust \(T_{max}=1.5\) N, and a specific impulse \(I_{sp}=2000\) s, as in reference [22]. The transfer duration was fixed to 12.7 days and the terminal physical boundary conditions for the TPBVP, identifying the initial state of the spacecraft on an \(L_{2}\) halo orbit and the target point on an \(L_{1}\) halo orbit, were defined as [22] \[\mathbf{r}_{i} =\begin{bmatrix}1.1599795702248494&0.009720428035815552&-0.12401864915284157\end{bmatrix}^{T}\text{ LU}\] \[\mathbf{v}_{i} =\begin{bmatrix}0.008477705130550553&-0.20786307954141953&-0.010841912833115475\end{bmatrix}^{T}\text{ VU}\] \[\mathbf{r}_{f} =\begin{bmatrix}0.8484736688482315&0.00506488863463682&0.17343680487577373\end{bmatrix}^{T}\text{ LU}\] \[\mathbf{v}_{f} =\begin{bmatrix}0.005241131023638693&0.26343491250951045&-0.008541420325316247\end{bmatrix}^{T}\text{ VU}\] These correspond to initial and target halo orbits with periods of 14.2 and 11.2 days, respectively. #### 4.2.1 Minimum-Fuel TPBVP Solutions Similarly to Scenario 1, multiple solutions of the TPBVP were discovered by applying the proposed method to solve the \(L_{2}\) halo orbit to \(L_{1}\) halo orbit transfer problem. Figure 6 displays the resultant trajectories, both in a three-dimensional view and projected onto the \(x\)-\(y\) plane of the synodic reference frame with the origin shifted to lie at the center of the Moon. It is evident that the three discovered solutions differ drastically. Both Trajectory \(\alpha\) and \(\beta\) (shown in Figures 6(a), 6(b), 6(c), and 6(d)) appear to take advantage of the lunar gravitational attraction with a close approach of the Moon while transferring between halo orbits. Whereas Trajectory \(\alpha\) efficiently performs the transfer, visibly requiring the least amount of thrusting among the found solutions, traversing Trajectory \(\beta\) requires thrust to be applied for nearly the full duration of the transfer. Employing an entirely different strategy, Trajectory \(\gamma\) (shown in Figures 6(e) and 6(f)) does not use a close approach to the lunar surface, and instead thrusts away from the Moon, then follows a coasting arc placed at the greatest distance from the Moon, before thrusting for the final leg of the transfer to insert into the \(L_{1}\) halo orbit. Table 5 reports the initial co-state variables corresponding to each of the solutions, along with the fuel required to traverse each trajectory.
As expected from the previous discussion, Trajectory \(\alpha\) is the most fuel-efficient, requiring just 35.34 kg of fuel, while Trajectory \(\gamma\) requires nearly double the fuel mass (i.e., 61.27 kg), and Trajectory \(\beta\) is the most fuel-expensive (at 81.28 kg). It should be noted that the fuel-optimal trajectory discovered through the application of the proposed method (i.e., Trajectory \(\alpha\)) agrees with results published by Aziz et al. [22]. A slightly lower fuel requirement is found here, which is explained by the fact that the Hybrid Differential Dynamic Programming (HDDP) algorithm applied by Aziz et al. enforced a constant direction of thrust over each integration step and can lag in switching thrust on or off [22], whereas indirect methods allow thrust to be applied in the optimal direction of the primer vector at every point along the trajectory and enforce throttling on or off at the optimal times. \begin{table} \begin{tabular}{l c c c c} \hline \hline Trajectory & \(\lambda_{r}^{T}(t_{i})\) & \(\lambda_{v}^{T}(t_{i})\) & \(\lambda_{m}(t_{i})\) & \(\Delta m\) (kg) \\ \hline \(\alpha\) & \([0.12603,-0.07665,-0.05635]\) & \([0.03999,-0.00518,-0.06410]\) & \(0.02236\) & \(35.34\) \\ \(\beta\) & \([-0.01486,0.01215,-0.07936]\) & \([0.01015,0.04457,0.01256]\) & \(0.07632\) & \(81.28\) \\ \(\gamma\) & \([-0.02195,0.00659,0.07490]\) & \([-0.04314,0.03615,0.03842]\) & \(0.03489\) & \(61.27\) \\ \hline \hline \end{tabular} \end{table} Table 5: **Minimum-Fuel TPBVP Solutions** Figure 6: **Minimum-Fuel Trajectories for Scenario 2** #### 4.2.2 Co-State Initialization The performance of the proposed method at converging to solutions of the minimum-fuel \(L_{2}\) halo orbit to \(L_{1}\) halo orbit transfer TPBVP is analyzed in the following. For each of the investigated swarm sizes, Table 6 displays the percentage of trials that converge to a solution of the minimum-fuel TPBVP, along with the final PSO objective function value, time to converge, and the number of function evaluations averaged over the 100 trials. Overall, the proposed methodology performs more favorably at generating a guess of the initial co-state values for the \(L_{2}\) to \(L_{1}\) transfer problem, compared to Scenario 1. In this case, the best performance was observed when a swarm size of 500 particles was employed, with 86% of trials converging to a solution of the TPBVP. A swarm size of 250 particles performed nearly as well, with 80% of trials converging to a TPBVP solution while only requiring 2.3 minutes on average to generate the guess. Swarm sizes increasing up to 1000 particles continued to produce quality guesses frequently, converging in 79% and 81% of trials for 750 or 1000 particles respectively, albeit at a higher computational cost. Interestingly, for swarm sizes greater than 1000 particles (aside from 3000 particles), co-state initialization performance appeared to taper off, whereas the average time to converge in all cases was well below the time limit of 30 minutes, indicating that allocation of time for co-state initialization was not the issue. This contradicts results from Scenario 1, which suggested that a larger swarm size should be preferred as long as enough time is allocated for co-state initialization. Furthermore, on average, larger swarm sizes successfully reduced the PSO objective function further compared to the smaller, more often successful swarm sizes.
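For reference, the co-state initialization evaluated in Table 6 amounts to minimizing the weighted residual norm \(J_{PSO}=\mathbf{e}(t_{f})^{T}\mathbf{W}\mathbf{e}(t_{f})\) with PSO before handing the best particle to single shooting. Below is a minimal, self-contained sketch of that search; the residual function, weighting matrix, search bounds, and iteration settings are illustrative stand-ins, not the values used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.eye(7)  # weighting matrix W; identity here purely for illustration

def residual(lam0):
    """Hypothetical stand-in for the shooting residual e(t_f)."""
    return lam0**2 - np.linspace(0.1, 0.7, 7)

def j_pso(lam0):
    e = residual(lam0)
    return e @ W @ e  # J_PSO = e(t_f)^T W e(t_f)

def pso(n_particles=250, n_iters=200, dim=7, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-10.0, 10.0, (n_particles, dim))  # co-state guesses
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()
    pcost = np.apply_along_axis(j_pso, 1, x)
    g = pbest[pcost.argmin()].copy()                  # global best particle
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x += v
        cost = np.apply_along_axis(j_pso, 1, x)
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, j_pso(g)

lam_guess, cost = pso()
print(f"PSO guess cost: {cost:.2e}")  # hand lam_guess to single shooting
```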
This is counter-intuitive, as a smaller average final objective function value indicates that the generated co-state guesses are closer to satisfying the final time boundary condition constraints and are therefore expected to converge to a solution of the TPBVP with a higher frequency, as was observed in Scenario 1. It was discovered that multiple local minima of the PSO cost function exist for the problem investigated in Scenario 2. The necessary condition for a minimum of the PSO cost function is given by \[\left(\frac{\partial J_{PSO}}{\partial\lambda(t_{i})}\right)^{T}=2\left(\frac{\partial\mathbf{e}(t_{f})}{\partial\lambda(t_{i})}\right)^{T}\mathbf{We}(t_{f})=\mathbf{0}_{7\times 1} \tag{42}\] which is satisfied either 1) when the final boundary condition residual \(\mathbf{e}(t_{f})\) is reduced to the zero vector or 2) when \(\left(\partial\mathbf{e}(t_{f})/\partial\lambda(t_{i})\right)^{T}\) becomes rank deficient and \(\mathbf{We}(t_{f})\) lies in its null space. Clearly, global minima of the PSO cost function correspond to a final boundary condition residual of zero, and local minima of the PSO cost function exist in regions where the Jacobian of the final boundary condition residual with respect to the initial co-state vector is rank deficient. It is expected that the counter-intuitive phenomenon of larger PSO swarm sizes resulting in lower convergence rates observed in Scenario 2 is due to PSO converging to regions near local minima of the PSO cost function more frequently. In trials where PSO produced a guess which did not converge to a solution through single shooting, the PSO-generated guess was confirmed to lie in a region where \(\partial\mathbf{e}(t_{f})/\partial\lambda(t_{i})\), and therefore the Jacobian of the shooting function with respect to the initial co-states (see Eq. (40)), is rank deficient. This rank deficiency then results in updates of Newton's method in the trust-region algorithm which produce little to no improvement during single shooting and therefore fail to converge to a solution of the TPBVP. An example of a trial that encountered this difficulty is shown in Figure 7. Represented with a dashed line is the trajectory corresponding to the PSO-generated guess of \[\lambda_{PSO}(t_{i})=\left[-0.75848\quad 0.28231\quad-9.76567\quad-1.27696\quad 2.08707\quad-3.41462\quad 4.57662\right]^{T}\] \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Convergence Characteristic & \multicolumn{8}{c}{Swarm Size} \\ \cline{2-9} & 250 & 500 & 750 & 1000 & 2000 & 3000 & 4000 & 5000 \\ \hline Min. Fuel \% Converged & 80 & 86 & 79 & 81 & 67 & 79 & 73 & 75 \\ Avg. Final Obj. Func. (\(10^{-3}\)) & 14.6 & 8.89 & 9.57 & 7.37 & 9.33 & 7.56 & 7.61 & 8.30 \\ Avg. Time to Converge (min) & 2.30 & 3.76 & 4.77 & 6.20 & 10.46 & 11.28 & 12.69 & 14.61 \\ Avg. Function Evaluations & 7.7855 & 1.3168 & 1.5866 & 2.0265 & 3.2168 & 3.2568 & 3.2456 & 3.4478 \\ \hline \hline \end{tabular} \end{table} Table 6: **Convergence Characteristics for Different Swarm Sizes in Scenario 2** After performing single shooting and failing to converge, the trajectory shown as a solid red line corresponds to the values of the initial co-states at which single shooting stalled, which are given by \[\lambda(t_{i})=\begin{bmatrix}-0.15992&-0.23058&-5.42275&-0.57888&0.97406&-1.98589&2.11896\end{bmatrix}^{T}\] It is important to note that both trajectories are entirely shown in red because no coasting arcs were exploited by either; however, it should be recalled that the minimum-energy problem does not require optimal trajectories to maintain a bang-bang thrust profile (see Eq. (24) with \(\epsilon=1.0\)). The \(\infty\)-norm of the TPBVP final boundary condition residual was \(\|\mathbf{e}(t_{f})\|_{\infty}=3.95\times 10^{-2}\) and \(\|\mathbf{e}(t_{f})\|_{\infty}=9.72\times 10^{-3}\) for the generated guess and the co-states returned after stalled single shooting, respectively, which does confirm slight improvement during single shooting. Furthermore, it was found that both the generated guess and the single shooting-returned co-states result in a Jacobian of the shooting function with respect to the initial co-states of rank six, which corresponds to rank deficiency. This example illustrates a trial where the final boundary condition residuals were nearly satisfied for both cases but rank deficiency of the Jacobian of the shooting function prevented convergence to a solution of the TPBVP. ## 5 Conclusion This paper proposed a method for initializing the co-state variables for solving low-thrust minimum-fuel trajectory optimization problems with the indirect shooting approach by employing PSO paired with an energy-to-fuel homotopy technique. The method was applied successfully to solve two low-thrust transfer problems in the Earth-Moon system. It was demonstrated that the methodology is able to successfully generate guesses for the initial co-state variables which converge to a solution for both investigated scenarios. The resulting minimum-fuel trajectories were validated by comparison with published solutions of the optimal low-thrust transfer for both scenarios. Analysis was performed to determine the effect that varying the number of particles in the swarm has on the performance of the proposed method. It was found that for problems involving several low-thrust spirals about a primary body, a larger swarm size of 2000 to 4000 particles produced the highest quality guesses for the initial co-state variables, whereas, for problems involving short transfers between orbits in cislunar space, a smaller swarm size of 500 particles is preferred and is capable of rapidly generating solutions in less than 4 minutes. It should be noted that, due to the heuristic nature of the PSO algorithm, multiple solutions to the derived minimum-fuel TPBVP were found, which correspond to extremal solutions of the minimum-fuel objective function, providing interesting physical insight into the investigated trajectory optimization scenarios. Considerations on the existence of local minima and Jacobian rank deficiency were also discussed. Figure 7: **Initial guess and stalled solution for minimum-energy trajectory** The primary benefits of applying PSO for minimum-fuel co-state initialization are the simplicity and versatility of the approach.
Once the TPBVP is derived through the application of COV and PMP, little additional work is required to employ PSO to initialize the co-state variables for a range of trajectory optimization problems in similar dynamical environments. Additionally, PSO removes the need for a guess to be provided by the end-user of the algorithm, an advantageous feature for preliminary mission design efforts, when a large search for many candidate trajectories is required. A large search for candidate trajectories could also be accelerated drastically through the exploitation of a high-performance computing environment, as the methodology developed herein lends itself well to parallel computing and could be distributed across many CPUs to generate solutions for many initial and target states, flight times, and spacecraft parameters. Furthering this cause, the proposed methodology was shown to discover multiple solutions of the derived TPBVP, most of which could be considered candidate trajectories during preliminary mission design efforts once local optimality is verified through an investigation of the second-order sufficient conditions of optimality. Finally, while not investigated in this work, the multiple solutions discovered through the application of the proposed methodology could be improved through further homotopy continuation on the fixed flight time or thrust magnitude to produce even more candidate trajectories. ## Acknowledgments This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 88983. Grant Hecht also acknowledges receipt of the NASA Space Grant Fellowship.
2303.04201
DR-VIDAL -- Doubly Robust Variational Information-theoretic Deep Adversarial Learning for Counterfactual Prediction and Treatment Effect Estimation on Real World Data
Determining causal effects of interventions onto outcomes from real-world, observational (non-randomized) data, e.g., treatment repurposing using electronic health records, is challenging due to underlying bias. Causal deep learning has improved over traditional techniques for estimating individualized treatment effects (ITE). We present the Doubly Robust Variational Information-theoretic Deep Adversarial Learning (DR-VIDAL), a novel generative framework that combines two joint models of treatment and outcome, ensuring an unbiased ITE estimation even when one of the two is misspecified. DR-VIDAL integrates: (i) a variational autoencoder (VAE) to factorize confounders into latent variables according to causal assumptions; (ii) an information-theoretic generative adversarial network (Info-GAN) to generate counterfactuals; (iii) a doubly robust block incorporating treatment propensities for outcome predictions. On synthetic and real-world datasets (Infant Health and Development Program, Twin Birth Registry, and National Supported Work Program), DR-VIDAL achieves better performance than other non-generative and generative methods. In conclusion, DR-VIDAL uniquely fuses causal assumptions, VAE, Info-GAN, and doubly robustness into a comprehensive, performant framework. Code is available at: https://github.com/Shantanu48114860/DR-VIDAL-AMIA-22 under MIT license.
Shantanu Ghosh, Zheng Feng, Jiang Bian, Kevin Butler, Mattia Prosperi
2023-03-07T19:44:58Z
http://arxiv.org/abs/2303.04201v3
DR-VIDAL - Doubly Robust Variational Information-theoretic Deep Adversarial Learning for Counterfactual Prediction and Treatment Effect Estimation on Real World Data ###### Abstract Determining causal effects of interventions onto outcomes from real-world, observational (non-randomized) data, e.g., treatment repurposing using electronic health records, is challenging due to underlying bias. Causal deep learning has improved over traditional techniques for estimating individualized treatment effects (ITE). We present the Doubly Robust Variational Information-theoretic Deep Adversarial Learning (DR-VIDAL), a novel generative framework that combines two joint models of treatment and outcome, ensuring an unbiased ITE estimation even when one of the two is misspecified. DR-VIDAL integrates: (i) a variational autoencoder (VAE) to factorize confounders into latent variables according to causal assumptions; (ii) an information-theoretic generative adversarial network (Info-GAN) to generate counterfactuals; (iii) a doubly robust block incorporating treatment propensities for outcome predictions. On synthetic and real-world datasets (Infant Health and Development Program, Twin Birth Registry, and National Supported Work Program), DR-VIDAL achieves better performance than other non-generative and generative methods. In conclusion, DR-VIDAL uniquely fuses causal assumptions, VAE, Info-GAN, and doubly robustness into a comprehensive, performant framework. Code is available at: [https://github.com/Shantanu48114860/DR-VIDAL-AMIA-22](https://github.com/Shantanu48114860/DR-VIDAL-AMIA-22) under MIT license. ## Introduction Understanding causal relationships and evaluating the effects of interventions to achieve desired outcomes is key to progress in many fields, especially in medicine and public health. A typical scenario is to determine whether a treatment (e.g., a lipid-lowering medication) is effective in reducing the risk of or curing an illness (e.g., cardiovascular disease). Randomized controlled trials (RCTs) are considered the best practice for evaluating causal effects [1]. However, RCTs are not always feasible, due to ethical or operational constraints. For instance, if one wanted to evaluate whether college education is the cause of a good salary, it would not be ethical to randomly pick teenagers and randomize their admission to college. So, in many cases, the only usable data sources are observational data, i.e., real-world data collected retrospectively and not randomized. Unfortunately, observational data are often plagued with various biases (since the data generation processes are largely unknown), such as confounding (i.e., spurious causal effects on outcomes by features that are correlated with a true unmeasured cause) and colliders (i.e., mistakenly including effects of an outcome as predictors), making it difficult to infer causal claims [2]. Another problem is that, in both RCTs and observational datasets, only factual outcomes are available, since clearly an individual cannot be treated and non-treated at the same time. Counterfactuals are alternative predictions that respond to the question _"what outcome would have been observed if a person had been given a different treatment?"_ If models are biased, counterfactual predictions can be wrong, and interventions can be ineffective or harmful [3].
In both RCT-based and real-world based studies, two types of treatment effects are usually considered: (i) the average treatment effect (ATE), which is population-based and represents the difference in average treatment outcomes between the treatment and controls; and (ii) the individualized treatment effect (ITE), which represents the difference in treatment outcomes for a single observational unit with the same background covariates [4]. When there is suspected heterogeneity, stratified ATEs, or conditional ATEs, can be calculated. Traditional statistical approaches for estimating treatment effects, taking into account possible bias from pre-treatment characteristics, include propensity score matching (PSM) and inverse probability weighting (IPW) [5]. The propensity score is a scalar estimate representing the conditional probability of receiving the treatment, given a set of measured pre-treatment covariates. By matching (or weighting) treated and control subjects according to their propensity score, a balance in pre-treatment covariates is induced, mimicking a randomization of the treatment assignment. However, the PSM approach only accounts for measured covariates, and latent bias may remain after matching [6]. PSM has historically been implemented with logistic-linear regression, coupled with different feature selection methods in the presence of high-dimensional datasets [7]. A problem with PSM is that it often decreases the sample size due to matching, while IPW can be affected by skewed, heavy-tailed weight distributions. Machine learning approaches have been introduced more recently, e.g., Bayesian additive regression trees [8] and counterfactual random forests [9]. Big data also led to the flourishing of causal deep learning [10]. Notable examples include the Treatment-Agnostic Representation Network (TARNet) [11], Dragonnet [12], Deep Counterfactual Network with Propensity-Dropout (DCN-PD) [13], Generative Adversarial Nets for inference of Individualized Treatment Effects (GANITE) [14], Causal Effect Variational Autoencoder (CEVAE) [15], and Treatment Effect by Disentangled Variational AutoEncoder (TEDVAE) [16]. **Contribution** This work introduces a novel deep learning approach for ITE estimation and counterfactual prediction on real-world observational data, named the _Doubly Robust Variational Information-theoretic Deep Adversarial Learning_ (DR-VIDAL). Motivated by Makhzani _et al._ [17], we use a lower-dimensional neural representation of the input covariates to generate counterfactuals to improve convergence. We assume a causal graph on top of the covariates where the covariates \(X\) are generated from 4 independent latent variables \(Z_{t},Z_{ycf},Z_{yf}\) and \(Z_{x}\), indicating the latents for the treatment, counterfactual outcome, factual outcome, and observed covariates, respectively, as shown in Figure 1. In generating the representations, we use a variational autoencoder (VAE) to infer the latent variables from the covariates in an unsupervised manner and feed the learned lower-dimensional representation from the VAE to a generative adversarial network (GAN). Also, to counter the loss of predictive information while generating the counterfactuals, we aim to maximize the mutual information between the learned representations and the output of the generator. We add this as a regularizer to the generator loss to obtain more robust counterfactuals. Finally, we incorporate a doubly robust network head to estimate the ITE, improving loss convergence.
As DR-VIDAL generates the counterfactual outcomes, we minimize the supervised loss for both the factual and the counterfactual outcomes to estimate the ITE more accurately. The main features of DR-VIDAL are, in summary: * Incorporation of an underlying causal structure where the observed pre-treatment covariate set \(X\) is decomposed into four independent latent variables \(Z_{t},Z_{X},Z_{yf},Z_{ycf}\), inducing confounding on both the treatment and the outcome (Figure 1). * Latent variables are inferred using a VAE [18]. * A GAN [19] with variational information maximization [20] generates (synthetic) complete tuples of covariates, treatment, factual and counterfactual outcomes. * Individual treatment effects are estimated on complete datasets with a downstream, four-headed deep learning block which is doubly robust [21, 22]. To our knowledge, this is the first time in which VAE, GAN, information theory and doubly robustness are amalgamated into a counterfactual prediction method. By performing test runs on synthetic and real-world datasets (Infant Health and Development Program, Twin Birth Registry, and National Supported Work Program), we show that DR-VIDAL can outperform a number of state-of-the-art tools for estimating ITE. DR-VIDAL is implemented in Pytorch and the code is available at: [https://github.com/Shantanu48114860/DR-VIDAL-AMIA-22](https://github.com/Shantanu48114860/DR-VIDAL-AMIA-22) under MIT license. In the repository, we also provide an online technical supplement (OTS) with full details on the architectural design, derivation of equations, and additional experimental results. Figure 1: Directed acyclic graph modeling the causal relationships among treatment \(t\), outcome \(y\) and pre-treatment covariates \(X\), under the latent space \(Z\). ### Problem Formulation We use the _potential outcomes_ framework [23, 24]. Let us consider a treatment \(t\) (binary for ease of reading, but the theory can be extended to multiple treatments) that can be prescribed to a population sample of size \(N\). The individuals are characterized by a set of pre-treatment background covariates \(\mathbf{X}\), and a health outcome \(Y\) is measured after treatment. We define each subject \(i\) with the tuple \(\{\mathbf{X},T,Y\}_{i=1}^{N}\), where \(Y_{i}^{0}\) and \(Y_{i}^{1}\) are the potential outcomes when applying treatments \(T_{i}=0\) and \(T_{i}=1\), respectively. The ITE \(\tau(\textbf{x})\) for subject \(i\) with pre-treatment covariates \(\mathbf{X}_{i}=\textbf{x}\) is defined as the difference in the average potential outcomes under both treatment interventions (i.e., treated vs. not treated), conditional on \(\mathbf{x}\), i.e., \[\tau(\mathbf{x})=\mathbb{E}[Y_{i}^{1}-Y_{i}^{0}\ |\ \boldsymbol{X}_{i}=\mathbf{x}] \tag{1}\] The ITE cannot be calculated directly given the inaccessibility of both potential outcomes, as only factual outcomes can be observed, while the others (counterfactuals) can be considered as missing values. However, when the potential outcomes are made independent of the treatment assignment, conditionally on the pre-treatment covariates, i.e., \(\{Y^{1},Y^{0}\}\)\(\perp\)\(T\)\(|\)\(\boldsymbol{X}\), the ITE can then be estimated as \(\tau(\mathbf{x})=\mathbb{E}[Y^{1}\ |\ T=1,\boldsymbol{X}=\mathbf{x}]-\mathbb{E}[Y^{0}\ |\ T=0,\boldsymbol{X}=\mathbf{x}]=\mathbb{E}[Y\ |\ T=1,\boldsymbol{X}=\mathbf{x}]-\mathbb{E}[Y\ |\ T=0,\boldsymbol{X}=\mathbf{x}]\). Such an assumption is called the strongly ignorable treatment assignment (SITA) assumption [25, 26].
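The practical consequence of SITA can be made concrete in a few lines of code: once \(\{Y^{1},Y^{0}\}\perp T\,|\,\boldsymbol{X}\) holds, \(\tau(\mathbf{x})\) is estimable from two arm-specific outcome regressions. The following is a minimal sketch on simulated data (not one of the benchmark datasets used in this paper):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))                        # pre-treatment covariates
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # confounded treatment
Y = X @ np.ones(5) + 2.0 * T + rng.normal(size=n)  # true ITE is 2.0

# Under SITA, fit E[Y | T=t, X=x] separately within each treatment arm
mu1 = LinearRegression().fit(X[T == 1], Y[T == 1])
mu0 = LinearRegression().fit(X[T == 0], Y[T == 0])

ite = mu1.predict(X) - mu0.predict(X)  # plug-in tau(x) for each subject
print(f"estimated ATE = {ite.mean():.3f} (true value 2.0)")
```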
By further averaging over the distribution of \(\boldsymbol{X}\), the ATE \(\tau_{01}\) can be calculated as \[\tau_{01}=\mathbb{E}[\tau(\boldsymbol{X})]=\mathbb{E}[Y\ |\ T=1]-\mathbb{E}[Y\ |\ T=0] \tag{2}\] ITE and ATE can be calculated with stratification matching of \(\mathbf{x}\) in treatment and control groups, but the calculation becomes unfeasible as the covariate space increases in dimensions. The propensity score \(\pi(x)\) represents the probability of receiving the treatment \(T=1\) conditioned on the pre-treatment covariates \(X=x\), denoted as \(\pi(\mathbf{x})=P(T=1\ |\ \boldsymbol{X}=\mathbf{x})\) [24]. The propensity score can be calculated using a regression function, e.g., logistic. ITE/ATE can then be calculated by matching (PSM) or weighting (IPW) instances through \(\pi(\mathbf{x})\), in a doubly robust way [27], or through myriad approaches [28, 29, 30, 27, 31, 32, 33]. In the next section, we describe approaches based on deep learning. #### Related Work Alaa and Van der Schaar [34] characterized the conditions and the limits of treatment effect estimation using deep learning. The sample size plays an important role, e.g., estimations on small sample sizes are affected by selection bias, while on large sample sizes, they are affected by algorithmic design. Our work builds upon the ITE estimation approaches of CEVAE [15], DCN-PD [13], Dragonnet [12], GANITE [14], TARNet [11], and TEDVAE [16]. DCN-PD is a doubly robust, multitask network for counterfactual prediction, where propensity scores are used to determine a dropout probability of samples to regularize training, carried out in alternating phases, using treated and control batches. CEVAE uses VAE to identify latent variables from an observed pre-treatment vector and to generate counterfactuals. TARNet aims to provide an upper bound effect estimation by balancing the distributions of treated and controls (with a weight compensating for group imbalance) within a high-dimensional covariate space, but it does not exploit counterfactuals, and only minimizes the factual loss function. Dragonnet is a modified TARNet with targeted regularization based on propensity scores. GANITE generates proxies of counterfactual outcomes from covariates and random noise using a GAN, and feeds them to an ITE generator. For both GANITE and TARNet, in the presence of high-dimensional data, the loss can be hard to converge. TEDVAE [16] uses a variational autoencoder to infer hidden latent variables from proxies using a causal graph similar to CEVAE. In the next sections, we discuss in detail the novelty of DR-VIDAL and the differences in the architectural design and training mechanisms with respect to the aforementioned approaches. ### Proposed Methodology The DR-VIDAL architecture can be summarized in three components: (1) a VAE inferring the latent space, (2) a GAN generating the counterfactual outcomes, and (3) a doubly robust module estimating the ITE. The architectural layout is schematized in Figure 2, while the algorithmic details are given in the OTS. Figure 2: Architecture of DR-VIDAL incorporating the variational autoencoder inferring the latent space (VAE), the generative adversarial network for calculating the counterfactual outcomes (GAN), and the doubly robust module (green box) for estimating ITE.
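To make the three-component layout of Figure 2 concrete before detailing each block, the following is a skeletal PyTorch sketch of the forward pass. It is a structural illustration only: the layer sizes, the `mlp` helper, and the wiring are placeholders, not the released DR-VIDAL implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

D_X, D_Z, D_NOISE, H = 25, 5, 5, 64  # placeholder dimensions

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, H), nn.ReLU(), nn.Linear(H, d_out))

class DRVidalSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # (1) VAE side: four encoders, one per latent factor z_x, z_t, z_yf, z_ycf
        self.encoders = nn.ModuleList(mlp(D_X, 2 * D_Z) for _ in range(4))
        # (2) GAN side: concatenated codes + noise -> potential-outcome vector
        self.generator = mlp(4 * D_Z + D_NOISE, 2)
        # (3) doubly robust block: shared layers plus four heads
        self.shared = mlp(D_X, H)
        self.y0_head, self.y1_head = mlp(H, 1), mlp(H, 1)
        self.pi_head, self.mu_head = mlp(D_X, 1), mlp(D_X + 1, 1)

    def forward(self, x, t):
        codes = []
        for enc in self.encoders:  # reparameterized sampling of each factor
            mu, logvar = enc(x).chunk(2, dim=-1)
            codes.append(mu + torch.randn_like(mu) * (0.5 * logvar).exp())
        z_c = torch.cat(codes, dim=-1)
        noise = torch.randn(x.size(0), D_NOISE)
        y_pot = self.generator(torch.cat([z_c, noise], dim=-1))  # [y0, y1]
        h = self.shared(x)
        pi = torch.sigmoid(self.pi_head(x))              # propensity pi(x)
        mu_xt = self.mu_head(torch.cat([x, t], dim=-1))  # regressor mu(x, t)
        return y_pot, self.y0_head(h), self.y1_head(h), pi, mu_xt

out = DRVidalSketch()(torch.randn(8, D_X), torch.ones(8, 1))
print([tuple(o.shape) for o in out])
```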
**Latent variable inference with VAE.** We assume that the observed covariates **X** = **x**, with treatment assignment \(T=t\) and factual and counterfactual outcomes \(Y_{f}=y_{f}\) and \(Y_{cf}=y_{cf}\), respectively, are generated from an independent latent space **z**, composed by \(\textbf{z}_{x}\sim p(\textbf{z}_{x})\), \(\textbf{z}_{t}\sim p(\textbf{z}_{t})\), \(\textbf{z}_{yf}\sim p(\textbf{z}_{yf})\), and \(\textbf{z}_{ycf}\sim p(\textbf{z}_{ycf})\), which denote the latent variables for the covariates **x**, treatment indicator t, and factual and counterfactual outcomes \(y_{f}\) and \(y_{cf}\), respectively. This decomposition follows the causal structure shown in Figure 1. The goal is to infer the posterior distribution \(p(\textbf{z}_{x},\textbf{z}_{t},\textbf{z}_{yf},\textbf{z}_{ycf}|\textbf{x})\), which is intractable to optimize directly. We use the theory of variational inference [35] to learn the variational posteriors \(q_{\phi_{x}}(\textbf{z}_{x}|\textbf{x})\), \(q_{\phi_{t}}(\textbf{z}_{t}|\textbf{x})\), \(q_{\phi_{yf}}(\textbf{z}_{yf}|\textbf{x})\), and \(q_{\phi_{ycf}}(\textbf{z}_{ycf}|\textbf{x})\), using 4 different neural network encoders with parameters \(\phi_{x},\phi_{t},\phi_{yf},\) and \(\phi_{ycf}\), respectively. Using the latent factors sampled from the learned variational posteriors, we reconstruct **x** by estimating the likelihood \(p_{\phi_{d}}(\textbf{x}|\textbf{z}_{x},\textbf{z}_{t},\textbf{z}_{yf},\textbf{z}_{ycf})\) via a single decoder parameterized by \(\phi_{d}\). The latent factors, assumed to be Gaussian, are defined as follows: \[p(\textbf{z}_{x}) =\prod_{i=1}^{D_{z_{x}}}\mathcal{N}(z_{x_{i}}|0,1); \qquad p(\textbf{z}_{t}) =\prod_{i=1}^{D_{z_{t}}}\mathcal{N}(z_{t_{i}}|0,1) \tag{3}\] \[p(\textbf{z}_{yf}) =\prod_{i=1}^{D_{z_{yf}}}\mathcal{N}(z_{yf_{i}}|0,1); \qquad p(\textbf{z}_{ycf}) =\prod_{i=1}^{D_{z_{ycf}}}\mathcal{N}(z_{ycf_{i}}|0,1) \tag{4}\] where \(D_{z_{x}},D_{z_{t}},D_{z_{yf}},D_{z_{ycf}}\) are the dimensions of the latent factors \(\textbf{z}_{x},\textbf{z}_{t},\textbf{z}_{yf},\textbf{z}_{ycf}\), respectively. The variational posteriors of the inference models are defined as: \[q_{\phi_{x}}(\textbf{z}_{x}|\textbf{x}) =\prod_{i=1}^{D_{z_{x}}}\mathcal{N}(\mu=\hat{\mu}_{x},\sigma^{2}=\hat{\sigma}_{x}^{2}) \tag{5}\] \[q_{\phi_{t}}(\textbf{z}_{t}|\textbf{x}) =\prod_{i=1}^{D_{z_{t}}}\mathcal{N}(\mu=\hat{\mu}_{t},\sigma^{2}=\hat{\sigma}_{t}^{2}) \tag{6}\] \[q_{\phi_{yf}}(\textbf{z}_{yf}|\textbf{x}) =\prod_{i=1}^{D_{z_{yf}}}\mathcal{N}(\mu=\hat{\mu}_{yf},\sigma^{2}=\hat{\sigma}_{yf}^{2}) \tag{7}\] \[q_{\phi_{ycf}}(\textbf{z}_{ycf}|\textbf{x}) =\prod_{i=1}^{D_{z_{ycf}}}\mathcal{N}(\mu=\hat{\mu}_{ycf},\sigma^{2}=\hat{\sigma}_{ycf}^{2}) \tag{8}\] where \(\hat{\mu}_{x},\hat{\mu}_{t},\hat{\mu}_{yf},\hat{\mu}_{ycf}\) and \(\hat{\sigma}_{x}^{2},\hat{\sigma}_{t}^{2},\hat{\sigma}_{yf}^{2}\), \(\hat{\sigma}_{ycf}^{2}\) are the means and variances of the Gaussian distributions parameterized by encoders \(E_{\phi_{x}},E_{\phi_{t}},E_{\phi_{yf}},E_{\phi_{ycf}}\) with parameters \(\phi_{x},\phi_{t},\phi_{yf},\phi_{ycf}\), respectively.
The overall evidence lower bound (ELBO) loss of the VAE is expressed as \(\mathcal{L}_{ELBO}\) in the following equation, \[\mathcal{L}_{ELBO}(\phi_{x},\phi_{t},\phi_{yf},\phi_{ycf};\textbf{x},\textbf{z}_{x},\textbf{z}_{t},\textbf{z}_{yf},\textbf{z}_{ycf})=\mathbb{E}_{q_{\phi_{x}},q_{\phi_{t}},q_{\phi_{yf}},q_{\phi_{ycf}}}[\log p_{\phi_{d}}(\textbf{x}|\textbf{z}_{x},\textbf{z}_{t},\textbf{z}_{yf},\textbf{z}_{ycf})]\] \[-KL\big{(}q_{\phi_{x}}(\textbf{z}_{x}|\textbf{x})||p(\textbf{z}_{x})\big{)}-KL\big{(}q_{\phi_{t}}(\textbf{z}_{t}|\textbf{x})||p(\textbf{z}_{t})\big{)}\] \[-KL\big{(}q_{\phi_{yf}}(\textbf{z}_{yf}|\textbf{x})||p(\textbf{z}_{yf})\big{)}-KL\big{(}q_{\phi_{ycf}}(\textbf{z}_{ycf}|\textbf{x})||p(\textbf{z}_{ycf})\big{)}\] where KL denotes the Kullback-Leibler divergence of two probability distributions. We minimize the optimization function of the VAE to obtain the optimal parameters of the encoders \(\phi_{x},\phi_{t},\phi_{yf},\phi_{ycf}\), and of the decoder \(\phi_{d}\), as \(\mathcal{L}_{VAE}(\phi_{x},\phi_{t},\phi_{yf},\phi_{ycf};\textbf{x},\textbf{z}_{x},\textbf{z}_{t},\textbf{z}_{yf},\textbf{z}_{ycf})\) = \(\mathcal{L}_{ELBO}(\phi_{x},\phi_{t},\phi_{yf},\phi_{ycf};\textbf{x},\textbf{z}_{x},\textbf{z}_{t},\textbf{z}_{yf},\textbf{z}_{ycf})\). **Generation of counterfactuals via GAN.** After learning the hidden latent codes \(\textbf{z}_{x},\textbf{z}_{t},\textbf{z}_{yf},\textbf{z}_{ycf}\) from the VAE, we concatenate the latent codes to form \(\textbf{z}_{c}\), which is passed to the generator of the GAN block \(G_{\theta_{g}}\), along with a random noise \(\textbf{z}_{G}\sim\mathcal{N}(0,Id)\). \(G_{\theta_{g}}\) is parameterized by \(\theta_{g}\), and it outputs the vector \(\overline{y}\) of the potential (factual and counterfactual) outcomes. We replace the factual outcome \(y_{f}\) in the generated outcome vector \(\overline{y}\) to form \(\hat{y}_{0}\) and \(\hat{y}_{1}\), which are passed to the counterfactual discriminator \(D_{\theta_{d}}\), along with the true covariate vector **x**. \(D_{\theta_{d}}\) is parameterized by \(\theta_{d}\), and is responsible for predicting the treatment variable, similarly to GANITE. The loss of the GAN block is defined as: \[V_{GAN}(G,D)=\mathbb{E}_{\textbf{x},\textbf{z}_{G},\textbf{z}_{c}}\big{[}t^{T}\log D(\textbf{x},G(\textbf{z}_{G},\textbf{z}_{c}))+(1-t)^{T}\log(1-D(\textbf{x},G(\textbf{z}_{G},\textbf{z}_{c}))\big{]}\] where \(\textbf{x}\sim p(\textbf{x}),\textbf{z}_{G}\sim p(\textbf{z}_{G})\) and \(\textbf{z}_{c}\) denotes the concatenated latent codes \(\textbf{z}_{x}\sim q_{\phi_{x}}(\textbf{z}_{x}|\textbf{x})\), \(\textbf{z}_{t}\sim q_{\phi_{t}}(\textbf{z}_{t}|\textbf{x})\), \(\textbf{z}_{yf}\sim q_{\phi_{yf}}(\textbf{z}_{yf}|\textbf{x})\) and \(\textbf{z}_{ycf}\sim q_{\phi_{ycf}}(\textbf{z}_{ycf}|\textbf{x})\). From \(\overline{y}\), we also calculate the predicted factual outcome \(\hat{y}_{f}\). As also done in GANITE, we make sure to include the supervised loss \(\mathcal{L}_{S}^{G}(y_{f},\hat{y}_{f})\), which enforces the predicted factual outcome \(\hat{y}_{f}\) to be as close as possible to the true factual outcome \(y_{f}\): \[\mathcal{L}_{S}^{G}(y_{f},\hat{y}_{f})=\frac{1}{n}\sum_{i=1}^{n}\big{(}y_{f}(i)-\hat{y}_{f}(i)\big{)}^{2} \tag{9}\] The complete loss function of the counterfactual GAN is given by \(V_{CF}(G,D)=V_{GAN}(G,D)+\gamma\mathcal{L}_{S}^{G}(y_{f},\hat{y}_{f})\).
We also employ an additional regularization \(\lambda I(\textbf{z}_{c};G(\textbf{z}_{G},\textbf{z}_{c}))\) to maximize the mutual information between the learned concatenated latent code \(\textbf{z}_{c}\) and the generated output of the generator \(G(\textbf{z}_{G},\textbf{z}_{c})\), as in [20]. We thus propose to solve the following minimax game: \[\min_{G}\max_{D}V_{CF\_I}(G,D)=V_{CF}(G,D)+\lambda I(\textbf{z}_{c};G(\textbf{z}_{G},\textbf{z}_{c})) \tag{10}\] \(I(\textbf{z}_{c};G(\textbf{z}_{G},\textbf{z}_{c}))\) is hard to optimize directly because of the presence of the posterior \(p(\textbf{z}_{c}|\textbf{x})\) [20], so we obtain a lower bound of it using an auxiliary distribution \(Q(\textbf{z}_{c}|\textbf{x})\) to approximate \(p(\textbf{z}_{c}|\textbf{x})\). Finally, the optimization function of the counterfactual information-theoretic GAN (_InfoGAN_), incorporating the variational regularization of mutual information and the hyperparameter \(\lambda\), is given by: \[\min_{G,Q}\max_{D}V_{CF\_infoGAN}(G,D,Q)=V_{CF}(G,D)-\lambda\mathcal{L}_{I}(G,Q) \tag{11}\] The counterfactual InfoGAN is used to generate the missing counterfactual outcome \(y_{cf}\) to form the quadruple \(\{\textbf{x},t,y_{f},y_{cf}\}_{i=1}^{N}\), which is sent to the doubly robust block to estimate the ITE. **Information-theoretic GAN optimization.** The GAN generator \(G_{\theta_{g}}\) works to fool the discriminator \(D_{\theta_{d}}\). To get the optimal discriminator \(D_{\theta_{d}}^{*}\), we maximize \(V_{CF\_infoGAN}\): \[\max_{D}\mathcal{L}^{D}(\theta_{d})=V_{CF\_infoGAN}(G,D,Q) \tag{12}\] To get the optimal generator \(G_{\theta_{g}}^{*}\), we minimize \(V_{CF\_infoGAN}\): \[\min_{G,Q}\mathcal{L}^{G}(\theta_{g})=V_{CF\_infoGAN}(G,D,Q) \tag{13}\] **Doubly robust ITE estimation.** As introduced above, the propensity score \(\pi(\textbf{x})\) represents the probability of receiving a treatment \(T=1\) (over the alternative \(T=0\)) conditioned on the pre-treatment covariates \(X=x\). By combining IPW through \(\pi(\textbf{x})\) with outcome regression on both the treatment variable and the covariates, Jonsson defined the doubly robust estimation of causal effects [21] as follows: \[\hat{\delta}_{DR}=\frac{1}{n}\sum_{i=1}^{n}\bigg{[}\frac{y_{i}t_{i}-(t_{i}-\pi(x_{i}))\mu(x_{i},t_{i})}{\pi(x_{i})}-\frac{y_{i}(1-t_{i})-(t_{i}-\pi(x_{i}))\mu(x_{i},t_{i})}{1-\pi(x_{i})}\bigg{]} \tag{14}\] where \(\mu(x,t)=\hat{\alpha_{0}}+\hat{\alpha_{1}}x_{1}+\hat{\alpha_{2}}x_{2}+\cdots+\hat{\alpha_{n}}x_{n}+\hat{\delta}t\), and \((t_{i}-\pi(x_{i}))\mu(x_{i},t_{i})\) is used for the IPW estimator. After getting the counterfactual outcome \(y_{cf}\) from the counterfactual GAN to form the quadruple \(\{\textbf{x},t,y_{f},y_{cf}\}_{i=1}^{N}\), we pass this as the input to the doubly robust multitask network to estimate the ITE, using the architecture shown in Figure 2 (green box). To predict the outcomes \(y^{(0)}\) and \(y^{(1)}\), we use a configuration similar to TARNet, which contains a number of shared layers, denoted by \(f_{\phi}\), parameterized by \(\phi\), and two outcome-specific heads \(f_{\theta_{0}}\) and \(f_{\theta_{1}}\), parameterized by \(\theta_{0}\) and \(\theta_{1}\). To ensure doubly robustness, we introduce two more heads that predict the propensity score \(\pi(\textbf{x})=\mathbb{P}(T=1|\textbf{x})\) and the regressor \(\mu(\textbf{x},t)\). These two are calculated using two neural networks, parameterized by \(\theta_{\pi}\) and \(\theta_{\mu}\), respectively.
The factual and counterfactual outcomes \(y_{i}^{(0)}\) and \(y_{i}^{(1)}\) of the \(i^{th}\) sample are then calculated as: \[\hat{y}_{f}^{(i)} =t_{i}(f_{\theta_{1}}(f_{\phi}(\textbf{x}_{i})))+(1-t_{i})(f_{\theta_{0}}(f_{\phi}(\textbf{x}_{i}))) \tag{15}\] \[\hat{y}_{cf}^{(i)} =(1-t_{i})(f_{\theta_{1}}(f_{\phi}(\textbf{x}_{i})))+t_{i}(f_{\theta_{0}}(f_{\phi}(\textbf{x}_{i}))) \tag{16}\] Next, the prediction loss will be \[\mathcal{L}_{i}^{p}(\theta_{1},\theta_{0},\phi)=(\hat{y}_{f}^{(i)}-y_{f}^{(i)})^{2}+(\hat{y}_{cf}^{(i)}-y_{cf}^{(i)})^{2}+\alpha\text{BinaryCrossEntropy}(\pi(x_{i}),t_{i})\] where \(\alpha\) is a hyperparameter. With the help of the propensity score \(\pi(\textbf{x})\) and the regressor \(\mu(\textbf{x},T)\), the doubly robust outcomes are calculated as \[\hat{y}_{f_{DR}}^{(i)}=t_{i}\bigg{[}\frac{t_{i}\hat{y}_{i}^{(1)}-(t_{i}-\pi(\textbf{x}_{i})\mu(\textbf{x}_{i},t_{i}))}{\pi(\textbf{x}_{i})}\bigg{]}+(1-t_{i})\bigg{[}\frac{(1-t_{i})\hat{y}_{i}^{(0)}-(t_{i}-\pi(\textbf{x}_{i})\mu(\textbf{x}_{i},t_{i}))}{1-\pi(\textbf{x}_{i})}\bigg{]} \tag{17}\] \[\hat{y}_{cf_{DR}}^{(i)}=(1-t_{i})\bigg{[}\frac{(1-t_{i})\hat{y}_{i}^{(1)}-(t_{i}-\pi(\textbf{x}_{i})\mu(\textbf{x}_{i},t_{i}))}{\pi(\textbf{x}_{i})}\bigg{]}+t_{i}\bigg{[}\frac{t_{i}\hat{y}_{i}^{(0)}-(t_{i}-\pi(\textbf{x}_{i})\mu(\textbf{x}_{i},t_{i}))}{1-\pi(\textbf{x}_{i})}\bigg{]} \tag{18}\] The doubly robust loss \(\mathcal{L}_{i}^{DR}(\theta_{1},\theta_{0},\theta_{\pi},\theta_{\mu},\phi)\) is calculated as: \[\mathcal{L}_{i}^{DR}(\theta_{1},\theta_{0},\theta_{\pi},\theta_{\mu},\phi)=(\hat{y}_{f_{DR}}^{(i)}-y_{f}^{(i)})^{2}+(\hat{y}_{cf_{DR}}^{(i)}-y_{cf}^{(i)})^{2} \tag{19}\] Finally, the loss function of the ITE is: \[\mathcal{L}^{ITE}(\theta_{1},\theta_{0},\theta_{\pi},\theta_{\mu},\phi)=\frac{1}{n}\sum_{i=1}^{n}\bigg{(}\mathcal{L}_{i}^{p}+\beta\mathcal{L}_{i}^{DR}\bigg{)} \tag{20}\] where \(\beta\) is a hyperparameter, and the whole network is trained using an end-to-end strategy. ## Experimental Setup **Synthetic datasets.** We conduct performance tests on two synthetic data experiments. The first uses the same data generation process devised for CEVAE [15]. We generate a marginal distribution **x** as a mixture of Gaussians from the 5-dimensional latent variable **z**, indicating each mixture component. The details of the synthetic dataset using this process are discussed in the OTS. Datasets of sample size {1000, 3000, 5000, 10000, 30000} are generated, and divided into 80-20% train-test splits. In the second experimental setting, we amalgamate the synthetic data generation process of CEVAE with that of GANITE [14], to model the more complex causal structure illustrated in Figure 1. We sample 7-, 1-, 1-, and 1-dimensional vectors for \(\mathbf{z}_{x}\), \(\mathbf{z}_{t}\), \(\mathbf{z}_{yf}\), and \(\mathbf{z}_{ycf}\) from Bernoulli distributions, and then collate them into \(x\). From the covariates \(x\), we simulate the treatment assignment \(t\) and the potential outcomes \(y\) as described in the GANITE paper. We generate multiple synthetic datasets for sample sizes {1000, 3000, 5000, 10000, 30000}, also divided into 80-20% splits. Equations for both data generating processes are provided in the OTS. **Real-world datasets.** We use three popular real-world benchmark datasets: the Infant Health and Development Program (IHDP) dataset [8], the Twins dataset [36], and the Jobs dataset [37]. The IHDP and Twins datasets are semi-synthetic, and simulated counterfactuals to the real factual data are available.
These datasets have also been designed and collated to meet specific treatment overlap conditions, nonparallel treatment assignment, and nonlinear outcome surfaces [14, 15, 8, 11]. In detail, IHDP collates data from a multi-site RCT evaluating early intervention in premature, low birth weight infants, to decrease unfavorable health outcomes. The dataset is composed of 110 treated subjects and 487 controls, with 25 covariates. The Twins dataset is based on records of twin births in the USA from 1989-1991, where the outcome is mortality in the first year, and treatment is heavier weight, comprising 4553 treated, 4567 controls, with 30 covariates. The Jobs study (1978-1978) investigates if a job training program intervention affects earnings after a two-year period, and comprises 237 treated, 2333 controls, with 17 covariates. For all the real-world datasets, we use the same experimental settings described in GANITE, where the datasets are divided into 56/24/20 % train-validation-test splits. We run 1000, 10 and 100 realizations of the IHDP, Jobs and Twins datasets, respectively. **Model fit and test details.** Consistent with prior studies [14, 8, 11], we report the error on the ATE \(\epsilon_{ATE}\), and the expected Precision in Estimation of Heterogeneous Effect (PEHE), \(\epsilon_{PEHE}\), for the IHDP and Twins datasets, since factual and counterfactual outcomes are available. For the Jobs dataset, as the counterfactual outcome does not exist, we report the policy risk \(R_{pol}(\pi)\), and the error on the average treatment effect on the treated (ATT) \(\epsilon_{ATT}\), as indicated in [14, 11]. The training details and the hyperparameters of the individual networks are given in the OTS. We compared DR-VIDAL with TARNet, CEVAE, and GANITE. In addition, for real-world datasets, we compare: least squares regression with treatment as a covariate (OLS/LR1); separate least squares regression for each treatment (OLS/LR2); balancing linear regression (BLR) [10]; k-nearest neighbor (k-NN) [33]; Bayesian additive regression trees (BART) [28]; random and causal forest (R Forest, C Forest) [9]; balancing neural network (BNN) [10]; counterfactual regression with Wasserstein distance (CFR\({}_{WASS}\)) [11]. ## Results **Synthetic datasets.** Figure 3 (a), (b) and (c) shows the ATE/PEHE results of DR-VIDAL vs. all other models according to the two synthetic data generation processes. In the generative process of CEVAE, the doubly robust version of DR-VIDAL demonstrates a lower ATE error than all other models at all sample sizes. When comparing PEHE, DR-VIDAL (both with and without the doubly robust feature) largely outperforms GANITE. In the second synthetic dataset, generated under the more complex assumptions, DR-VIDAL (both with and without the doubly robust feature) outperforms GANITE in terms of PEHE. It is worth noting the potential of DR-VIDAL to better infer hidden representations in comparison to GANITE, irrespective of the presence of the doubly robust module. Figure 3: Panel (a): performance (ATE) of DR-VIDAL vs. all other models on samples from the generative process of CEVAE. Panels (b) and (c): performance (PEHE) of DR-VIDAL with or without the doubly robust (DR, w/o DR) block vs. GANITE on samples from the generative process of CEVAE-GANITE. **Real-world datasets.** In all three IHDP, Jobs and Twins datasets, across all realizations, the information-theoretic, doubly robust configuration of DR-VIDAL yields the best results against all other configurations (with/without information-theoretic optimization and with/without doubly robust loss). The doubly robust loss seems to be responsible for most of the improvement. The absolute gain is small, on the order of 1%, but the relative gain with respect to the non-doubly robust setup is significant, where the doubly robust module always outperforms its non-doubly robust version, from 55-60% in IHDP to over 80% in the Twins and Jobs datasets (Figure 5). Table 1 shows the comparison of the \(\sqrt{\epsilon_{PEHE}}\) and \(R_{Pol}\) values with the state-of-the-art methods on the three datasets. DR-VIDAL outperforms the other methods on all datasets. On the IHDP and Jobs datasets, DR-VIDAL is the best overall by a large margin. In contrast, the performance increment on the Twins dataset is mild. Even though DR-VIDAL has a large number of parameters, the deconfounding of hidden factors and the adversarial training make it appropriate for datasets with relatively small sample size like IHDP. It is worth noting that DR-VIDAL converges much faster than CEVAE and GANITE, possibly due to the doubly robustness. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline & \multicolumn{2}{c|}{**IHDP(\(\sqrt{\epsilon_{PEHE}}\))**} & \multicolumn{2}{c|}{**Twins(\(\sqrt{\epsilon_{PEHE}}\))**} & \multicolumn{2}{c}{**Jobs(\(R_{Pol}\))**} \\ & Out-Sample & In-Sample & Out-Sample & In-Sample & Out-Sample & In-Sample \\ \hline OLS/LR1 & 5.8 \(\pm\) 0.3* & 5.8 \(\pm\) 0.3* & 0.318 \(\pm\) 0.007 & 0.319 \(\pm\) 0.005* & 0.23 \(\pm\) 0.02* & 0.22 \(\pm\) 0.00* \\ OLS/LR2 & 2.5 \(\pm\) 0.1* & 2.4 \(\pm\) 0.1* & 0.320 \(\pm\) 0.003* & 0.320 \(\pm\) 0.001* & 0.24 \(\pm\) 0.01* & 0.21 \(\pm\) 0.00* \\ BLR & 5.8 \(\pm\) 0.3* & 5.8 \(\pm\) 0.3* & 0.323 \(\pm\) 0.018* & 0.312 \(\pm\) 0.002* & 0.25 \(\pm\) 0.02* & 0.22 \(\pm\) 0.01* \\ k-NN & 4.1 \(\pm\) 0.2* & 2.1 \(\pm\) 0.1* & 0.345 \(\pm\) 0.007* & 0.333 \(\pm\) 0.003* & 0.26 \(\pm\) 0.02* & 0.02 \(\pm\) 0.00* \\ \hline BART & 2.3 \(\pm\) 0.1* & 2.1 \(\pm\) 0.2* & 0.338 \(\pm\) 0.016* & 0.347 \(\pm\) 0.009* & 0.25 \(\pm\) 0.00* & 0.23 \(\pm\) 0.02* \\ R Forest & 6.6 \(\pm\) 0.3* & 4.2 \(\pm\) 0.2* & 0.321 \(\pm\) 0.005* & 0.306 \(\pm\) 0.002 & 0.28 \(\pm\) 0.02* & 0.23 \(\pm\) 0.01* \\ C Forest & 3.8 \(\pm\) 0.2* & 3.8 \(\pm\) 0.2* & 0.316 \(\pm\) 0.011 & 0.366 \(\pm\) 0.003* & 0.20 \(\pm\) 0.02* & 0.19 \(\pm\) 0.00* \\ \hline BNN & 2.1 \(\pm\) 0.1* & 2.2 \(\pm\) 0.1* & 0.321 \(\pm\) 0.018* & 0.325 \(\pm\) 0.003* & 0.24 \(\pm\) 0.02* & 0.20 \(\pm\) 0.01* \\ TARNet (TensorFlow) & 0.95 \(\pm\) 0.02* & 0.88 \(\pm\) 0.02* & 0.315 \(\pm\) 0.003 & 0.317 \(\pm\) 0.007 & 0.21 \(\pm\) 0.01* & 0.17 \(\pm\) 0.01* \\ TARNet (Pytorch) & 1.10 \(\pm\) 0.02* & - & - & - & 0.29 \(\pm\) 0.06* & - \\ CFR\({}_{WASS}\) & 0.76 \(\pm\) 0.0* & 0.71 \(\pm\) 0.0* & 0.313 \(\pm\) 0.008 & 0.315 \(\pm\) 0.007 & 0.21 \(\pm\) 0.01* & 0.17 \(\pm\) 0.01* \\ \hline GANITE & 2.4 \(\pm\) 0.4* & 1.9 \(\pm\) 0.4* & 0.297 \(\pm\) 0.05 & 0.289 \(\pm\) 0.005 & 0.14 \(\pm\) 0.01* & 0.13 \(\pm\) 0.01* \\ CEVAE & 2.6 \(\pm\) 0.1* & 2.7 \(\pm\) 0.1* & n.r. & n.r. & 0.26 \(\pm\) 0.0* & 0.15 \(\pm\) 0.0* \\ \hline **DR-VIDAL** & **0.69 \(\pm\) 0.06** & **0.69 \(\pm\) 0.05** & **0.318 \(\pm\) 0.008** & **0.317 \(\pm\) 0.002** & **0.10 \(\pm\) 0.01** & **0.09 \(\pm\) 0.005** \\ \hline \end{tabular} \end{table} Table
1: Performance of \(\sqrt{\epsilon_{PEHE}}\) and \(R_{Pol}\) (mean \(\pm\) st. dev.) of various models (prior tools and DR-VIDAL) on the IHDP, Twins and Jobs datasets. TARNet was originally developed in TensorFlow. We re-implemented TARNet in Pytorch for the IHDP and Jobs datasets. (*) is used to indicate methods over which DR-VIDAL shows a statistically significant improvement. Figure 4: Performance comparison of the doubly robust vs. non-doubly robust version of DR-VIDAL. The bar plots show how many times one model setup is better than the other in terms of error on the factual outcome (\(y_{f}\)). Panels, from left to right, show results on the IHDP, Jobs and Twins datasets (100, 10, 100 iterations), respectively. ## Conclusions DR-VIDAL is a new deep learning approach to causal effect estimation and counterfactual prediction that combines adversarial representation learning, information-theoretic optimization, and doubly robust regression. On the benchmark datasets, both the doubly robust property and the information-theoretic optimization of DR-VIDAL improve performance over a basic adversarial setup. The work has some limitations. First, the causal graph, even if more elaborate than CEVAE's, could be improved. For instance, connecting the \(Z\) to \(X\) and only to their respective \(t\), factual and counterfactual outcome nodes would imply two adjustment sets. Another option could be to use the TEDVAE structure in conjunction with our doubly-robust setup. Also, the encoded representation in the VAE does not employ any attention mechanism to identify the most important covariates for the propensity scores, especially with high-dimensional datasets. Finally, one thing that would be worth evaluating is how Dragonnet would perform as a downstream module of DR-VIDAL, substituting it for our current four-head doubly-robust block. In conclusion, the DR-VIDAL framework is a comprehensive approach to predicting counterfactuals and estimating ITE, and its flexibility (modifiable causal structure and modularity) allows for further expansion and improvement. ## Acknowledgments This work was in part funded by NIH awards R21CA245858, R01CA246418, R56AG069880, R01AG076234, R01AI145552, R01AI141810, and NSF 2028221.
2310.04339
Kagome KMn$_3$Sb$_5$ metal: Magnetism, lattice dynamics, and anomalous Hall conductivity
Kagome metals are reported to exhibit remarkable properties, including superconductivity, charge density wave order, and a large anomalous Hall conductivity, which facilitate the implementation of spintronic devices. In this work, we study a novel kagome metal based on Mn magnetic sites in a KMn$_3$Sb$_5$ stoichiometry. By means of first-principles density functional theory calculations, we demonstrate that the studied compound is dynamically stable, locking the ferromagnetic order as the ground state configuration, thus preventing the charge-density-wave state as reported in its vanadium-based counterpart KV$_3$Sb$_5$. Our calculations predict that KMn$_3$Sb$_5$ exhibits an out-of-plane (001) ferromagnetic response as the ground state, allowing for the emergence of topologically protected Weyl nodes near the Fermi level and nonzero anomalous Hall conductivity ($\sigma_{ij}$) in this centrosymmetric system. We obtain a tangible $\sigma_{xy} = 314$ S$\cdot$cm$^{-1}$ component, which is comparable to that of other kagome metals. Finally, we explore the effect of the on-site Coulomb repulsion ($+U$) on the structural and electronic properties and find that, although the lattice parameters and $\sigma_{xy}$ moderately vary with increasing $+U$, KMn$_3$Sb$_5$ stands as an ideal stable ferromagnetic kagome metal with a large anomalous Hall conductivity response.
Sobhit Singh, A. C. Garcia-Castro
2023-10-06T16:04:59Z
http://arxiv.org/abs/2310.04339v1
Kagome KMn\({}_{3}\)Sb\({}_{5}\) metal: Magnetism, lattice dynamics, and anomalous Hall conductivity ###### Abstract Kagome metals are reported to exhibit remarkable properties, including superconductivity, charge density wave order, and a large anomalous Hall conductivity, which facilitate the implementation of spintronic devices. In this work, we study a novel kagome metal based on Mn magnetic sites in a KMn\({}_{3}\)Sb\({}_{5}\) stoichiometry. By means of first-principles density functional theory calculations, we demonstrate that the studied compound is dynamically stable, locking the ferromagnetic order as the ground state configuration, thus preventing the charge-density-wave state as reported in its vanadium-based counterpart KV\({}_{3}\)Sb\({}_{5}\). Our calculations predict that KMn\({}_{3}\)Sb\({}_{5}\) exhibits an out-of-plane (001) ferromagnetic response as the ground state, allowing for the emergence of topologically protected Weyl nodes near the Fermi level and nonzero anomalous Hall conductivity (\(\sigma_{ij}\)) in this centrosymmetric system. We obtain a tangible \(\sigma_{xy}=314\) S\(\cdot\)cm\({}^{-1}\) component, which is comparable to that of other kagome metals. Finally, we explore the effect of the on-site Coulomb repulsion (\(+U\)) on the structural and electronic properties and find that, although the lattice parameters and \(\sigma_{xy}\) moderately vary with increasing \(+U\), KMn\({}_{3}\)Sb\({}_{5}\) stands as an ideal stable ferromagnetic kagome metal with a large anomalous Hall conductivity response. DOI: ## I Introduction Kagome lattices [1], such as those observed in FeSn [2], Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\)[3], ScV\({}_{6}\)Sn\({}_{6}\)[4], and the ones present in Mn\({}_{3}\)_B_N (\(B\) = Ni, Ga, Pt, Pd, and Sn) and V\({}_{3}\)AuN antiperovskites [5; 6; 7; 8; 9; 10], exhibit remarkable electronic, phononic, topological, and magnetic entangled properties, mainly owing to their particular star-shaped hexagonal symmetry [11]. In such symmetry, triangularly-coordinated magnetically active cations present substantial magnetic and electronic frustration that leads, for example, to charge-density-wave (CDW) phases [12], superconductivity [13], and chiral noncollinear magnetic orderings [11]. Belonging to the kagome materials, the \(A\)V\({}_{3}\)Sb\({}_{5}\) (\(A\) = K, Rb, and Cs) family has recently attracted tremendous attention due to the richness of their charge-density wave states, giant anomalous Hall response, frustrated electronic structure, and associated superconductivity [14; 15; 16; 17; 18]. The reported CDW phase in \(A\)V\({}_{3}\)Sb\({}_{5}\) metals originates from the electronic frustration in the in-plane vanadium sites, due to the unbalanced charge based on the expected nominal charges, \(i.e.\), K:1+, V:(5-\(\delta\))+ and Sb:3\(-\), to enforce the charge neutrality with K\({}^{1+}\)V\({}^{(5-\delta)+}_{3}\)Sb\({}^{3-}_{5}\)[14]. Moreover, ferromagnetic and noncollinear antiferromagnetic states can be expected to stabilize in this family of materials, leading to tangible topological features related to Dirac and Weyl fermions in the vicinity of the Fermi level, resulting in non-vanishing Berry curvature induced observables such as anomalous Hall conductivity (AHC) and nonlinear Hall effects [19; 20; 21; 22; 23]. Importantly, the electrical manipulation of the Berry curvature-induced anomalous Hall effect at room temperature offers unimaginable capabilities in future spintronic devices [24; 25; 26; 27].
In ferromagnetic kagome metals, reversible magnetization can be used as an external parameter to control and switch the AHC response. Therefore, the search for novel ferromagnetic kagome materials belonging to the \(AM_{3}\)Sb\({}_{5}\) (with \(A\) = K, Rb, Cs and \(M\) = V, Mn) stoichiometry might lead to the discovery of novel quantum phases driven by nontrivial electronic, topological, magnetic, and superconducting properties. In this study, we employ first-principles density functional theory (DFT) calculations to investigate the structural, vibrational, magnetic, and topological electronic properties of the kagome metal KMn\({}_{3}\)Sb\({}_{5}\), which was recently predicted by Jiang _et al._ [28]. Our results reveal that, despite the inherent magnetic frustration in the kagome Mn plane, KMn\({}_{3}\)Sb\({}_{5}\) exhibits a stable ferromagnetic ground state, with the magnetic easy axis oriented along the (001) direction. This ferromagnetic ground state breaks time-reversal symmetry and allows the emergence of topological Weyl nodes in this centrosymmetric crystal system, resulting in large concentrations of Berry curvature near the Weyl nodes. As a result, KMn\({}_{3}\)Sb\({}_{5}\) exhibits a substantial anomalous Hall conductivity response (\(\sigma_{xy}=314\) S\(\cdot\)cm\({}^{-1}\)). Incorporating the on-site Coulomb parameter \(+U\) in our DFT calculations moderately affects the optimized lattice parameters, magnetic moments, phonon frequencies, and \(\sigma_{xy}\) values, but no unstable phonon modes are observed within the studied range of the \(+U\) parameter. Our findings provide valuable insights for experimentalists in synthesizing and confirming the predicted properties of the ferromagnetic kagome metal KMn\({}_{3}\)Sb\({}_{5}\).

This paper is organized as follows: In Section II, we provide details of the computational and theoretical methods used in this study. In Section III, we present the results, starting with a discussion of the crystal structure and its vibrational and dynamical stability. We then analyze different magnetic configurations and explore the electronic structure, including intriguing features associated with Berry curvature and anomalous Hall conductivity. Furthermore, we examine the role of electron-correlation effects on the properties of KMn\({}_{3}\)Sb\({}_{5}\). Finally, in Section IV, we draw our conclusions and discuss future perspectives.

## II Computational details

We performed first-principles DFT calculations [29; 30] using the projected-augmented wave (PAW) [31] method as implemented in the vasp code (version 5.4.4) [32; 33]. The valence electron configurations considered in the PAW pseudopotentials are as follows: K: (3\(p^{6}4s^{1}\), version 02Aug2007), Mn: (3\(p^{6}3d^{5}4s^{2}\), version 02Aug2007), and Sb: (5\(s^{2}5p^{3}\), version 06Sep2000). The exchange-correlation functional was computed using the generalized-gradient approximation as parameterized by Perdew-Burke-Ernzerhof for solids (GGA-PBEsol) [34], and the on-site correlation effects for Mn-3\(d\) electrons were corrected using the rotationally-invariant Liechtenstein (DFT+\(U\)) formalism [35]. We used \(+U\) values ranging from 0.0 to 3.0 eV to explore the behavior of the lattice parameters, magnetic moment, and anomalous Hall conductivity. The reciprocal space was sampled using a \(\Gamma\)-centered Monkhorst-Pack \(k\)-mesh [36] of size 11\(\times\)11\(\times\)9, and a kinetic energy cutoff of 600 eV was used for the plane wave basis set.
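For readers scripting this workflow, the settings above can be gathered into a single calculator object. The following is a minimal sketch assuming ASE's VASP interface; the values quoted in the text are used where given, while the \(U\)/\(J\) split and exact convergence tags are illustrative assumptions on our part, not settings taken from the paper.

```python
from ase.calculators.vasp import Vasp

# Sketch of the calculation setup described in the text: GGA-PBEsol, 600 eV
# cutoff, Gamma-centred 11x11x9 mesh, SOC, and Liechtenstein DFT+U on Mn-3d.
calc = Vasp(
    xc="pbesol",
    encut=600,
    kpts=(11, 11, 9),
    gamma=True,        # Gamma-centred Monkhorst-Pack mesh
    lsorbit=True,      # spin-orbit coupling / noncollinear magnetism
    ldau=True,
    ldautype=1,        # rotationally-invariant Liechtenstein formalism
    ldau_luj={
        "Mn": {"L": 2, "U": 2.0, "J": 0.0},   # U swept over 0-3 eV in the text
        "K":  {"L": -1, "U": 0.0, "J": 0.0},
        "Sb": {"L": -1, "U": 0.0, "J": 0.0},
    },
    ediff=1e-4,        # ~0.1 meV total-energy target, as quoted below
    ediffg=-1e-3,      # forces below 0.001 eV/Angstrom, as quoted below
)
# The calculator is then attached to a KMn3Sb5 Atoms object built from the
# P6/mmm cell described in Section III.
```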
These settings converge the residual forces and the total energy to better than 0.001 eV\(\cdot\)Å\({}^{-1}\) and 0.1 meV, respectively. Spin-orbit coupling (SOC) was included to consider noncollinear magnetic configurations [37]. Phonon calculations were performed within the finite-differences approach [38; 39] and post-processed using the Phonopy code [40]. To compute the anomalous Hall conductivity and Berry curvature, we utilized the Wannier-functions methodology, for which the wannierization was performed using the Wannier90 code [41; 42] and post-processed with the WannierBerri package [43]. For the wannierization process, \(s\) and \(p\) orbitals were considered for K and Sb atoms, while \(s\), \(p\), and \(d\) orbitals were considered for Mn atoms. To plot the Berry curvature projected on the Fermi surface, we utilized the FermiSurfer software [44]. Crystal structures were visualized using the vesta software [45], and electronic structure data were post-processed using the PyProcar software [46].

## III Results and discussion

### Structural and Magnetic Configurations:

Similar to \(A\)V\({}_{3}\)Sb\({}_{5}\), KMn\({}_{3}\)Sb\({}_{5}\) adopts the \(P6/mmm\) (SG. 191) phase in its primitive unit cell. It consists of an Mn kagome lattice situated in the \((0,0,1/2)\) plane, with several Sb sites embedded into the central kagome hexagon, as shown in Fig. 1(a). Additionally, the Mn kagome lattice is packed with graphite-like Sb layers in positions close to the \((0,0,1/4)\) and \((0,0,3/4)\) atomic planes. Lastly, the hexagonally coordinated K atoms are located at the \((0,0,0)\) sites. The DFT-optimized lattice parameters of KMn\({}_{3}\)Sb\({}_{5}\) are provided below in Table 2. As a first step in the magnetic analysis, we define all the symmetry-allowed noncollinear magnetic states that can be associated with a \(\mathbf{q}\) propagation vector of \((0,0,0)\). We find that, similar to the case of antiperovskites [10], four \(xy\) in-plane noncollinear chiral antiferromagnetic (AFM) orderings are allowed, as shown in Fig. 1(b), along with a \(z\)-axis out-of-plane ferromagnetic (FM) ordering. We then calculated the total energies (PBEsol) for all the considered noncollinear magnetic orderings and obtained \(E_{\Gamma_{4g}}=-54.0587\) eV\(\cdot\)f.u.\({}^{-1}\) and \(E_{\Gamma_{5g}}=-54.0614\) eV\(\cdot\)f.u.\({}^{-1}\) for the chiral \(+1\) orderings, and \(E_{\Gamma_{5g,x}}=-54.0453\) eV\(\cdot\)f.u.\({}^{-1}\) and \(E_{\Gamma_{5g,y}}=-54.0452\) eV\(\cdot\)f.u.\({}^{-1}\) for the chiral \(-1\) orderings [47]. See note [48] for more details regarding magnetic chirality. Interestingly, we find that the (001) FM state, with a total energy of \(E_{FM}=-54.4928\) eV\(\cdot\)f.u.\({}^{-1}\), is the lowest-energy magnetic state, lying around 500 meV\(\cdot\)f.u.\({}^{-1}\) below the AFM orderings on average. This contrasts with similar materials such as the antiperovskites, in which the 3D kagome symmetry resolves the magnetic frustration by stabilizing chiral noncollinear antiferromagnetic states instead of ferromagnetic ones.
In the antiperovskites, the kagome lattices are formed, by symmetry, across the complete [111] family of planes, leading to strong three-dimensional frustration; in KMn\({}_{3}\)Sb\({}_{5}\), by contrast, the out-of-plane breaking of this frustration lowers the total energy and favors the ferromagnetic ordering as the ground-state magnetic configuration. To gain a better understanding of the magnetic easy axis, we investigate the preferred direction in which the magnetic moments tend to align in the kagome plane of Mn. Specifically, we examine whether the moments lie in the \(xy\)-plane (within the plane) or are oriented perpendicular to it (along the \(z\)-axis). Our calculations (PBEsol) reveal that \(E_{xy-plane}=-54.4920\) eV\(\cdot\)f.u.\({}^{-1}\) whereas \(E_{z-axis}=-54.4928\) eV\(\cdot\)f.u.\({}^{-1}\), thus identifying the \(z\)-axis as the easy axis for the ground-state FM order in KMn\({}_{3}\)Sb\({}_{5}\). From this point on, all the reported calculations and analyses are for the (001) ferromagnetic ground state of KMn\({}_{3}\)Sb\({}_{5}\). The DFT (PBEsol) optimized lattice parameters in this magnetic ground state are \(a=b=5.337\) Å and \(c=9.028\) Å. With the \(+U\) value (PBEsol\(+U\)) increasing from 0 to 3 eV on the Mn-3\(d\) orbitals, the primitive cell moderately expands within the \(a\)-\(b\) plane while shrinking along the \(c\) axis, each by approximately 5% (see Table 2).

### Lattice Dynamics:

To test the dynamical stability of the ground-state noncollinear (001) FM order in KMn\({}_{3}\)Sb\({}_{5}\), we calculated the full phonon spectrum along the high-symmetry directions of the Brillouin zone, as shown in Fig. 1(c). We observe a fully stable vibrational landscape, with no unstable modes with imaginary frequencies appearing anywhere in the Brillouin zone. The absence of unstable phonons suggests the suppression of the charge-density-wave phenomenon induced by unstable phonon modes, as reported in other kagome systems such as \(A\)V\({}_{3}\)Sb\({}_{5}\) [15; 17; 18]. The kagome compound KMn\({}_{3}\)Sb\({}_{5}\) thus joins the set of dynamically stable materials with this symmetry, offering the potential for a tangible ferromagnetic response. According to group theory, the irreducible representation of all allowed vibrational modes for KMn\({}_{3}\)Sb\({}_{5}\) (\(P6/mmm\)) at the Brillouin zone center is

\[\begin{split}\Gamma_{\text{vib}}=A_{1\text{g}}\oplus 4A_{2\text{u}} \oplus B_{1\text{g}}\oplus B_{1\text{u}}\oplus 2B_{2\text{u}}\oplus 2E_{2 \text{u}}\\ \oplus E_{2\text{g}}\oplus 5E_{1\text{u}}\oplus E_{1\text{g}}. \end{split} \tag{1}\]

Out of the total 27 allowed phonon modes (9 atoms per cell), the three acoustic modes are \(\Gamma_{\text{acoustic}}=A_{2\text{u}}\oplus E_{1\text{u}}\), and the 24 optic modes are \(\Gamma_{\text{optic}}=A_{1\text{g}}\oplus 3A_{2\text{u}}\oplus B_{1\text{g}} \oplus B_{1\text{u}}\oplus 2B_{2\text{u}}\oplus 2E_{2\text{u}}\oplus E_{2 \text{g}}\oplus 4E_{1\text{u}}\oplus E_{1\text{g}}\). Here, the \(A_{1\text{g}}\), \(E_{2\text{g}}\), and \(E_{1\text{g}}\) modes are Raman active, whereas the \(A_{2\text{u}}\) and \(E_{1\text{u}}\) modes are infrared (IR) active. All other modes are silent. Table 1 presents the zone-center phonon frequencies calculated using PBEsol (\(U=0.0\) eV) for the ground-state (001) FM ordering in KMn\({}_{3}\)Sb\({}_{5}\).
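A stability check of this kind is straightforward to script against the Phonopy Python API once the finite-displacement forces are available. The sketch below is illustrative only; the file names are Phonopy defaults assumed here, and the THz-to-cm\({}^{-1}\) conversion factor is 33.356.

```python
import numpy as np
import phonopy

# Load a finite-displacement phonon calculation (file names assumed).
ph = phonopy.load("phonopy_disp.yaml", force_sets_filename="FORCE_SETS")

# Zone-centre (Gamma) frequencies; Phonopy reports THz by default.
ph.run_qpoints([[0.0, 0.0, 0.0]])
gamma = ph.get_qpoints_dict()["frequencies"][0]
print("Gamma modes (cm^-1):", np.round(np.sort(gamma) * 33.356, 1))

# Dynamical stability: scan a Brillouin-zone mesh for imaginary (negative)
# frequencies, cf. the dispersion in Fig. 1(c).
ph.run_mesh([11, 11, 9])
freqs = ph.get_mesh_dict()["frequencies"]
print("min frequency (THz):", freqs.min())
print("dynamically stable:", bool(freqs.min() > -1e-3))
```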
As \(U\) is increased, the phonon frequencies vary, but no phonon instability was observed within the range of studied \(U\) values (see, for example, the full phonon dispersion and the phonon DOS at \(U=3.0\) eV in Fig. 4 in Appendix A). To document the behavior of the Raman- and IR-active frequencies as a function of \(U\), we present the \(A_{1g}\), \(E_{2g}\), and \(E_{1g}\) modes in Fig. 5(a) (in Appendix A). The \(A_{2u}\) and \(E_{1u}\) modes are presented in Fig. 5(b). Overall, the modes soften as the \(U\) value is increased.

\begin{table} \begin{tabular}{c c|c c} \hline \hline Mode & \(\omega\) (cm\({}^{-1}\)) & Mode & \(\omega\) (cm\({}^{-1}\)) \\ \hline \(A_{1g}\) & 106.8 & \(A_{2\text{u}}\) & 53.9, 82.5, 232.8 \\ \(B_{1g}\) & 135.3 & \(B_{1u}\) & 112.5, 251.1 \\ \(E_{2g}\) & 131.3 & \(B_{2u}\) & 216.7 \\ \(E_{1g}\) & 65.5 & \(E_{2u}\) & 110.7, 227.0 \\ — & — & \(E_{1u}\) & 61.6, 94.1, 184.7, 250.2 \\ \hline \hline \end{tabular} \end{table} Table 1: DFT-PBEsol calculated zone-center phonon frequencies for the (001) FM ground state of KMn\({}_{3}\)Sb\({}_{5}\). Data for different values of the \(+U\) parameter are provided in Appendix A.

Figure 1: (Color online) (a) KMn\({}_{3}\)Sb\({}_{5}\) _P6/mmm_ (SG. 191) hexagonal structure obtained for the ferromagnetic, (001) FM, ground state. In this structure, the K, Mn, and Sb sites are drawn in dark blue, violet, and pink colors, respectively. (b) Chiral noncollinear antiferromagnetic orderings allowed in KMn\({}_{3}\)Sb\({}_{5}\). Here, the \(\Gamma_{4g}\) and \(\Gamma_{5g}\) AFM orders hold \(+1\) magnetic chirality whereas the \(\Gamma_{6g,x}\) and \(\Gamma_{6g,y}\) AFM orders inherit \(-1\) magnetic chirality. (c) Phonon dispersion calculated for the ground state FM order (DFT-PBEsol). (d) Atom-projected phonon density of states (PDOS).

Interestingly, based on the atom-projected phonon density of states in Fig. 1(d), the Mn sites contribute strongly to the high-frequency modes, between 150 and 250 cm\({}^{-1}\). On the contrary, the strongest Sb contribution is found between 0 and 150 cm\({}^{-1}\). Finally, the K atoms' contribution to the lattice dynamics is strongly localized around 60 cm\({}^{-1}\).

### Electronic Structure:

Moving forward, in Fig. 2 we present the PBEsol-calculated electronic bands obtained with and without considering SOC effects. Here, it is worth recalling that the noncollinear magnetic moments of Mn are aligned along the \(z\)-axis. As expected, our band structure calculations reveal metallic features. We note that the inclusion of \(+U\) in our DFT calculations does not result in the opening of any band gap; it only moderately modifies the details of the bands near the Fermi level (see, for example, the electronic band dispersion obtained for \(U\) = 3.0 eV in Fig. 6 in Appendix B). At first glance, in the absence of SOC, we notice in Fig. 2(a) multiple band crossings near the Fermi level that might be associated with potential topological nodes. Some of these band crossings become gapped once the SOC effects are included, leading to the emergence of a multitude of topologically protected Weyl nodes in the vicinity of these gapped band crossings and away from the high-symmetry \(k\)-path considered in Fig. 2(b) [49; 50; 51]. Additionally, several crystal symmetry-protected nodes occur along the high-symmetry directions of the Brillouin zone, as expected for the kagome symmetry [3; 52].
These crystal symmetry-protected nodes are located, for example, along the \(M\)-\(K\) and \(\Gamma\)-\(K\) \(k\)-paths at energies close to 150 meV above the Fermi level. Symmetry-protected nodes are also located along the \(L\)-\(H\) and \(H\)-\(A\) paths in the \((0,0,1/2)\) \(k\)-plane. All these Weyl nodes serve as sources and sinks of Berry curvature in momentum space, yielding a large AHC response in the FM kagome metal KMn\({}_{3}\)Sb\({}_{5}\), as we discuss below. As expected, the spin projection of the Weyl nodes is reversed, as can be appreciated from the spin-polarized electronic band structure shown in Fig. 7 in Appendix C. Here, only the \(S_{z}\) component is observed in the vicinity of the Fermi energy, in agreement with the underlying magnetic structure.

### Anomalous Hall Conductivity:

By analyzing the symmetry-allowed properties and electronic structure, we notice that the ferromagnetic kagome metal KMn\({}_{3}\)Sb\({}_{5}\) may exhibit a tangible anomalous Hall conductivity response. In the _P6/mm'm'_ magnetic space group (MSG. 191.240), the AHC tensor (\(\sigma_{ij}\)) takes the form presented in Eq. 2. Most of the AHC tensor elements are forced to vanish by symmetry, except for \(\sigma_{xy}\) and \(\sigma_{yx}\), which are allowed to have nonzero values with \(\sigma_{yx}=-\sigma_{xy}\). Thus, the nonzero AHC components lie within the kagome plane, perpendicular to the orientation of the time-reversal-symmetry-breaking Mn magnetic moments.

\[\sigma_{6/mm^{\prime}m^{\prime}}=\begin{pmatrix}0&\sigma_{xy}&0\\ -\sigma_{xy}&0&0\\ 0&0&0\end{pmatrix} \tag{2}\]

Fig. 3(a) shows the variation of \(\sigma_{xy}\) as a function of the chemical potential near the Fermi level, calculated using the Kubo formula [53; 54]. See note [55] for more details. As can be observed, the AHC at the Fermi level is \(\sigma_{xy}\) = 314 S\(\cdot\)cm\({}^{-1}\). This is comparable to the values obtained in similar kagome compounds with large, or even giant, AHC, such as \(\sigma_{xy}\) = 380 S\(\cdot\)cm\({}^{-1}\) in LiMn\({}_{6}\)Sn\({}_{6}\) [56], \(\sigma_{xy}\) = 1130 S\(\cdot\)cm\({}^{-1}\) in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) [3], and \(\sigma_{xy}\) = \(-400\) S\(\cdot\)cm\({}^{-1}\) in Fe\({}_{3}\)Sn\({}_{2}\) [57]. Notably, by tuning the Fermi level in the kagome metal KMn\({}_{3}\)Sb\({}_{5}\), it is possible to achieve a \(\sigma_{xy}\) value close to or even larger than 1000 S\(\cdot\)cm\({}^{-1}\).

Figure 2: (Color online) Electronic band structure computed in the ferromagnetic ground state (a) without SOC and (b) with SOC. For the SOC case, a (001) noncollinear ferromagnetic configuration was used. Orange ovals mark the gapped nodes along the high-symmetry \(k\) directions, whereas red ovals mark the symmetry-protected Weyl nodes; the latter are located along the \(M\)–\(K\) and \(\Gamma\)–\(K\) paths and at the equivalent points in the \((0,0,1/2)\) plane.

In Fig. 3(b), we present the Berry curvature components \(\Omega_{x}\), \(\Omega_{y}\), and \(\Omega_{z}\) calculated along the high-symmetry directions in the Brillouin zone. The largest contributions to the Berry curvature are obtained for \(\Omega_{z}\) at the \(H\)-point and along the \(H\)-\(K\) path. The values extend up to \(-7000\) Å\({}^{2}\), though in Fig. 3(b) the \(y\)-axis is displayed only down to \(-1000\) Å\({}^{2}\) to allow for the observation of smaller contributions along the other Brillouin zone paths.
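To make the Kubo-formula machinery concrete, the sketch below evaluates the Berry curvature of the occupied band of a toy two-band lattice model and integrates it over the Brillouin zone. It illustrates the methodology only; the model, its parameters, and the plain-numpy implementation are our own stand-ins for the Wannier-interpolated KMn\({}_{3}\)Sb\({}_{5}\) Hamiltonian handled by WannierBerri.

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

M = -1.0  # mass term; the gap is open everywhere for this value

def h(kx, ky):
    # QWZ-type two-band model H(k) = d(k) . sigma with gapped band crossings.
    return (np.sin(kx) * sx + np.sin(ky) * sy
            + (M + np.cos(kx) + np.cos(ky)) * sz)

def grad_h(kx, ky):
    # Analytic k-derivatives of H, needed by the Kubo formula.
    return (np.cos(kx) * sx - np.sin(kx) * sz,
            np.cos(ky) * sy - np.sin(ky) * sz)

nk = 101
ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
chern = 0.0
for kx in ks:
    for ky in ks:
        e, v = np.linalg.eigh(h(kx, ky))
        hx, hy = grad_h(kx, ky)
        # Kubo formula: Omega_z of the lower band from interband elements.
        num = (v[:, 0].conj() @ hx @ v[:, 1]) * (v[:, 1].conj() @ hy @ v[:, 0])
        chern += -2.0 * num.imag / (e[1] - e[0]) ** 2

chern *= (2 * np.pi / nk) ** 2 / (2 * np.pi)  # (1/2pi) * integral of Omega_z
print(f"Chern number of the occupied band: {chern:+.3f}")  # |C| = 1 for |M| < 2
# In a 2D insulator sigma_xy = C e^2/h; in a metal such as KMn3Sb5 the same
# Omega_z is weighted by band occupations and integrated over the 3D zone.
```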
By correlating the Berry curvature with the electronic band structure calculated with SOC, we notice that there is a SOC-induced gapped node at the \(H\) point near the Fermi energy (see Fig. 2), suggesting the presence of potential gapless Weyl nodes in the vicinity of the \(H\) point. This explains the divergent Berry curvature at the Weyl nodes [58], which are the main source of the large AHC response in KMn\({}_{3}\)Sb\({}_{5}\). Similar behavior has also been reported in antiperovskite compounds such as Mn\({}_{3}\)NiN and V\({}_{3}\)AuN [8; 10]. The inset of Fig. 3(b) also displays the hexagonal Brillouin zone obtained for the \(P6/mmm\) space group, in which the high-symmetry points and relevant \(k\)-paths are marked. Fig. 3(c) shows the calculated Berry curvature \(\Omega_{z}(\mathbf{k})\) projected onto the Fermi surface of KMn\({}_{3}\)Sb\({}_{5}\). Here, the red and blue colors denote positive and negative \(\Omega_{z}(\mathbf{k})\) values, respectively. Multiple strong concentrations of Berry curvature can be observed throughout the entire Brillouin zone.

### Role of Electronic Correlations:

As shown in previous reports [59; 60; 61; 9], kagome lattices with magnetic cations, especially cations with a partially filled \(3d\) valence shell, exhibit strong electronic-correlation phenomena that lead to the emergence of exciting electronic properties. Therefore, to investigate the effects of the on-site Coulomb interaction on the Mn-\(3d\) electrons and its impact on the structural, phononic, and electronic properties, we employed DFT+\(U\) calculations, which capture electronic-correlation effects at the mean-field level [35]. This analysis is particularly important due to the lack of experimental reports on KMn\({}_{3}\)Sb\({}_{5}\), and it may help experimentalists in identifying suitable observables for future investigations. In Table 2, we present the evolution of the lattice parameters, the magnetic moment per Mn atom, and the anomalous Hall conductivity (\(\sigma_{xy}\)) as a function of the \(+U\) Coulomb correction applied to the Mn-\(3d\) electrons. Our results reveal an increase in the in-plane lattice parameters (\(a=b\)) and a decrease in the out-of-plane (\(c\)) lattice parameter with increasing \(U\). This demonstrates the dominant role of the Mn-Mn bonding and interactions within the kagome lattice. The observed expansion of the in-plane lattice parameters is in agreement with the previous observation of strong spin-lattice coupling related to the negative thermal expansion in the kagome antiperovskite Mn\({}_{3}\)NiN [9]. It is worth noting that, with increasing \(U\) value, some phonon modes (especially the in-plane vibrational modes) soften due to the increased in-plane lattice parameters. However, no unstable phonon modes are observed within the range of considered \(U\) values. As expected, the magnetic moment per Mn site increases due to the increased electron localization governed by the Coulomb term. Interestingly, the \(\sigma_{xy}\) component remains substantial at all the considered \(U\) values. Finally, we fully relaxed the KMn\({}_{3}\)Sb\({}_{5}\) FM (001) structure using the meta-GGA scan [62] and r\({}^{2}\)scan [63] functionals, aiming to obtain more precise values of the interesting observables.
We find that the magnetic moment is 3.022 \(\mu_{B}\cdot\)Mn\({}^{-1}\) and 3.229 \(\mu_{B}\cdot\)Mn\({}^{-1}\) for the scan and r\({}^{2}\)scan functionals, respectively. Moreover, the lattice parameters are \(a=5.435\) Å and \(c=9.153\) Å for the scan functional, whereas \(a=5.512\) Å and \(c=9.042\) Å for the r\({}^{2}\)scan functional. Using these meta-GGA values as reference observables, we suggest that a \(U\) value close to 2 eV could reasonably reproduce the structural and electronic properties of KMn\({}_{3}\)Sb\({}_{5}\). However, it is important to note that more systematic studies are needed to fully comprehend the electron-correlation effects in this kagome metal.

Figure 3: (Color online) (a) Anomalous Hall conductivity, \(\sigma_{xy}\) component, calculated (PBEsol) as a function of the chemical potential near the Fermi energy (\(E_{F}\)) for ferromagnetic (001) KMn\({}_{3}\)Sb\({}_{5}\) within the _P6/mm'm'_ magnetic space group (MSG. 191.240). (b) Berry curvature, \(\Omega_{x}(k)\), \(\Omega_{y}(k)\), and \(\Omega_{z}(k)\) components, calculated using DFT-PBEsol and integrated along the Brillouin zone \(k\)-path. In the inset, we display the hexagonal Brillouin zone for the SG. 191 space group, in which the high-symmetry points and relevant \(k\)-paths are marked. (c) \(\Omega_{z}(\mathbf{k})\) component of the Berry curvature projected onto the Fermi surface of ferromagnetic (001) KMn\({}_{3}\)Sb\({}_{5}\) using a color map. Blue and red colors denote the negative and positive components of \(\Omega_{z}(\mathbf{k})\), respectively.

## IV Conclusions

In this study, we performed comprehensive first-principles DFT calculations to investigate the properties of the novel Mn-based kagome metal KMn\({}_{3}\)Sb\({}_{5}\). Our results indicate that KMn\({}_{3}\)Sb\({}_{5}\) is both structurally and vibrationally stable, with no observed unstable phonon modes. This is in contrast to the KV\({}_{3}\)Sb\({}_{5}\) compound, in which phonon instability leads to the appearance of the CDW phase. A detailed investigation of the candidate magnetic configurations (noncollinear FM as well as AFM) reveals the (001) ferromagnetic ordering as the ground state in KMn\({}_{3}\)Sb\({}_{5}\), even though various other magnetically frustrated kagome metals favor the noncollinear chiral antiferromagnetic orderings \(\Gamma_{4g}\) and \(\Gamma_{5g}\), similar to the case of the Mn\({}_{3}\)NiN antiperovskite, due to the inherent magnetic frustration. The electronic structure shows multiple nodal crossings associated with Weyl nodes near the Fermi energy, indicating topologically nontrivial features in the studied material. Notably, the ferromagnetic order breaks time-reversal symmetry, resulting in a tangible anomalous Hall conductivity response. We find a value of \(\sigma_{xy}=314\) S\(\cdot\)cm\({}^{-1}\), which moderately varies with increasing on-site Coulomb parameter \(U\) in our PBEsol\(+U\) calculations. We find the predicted \(\sigma_{xy}\) in KMn\({}_{3}\)Sb\({}_{5}\) to be considerably large when compared to other similar kagome compounds. We suggest a \(U\) value in the range of 2-3 eV to reasonably reproduce the properties of the ferromagnetic kagome metal KMn\({}_{3}\)Sb\({}_{5}\). Our findings are expected to motivate experimentalists to pursue the synthesis, realization, and subsequent confirmation of the predicted properties of KMn\({}_{3}\)Sb\({}_{5}\).

###### Acknowledgements.
Calculations presented in this article were carried out using the GridUIS-2 experimental testbed and the Center for Integrated Research Computing (CIRC) facilities at the University of Rochester. The GridUIS-2 testbed was developed under the Universidad Industrial de Santander (SC3-UIS) High Performance and Scientific Computing Centre with support from UIS Vicerrectoria de Investigacion y Extension (VIE-UIS) and several UIS research groups. We also acknowledge the computational resources awarded by XSEDE, a project supported by National Science Foundation grant number ACI-1053575. The authors also acknowledge the support of the Texas Advanced Computing Center (with the Stampede2 and Bridges supercomputers).

## Appendix A Phonons under the \(+U\) correction.

## Appendix B Electronic bands under the \(+U\) correction.

## Appendix C Spin-polarized band structure.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline XC-type & \(a\) (Å) & \(c\) (Å) & \(m\) (\(\mu_{B}\)\(\cdot\)Mn\({}^{-1}\)) & \(\sigma_{xy}\) (S\(\cdot\)cm\({}^{-1}\)) \\ \hline PS+\(U\)=0.0 eV & 5.337 & 9.028 & 2.458 & 314 \\ PS+\(U\)=1.0 eV & 5.372 & 9.014 & 2.758 & 217 \\ PS+\(U\)=2.0 eV & 5.470 & 8.875 & 3.312 & 241 \\ PS+\(U\)=3.0 eV & 5.602 & 8.606 & 3.743 & 172 \\ \hline scan & 5.435 & 9.153 & 3.022 & — \\ r\({}^{2}\)scan & 5.512 & 9.042 & 3.229 & — \\ \hline \hline \end{tabular} \end{table} Table 2: Lattice parameters (\(a=b\) and \(c\)), magnetic moment, and \(\sigma_{xy}\) AHC component as a function of different \(U\) values in our PBEsol\(+U\) calculations. Values obtained using the meta-GGA scan and r\({}^{2}\)scan functionals are also listed. All structures were fully relaxed considering the ground-state (001) noncollinear ferromagnetic ordering.

Figure 4: (Color online) (a) Full phonon dispersion and (b) atom-projected phonon density of states obtained with a Coulomb \(+U\) value of 3.0 eV applied to the Mn-3\(d\) states of the KMn\({}_{3}\)Sb\({}_{5}\) kagome compound.
2310.03456
Multi-Resolution Audio-Visual Feature Fusion for Temporal Action Localization
Temporal Action Localization (TAL) aims to identify actions' start, end, and class labels in untrimmed videos. While recent advancements using transformer networks and Feature Pyramid Networks (FPN) have enhanced visual feature recognition in TAL tasks, less progress has been made in the integration of audio features into such frameworks. This paper introduces the Multi-Resolution Audio-Visual Feature Fusion (MRAV-FF), an innovative method to merge audio-visual data across different temporal resolutions. Central to our approach is a hierarchical gated cross-attention mechanism, which discerningly weighs the importance of audio information at diverse temporal scales. Such a technique not only refines the precision of regression boundaries but also bolsters classification confidence. Importantly, MRAV-FF is versatile, making it compatible with existing FPN TAL architectures and offering a significant enhancement in performance when audio data is available.
Edward Fish, Jon Weinbren, Andrew Gilbert
2023-10-05T10:54:33Z
http://arxiv.org/abs/2310.03456v1
# Multi-Resolution Audio-Visual Feature Fusion for Temporal Action Localization

###### Abstract

Temporal Action Localization (TAL) aims to identify actions' start, end, and class labels in untrimmed videos. While recent advancements using transformer networks and Feature Pyramid Networks (FPN) have enhanced visual feature recognition in TAL tasks, less progress has been made in the integration of audio features into such frameworks. This paper introduces Multi-Resolution Audio-Visual Feature Fusion (MRAV-FF), an innovative method to merge audio-visual data across different temporal resolutions. Central to our approach is a hierarchical gated cross-attention mechanism, which discerningly weighs the importance of audio information at diverse temporal scales. Such a technique not only refines the precision of regression boundaries but also bolsters classification confidence. Importantly, MRAV-FF is versatile, making it compatible with existing FPN TAL architectures and offering a significant enhancement in performance when audio data is available.

## 1 Introduction

Temporal Action Localization (TAL) is concerned with detecting the onset and offset of actions, and their class labels, in untrimmed and unconstrained videos. Recently, the combined use of transformer networks and Feature Pyramid Networks (FPN) [58; 63; 9; 55; 44] has led to a significant boost in the performance and efficiency of TAL tasks by leveraging multi-resolution visual features. However, there has not yet been a study on combining audio information in such network architectures for this task, specifically on how to fuse audio information over different temporal resolutions. The challenge lies in integrating audio and visual data and determining the density of audio information required across different FPN channels for different actions. While some channels might require richer audio input to accurately identify action segments due to higher visual downsampling, others with more detailed visual cues might need less audio assistance. For instance, as shown in Fig 1, an action such as 'chopping' can be better located using high-resolution (i.e. less downsampled) audio features. In contrast, an activity such as 'washing up' may only require some low-resolution audio information. A final example could be an action such as 'pick-up', which requires no audio input. With this in mind, a fusion method for audio TAL should accommodate multiple temporal audio resolutions while also including a mechanism to gate audio information in specific temporal pathways. This paper presents a novel framework for Multi-Resolution Audio-Visual Feature Fusion (MRAV-FF) as a first step to solving these issues. Our methodology is rooted in a hierarchical gated cross-attention fusion mechanism that adaptively combines audio and visual features over varying temporal scales. Unlike existing techniques, MRAV-FF weighs the significance of each modality's features at various temporal scales to improve the regression boundaries and classification confidence. Furthermore, our method can be easily plugged into any FPN TAL architecture to boost performance when audio information is available.

## 2 Related Work

**Temporal Action Localization** (TAL). Methods in TAL can be separated into single-stage and two-stage approaches, where two-stage methods generate a large number of proposal segments that are then passed to a classification head [14; 5; 19; 28; 17; 66; 27; 25; 8; 32].
Two-stage methods include the use of graph neural networks [4; 57; 62; 5] and, more recently, transformers [53; 7; 46]. Recent progress in single-stage TAL has shown improvements over two-stage methods in accuracy and efficiency, combining both action proposal and classification in a single forward pass. Works inspired by object detection [42; 31], saliency detection [26], and hierarchical CNNs [60; 26; 61] all combine proposal and classification. Current SOTA methods in TAL utilise transformer-based [51] feature pyramid networks (FPNs) [63; 9; 55; 44], which combine multi-resolution transformer features with classification and regression heads.

**Audio-Visual Fusion**. Audio-visual fusion via learned representations has been explored in several video retrieval and classification tasks [13; 1; 56; 54; 37; 24; 23]. Audio-visual TAL has been less explored, with most approaches focused on audio-visual events in which the audio and visual events are closely aligned [48; 3]. Concurrent works exploring audio-visual fusion in TAL have adopted two-stage late-fusion approaches. Recent works have also explored audio-visual cross-attention [41], but over a single temporal resolution and without any gated fusion control.

Figure 1: We use a Feature Pyramid Network (FPN) to encode audio-visual action features along different temporal resolutions. We then gate the fusion of the audio features depending on their application to the action classification and regression boundaries. For example, the action 'take' requires no audio, which is gated out. In contrast, the action 'chop' can be better localised by combining high-temporal-resolution audio features with visual features. Our method learns both the temporal resolution and the gating values end-to-end.

## 3 Method

**Problem Definition** Consider an untrimmed input video denoted as \(\mathcal{X}\). The goal is to represent \(\mathcal{X}\) as a set of feature vectors symbolized as \(\mathcal{X}=\{x_{1},x_{2},\ldots,x_{T}\}\). Each \(x_{t}\) corresponds to discrete time steps, \(t=\{1,2,\ldots,T\}\). Notably, the total duration \(T\) is not constant and may differ across videos. For illustrative purposes, \(x_{t}\) can be envisaged as a feature vector extracted from a 3D convolutional network at a specific time \(t\) within the video. The primary objective of TAL is to identify and label action instances present in the input video sequence \(\mathcal{X}\). These instances are collectively denoted as \(\mathcal{Y}=\{y_{1},y_{2},\ldots,y_{N}\}\), where \(N\) signifies the total number of action instances in a given video. This value can be variable across different videos. Each action instance, \(y_{i}\), is defined by the tuple \(y_{i}=(s_{i},e_{i},a_{i})\), where \(s_{i}\) represents the starting time or onset of the action instance, \(e_{i}\) denotes the ending time or offset of the action instance, and \(a_{i}\) specifies the action category or label. The parameters must adhere to the conditions: \(s_{i},e_{i}\in\{1,\dots,T\}\), \(a_{i}\in\{1,\dots,C\}\) (with \(C\) indicating the total number of predefined action categories), and \(s_{i}<e_{i}\), which ensures the starting time precedes the ending time for every action instance. Furthermore, alongside the visual feature set \(\mathcal{X}\), we introduce an audio feature set \(\mathcal{A}\). This set can be represented as \(\mathcal{A}=\{a_{1},a_{2},\dots,a_{T_{\text{audio}}}\}\), spanning up to \(T_{\text{audio}}\) time steps.
Notably, the total duration \(T_{\text{audio}}\) may or may not align with \(T\) from the visual features, depending on the extraction mechanism and granularity of the audio features. A significant challenge in TAL with multi-modal inputs is to devise an optimal method for fusing visual and audio features. This fusion aims to leverage complementary information from both modalities, enhancing the robustness and accuracy of action localization and classification.

**Method Overview** As depicted in Fig 2, our proposed method is structured around three core components. First, video and audio features are extracted from untrimmed videos using frozen, pre-trained encoders. These encoders provide a robust foundation for capturing the inherent characteristics of the media without additional training overhead. Post-extraction, these features are further refined via a shallow convolution layer. Subsequently, they are channelled into a feature pyramid network. This network's features experience iterative downsampling and are fused through our novel cross-attention mechanism. This mechanism ensures effective alignment and integration of features from diverse modalities and resolutions, facilitating the capture of complex temporal relationships. Finally, upon feature fusion, each temporal feature vector is processed by two dedicated decoders: one for regression, predicting action onsets and offsets, and the other for classification, identifying specific action class labels. This dual-decoder approach ensures accurate temporal localization and semantic identification of each detected action.

**Audio-Visual Temporal Fusion:** Given projected audio embeddings \(\mathcal{A}=\{a_{1},a_{2},\dots,a_{T_{\text{audio}}}\}\) and visual embeddings \(\mathcal{X}=\{x_{1},x_{2},\dots,x_{T}\}\) for each timestep, we can break down the process as follows:

**Downsampling:** For any feature set \(F\), the downsampled feature \(F^{\prime}\) is computed as:

\[F^{\prime}=\text{MaxPool}(F,\text{stride}=2) \tag{1}\]

Figure 2: A high-level representation of our multi-resolution audio-fusion method. (a) Audio and visual features are projected to a shared dimension via a 1D convolution. (b) Max-Pooling is applied to downsample features. (c) Following downsampling, we apply multi-headed cross attention in each temporal layer between audio and visual features. (d) The video features are then used as context to scale audio and visual attended embeddings. (e) The concatenated embedding is then used for both regression and classification.

**Multi-Headed Cross Attention:** The attention mechanism can be denoted for any feature \(f\) as:

\[\text{Attention}(f)=\text{Softmax}\left(fQ(fK)^{T}\right)V \tag{2}\]

where \(Q,K,\) and \(V\) are the learned query, key, and value matrices, respectively. Given the downsampled video feature \(x^{\prime}\) and audio feature \(a^{\prime}\), the cross-modal projections with the video as query and with the audio as query are defined as:

\[P_{x}=x^{\prime}Q_{x}(a^{\prime}K_{a})^{T}V_{a}\quad\text{and}\quad P_{a}=a^{\prime}Q_{a}(x^{\prime}K_{x})^{T}V_{x} \tag{3}\]

where \(Q_{x},K_{x},V_{x}\) and \(Q_{a},K_{a},V_{a}\) are the respective learned matrices for the video and audio modalities.
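A minimal PyTorch sketch of Eqs. (1)-(3) might look as follows; the module name, the shared 512-D dimension, and the use of `nn.MultiheadAttention` are our illustrative choices rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Eq. (1): temporal max-pool downsampling between pyramid levels.
downsample = nn.MaxPool1d(kernel_size=2, stride=2)

class CrossModalAttention(nn.Module):
    """Bidirectional cross-attention of Eq. (3): video tokens query the audio
    stream (giving P_x) and audio tokens query the video stream (giving P_a)."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.v2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.a2v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, a: torch.Tensor):
        # x: (B, T, dim) visual tokens x'; a: (B, T_audio, dim) audio tokens a'.
        p_x, _ = self.v2a(query=x, key=a, value=a)  # P_x of Eq. (3)
        p_a, _ = self.a2v(query=a, key=x, value=x)  # P_a of Eq. (3)
        return p_x, p_a

# One pyramid level: halve the temporal axis, then cross-attend.
x = downsample(torch.randn(2, 512, 128)).transpose(1, 2)  # (2, 64, 512)
a = downsample(torch.randn(2, 512, 128)).transpose(1, 2)  # (2, 64, 512)
p_x, p_a = CrossModalAttention()(x, a)
```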
**Gated Audio-Visual Fusion:** To further refine our fusion process, we introduce a gating mechanism which adaptively scales the contribution of audio and visual features based on the context of the visual content. For each downsampled visual feature \(x^{\prime}\), we compute a gating scalar \(g\) using a sigmoid function:

\[g=\sigma(\text{FC}(x^{\prime})) \tag{4}\]

where \(\sigma\) denotes the sigmoid activation function, ensuring \(g\) is in the range \([0,1]\), and FC is a fully connected layer. Using the gating scalar, the cross-modal projections are adjusted as follows:

\[P_{x,\text{gated}}=g\cdot P_{x},\qquad P_{a,\text{gated}}=(1-g)\cdot P_{a} \tag{5}\]

The combined feature representation after the gated cross-modal projection is then:

\[F_{\text{gated\_combined}}=\text{Conv1D}([P_{x,\text{gated}};P_{a,\text{gated}}]) \tag{6}\]
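Continuing the sketch above, Eqs. (4)-(6) can be written as a small gating module; again the names and dimensions are ours, and we assume the audio and visual streams share the same temporal length at each pyramid level so that the gate broadcasts cleanly.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Eqs. (4)-(6): a gate g computed from the visual context rescales the
    two cross-modal projections before a 1x1 Conv1D fuses their concatenation."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(dim, 1)                         # FC of Eq. (4)
        self.conv = nn.Conv1d(2 * dim, dim, kernel_size=1)  # Conv1D of Eq. (6)

    def forward(self, x, p_x, p_a):
        # x: (B, T, dim) visual features x'; p_x, p_a: projections of Eq. (3).
        g = torch.sigmoid(self.fc(x))                       # Eq. (4), (B, T, 1)
        cat = torch.cat([g * p_x, (1 - g) * p_a], dim=-1)   # Eq. (5)
        return self.conv(cat.transpose(1, 2)).transpose(1, 2)  # Eq. (6)

# Shapes follow the cross-attention sketch above: (batch, time, channels).
x, p_x, p_a = (torch.randn(2, 64, 512) for _ in range(3))
fused = GatedFusion()(x, p_x, p_a)  # (2, 64, 512)
```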
**Regression and Classification:** Each temporal layer outputs gated features to the classification head and the regression head for action instance detection. The output at each instant \(t\) in feature pyramid layer \(l\) is denoted as \(\hat{o}_{t}^{l}=(\hat{c}_{t}^{l},\tilde{d}_{st}^{l},\tilde{d}_{et}^{l})\). We use the same loss as described in [49; 64; 63]:

\[\mathcal{L}=\frac{1}{N_{pos}}\sum_{l,t}\mathbbm{1}_{\{c_{t}^{l}>0\}}(\sigma_{IoU}\mathcal{L}_{cls}+\mathcal{L}_{reg})+\frac{1}{N_{neg}}\sum_{l,t}\mathbbm{1}_{\{c_{t}^{l}=0\}}\mathcal{L}_{cls} \tag{7}\]

where \(\sigma_{IoU}\) is the temporal IoU between the predicted segment and the ground truth action instance, and \(\mathcal{L}_{cls}\), \(\mathcal{L}_{reg}\) are the focal loss [29] and IoU loss [43], respectively. \(N_{pos}\) and \(N_{neg}\) denote the number of positive and negative samples. The term \(\sigma_{IoU}\) is used to re-weight the classification loss at each instant, such that instants with better regression (i.e. of higher quality) contribute more to the training.

## 4 Evaluation

### Dataset

**EPIC-Kitchens 100 [10]** is an egocentric dataset containing two tasks: noun localization (e.g. door) and verb localization (e.g. open the door). It has 495 and 138 videos, with 67,217 and 9,668 action instances for training and inference, respectively. The numbers of action classes for noun and verb are 300 and 97. We follow all other methods [27; 63; 9; 62; 47] and report the mean average precision (mAP) at different intersection over union (tIoU) thresholds, with the average mAP computed over [0.1:0.5:0.1], in Table 1. We show the effectiveness of our audio-fusion method in increasing the performance of unimodal models by adding MRAV-FF to the best-performing existing FPN networks. Our method improves the average mAP of ActionFormer and TemporalMaxer by +1.1 and +0.6 for verbs, and by +0.9 and +0.7 for nouns.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Task} & \multirow{2}{*}{Method} & \multicolumn{6}{c}{tIoU} \\ \cline{3-8} & & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & Avg \\ \hline \multirow{6}{*}{Verb} & BMN [27; 11] & 10.8 & 9.8 & 8.4 & 7.1 & 5.6 & 8.4 \\ & G-TAD [57] & 12.1 & 11.0 & 9.4 & 8.1 & 6.5 & 9.4 \\ & ActionFormer [63] & 26.6 & 25.4 & 24.2 & 22.3 & 19.1 & 23.5 \\ & TemporalMaxer [47] & 27.8 & 26.6 & 25.3 & 23.1 & 19.9 & 24.5 \\ \cline{2-8} & ActionFormer + MRAV-FF & 27.6 & 26.8 & 25.3 & 23.4 & 19.8 & 24.6 \\ & TemporalMaxer + MRAV-FF & **28.5** & **27.4** & **26.0** & **23.7** & **20.12** & **25.1** \\ \hline \multirow{6}{*}{Noun} & BMN [27; 11] & 10.3 & 8.3 & 6.2 & 4.5 & 3.4 & 6.5 \\ & G-TAD [57] & 11.0 & 10.0 & 8.6 & 7.0 & 5.4 & 8.4 \\ & ActionFormer [63] & 25.2 & 24.1 & 22.7 & 20.5 & 17.0 & 21.9 \\ & TemporalMaxer [47] & 26.3 & 25.2 & 23.5 & 21.3 & 17.6 & 22.8 \\ \cline{2-8} & ActionFormer + MRAV-FF & 26.4 & 25.4 & 23.6 & 21.2 & 17.4 & 22.8 \\ & TemporalMaxer + MRAV-FF & **27.4** & **26.2** & **24.4** & **21.8** & **17.9** & **23.5** \\ \hline \hline \end{tabular} \end{table} Table 1: The performance of our proposed method on the EPIC-Kitchens 100 dataset [11].

### Ablation Results

We perform initial ablation experiments to evaluate the performance of our proposed method and present the results in Tab 2. Each experiment is conducted on EPIC-Kitchens, where we edit the temporal fusion method in each temporal block. We first exchange our MRAV-FF temporal block for simple feature fusion, in which we concatenate and project the audio-visual features at each temporal scale via a 1D-CNN. We notice that this actually harms network performance relative to unimodal features, demonstrating the need for a gated approach to fusion. Similarly, we replace the block with a max-pooling layer inspired by [47], which pools channel-wise for feature fusion. Again, this method has a negative impact on network performance.

### Further Results

Furthermore, in Tab 3 we evaluate our method against other approaches to audio-visual fusion for TAL on EPIC-Kitchens. We show a large increase in performance, which can be attributed jointly to the effectiveness of the FPN structure for audio-visual temporal pooling and to our MRAV-FF fusion module. The lack of available comparative methods for audio-visual fusion further illustrates the importance of updated baselines in this field. Finally, we also evaluate the method on the THUMOS14 dataset [21], which contains 200 validation videos and 213 testing videos with 20 action classes. THUMOS14 presents a different challenge from egocentric audio-visual fusion, since the videos are heavily edited and contain many actions that do not have audio-visual alignment. For example, many videos are of sporting events with no localized audio information, and clips may contain music or narration, or have no audio at all. Due to these challenges, there are, to our knowledge, no existing TAL audio-visual fusion works that test their methods on THUMOS14. Following previous work [27; 28; 57; 66; 63], we trained the model on the validation set and evaluated on the test set. Our results in Tab 4 demonstrate that our method struggles to handle this audio-visual disparity, improving only at the 0.7 tIoU threshold.

## 5 Implementation

### Feature Extraction

**Visual Features:** We use the features provided by existing works in TAL [63; 27; 57]. For EPIC-Kitchens, features are extracted using a SlowFast network [15] pre-trained on EPIC-Kitchens [11]. During extraction we use a 32-frame input sequence with a stride of 16 to generate a set of 2304-D features.

**Audio Features:** For the audio preprocessing and feature extraction, we followed a series of well-established steps to derive meaningful representations (a code sketch follows the list below):

1. **Resampling:** All audio data was resampled to a uniform rate of 16 kHz in mono.
2. **Spectrogram Computation:** We computed the spectrogram by extracting magnitudes from the Short-Time Fourier Transform (STFT). This utilized a window size of 25 ms, a hop size of 10 ms, and a periodic Hann window for the analysis.
3. **Mel Spectrogram Mapping:** The computed spectrogram was then mapped to a mel scale, producing a mel spectrogram with 64 mel bins that cover the frequency range from 125 Hz to 7500 Hz.
4. **Log Mel Spectrogram Stabilization:** To enhance the stability and avoid issues with the logarithm function, we calculated a stabilized log mel spectrogram as: \[\text{Log-Mel}=\log(\text{Mel-Spectrogram}+0.01)\] Here, the offset of 0.01 prevents the computation of the logarithm of zero.
5. **Framing:** Finally, the derived features were segmented into non-overlapping examples spanning 0.96 seconds each. Every example encapsulates 64 mel bands and 96 time frames, with each frame lasting 10 ms.
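A hedged re-implementation of these five steps with off-the-shelf tooling is given below. The original VGGish front end uses its own STFT code, so the librosa parameters chosen here to mirror the listed values (400-sample window and 160-sample hop at 16 kHz) may differ numerically at the margins; the file name is a placeholder.

```python
import numpy as np
import librosa

# Step 1: resample to 16 kHz mono.
wav, sr = librosa.load("clip.wav", sr=16000, mono=True)

# Steps 2-3: magnitude STFT (25 ms window, 10 ms hop, Hann window) mapped to
# 64 mel bins spanning 125-7500 Hz.
mel = librosa.feature.melspectrogram(
    y=wav, sr=sr, n_fft=400, win_length=400, hop_length=160,
    window="hann", n_mels=64, fmin=125, fmax=7500,
    power=1.0,  # magnitude (not power) spectrogram
)

# Step 4: stabilized log mel spectrogram.
log_mel = np.log(mel + 0.01)

# Step 5: non-overlapping 0.96 s examples of 96 frames x 64 mel bands.
n = log_mel.shape[1] // 96
examples = log_mel[:, : n * 96].T.reshape(n, 96, 64)
print(examples.shape)  # (n_examples, 96, 64)
```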
Following extraction, the features are projected to 128-D embeddings via a VGG audio encoder network [20] pretrained on AudioSet [16]. The network outputs embeddings of shape \(T\times 128\), where \(T\) is the temporal input dimension as defined in Section 3.

## 6 Conclusion

We demonstrate an effective method for audio-visual fusion with Feature Pyramid Networks. Our drop-in method can be applied to any FPN architecture for temporal action localization and serves as a competitive benchmark for continued research in audio-visual fusion.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Task} & \multirow{2}{*}{Method} & \multicolumn{6}{c}{tIoU} \\ \cline{3-8} & & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & Avg \\ \hline \multirow{3}{*}{Verb} & Concatenation & 28.02 & 26.96 & 25.5 & 23.48 & 19.87 & 23.89 \\ & Channel Pooling & 25.63 & 24.59 & 23.09 & 21.14 & 17.95 & 23.06 \\ & MRAV-FF & **28.5** & **27.4** & **26.0** & **23.7** & **20.12** & **25.1** \\ \hline \multirow{3}{*}{Noun} & Concatenation & 26.39 & 25.42 & 23.57 & 21.19 & 17.42 & 22.8 \\ & Channel Pooling & 25.7 & 24.53 & 22.95 & 20.52 & 17.04 & 22.21 \\ & MRAV-FF & **27.4** & **26.2** & **24.4** & **21.8** & **17.9** & **23.5** \\ \hline \hline \end{tabular} \end{table} Table 2: Results for an ablation experiment on the EPIC-Kitchens 100 [11] TAL task, where we replace the MRAV-FF module with existing approaches to feature fusion, including concatenated projection and channel pooling. We observe that simple fusion methods hinder performance when compared with unimodal FPN networks, demonstrating the need for a more nuanced fusion strategy.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Task} & \multirow{2}{*}{Method} & \multicolumn{6}{c}{tIoU} \\ \cline{3-8} & & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & Avg \\ \hline \multirow{4}{*}{Verb} & Damen [12] & 10.83 & 9.84 & 8.43 & 7.11 & 5.58 & 8.36 \\ & AGT [38] & 12.01 & 10.25 & 8.15 & 7.12 & 6.14 & 8.73 \\ & OWL [41] & 14.48 & 13.05 & 11.82 & 10.25 & 8.73 & 11.67 \\ & MRAV-FF & **28.5** & **27.4** & **26.0** & **23.7** & **20.12** & **25.1** \\ \hline \multirow{4}{*}{Noun} & Damen [12] & 10.31 & 8.33 & 6.17 & 4.47 & 3.35 & 6.53 \\ & AGT [38] & 11.63 & 9.33 & 7.05 & 6.57 & 3.89 & 7.70 \\ & OWL [41] & 17.94 & 15.81 & 14.14 & 12.13 & 9.80 & 13.96 \\ & MRAV-FF & **27.4** & **26.2** & **24.4** & **21.8** & **17.9** & **23.5** \\ \hline \hline \end{tabular} \end{table} Table 3: The performance of our proposed method on the EPIC-Kitchens 100 dataset [11] compared to existing approaches for audio-visual feature fusion in TAL. Our method demonstrates a large increase in performance, jointly attributed to the feature pyramid architecture and our fusion strategy.
2302.03647
Diameters of the Characteristic Imset Polytopes
It has been shown that the edge structure of the characteristic imset polytope is closely connected to the question of causal discovery. The diameter of a polytope is an indicator of how connected the polytope is and moreover gives us a hypothetical worst case scenario for an edge-walk over the polytope. We present low-degree polynomial bounds on the diameter of $\operatorname{CIM}_n$ and, for any given undirected graph $G$, the face $\operatorname{CIM}_G$.
Petter Restadh
2023-02-07T17:56:29Z
http://arxiv.org/abs/2302.03647v2
# Diameters of the characteristic imset polytopes

###### Abstract

It has been shown that the edge structure of the characteristic imset polytope is closely connected to the question of causal discovery. The diameter of a polytope is an indicator of how connected the polytope is and moreover gives us a hypothetical worst-case scenario for an edge-walk over the polytope. We present low-degree polynomial bounds on the diameter of \(\operatorname{CIM}_{n}\) and, for any given undirected graph \(G\), the face \(\operatorname{CIM}_{G}\). Key words and phrases: Characteristic Imset Polytope, Edge-walk, Graphical Models, Polytope Diameter 2020 Mathematics Subject Classification: 52B05, 52B12, 62H22

## 1. Introduction

Several algorithms within causal discovery were recently discovered to be edge-walks along convex polytopes. A natural question becomes how efficient such an edge-walk can be. To this end we study the diameters of these polytopes. Let \([n]\coloneqq\{1,\ldots,n\}\) and \(\mathcal{G}=([n],E)\) be a directed acyclic graph (DAG). The characteristic imset of \(\mathcal{G}\), \(c_{\mathcal{G}}\), is a \(0/1\)-vector, indexed by subsets of \([n]\), that in coordinate \(S\in\{S\subseteq[n],|S|\geq 2\}\) assumes the value

\[c_{\mathcal{G}}(S)\coloneqq\begin{cases}1&\text{if there exists $i\in S$ such that $S\subseteq\operatorname{pa}_{\mathcal{G}}(i)\cup\{i\}$,}\\ 0&\text{otherwise.}\end{cases}\]

Then we define the _characteristic imset polytope_ as

\[\operatorname{CIM}_{n}\coloneqq\operatorname{conv}\left(c_{\mathcal{G}}\colon\mathcal{G}=([n],E)\text{ a DAG}\right).\]

The polytope \(\operatorname{CIM}_{n}\) is a full-dimensional (\(\dim\operatorname{CIM}_{n}=|\{S\subseteq[n],|S|\geq 2\}|=2^{n}-n-1\)) polytope whose vertices are precisely the characteristic imsets of DAGs. The mapping \(\mathcal{G}\mapsto c_{\mathcal{G}}\) is not injective; we do however have a clear graphical understanding of when \(c_{\mathcal{G}}=c_{\mathcal{H}}\) (see Theorem 1.4). It has been shown that \(\operatorname{CIM}_{n}\) has many facets, at least one for each connected matroid on \([n]\) [17], but a complete facet description is only available for \(n\leq 4\). We are especially interested in explaining the geometry, such as the edges or facets, of \(\operatorname{CIM}_{n}\) in terms of the DAGs. The motivation for these questions comes from the area of causal discovery, where a well-studied question regards finding algorithms for inferring a DAG from data [4, 15, 21]. To do this we interpret \(i\to j\) in \(\mathcal{G}\) to mean that \(i\) is a direct cause of \(j\). Studeny, Hemmecke, and Lindner transformed this question into a linear program over \(\operatorname{CIM}_{n}\) [16, 18]. The authors of [9] showed that the edge structure of \(\operatorname{CIM}_{n}\) is of particular interest. In their paper we are given a geometric interpretation of several greedy algorithms as edge-walks over \(\operatorname{CIM}_{n}\) and its faces. In particular, for any undirected graph \(G\), the face [9]

\[\operatorname{CIM}_{G}\coloneqq\operatorname{conv}\left(c_{\mathcal{G}}\colon\mathcal{G}=([n],E)\text{ a DAG with skeleton }G\right),\]

(of \(\operatorname{CIM}_{n}\)) is studied. A complete characterisation of the edges of \(\operatorname{CIM}_{G}\) when \(G\) is a tree or a cycle was recently discovered [8].
For general \(G\), less is known, and the only edges with a clear interpretation are the ones given in [9, Proposition 3.2], namely that reversing a single edge of a DAG, when this does not create a directed cycle, gives us an edge of \(\operatorname{CIM}_{G}\) (see Theorem 2.1). Given any polytope \(P\), the vertex-edge graph of \(P\), \(G(P)\), is the graph with nodes corresponding to the vertices of \(P\) and an edge \(v-u\in G(P)\) if and only if \(\operatorname{conv}(v,u)\) is an edge of \(P\). We define the distance in \(G(P)\) between \(v\) and \(u\) as the length of the shortest path between \(v\) and \(u\) in \(G(P)\), and the diameter of \(P\), \(\operatorname{diam}(P)\), as the maximal distance between any two vertices in \(G(P)\). Polytope diameters have been studied extensively and show up in several different contexts [10, 13, 22]. The original motivation was that the diameter of a polytope is a lower bound on the number of steps a simplex-type algorithm must take. It also provides an indication of whether the graph \(G(P)\) is sparse or more densely connected. Therefore, to better understand the polyhedral aspects of causal discovery, knowledge of the diameter of \(\operatorname{CIM}_{n}\) and its faces is desired. In this paper we will establish low-degree polynomial bounds on the diameters of the above-mentioned polytopes. In particular, in Section 2 we see that for a general undirected graph \(G=([n],E)\) we have \(\operatorname{diam}\operatorname{CIM}_{G}\leq|E|\). If \(G\) is a tree we can, via the work of [8], improve this bound to \(\operatorname{diam}\operatorname{CIM}_{G}\leq n-2\) and give a lower bound in terms of the maximal path length of \(G\). This is done in Section 2.1. Finally, in Section 2.2 we show, using a new type of edge, that \(\operatorname{diam}\operatorname{CIM}_{n}\leq 2n-2\). Given this linear upper bound, we conjecture that we in fact have linear upper bounds (in \(n\)) on \(\operatorname{diam}\operatorname{CIM}_{G}\) for any \(G\).

### Background

Let \(\mathcal{G}\) be a DAG. If we have \(i\to j\) in \(\mathcal{G}\) we say that \(i\) is a _parent_ of \(j\), or that \(j\) is a _child_ of \(i\). The sets of all parents and children of a node \(i\) are denoted by \(\operatorname{pa}_{\mathcal{G}}(i)\) and \(\operatorname{ch}_{\mathcal{G}}(i)\), respectively. For any joint distribution \(P\) over \(X_{1},\ldots,X_{n}\) we say that \(P\) is _Markov_ to \(\mathcal{G}\) if \(P\) entails the conditional independence statements

\[X_{i}\perp X_{\operatorname{nd}_{\mathcal{G}}(i)\setminus\operatorname{pa}_{\mathcal{G}}(i)}|X_{\operatorname{pa}_{\mathcal{G}}(i)}\]

for all \(i\in[n]\), where \(X_{S}\) denotes the set \(\{X_{i}\}_{i\in S}\) for any \(S\subseteq[n]\). Here \(\operatorname{nd}_{\mathcal{G}}(i)\) denotes the non-descendants of \(i\) in \(\mathcal{G}\), that is, all vertices \(j\) such that there does not exist a directed path \(j\to\cdots\to i\). Informally this should be interpreted as "the only direct causes of \(i\) are the parents of \(i\)". It can happen that two DAGs encode equivalent conditional independence statements (see Example 1.1). If this is the case we call them _Markov equivalent_, or say that they belong to the same _Markov equivalence class (MEC)_.

**Example 1.1**.: In Fig. 1 we have 3 examples of DAGs and the conditional independence statements encoded by them. Note that two of them encode exactly the same conditional independence statements.
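The map \(\mathcal{G}\mapsto c_{\mathcal{G}}\) is straightforward to compute directly from the definition. The sketch below does so for a chain, a fork, and a collider on three nodes — our own stand-ins for the kind of DAGs displayed in Fig. 1 — and confirms that the first two, which encode the same conditional independence statements, share a characteristic imset, while the collider does not.

```python
from itertools import combinations

def char_imset(pa, n):
    """Characteristic imset c_G(S) for all S with |S| >= 2, straight from the
    definition: c_G(S) = 1 iff some i in S has S contained in pa(i) union {i}."""
    return {
        S: int(any(set(S) <= pa[i] | {i} for i in S))
        for k in range(2, n + 1)
        for S in combinations(range(1, n + 1), k)
    }

# Parent sets of three DAGs on {1, 2, 3}.
chain    = {1: set(),  2: {1},    3: {2}}    # 1 -> 2 -> 3
fork     = {1: {2},    2: set(),  3: {2}}    # 1 <- 2 -> 3
collider = {1: set(),  2: {1, 3}, 3: set()}  # 1 -> 2 <- 3  (a v-structure)

print(char_imset(chain, 3) == char_imset(fork, 3))      # True: Markov equivalent
print(char_imset(chain, 3) == char_imset(collider, 3))  # False: c({1,2,3}) differs
```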
An induced subgraph of \(\mathcal{G}\) such that \(\mathcal{G}|_{\{i,j,k\}}=i\to j\gets k\) is called a _v-structure_. The undirected graph \(G\) that shares the same vertices and adjacencies as \(\mathcal{G}\) is known as the _skeleton_ of \(\mathcal{G}\). We also let \(\operatorname{ne}_{\mathcal{G}}(i)=\operatorname{pa}_{\mathcal{G}}(i)\cup\operatorname{ch}_{\mathcal{G}}(i)\); equivalently, \(i\) is a neighbour of \(j\) in a directed graph if they are neighbours in the skeleton. The _closure_ of \(i\), denoted \(\operatorname{cl}_{\mathcal{G}}(i)\coloneqq\operatorname{ne}_{\mathcal{G}}(i)\cup\{i\}\), is the set consisting of all neighbours of \(i\) together with \(i\). The following classical result by Verma and Pearl gives us a graphical interpretation of Markov equivalence.

**Theorem 1.2**.: _[_20_]_ _Two DAGs are Markov equivalent if and only if they have the same skeleton and the same v-structures._

The previously mentioned work by Studeny [16] outlines how to encode a model characterized via conditional independence statements, for example DAG models, via integer vectors. One key idea is that a maximum likelihood estimation over all models is equivalent to maximizing a linear function over the set of vectors. Developing this idea, Studeny, Hemmecke, and Lindner introduced the characteristic imset: a vector encoding from which the MEC of \(\mathcal{G}\) can easily be recovered. Indeed, it is direct from the definition of the characteristic imset that the following holds:

**Lemma 1.3**.: _[_18_]_ _Let \(\mathcal{G}\) be a DAG with nodes \([n]\). Then for any distinct nodes \(i\), \(j\), and \(k\) we have_

1. \(i\gets j\) _or_ \(i\to j\) _in_ \(\mathcal{G}\) _if and only if_ \(c_{\mathcal{G}}(\{i,j\})=1\)_._
2. \(i\to j\gets k\) _is a v-structure in_ \(\mathcal{G}\) _if and only if_ \(c_{\mathcal{G}}(\{i,j,k\})=1\) _and_ \(c_{\mathcal{G}}(\{i,k\})=0\)_._

That is, the characteristic imset encodes the skeleton and the v-structures of a DAG. Therefore it is easy to recover the MEC of \(\mathcal{G}\) from \(c_{\mathcal{G}}\). Moreover, Studeny, Hemmecke, and Lindner showed that the characteristic imset is in fact a unique representation of the MEC.

**Theorem 1.4**.: _[_18_]_ _Two DAGs \(\mathcal{G}\) and \(\mathcal{H}\) are Markov equivalent if and only if \(c_{\mathcal{G}}=c_{\mathcal{H}}\)._

It is then direct from Lemma 1.3 that the characteristic imset is determined by its values on the sets of size \(2\) and \(3\).

**Corollary 1.5**.: _[_7_, Corollary 2.2.6]_ _Two characteristic imsets \(c_{\mathcal{G}}\) and \(c_{\mathcal{H}}\) are equal if and only if \(c_{\mathcal{G}}(S)=c_{\mathcal{H}}(S)\) for all sets \(S\) such that \(|S|\in\{2,3\}\)._

An edge \(i\to j\in\mathcal{G}\) is _essential_ if we have \(i\to j\in\mathcal{D}\) for all \(\mathcal{D}\) in the MEC of \(\mathcal{G}\). In this case \(i\) is an _essential parent_ of \(j\), and \(j\) is an _essential child_ of \(i\). Due to the work of Andersson, Madigan, and Pearl [2], the graphical properties of MECs are well understood. For a more thorough background on the statistical side of graphical models we refer to [6, 11, 14].

Figure 1. An example of \(3\) DAGs and the conditional independence statements encoded by them.

A central question within causal discovery is inferring a MEC from data. That is, given i.i.d. samples \(\mathbf{D}\) from the joint distribution \(P\) over \(X_{1},\ldots,X_{n}\), find the MEC that best encodes the observed conditional independence statements in \(\mathbf{D}\).
This inference task has often been interpreted as finding the DAG (or MEC of DAGs) maximising \(\operatorname{BIC}(\mathcal{G},\mathbf{D})\), where BIC denotes the _Bayesian information criterion_ [4, 8, 9, 19]. Importantly, the BIC is _score equivalent_ and _decomposable_ [4]; that is, it is a linear function over \(\operatorname{CIM}_{n}\). Therefore, recovering the BIC-optimal MEC from data can be phrased as a linear program [16, 18]. Some of the best performing algorithms (GES [4], MMHC [19], and Greedy CIM [9]) were recently shown to be restricted edge-walks over \(\operatorname{CIM}_{n}\) and its faces [9] (including \(\operatorname{CIM}_{G}\)). Computational data on \(\operatorname{CIM}_{4}\) does however suggest that these edge-walks utilise very few of all possible edges. This raises further questions about how connected \(G(\operatorname{CIM}_{n})\) is and, with that, how feasible these methods are.

## 2. Diameter of CIM polytopes

The diameter of the polytope gives us an upper bound on the number of steps any edge-walk needs to perform to get from one vertex to another, assuming we are walking optimally. In [9], it was shown that reversing an edge of a DAG \(\mathcal{G}\) either gives a Markov equivalent graph or an edge of \(\operatorname{CIM}_{G}\). For any DAG \(\mathcal{G}\) with \(i\to j\in\mathcal{G}\) we define \(\mathcal{G}_{i\gets j}\) to be the directed graph identical to \(\mathcal{G}\) but with the edge \(i\to j\) reversed.

**Theorem 2.1**.: _[_9_]_ _Let \(\mathcal{G}\) be a DAG with skeleton \(G\) and \(i\to j\in\mathcal{G}\). If \(\mathcal{G}_{i\gets j}\) is a DAG not Markov equivalent to \(\mathcal{G}\), then \(\operatorname{conv}(c_{\mathcal{G}},c_{\mathcal{G}_{i\gets j}})\) is an edge of \(\operatorname{CIM}_{G}\)._

Using this we can show an upper bound on the diameter of \(\operatorname{CIM}_{G}\).

**Proposition 2.2**.: _Let \(G=([n],E)\) be an undirected graph. Then \(\operatorname{diam}(\operatorname{CIM}_{G})\leq|E|\)._

Here we will use the same argument as in [9, Proposition 3.5].

Proof.: Let \(\mathcal{G}\) and \(\mathcal{H}\) be two DAGs with skeleton \(G\). We claim that we can always reverse at least one edge \(i\to j\) in \(\mathcal{G}\), such that \(i\gets j\) is in \(\mathcal{H}\), and reach another DAG. As \(\mathcal{G}\) and \(\mathcal{H}\) share the same skeleton, and can thus differ on at most \(|E|\) edges, there then exists a sequence of at most \(|E|\) reversals such that after each reversal we have a new DAG, and in the end we reach \(\mathcal{H}\). From Theorem 2.1 we have that each of these moves either corresponds to an edge of \(\operatorname{CIM}_{G}\) or stays at the same vertex of \(\operatorname{CIM}_{G}\), and hence the result follows.

To this end we let \(\mathcal{C}\) be the set of edges that differ between \(\mathcal{G}\) and \(\mathcal{H}\). We impose a partial order on \(\mathcal{C}\) as \(i^{\prime}\to j^{\prime}\preceq i\to j\) if and only if \(j^{\prime}\in\operatorname{de}_{\mathcal{G}}(j)\) or, if \(j^{\prime}=j\), \(i\in\operatorname{de}_{\mathcal{G}}(i^{\prime})\). That is, we sort the children according to \(\mathcal{G}\) and the parents in reverse. Let \(i\to j\) be a maximal edge in \(\mathcal{C}\). We claim that reversing \(i\to j\) does not create a cycle. For the sake of contradiction, assume it does. That is, we have another path \(i\to\cdots\to j\) in \(\mathcal{G}\). As every edge in this path is greater than \(i\to j\) according to our partial order, all of these edges must also be present in \(\mathcal{H}\). However, \(i\gets j\) is in \(\mathcal{H}\), contradicting that \(\mathcal{H}\) is a DAG. The result follows.
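The proof of Proposition 2.2 is constructive, and the resulting walk is easy to simulate. Below is a small sketch under the same edge-set encoding as before; the greedy loop always succeeds because, by the argument above, some differing edge can be reversed without creating a cycle.

```python
def is_dag(edges):
    """Check acyclicity of a directed graph by Kahn's algorithm."""
    nodes = {v for e in edges for v in e}
    indeg = {v: 0 for v in nodes}
    for _, j in edges:
        indeg[j] += 1
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for i, j in edges:
            if i == v:
                indeg[j] -= 1
                if indeg[j] == 0:
                    queue.append(j)
    return seen == len(nodes)

def reversal_walk(g, h):
    """Walk from DAG g to DAG h (same skeleton), reversing one
    differing edge at a time while staying acyclic (Proposition 2.2)."""
    g, h = set(g), set(h)
    path = [frozenset(g)]
    while g != h:
        for i, j in g - h:
            candidate = (g - {(i, j)}) | {(j, i)}
            if is_dag(candidate):
                g = candidate
                path.append(frozenset(g))
                break
        else:
            raise RuntimeError("no acyclic single reversal found")
    return path

walk = reversal_walk({(1, 2), (2, 3)}, {(2, 1), (3, 2)})
print(len(walk) - 1)  # number of reversals, at most |E| = 2
```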
Computational data on random graphs (\(n\leq 9\)) does however suggest that the above proof utilises very few of the edges of \(\operatorname{CIM}_{G}\). In [8], all edges of \(\operatorname{CIM}_{G}\) were determined for the case when \(G\) is a tree. Hence the upper bound \(\operatorname{diam}\operatorname{CIM}_{G}\leq|E|\) can be improved, at least when \(G\) is a tree.

### Trees

As all edges of \(\operatorname{CIM}_{G}\) are known when \(G\) is a tree, we will begin with this case. First, however, let us recall the results of [8]. Let \(\mathcal{G}\) and \(\mathcal{H}\) be two essential graphs with skeleton \(G\) and assume that \(G\) is a tree. Denote \(N_{i}=\{S\cup\{i\}\colon S\subseteq\operatorname{ne}_{G}(i)\text{ and }|S|\geq 2\}\) and define

\[\Delta(\mathcal{G},\mathcal{H})\coloneqq\{i\in[n]\colon c_{\mathcal{G}}|_{N_{ i}}\neq c_{\mathcal{H}}|_{N_{i}}\}.\]

Equivalently, \(\Delta(\mathcal{G},\mathcal{H})\) is the set of all vertices \(i\) such that \(\mathcal{G}\) has a v-structure at \(i\) that \(\mathcal{H}\) does not have, or vice versa [8]. For any tree \(G=([n],E)\) and subset \(S\subseteq[n]\) we denote with \(\operatorname{span}(S)\) the vertices of the unique spanning tree of \(S\) in \(G\). Moreover, we say that \(i\in G\) is an _internal node_ of \(G\) if \(i\) is not a leaf, and the graph induced by \(G\) on the internal nodes is denoted \(G^{\circ}\).

**Definition 2.3** (Essential flip).: Let \(\mathcal{G}\) and \(\mathcal{H}\) be two non-Markov equivalent essential graphs with skeleton \(G\), a tree, and denote \(\Delta=\Delta(\mathcal{G},\mathcal{H})\). Assume that both \(\mathcal{G}|_{\operatorname{span}(\Delta)}\) and \(\mathcal{H}|_{\operatorname{span}(\Delta)}\) do not contain any undirected edges. Assume moreover that \(\mathcal{G}\) and \(\mathcal{H}\) differ on every edge of \(G|_{\operatorname{span}(\Delta)}\). Then we say that the pair \(\{\mathcal{G},\mathcal{H}\}\) is an essential flip.

The importance of essential flips is shown in the following theorem.

**Theorem 2.4**.: _[_8_]_ _If \(G\) is a tree, then \(\operatorname{conv}(c_{\mathcal{G}},c_{\mathcal{H}})\) is an edge of \(\operatorname{CIM}_{G}\) if and only if the pair \(\{\mathcal{G},\mathcal{H}\}\) is an essential flip._

As we prefer to work with DAGs as opposed to essential graphs, we also have an alternative characterization.

**Theorem 2.5**.: _Suppose that \(\mathcal{G}\) and \(\mathcal{H}\) are DAGs on the same skeleton \(G\), which is a tree. Assume the edges that differ between \(\mathcal{G}\) and \(\mathcal{H}\) form a subtree \(T\) of \(G\). Suppose further that \(\Delta(\mathcal{G},\mathcal{H})\neq\emptyset\). Then the essential graphs of \(\mathcal{H}\) and \(\mathcal{G}\) form an essential flip if and only if each internal node \(i\) of \(T\) satisfies the conditions given below.
We use the notation \(\{\mathfrak{c}_{i}\}=T\cap\operatorname{ch}_{\mathcal{G}}(i)\) and \(\{\mathfrak{p}_{i}\}=T\cap\operatorname{pa}_{\mathcal{G}}(i)\) when those sets are singletons._

\begin{tabular}{|l|l|l|l|} \hline & \(|T\cap\operatorname{pa}_{\mathcal{G}}(i)|\) & \(|T\cap\operatorname{ch}_{\mathcal{G}}(i)|\) & _Local criteria for \(\mathcal{G}\) and \(\mathcal{H}\) to form an essential flip_ \\ \hline \hline \(I\) & \(\geq 2\) & \(\geq 2\) & \\ \hline \(II\) & \(\geq 2\) & \(0\) & \\ \hline \(III\) & \(0\) & \(\geq 2\) & \\ \hline \(IV\) & \(\geq 2\) & \(1\) & \(|\operatorname{pa}_{\mathcal{G}}(i)\setminus T|\geq 1\)_, or: if_ \(\exists\) _v-structure at_ \(\mathfrak{c}_{i}\) _in_ \(\mathcal{G}\)_, then_ \(\mathfrak{c}_{i}\) _has an essential parent in_ \(\mathcal{H}\) \\ \hline \(V\) & \(1\) & \(\geq 2\) & \(|\operatorname{pa}_{\mathcal{G}}(i)\setminus T|\geq 1\)_, or: if_ \(\exists\) _v-structure at_ \(\mathfrak{p}_{i}\) _in_ \(\mathcal{H}\)_, then_ \(\mathfrak{p}_{i}\) _has an essential parent in_ \(\mathcal{G}\) \\ \hline \(VI\) & \(1\) & \(1\) & _if there are nodes of_ \(\Delta\) _in both connected components of_ \(T\setminus\{i\}\)_, then_ \(|\operatorname{pa}_{\mathcal{G}}(i)\setminus T|\geq 1\)_, or_ \(\mathfrak{c}_{i}\) _has an essential parent in_ \(\mathcal{H}\) _and_ \(\mathfrak{p}_{i}\) _has an essential parent in_ \(\mathcal{G}\)_._ \\ \hline \end{tabular}

As we know all edges of the \(\operatorname{CIM}_{G}\) polytope, we expect to find a good bound on the diameter.

**Proposition 2.6**.: _If \(G\) is a tree, then the diameter of \(\operatorname{CIM}_{G}\) is less than or equal to the number of internal vertices of \(G\)._

To show this we will need a quick lemma that follows from Theorem 2.5.

**Lemma 2.7**.: _For any given DAG \(\mathcal{G}\) with skeleton \(G\) and an internal node \(i\in G^{\circ}\), assume \(T\) is a subtree of \(G\). Let \(\mathcal{G}^{\prime}\) be the DAG obtained from \(\mathcal{G}\) by adding one extra node, \(n+1\), and the edge \(n+1\gets i\). Define \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) to be identical to \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\), except that we change the direction of every edge in \(T\). Then if \(\{\mathcal{G},\mathcal{H}\}\) is an essential flip, so is \(\{\mathcal{G}^{\prime},\mathcal{H}^{\prime}\}\)._

Proof.: By definition \(T\) does not contain the edge \((n+1)-i\). As \(n+1\gets i\), the edge \((n+1)-i\) is not part of any v-structure; hence any other edge in \(\mathcal{G}\) is essential if and only if it is essential in \(\mathcal{G}^{\prime}\). Thus the result follows directly from Theorem 2.5.

Proof of Proposition 2.6.: Inductively assume we can transform any directed tree with fewer internal nodes into any other directed tree with the same skeleton, using only the transformations of Theorem 2.5, in at most as many steps as the number of internal nodes. Let \(\mathcal{G}\) and \(\mathcal{H}\) be two graphs with skeleton \(G\) and let \(m\) denote the number of internal vertices. It is enough to find DAGs \(\mathcal{G}=\mathcal{G}_{0},\mathcal{G}_{1},\mathcal{G}_{2},\ldots,\mathcal{G }_{m}=\mathcal{H}\) such that each pair \(\{\mathcal{G}_{k},\mathcal{G}_{k+1}\}\) is an essential flip or consists of Markov equivalent graphs, for all \(0\leq k\leq m-1\). If \(\mathcal{G}\) and \(\mathcal{H}\) are Markov equivalent we are done. Let \(r_{1}\in\Delta(\mathcal{G},\mathcal{H})\). Notice that \(\Delta(\mathcal{G},\mathcal{H})\subseteq G^{\circ}\).
We can imagine \(G^{\circ}\) to be rooted at \(r_{1}\), and thus we will transform vertices from the root onward. Define \(\mathcal{G}^{\prime}_{1}\) to be the DAG identical to \(\mathcal{G}\) outside of \(\operatorname{cl}_{G}(r_{1})\), with \(\operatorname{cl}_{G}(r_{1})\) directed as in \(\mathcal{H}\). Then we have three cases: either \(\mathcal{G}^{\prime}_{1}\) is Markov equivalent to \(\mathcal{G}\), \(\{\mathcal{G}^{\prime}_{1},\mathcal{G}\}\) is an essential flip, or \(\mathcal{G}^{\prime}_{1}\) and \(\mathcal{G}\) differ on a subtree that falls within cases IV, V, or VI of Theorem 2.5. In the first and second case we can choose \(\mathcal{G}_{1}=\mathcal{G}^{\prime}_{1}\). In the third case we have three subcases: IV, V, or VI.

If we are in case IV and \(\{\mathcal{G}^{\prime}_{1},\mathcal{G}\}\) is not an essential flip, there must be a unique child of \(r_{1}\), say \(c\), and there is a v-structure at \(c\). Then we define \(\mathcal{G}_{1}\) to be the DAG identical to \(\mathcal{G}^{\prime}_{1}\) except that we flip the edge \(r_{1}\gets c\); that is, we preserve the direction of the edge as in \(\mathcal{G}\). Then by Theorem 2.5, \(\{\mathcal{G}_{1},\mathcal{G}\}\) will be an essential flip. Notice that \(\operatorname{pa}_{\mathcal{G}_{1}}(r_{1})=\emptyset\) in this case.

If we are in case V and \(\{\mathcal{G}^{\prime}_{1},\mathcal{G}\}\) is not an essential flip, there must be a unique parent of \(r_{1}\), say \(a\), with a unique non-essential parent \(a^{\prime}\). Since the edge is non-essential, we can direct the subtree in \(G\setminus\{r_{1}\}\) containing \(a\), call it \(G_{a}\), such that we have \(a\to a^{\prime}\) and we do not change any v-structures. Then we can define \(\mathcal{G}_{1}\) to be the DAG where we direct the subtree \(G_{a}\) as described above, and the rest as in \(\mathcal{G}^{\prime}_{1}\). Then \(\{\mathcal{G}_{1},\mathcal{G}\}\) will be an essential flip.

If we are in case VI, then either we have \(|\operatorname{pa}_{\mathcal{G}}(r_{1})\setminus T|\geq 1\), in which case \(\{\mathcal{G}^{\prime}_{1},\mathcal{G}\}\) is an essential flip, or \(r_{1}\notin\Delta(\mathcal{G},\mathcal{H})\), a contradiction to how we chose \(r_{1}\).

Notice that regardless of the case we obtain \(c_{\mathcal{G}_{1}}|_{N_{r_{1}}}=c_{\mathcal{H}}|_{N_{r_{1}}}\), and we never change the direction of any edge in \(\operatorname{cl}_{G}(r_{1})\) that is already directed as in \(\mathcal{H}\). Let \(r_{2}\) be any node adjacent to \(r_{1}\) in \(G^{\circ}\). If we have \(r_{2}\to r_{1}\), we either have \(r_{2}\to r_{1}\) in \(\mathcal{H}\), in which case the result follows from the induction hypothesis and Lemma 2.7 by considering the subtree containing \(r_{2}\) in \(G\setminus\{r_{1}\}\), or we changed to \(\mathcal{G}_{1}\) in case three, subcase IV. However, \(r_{1}\) then has \(r_{2}\) as a single parent, and hence the result follows by using the induction hypothesis on the same subtree as before, with the node \(r_{1}\) added. If we have \(r_{2}\gets r_{1}\), we can repeat the construction as before, except we will use that \(r_{1}\in\operatorname{pa}_{\mathcal{G}_{1}}(r_{2})\setminus T\) instead of \(r_{2}\in\Delta(\mathcal{G}_{1},\mathcal{G}_{2})\). Indeed, by the above construction we only have \(r_{2}\gets r_{1}\) in \(\mathcal{G}_{1}\) if \(r_{2}\gets r_{1}\) in \(\mathcal{H}\). Then cases one and two follow as before, and case three, subcases IV and V, is identical.
However, case three, subcase VI, now holds since \(r_{1}\in\operatorname{pa}_{\mathcal{G}_{1}}(r_{2})\setminus T\). Hence we can always continue our construction, one step per internal node. It follows that we need at most \(m\) steps to transform \(\mathcal{G}\) into \(\mathcal{H}\).

**Example 2.8**.: In Figure 2 we have repeated the construction from the proof of Proposition 2.6. Note, however, that it is possible to move between the same MECs in two steps, as seen in Figure 3.

Figure 2. An example of the construction in the proof of Proposition 2.6. The edges of \(\operatorname{CIM}_{G}\) are marked with dashed lines.

Figure 3. A shorter path than the one constructed in the proof of Proposition 2.6. The edges of \(\operatorname{CIM}_{G}\) are marked with dashed lines.

**Example 2.9**.: Let \(G=I_{n}\) be the path with \(n\) vertices and let \(\{\mathcal{G}_{i},\mathcal{G}_{i+1}\}\) be an essential flip. Then the numbers of v-structures in \(\mathcal{G}_{i}\) and \(\mathcal{G}_{i+1}\) differ by at most one, as follows from the definition of an essential flip. Let \(\mathcal{G}\) be the graph without any v-structures and let \(\mathcal{H}\) be any graph with \(\lfloor\frac{n-1}{2}\rfloor\) v-structures. It follows that the distance between \(\mathcal{G}\) and \(\mathcal{H}\) is at least, and in fact equal to, \(\lfloor\frac{n-1}{2}\rfloor\), and thus the diameter of \(\operatorname{CIM}_{G}\) is greater than or equal to \(\lfloor\frac{n-1}{2}\rfloor\). To conclude equality, it is enough to check that, given any two adjacent internal nodes, we can in one move ensure that either one, or none, of them carries a v-structure, starting from any previous position.

**Proposition 2.10**.: _Let \(G\) be a tree and let \(p\) be the maximum path length of \(G\). Then the diameter of \(\operatorname{CIM}_{G}\) is at least \(\left\lfloor\frac{p}{2}\right\rfloor\)._

Notice that the maximum path length is measured in edges, so it is one less than the number of vertices on the path. That is, for \(I_{n}\) the maximum path length is \(p=n-1\).

Proof.: Let \(P\) be a path of maximal length in \(G\). Similar to Example 2.9, it is enough to show that any essential flip changes the number of v-structures along \(P\) by at most one. It then follows that the distance between two graphs, one with no v-structures in \(P\) and one with \(\left\lfloor\frac{p}{2}\right\rfloor\) v-structures, is at least \(\left\lfloor\frac{p}{2}\right\rfloor\).

Thus, consider any DAG \(\mathcal{G}\) with skeleton \(G\) and a subtree \(T^{\prime}\) fulfilling the conditions of Theorem 2.5. Define \(T=P\cap T^{\prime}=v_{0}-v_{1}-\cdots-v_{k}\), and if \(v_{0}\) and/or \(v_{k}\) are not the endpoints of \(P\) we also consider the extra vertices \(\alpha\) and \(\beta\) defined by \(\alpha-v_{0}-v_{1}-\cdots-v_{k}-\beta\subseteq P\). Notice that \(\alpha\) and \(\beta\) might not exist, but that case can be treated as if we had \(\alpha\gets v_{0}\) or \(v_{k}\to\beta\) in \(G\). Moreover, we can assume all arrows in \(T\cup\{\alpha,\beta\}\) are reversed. A priori, reversing all edges in \(T\cup\{\alpha,\beta\}\) could lead to additional v-structures involving \(\alpha\) and \(\beta\); however, by Theorem 2.5 we can safely ignore them. In this connected part we remove a v-structure whenever we have \(i\to j\gets k\) for \(i,j,k\in T\cup\{\alpha,\beta\}\), and we add a v-structure whenever we have \(i\gets j\to k\); note that these patterns must interlace along \(\alpha-v_{0}-v_{1}-\cdots-v_{k}-\beta\). Hence the number of v-structures can differ by at most one, and the result follows.
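The counting argument behind Example 2.9 and Proposition 2.10 is very compact once a directed path is written down: only the interlacing patterns \(i\to j\gets k\) and \(i\gets j\to k\) matter. Below is a sketch, with a hypothetical string encoding of the edge orientations along the path.

```python
def vstructures_on_path(dirs):
    """dirs[k] in {'>', '<'} is the orientation of edge k -- k+1 on
    a path 0 - 1 - ... - n-1.  A v-structure sits at node k+1 exactly
    when dirs[k] == '>' and dirs[k+1] == '<'."""
    return sum(1 for a, b in zip(dirs, dirs[1:]) if a == '>' and b == '<')

# On I_5 (so p = 4): no v-structures versus the maximum
# floor((n-1)/2) = 2, giving the lower bound of Example 2.9.
print(vstructures_on_path('>>>>'))  # 0
print(vstructures_on_path('><><'))  # 2
```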
Thus the combination of Proposition 2.6 and Proposition 2.10 implies the following theorem.

**Theorem 2.11**.: _Let \(G\) be a tree with \(m\) internal nodes and maximum path length \(p\). Then \(\left\lfloor\frac{p}{2}\right\rfloor\leq\operatorname{diam}(\operatorname{CIM }_{G})\leq m\)._

Notice that for all trees we have \(m\leq n-2\), and for paths we have \(p+1=n=m+2\); therefore the diameter of \(\operatorname{CIM}_{I_{n}}\) is linear in \(n\). Hence, when \(G\) is a tree, the diameter of \(\operatorname{CIM}_{G}\) in the worst case grows linearly in \(n\). There are, however, classes of trees where our lower bound is constant in \(n\): if \(G\) is a star, the above gives us \(1\leq\operatorname{diam}\operatorname{CIM}_{G}\leq 1\), which is consistent with a result of [8] telling us that \(\operatorname{CIM}_{G}\) is a simplex in this case and hence \(\operatorname{diam}\operatorname{CIM}_{G}=1\).

Computational results on random trees (\(n\leq 9\)) suggest that our lower bound is tight, while our upper bound is not. From the perspective of random trees, we expect \(m\approx(1-e^{-1})n\) internal nodes (with respect to the uniform distribution), while the expected maximum path length satisfies \(p\leq C\sqrt{n}\) for some constant \(C\) [1, 12]. Further investigation of the expected diameter of \(\operatorname{CIM}_{G}\), for a random tree \(G\), would be very interesting.

### The Whole Polytope

Up until now we have discussed the diameter of faces of \(\operatorname{CIM}_{n}\), but [9] also gives us edges that are not in \(\operatorname{CIM}_{G}\) for any \(G\). If \(\mathcal{G}\) is a DAG with \(i\) and \(j\) not adjacent in the skeleton of \(\mathcal{G}\), then we denote by \(\mathcal{G}_{+i\gets j}\) the directed graph identical to \(\mathcal{G}\) with the edge \(i\gets j\) added. Notice that \(\mathcal{G}\) and \(\mathcal{G}_{+i\gets j}\) have different skeletons and hence are never Markov equivalent.

**Theorem 2.12**.: _[_9_]_ _Let \(\mathcal{G}\) be a DAG and \(i\) and \(j\) be non-adjacent vertices. If \(\mathcal{G}_{+i\gets j}\) is a DAG, then \(\operatorname{conv}(c_{\mathcal{G}},c_{\mathcal{G}_{+i\gets j}})\) is an edge of \(\operatorname{CIM}_{n}\)._

Applying the above theorem directly gives us an upper bound on the diameter of \(2\binom{n}{2}\). Indeed, from the empty graph we can walk to any DAG via adding in the correct edges one-by-one, and this requires at most \(\binom{n}{2}\) steps. However, we can show a better bound utilising a new type of edge.

**Proposition 2.13**.: _Let \(i\in[n]\), \(S^{*}\subseteq[n]\setminus\{i\}\) and let \(\mathcal{G}\) be a DAG such that \(\mathcal{G}|_{S^{*}}\) is the empty graph. Let \(\mathcal{H}\) be the graph identical to \(\mathcal{G}\) but with all the edges \(j\to i\) for \(j\in S^{*}\) added. Then if \(\mathcal{H}\) is a DAG, \(\operatorname{conv}(c_{\mathcal{G}},c_{\mathcal{H}})\) is an edge of \(\operatorname{CIM}_{n}\)._

Proof.: Notice that since \(\mathcal{G}\subseteq\mathcal{H}\) we must have that \(c_{\mathcal{H}}(S)-c_{\mathcal{G}}(S)\geq 0\) for every \(S\). Moreover, if \(|S^{*}|=1\) then the result follows by Theorem 2.12; thus we can assume that \(|S^{*}|\geq 2\). We let

\[w(S)=\begin{cases}n^{2}&\text{if }c_{\mathcal{G}}(S)=1,\\ -n^{2}&\text{if }c_{\mathcal{H}}(S)=0,\\ -1&\text{if }|S|=2,\,c_{\mathcal{G}}(S)=0,\,c_{\mathcal{H}}(S)=1,\\ |S^{*}|&\text{if }S=S^{*}\cup\{i\},\text{ and}\\ 0&\text{otherwise}.\end{cases}\]

Let \(\mathcal{D}\) be a DAG maximising \(w^{T}c_{\mathcal{D}}\).
Let \(D\), \(G\), and \(H\) denote the skeletons of \(\mathcal{D}\), \(\mathcal{G}\), and \(\mathcal{H}\), respectively. We begin by noticing that \(|S^{*}|\leq n-1\), and hence we must have \(c_{\mathcal{D}}(S)=1\) for all \(S\) such that \(c_{\mathcal{G}}(S)=1\), and \(c_{\mathcal{D}}(S)=0\) for all \(S\) such that \(c_{\mathcal{H}}(S)=0\). As \(G\subseteq H\), this together with Corollary 1.5 gives us \(G\subseteq D\subseteq H\). It is straightforward from the definition of the characteristic imset that \(c_{\mathcal{G}}(S)=1\) implies \(c_{\mathcal{H}}(S)=1\), and, the other way around, \(c_{\mathcal{H}}(S)=0\) implies \(c_{\mathcal{G}}(S)=0\). Then we have two cases.

If \(c_{\mathcal{D}}(S^{*}\cup\{i\})=0\), we must have \(c_{\mathcal{D}}(\{i,j\})=0\) for all \(j\in S^{*}\), as otherwise \(w^{T}c_{\mathcal{D}}\leq w^{T}c_{\mathcal{G}}-1<w^{T}c_{\mathcal{G}}\). Then we notice that \(c_{\mathcal{G}}\) and \(c_{\mathcal{H}}\) only differ in \(3\)-sets of the form \(\{i,j,k\}\) where \(j\in S^{*}\) and \(k\in\operatorname{pa}_{\mathcal{G}}(i)\cup S^{*}\). For every such \(3\)-set \(S\) we have that \(\mathcal{D}|_{S}\) is not connected and hence \(c_{\mathcal{D}}(S)=0\). Hence \(c_{\mathcal{D}}\) agrees with \(c_{\mathcal{G}}\) on all sets of size \(2\) and \(3\), and by Corollary 1.5 it follows that \(c_{\mathcal{D}}=c_{\mathcal{G}}\).

If \(c_{\mathcal{D}}(S^{*}\cup\{i\})=1\), we must have a node \(t\in S^{*}\cup\{i\}\) that is the child of everyone else. However, \(\mathcal{D}|_{S^{*}}\) has no edges, as neither \(\mathcal{G}|_{S^{*}}\) nor \(\mathcal{H}|_{S^{*}}\) has any edges; hence, as \(|S^{*}|\geq 2\), we must have \(t=i\). Hence we must have all edges \(j\to i\), for \(j\in S^{*}\), in \(\mathcal{D}\). Left to check is that \(c_{\mathcal{D}}\) agrees with \(c_{\mathcal{H}}\) for all \(3\)-sets of the form discussed above, that is, \(c_{\mathcal{D}}(\{i,j,k\})=c_{\mathcal{H}}(\{i,j,k\})=1\) for all \(j\in S^{*}\) and \(k\in\operatorname{pa}_{\mathcal{G}}(i)\cup S^{*}\).

From this we get a linear upper bound on the diameter of \(\operatorname{CIM}_{n}\).

**Proposition 2.14**.: _The diameter of \(\operatorname{CIM}_{n}\) is less than or equal to \(2n-2\)._

Proof.: Let \(\mathcal{G}\) be any given DAG on \(n\) nodes and let \(v_{1},\ldots,v_{n}\) be a topological order of the vertices of \(\mathcal{G}\). Equivalently, if \(v_{i}\) is a parent of \(v_{j}\) then \(i<j\). Let \(\mathcal{G}_{n}\) be the DAG with no edges. Define \(\mathcal{G}_{k-1}\) recursively to be \(\mathcal{G}_{k}\) with all edges \(v\to v_{k}\) for \(v\in\operatorname{pa}_{\mathcal{G}}(v_{k})\) added. Then, as \(\operatorname{pa}_{\mathcal{G}}(v_{1})=\emptyset\), we must have that \(\mathcal{G}_{2}=\mathcal{G}\). All that is left to show is that \(\operatorname{conv}(c_{\mathcal{G}_{k-1}},c_{\mathcal{G}_{k}})\) is an edge of \(\operatorname{CIM}_{n}\) for all \(k\). This follows by Proposition 2.13 and the observation that all parents of \(v_{k}\) precede it in the topological order. Hence the distance from any vertex \(c_{\mathcal{G}}\) of \(\operatorname{CIM}_{n}\) to the specific vertex \(c_{\mathcal{G}_{n}}\) is at most \(n-1\), and the total diameter is at most twice that.

It can be checked via polymake [3, 5] that \(\operatorname{diam}\operatorname{CIM}_{n}=n-1\) for \(n\in\{1,2,3,4\}\), and all these distances are realised by the graph distance between the DAG with no edges and the complete graph.
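The walk used in the proof of Proposition 2.14 can be written out explicitly. In the sketch below (hypothetical parent-set encoding, illustrative names), each step adds the whole parent set of one vertex, processed in reverse topological order, so that the restriction of the current graph to that parent set is empty and Proposition 2.13 applies at every step.

```python
def walk_from_empty(pa, order):
    """Proposition 2.14: starting from the empty DAG, add the whole
    parent set of each vertex at once, in reverse topological order.
    pa: vertex -> parent set in the target DAG; order: a topological
    order of the target DAG.  Each step is an edge of CIM_n."""
    current, steps = set(), []
    for v in reversed(order):
        new_edges = {(p, v) for p in pa.get(v, set())}
        if new_edges:
            current = current | new_edges
            steps.append(set(current))
    return steps  # at most n - 1 steps

# Target: the complete DAG on 4 nodes oriented by 1 < 2 < 3 < 4.
pa = {2: {1}, 3: {1, 2}, 4: {1, 2, 3}}
for g in walk_from_empty(pa, [1, 2, 3, 4]):
    print(sorted(g))  # three steps, i.e. n - 1 = 3
```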
That all these distances are realised between the empty and the complete graph seems reasonable in light of the following proposition.

**Proposition 2.15**.: _Let \(\mathcal{G}\) be a DAG, let \(i_{1},\ldots,i_{k}\) be vertices and \(S_{1},\ldots,S_{k}\) be sets such that \(\operatorname{cl}_{\mathcal{G}}(i_{t})\cap S_{t}=\emptyset\) for all \(1\leq t\leq k\). Let \(\mathcal{H}\) be obtained from \(\mathcal{G}\) via adding in all edges \(s\to i_{t}\) for \(s\in S_{t}\). If \(\mathcal{H}\) is a DAG and \(k\geq 2\), then \(\operatorname{conv}(c_{\mathcal{G}},c_{\mathcal{H}})\) is not an edge of \(\operatorname{CIM}_{n}\)._

To show this we will make use of the following lemma, which is a fundamental fact from polytope theory.

**Lemma 2.16**.: _Let \(P\) be a polytope and let \(v\) be a vertex of \(P\). If there exist non-zero vectors \(u_{1}\) and \(u_{2}\) such that \(v+u_{1}\), \(v+u_{2}\), and \(v+u_{1}+u_{2}\) are all vertices of \(P\), then \(\operatorname{conv}(v,v+u_{1}+u_{2})\) is not an edge of \(P\)._

Proof of Proposition 2.15.: Let \(\mathcal{G}_{1}\) be the DAG obtained from \(\mathcal{G}\) via adding in all edges \(s\to i_{1}\) where \(s\in S_{1}\). Let \(\mathcal{G}_{2}\) be the DAG obtained from \(\mathcal{G}\) via adding in all edges \(s\to i_{t}\) where \(s\in S_{t}\) for \(1<t\leq k\). As all of \(\mathcal{G}\), \(\mathcal{H}\), \(\mathcal{G}_{1}\), and \(\mathcal{G}_{2}\) have different skeletons, no two of them are Markov equivalent. Thus the result follows from Lemma 2.16 if we can show that \(c_{\mathcal{G}}+c_{\mathcal{H}}=c_{\mathcal{G}_{1}}+c_{\mathcal{G}_{2}}\). By Corollary 1.5 it is enough to show \(c_{\mathcal{G}}(S)+c_{\mathcal{H}}(S)=c_{\mathcal{G}_{1}}(S)+c_{\mathcal{G}_{ 2}}(S)\) for all sets \(S\) such that \(|S|\in\{2,3\}\). This equality follows directly for all \(2\)-sets from Corollary 1.5, as they encode the skeletons of the graphs. If \(S\) is not a \(3\)-set such that \(i_{t}\in S\) and \(S\cap S_{t}\neq\emptyset\) for some \(t\), then \(\mathcal{G}|_{S}=\mathcal{H}|_{S}=\mathcal{G}_{1}|_{S}=\mathcal{G}_{2}|_{S}\) and hence the equality holds. Thus we can assume that \(S=\{i_{t},s_{t},p_{t}\}\) where \(s_{t}\in S_{t}\) and \(p_{t}\in S_{t}\cup\operatorname{pa}_{\mathcal{G}}(i_{t})\). The rest follows by the definition of the characteristic imset and the construction of \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\).

Hence adding in parents to several vertices at the same time is, in some sense, hard. Therefore it is reasonable to expect that the distance between the empty and the complete graph is \(n-1\) for every \(n\). The construction in the proof of Proposition 2.14 is, however, not always optimal, even if one graph is the empty graph.

**Example 2.17**.: Consider the graphs \(\mathcal{G}=([6],\emptyset)\) and \(\mathcal{H}=([6],E)\), where \(\mathcal{G}\) is the empty graph and \(\mathcal{H}\) is a star whose middle vertex has exactly \(2\) parents. Then the construction in the proof of Proposition 2.14 gives a path in \(G(\operatorname{CIM}_{n})\) of length \(4\). However, utilising Proposition 2.13 we can move to a graph with the correct skeleton and, utilising Theorem 2.4, we can then move directly to \(\mathcal{H}\); see Fig. 4. Moreover, Proposition 2.15 shows that \(\operatorname{conv}(c_{\mathcal{G}},c_{\mathcal{H}})\) is not an edge of \(\operatorname{CIM}_{6}\), and hence the distance between \(\mathcal{G}\) and \(\mathcal{H}\) is \(2\).

So far we have shown quadratic bounds on the diameter of the faces \(\operatorname{CIM}_{G}\); if \(G\) is a tree this bound becomes linear in the number of vertices of \(G\).
We also have a linear bound on the diameter of the whole polytope \(\operatorname{CIM}_{n}\). This leads us to believe that we in fact have a linear bound on \(\operatorname{diam}\operatorname{CIM}_{G}\) for any \(G\). For any DAG \(\mathcal{G}\) with skeleton \(G\) and vertex \(i\), we let \(\mathcal{G}_{\downarrow i}\) be the graph in which \(k\to i\) for all \(k\in\operatorname{ne}_{\mathcal{G}}(i)\) and which is otherwise identical to \(\mathcal{G}\).

**Lemma 2.18**.: _If \(\mathcal{G}\) is a DAG, then \(\mathcal{G}_{\downarrow i}\) is a DAG for any vertex \(i\)._

Proof.: Any new cycle would have to use one of the edges that were reversed and thus pass through \(i\), but \(i\) has no outgoing edges in \(\mathcal{G}_{\downarrow i}\). Hence \(\mathcal{G}_{\downarrow i}\) has no directed cycles.

To then show a linear bound on \(\operatorname{diam}\operatorname{CIM}_{G}\), it is enough to show the following conjecture.

_Conjecture 1_.: If \(\mathcal{G}\) and \(\mathcal{G}_{\downarrow i}\) are not Markov equivalent, then \(\operatorname{conv}\big{(}c_{\mathcal{G}},c_{\mathcal{G}_{\downarrow i}}\big{)}\) is an edge of \(\operatorname{CIM}_{G}\).

Let \(\mathcal{G}\) be a DAG with skeleton \(G\). Define an order on \([n]\) as \(v_{1},\ldots,v_{n}\) where \(v_{i}\to v_{j}\in\mathcal{G}\) implies \(i<j\); that is, take a topological order of \(\mathcal{G}\). Then for any DAG \(\mathcal{H}\) with skeleton \(G\) we have \(\mathcal{G}=(\ldots((\mathcal{H}_{\downarrow v_{2}})_{\downarrow v_{3}})\ldots )_{\downarrow v_{n}}\). Thus if Conjecture 1 holds, then \(\operatorname{diam}\operatorname{CIM}_{G}\leq n-1\) for all graphs \(G\).
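The operation \(\mathcal{G}_{\downarrow i}\) and the factorisation \(\mathcal{G}=(\ldots(\mathcal{H}_{\downarrow v_{2}})\ldots)_{\downarrow v_{n}}\) are simple to state in code; a sketch with the same hypothetical edge-set encoding as above:

```python
def sink_at(edges, i):
    """G_{down i}: redirect every edge incident to i so that it
    points into i.  By Lemma 2.18 the result is again a DAG."""
    return {(b, a) if a == i else (a, b) for (a, b) in edges}

def iterate_to(edges, order):
    """Apply sink_at along a topological order v_2, ..., v_n of a
    target DAG G; this recovers G from any DAG H with the same
    skeleton, as observed after Conjecture 1."""
    g = set(edges)
    for v in order[1:]:
        g = sink_at(g, v)
    return g

h = {(2, 1), (3, 2)}            # the chain 1 <- 2 <- 3
g = iterate_to(h, [1, 2, 3])    # target: 1 -> 2 -> 3
print(g)                        # {(1, 2), (2, 3)}
```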
## 3. Discussion

In this paper we have shown that we have, at worst, quadratic bounds for \(\operatorname{diam}\operatorname{CIM}_{G}\) and linear bounds for \(\operatorname{diam}\operatorname{CIM}_{n}\). As the dimension of \(\operatorname{CIM}_{n}\) is \(2^{n}-n-1\), we get that the diameter of \(\operatorname{CIM}_{n}\) grows at most quadratically in the logarithm of the dimension, significantly slower than any general bounds (for example [10]). In this sense we observe that \(G(\operatorname{CIM}_{n})\) is highly connected.

For an edge-walk on \(\operatorname{CIM}_{G}\) maximising an objective \(W\), it is not always the case that the optimal path is monotone in \(W\). As seen in [4, 8, 9, 19], not having access to all edges can still give us consistency guarantees under additional assumptions on \(W\). However, when dealing with a score function based on data, as is often the case with \(\operatorname{CIM}_{n}\), these additional assumptions are not guaranteed to hold for any finite sample size and, especially for smaller sample sizes, more edges can still improve performance [8]. As we can expect there to be many edges in \(\operatorname{CIM}_{n}\) and \(\operatorname{CIM}_{G}\), the fundamental question becomes which edges are the most crucial for performance and which ones are easily checked. For example, while the edges of Proposition 2.13 give us a way to traverse the polytope in few steps, in practice it may be easier to work with only the edges of Theorem 2.12, as repeated use of them can indeed reach the same graph. However, the question of which specific class of edges to use cannot be properly discussed without a better understanding of the edge structure of \(\operatorname{CIM}_{n}\) in general.

Figure 4. An example of a short path over \(\operatorname{CIM}_{n}\) that is not used in the proof of Proposition 2.14. Edges of \(\operatorname{CIM}_{n}\) are denoted with dashed lines.

### Acknowledgements

The author was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
2309.01322
FAU-Net: An Attention U-Net Extension with Feature Pyramid Attention for Prostate Cancer Segmentation
This contribution presents a deep learning method for the segmentation of prostate zones in MRI images, based on U-Net with additive and feature pyramid attention modules, which can improve the workflow of prostate cancer detection and diagnosis. The proposed model is compared to seven different U-Net-based architectures. The automatic segmentation performance of each model for the central zone (CZ), peripheral zone (PZ), transition zone (TZ), and Tumor was evaluated using the Dice Score (DSC) and Intersection over Union (IoU) metrics. The proposed alternative achieved a mean DSC of 84.15% and IoU of 76.9% in the test set, outperforming most of the models studied in this work except for the R2U-Net and Attention R2U-Net architectures.
Pablo Cesar Quihui-Rubio, Daniel Flores-Araiza, Miguel Gonzalez-Mendoza, Christian Mata, Gilberto Ochoa-Ruiz
2023-09-04T02:54:58Z
http://arxiv.org/abs/2309.01322v1
FAU-Net: An Attention U-Net Extension with Feature Pyramid Attention for Prostate Cancer Segmentation

###### Abstract

This contribution presents a deep learning method for the segmentation of prostate zones in MRI images, based on U-Net with additive and feature pyramid attention modules, which can improve the workflow of prostate cancer detection and diagnosis. The proposed model is compared to seven different U-Net-based architectures. The automatic segmentation performance of each model for the central zone (CZ), peripheral zone (PZ), transition zone (TZ), and Tumor was evaluated using the Dice Score (DSC) and Intersection over Union (IoU) metrics. The proposed alternative achieved a mean DSC of 84.15% and IoU of 76.9% in the test set, outperforming most of the models studied in this work except for the R2U-Net and Attention R2U-Net architectures.

Keywords: Segmentation, U-Net, Attention, Uncertainty Quantification, Prostate Cancer, Deep Learning

## 1 Introduction

Prostate cancer (PCa) is the most common solid non-cutaneous cancer in men and is among the most common causes of cancer-related deaths in 13 regions of the world [9]. When detected in early stages, the survival rate for regional PCa is almost 100%. In contrast, the survival rate when the cancer has spread to other parts of the body is only 30% [3].

Magnetic Resonance Imaging (MRI) is the most widely available non-invasive and sensitive tool for the detection of PCa, due to its high resolution, excellent spontaneous contrast of soft tissues, and the possibility of multi-planar and multi-parametric scanning [5]. Although MRI is traditionally used for staging PCa, it can also be used for PCa detection through the segmentation of Regions of Interest (ROIs) in MR images. The use of image segmentation for PCa detection and characterization can help determine the localization and the volume of the cancerous tissue [7]. This highlights the importance of an accurate and consistent segmentation when detecting PCa.

However, the most common and preferred method for identifying and delimiting the prostate gland and prostate regions of interest is manual inspection by radiologists [1]. This manual process is time-consuming and sensitive to each specialist's experience, resulting in significant intra- and inter-specialist variability [14]. Automating the segmentation of the prostate gland and its regions of interest, in addition to saving radiologists' time, can serve as a learning tool for others and provide consistency in contouring [11].

Deep Learning (DL)-based methods have recently been developed to perform automatic prostate segmentation [6]. One of the most popular methods is U-Net [16], which has been the inspiration behind many recent works in the literature. In this work, we propose an automatic prostate zone segmentation method based on an extension of Attention U-Net that combines two types of attention: pyramidal and additive. We also include the pixel-wise estimation of the uncertainty. The zones evaluated in this work are the central zone (CZ), the peripheral zone (PZ), the transition zone (TZ), and, in the case of a disease, the tumor (TUM), in contrast to other works, which only evaluate the CZ and PZ [10].

The rest of this paper is organized as follows: Section 2 describes previous works dealing with prostate segmentation. Section 3 describes the dataset used in this work and the proposed architecture, as well as the experimental setup to evaluate it.
In Section 4, the results of the experiments are presented and discussed, and Section 5 concludes the article.

## 2 State-of-the-Art

In medical imaging, one of the best-known DL models in the literature for segmentation is U-Net, which consists of two sub-networks: an encoder with a series of four convolution and max-pooling operations to reduce the dimension of the input image and to capture its semantic information at different levels, and a decoder that consists of four convolution and up-sampling operations to recover the spatial information of the image [16].

The work from Zhu et al. [18] proposes a U-Net-based network to segment the whole prostate gland, obtaining encouraging results. Moreover, this architecture has served as the inspiration for several variants that enhance the performance of the original model. One example is the work from Oktay et al. [13], which proposes the addition of attention gates inside the original U-Net model with the intention of making the model focus on specific target structures. In this architecture, the attention layers highlight the features from the skip connections between the encoder and the decoder.

Many other extension architectures have been proposed since U-Net was released; some of them include dense blocks [17], residual and recurrent blocks [2], and even transformer blocks, named Swin blocks, yielding Swin U-Net [4]. All the mentioned models have demonstrated great results on many biomedical image datasets. However, in this work we focus on PCa segmentation, in particular of the main zones of the prostate, which have not been deeply investigated with some of these models.

## 3 Materials and Methods

### Dataset

This study was carried out in compliance with the Centre Hospitalier de Dijon. The dataset provided by these institutions consists of three-dimensional T2-weighted fast spin-echo (TR/TE/ETL: 3600 ms/143 ms/109, slice thickness: \(1.25\) mm) images acquired with sub-millimetric pixel resolution in an oblique axial plane. The dataset comprises 19 patients, with a total of 205 images and their corresponding masks used as ground truth. The manual segmentation of each image into four regions of interest (CZ, PZ, TZ, and TUMOR) was also provided; this process was carefully validated by multiple professional radiologists and experts using a dedicated software tool [12, 15].

The entire dataset contains four different combinations of zones: (CZ+PZ), (CZ+PZ+TZ), (CZ+PZ+Tumor), and (CZ+PZ+TZ+Tumor), with 73, 68, 23, and 41 images, respectively. For the purpose of this work, the dataset was divided into 85% for training and 15% for testing, keeping a similar distribution in both sets, for a total of 174 training images and 31 test images. Figure 1 shows example images from every possible combination of zones in the dataset.

### Feature Pyramid Attention

The work of Yonkai et al. [9] introduces the feature pyramid attention (FPA) network to capture information at multiple scales. It contains three convolutional blocks of different kernel sizes (\(3\times 3\), \(5\times 5\), and \(7\times 7\)) to extract features at different scales. These are then integrated from the smallest scale to the largest to incorporate the different scales. In our work, the attention map is multiplied by the features from the skip connection after a \(1\times 1\) convolution. A visual representation of this attention block is presented in Figure 3.
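The description above can be turned into a compact module. The following PyTorch sketch is a simplified reading of the FPA block, not the authors' implementation: it keeps the three kernel sizes and the small-to-big integration, but omits the down-/up-sampling pyramid of the original FPA, and the channel choices and sigmoid gating are assumptions.

```python
import torch
import torch.nn as nn

class FeaturePyramidAttention(nn.Module):
    """Simplified FPA sketch: 3x3, 5x5 and 7x7 branches are merged
    from the smallest scale up, and the resulting attention map
    re-weights the 1x1-projected skip-connection features."""
    def __init__(self, channels):
        super().__init__()
        self.conv7 = nn.Conv2d(channels, channels, 7, padding=3)
        self.conv5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.proj = nn.Conv2d(channels, channels, 1)  # 1x1 projection

    def forward(self, x):
        f7 = self.conv7(x)
        f5 = self.conv5(f7)
        f3 = self.conv3(f5)
        attention = f3 + f5 + f7  # integrate scales, small to big
        return self.proj(x) * torch.sigmoid(attention)

fpa = FeaturePyramidAttention(64)
print(fpa(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```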
Figure 1: Sample images from every possible combination of zones in the dataset are presented in the upper row. Their respective ground truth masks are shown in the lower row.

### Proposed Work

This contribution proposes the Fusion Attention U-Net (FAU-Net), an Attention-U-Net-based extension with pyramidal and additive attention. The proposed model is used to perform the segmentation of five different regions (the four zones of interest plus the background) from the PCa dataset described in Section 3.1. Attention U-Net integrates attention gates (AGs) into the U-Net architecture to highlight salient features that are passed through the skip connections; these gates allow the network to suppress irrelevant and noisy responses in the skip connections, leaving only the relevant activations to merge [13]. In the proposed architecture, we use AGs in the last three levels. Meanwhile, in the first level, an FPA block is implemented to provide further attention in those layers where more information could otherwise be lost, as shown in Figure 2.

A comparison between U-Net [16], Attention U-Net [13], Dense U-Net [17], Attention Dense U-Net [8], R2U-Net [2], Attention R2U-Net, Swin U-Net [4], and the proposed FAU-Net was done to validate the results obtained. Most works in the literature perform the segmentation of only two zones, and the number of works that consider a third zone (TZ) is limited, mainly because the boundaries of the CZ and PZ are better delimited than those of zones such as the TZ or a tumor. In this work we used a private dataset which incorporates the TZ and, in some cases, a tumor. These zones are important because segmenting them could support a proper diagnosis or treatment when a tumor is present.

Figure 2: Proposed Fusion Attention U-Net model. The input image first goes through the contracting path. The boxes represent the feature map at each layer, and the blue boxes represent the cropped feature maps from the contracting path.

Therefore, we propose an attention-based model to perform segmentation on a dataset of only T2-weighted images with four prostate zones, and compare the results against other models proposed in the literature. We analyzed the segmentation of the prostate zones using different metrics to choose the best DL architecture. Finally, we carried out a qualitative analysis of the predictions of each model.

Table 1 shows the number of trainable parameters, which differs for each model: the original U-Net has the fewest and Swin U-Net the most. FAU-Net has only around 160,000 more parameters than Attention U-Net (and around 220,000 more than U-Net), making it the third smallest model.

\begin{table} \begin{tabular}{l|c} \hline \hline Model & Number of parameters \\ \hline U-Net & 1,940,885 \\ Attention U-Net & 1,995,409 \\ **FAU-Net** & **2,158,505** \\ Dense U-Net & 4,238,389 \\ Attention Dense U-Net & 4,271,521 \\ R2U-Net & 6,003,077 \\ Attention R2U-Net & 6,036,081 \\ Swin U-Net & 26,598,344 \\ \hline \hline \end{tabular} \end{table} Table 1: Count of trainable parameters for each model analyzed during this work.

Figure 3: The feature pyramid attention block. It consists of three convolutional blocks of \(3\times 3\), \(5\times 5\), and \(7\times 7\), whose responses are integrated to capture the context of each level.

All the models were trained on the same dataset for 145 epochs, using the Adam optimizer with a learning rate of \(0.0001\), a batch size of 6, and categorical cross-entropy as the loss function. Performance was evaluated using the F1-score (DSC) and Intersection over Union (IoU) as the main metrics. All training was done on an NVIDIA DGX workstation, using a V100 GPU.
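As a reference for the training setup, the following PyTorch sketch mirrors the stated hyperparameters; `model` and `loader` are hypothetical stand-ins for any of the compared variants and for a data loader yielding batches of size 6, and `nn.CrossEntropyLoss` plays the role of categorical cross-entropy over integer-encoded masks.

```python
from torch import nn, optim

def train(model, loader, epochs=145):
    """Training loop with the stated hyperparameters: Adam with
    lr = 0.0001, 145 epochs, batch size 6 (set in the loader),
    and categorical cross-entropy as the loss function."""
    opt = optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()  # expects logits and class-index masks
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()
```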
## 4 Results and Discussion

The results of this work are divided into two subsections for further analysis and comparison between the models: quantitative and qualitative.

### Quantitative Results

Table 2 shows a summary of the evaluation of the eight studied architectures in terms of two metrics (DSC and IoU) and the loss value. Each evaluation corresponds to the mean value of the metrics over all the prostate zones and images in the test set. The bold values represent the model that achieved the best score for each metric.

As expected, the extended U-Net architectures performed better than the original U-Net architecture. For instance, the Dense U-Net model showed an improvement of almost 4% in both metrics. However, the Swin U-Net model, based on Swin Transformers and considered one of the best architectures available, did not perform as well on the dataset used in this study. It outperformed U-Net in both metrics by roughly 4%, Dense U-Net by less than 1%, and Attention U-Net and Attention Dense U-Net in the IoU metric by only 0.3% and 0.1%, respectively. The subpar performance of this model could be attributed to various factors, but the most likely explanation is the small size of the dataset and the high number of training parameters, which may have led to overfitting.

\begin{table} \begin{tabular}{l|c|c|c} \hline \hline Model & IoU \(\uparrow\) & DSC \(\uparrow\) & Loss \(\downarrow\) \\ \hline U-Net & 70.76 & 80.00 & 0.0138 \\ Dense U-Net & 74.53 & 83.65 & 0.0225 \\ Swin U-Net & 75.24 & 83.91 & 0.0124 \\ Attention U-Net & 74.92 & 84.01 & 0.0114 \\ Attention Dense U-Net & 75.12 & 84.01 & 0.0211 \\ FAU-Net & 75.49 & 84.15 & **0.0107** \\ R2U-Net & 76.60 & 85.30 & 0.0131 \\ Attention R2U-Net & **76.89** & **85.42** & 0.0120 \\ \hline \hline \end{tabular} \end{table} Table 2: The model performance evaluation was conducted using Categorical Cross-Entropy (CCE) as the loss function. The metrics are marked with an upward (\(\uparrow\)) or downward (\(\downarrow\)) arrow to indicate whether higher or lower values are desirable. Bold values denote the best metric score achieved among all models.

Incorporating attention modules into the U-Net and Dense U-Net models resulted in significant improvements compared to the models without them. Attention U-Net outperformed U-Net by roughly 4% in both metrics. Meanwhile, Attention Dense U-Net achieved the same DSC score as Attention U-Net and a slightly higher IoU score (by 0.2%). These results indicate that attention modules are beneficial for obtaining better prostate segmentation, even with a relatively small dataset.

The proposed FAU-Net architecture incorporates two types of attention: additive attention, as used in the previous models, and pyramidal attention, consisting of attention modules in a cascade fashion. The objective of this model was to focus on the most complex features of each prostate image and obtain better information, and the results support this hypothesis. FAU-Net achieved IoU and DSC values of 75.49% and 84.15%, respectively, improving on the U-Net results by more than 4% in both metrics. However, this architecture was surpassed by R2U-Net and Attention R2U-Net.

R2U-Net and Attention R2U-Net are architectures that rely on recurrent residual blocks, which aid in extracting more information from deeper image features. In this study, Attention R2U-Net was the top-performing model overall, achieving metric scores greater than 76% for IoU and 85% for DSC, with a loss value of 0.0120.
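For clarity on how the reported per-zone values can be obtained, the sketch below computes DSC and IoU per class from integer label maps; the class indexing (0 for the background) is an assumption, and the names are illustrative.

```python
import numpy as np

def dice_and_iou(pred, target, num_classes=5):
    """Per-class (DSC, IoU) from integer label maps of equal shape.
    Class 0 is assumed to be the background and is skipped."""
    scores = {}
    for c in range(1, num_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        denom = p.sum() + t.sum()
        scores[c] = (2 * inter / denom if denom else 1.0,  # DSC
                     inter / union if union else 1.0)       # IoU
    return scores

pred = np.array([[0, 1, 1], [2, 2, 0]])
gt   = np.array([[0, 1, 2], [2, 2, 0]])
print(dice_and_iou(pred, gt, num_classes=3))
```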
Figure 4: The IoU scores obtained for each prostate zone from all images in the test set, compared between models. A line represents the median value obtained, dots represent the individual score for each image, and the best model for each zone is indicated with a red box.

To gain a comprehensive understanding of the segmentation metrics in biomedical images, particularly those related to the prostate, it is important to examine specific tissue zones. Accordingly, after evaluating the full test set, Figure 4 shows the IoU scores obtained from each image in each prostate zone. Each model is represented by a different color, and each test image is represented by a colored dot with the corresponding value. However, it is essential to note that not all combinations of zones are equally represented in the set, resulting in fewer dots in the boxplots for prostate zones such as TZ and Tumor. Nonetheless, the performance trends of the models in each particular zone can be analyzed.

Undoubtedly, the central and peripheral zones are the easiest for all models to segment, with only a few images having low IoU values. However, segmenting the peripheral zone appears slightly more challenging, likely due to its smaller size. The proposed FAU-Net was the best model overall for these two zones, with mean IoU scores of 82.63% and 72.55% for CZ and PZ, respectively. In contrast, the worst model was U-Net, with values below 80% for CZ and 67% for PZ.

As for the transition zone and tumors, the variation between the models is more noticeable in Figure 4. Most models had low outlier values in the transition zone and achieved mean IoU scores lower than 60%, except for R2U-Net, which managed to reach a mean score of 61% in TZ. Prostate tumors are challenging to segment due to their varied geometry and their boundaries with other tissues and zones. However, unlike for TZ, most of the models managed not to have many outliers when segmenting the tumor, and most reached values higher than 60%. The worst model for segmenting the tumor was U-Net, with a mean IoU score of only 57%. On the other hand, the best model, R2U-Net, surpassed it by 10%, obtaining a mean IoU score of 67%.

### Qualitative Results

A visual inspection of the segmentation results of the eight models discussed in this study was carried out. This analysis complements the previous quantitative analysis based on the metrics. In this inspection, the images from the test set were visually compared to their corresponding ground truths, and conclusions were drawn.

Figure 5 presents a qualitative comparison between each model's predictions on four different example images from the dataset, covering all the possible combinations of zones. The first two rows show the original T2-MRI images of the prostate and, below them, the corresponding ground truths. Each subsequent row shows the predictions of a different model. Starting from the base model, it is clear that U-Net had difficulty correctly segmenting all pixels, especially in images with tumors. For example, in image C this model missed many pixels that corresponded to the Tumor; this could be a wrong lead for a radiologist relying upon this model.
Nevertheless, even though a tumor is present in example D, U-Net segments most of the pixels better than in the previous example, at least from a visual perspective. Based on the qualitative analysis, some models, such as Attention U-Net, R2U-Net, and FAU-Net, performed better in segmenting all prostate zones, including the Tumor. Compared to the other models, they produced smoother and more complete segmentations in images with three or more zones. However, it should be noted that FAU-Net misclassified some pixels as TZ in example C, which does not include TZ.

Figure 5: Image comparison of segmentation results using U-Net-like architectures. All possible combinations of zones available in the dataset are used as examples for conducting predictions on MRI images.

It is clear that images with only two zones (CZ and PZ) are easier for all the models to segment; these zones are the largest and the most frequent in the dataset. In examples C and D, some models include more pixels in the smaller zones, resulting in a smoother segmentation; although this looks good from a visual standpoint, compared to the ground truth that prediction is incorrect. Thus, relying solely on visual analysis is not advisable.

As a qualitative conclusion from the examples in Figure 5, Attention U-Net and R2U-Net are the models with the best segmentation performance. However, based on the metrics and a visual analysis of the entire test set, the best overall performance was obtained by FAU-Net, R2U-Net, and Attention R2U-Net.

## 5 Conclusion

In this work, we proposed a U-Net extension using two attention blocks: additive and pyramidal. From the results shown in Section 4, we can conclude that the proposed architecture, FAU-Net, outperforms most of the architectures studied in this work. However, other alternatives, such as R2U-Net and Attention R2U-Net, are still better suited to this particular dataset than the proposed architecture. Furthermore, FAU-Net presents strong metric scores, and although it struggles in particular zones such as TZ and Tumor, it is the best model for segmenting the CZ and PZ according to the segmentation metrics on our dataset.

Considering that the results obtained are promising, further investigation can be done by improving the FAU-Net architecture to achieve even better results. For instance, a future implementation of the feature pyramid attention module in the R2U-Net architecture could lead to promising results for prostate segmentation on the dataset studied in this work. Also, trying more combinations of the attention modules and/or adding more levels to the architecture could produce interesting results.

## 6 Acknowledgments

The authors wish to acknowledge the Mexican Council for Science and Technology (CONACYT) for its support in terms of postgraduate scholarships for this project, and the Data Science Hub at Tecnologico de Monterrey for their support of this project.